5. Capacity and knowledge governance

In the OECD strategic education governance framework, the capacity of stakeholders, in terms of the time, resources and skills available to take up their roles and responsibilities, is of central importance. Such considerations relate to the maturity of the evaluation culture, the readiness to adopt and integrate new approaches, and how much these vary within the system. With the introduction of standardised tests, the OECD team asked stakeholders to assess how they would integrate these and whether there would be specific needs within the system (Box 5.1).

The annual national sample-based tests (Peilingen) are developed and scored by the Centre for Test Development and Assessment (Catholic University Leuven and University of Antwerp). These tests are developed each year for the sixth grade of primary education and the second, fourth or sixth grade of full-time secondary education. The Centre also offers a set of parallel tests (Paralleltoetsen) for schools that are interested. These are offered as supports for school self-evaluation. The Centre provides guidance on how these should be administered, scores student performance on the tests and provides a feedback report to schools with results. All tests are paper-based, although the Centre has been exploring the feasibility of using a digital platform.

In primary education, schools must ensure that students are tested in the final year (Grade 6) to verify student achievement in at least three learning domains. This requirement was introduced in 2018/19. Schools are free to choose from the government toolkit of validated tests (Table 5.1). The main tests used by schools are those developed by the umbrella organisations Catholic Education Flanders (KOV) and the Educational Secretariat of Flemish Cities and Municipalities (OVSG). Only a few schools use the available central parallel tests (paralleltoetsen).

During discussions with the OECD team, representatives from both umbrella organisations currently developing validated tests expressed great pride in the high participation rates of schools opting to use tests within their networks. In 2015/16, 85% of KOV schools used its Grade 6 test, and 98% of OVSG schools and 92% of GO! Community schools used the OVSG tests (Janssen et al., 2017[1]). Representatives advised that this has created familiarity with tests and demonstrates schools’ high levels of trust in the support provided by their networks. However, representatives from both networks underlined that they have limited resources and capacity for test development and that the ‘scientific process’ for test development is limited. Limited statistical capacity has consequences for the reliability of the tests, both in keeping test difficulty comparable from year to year and in reporting averages back to schools, notably for ‘comparable school groups’ which contextualise the results based on school composition data (Janssen et al., 2017[1]). Professionals take on test development tasks in addition to their other responsibilities and work with groups of volunteers. For OVSG, this comprises about twenty pedagogical advisors; for KOV, pedagogical advisors, policy supporters and teacher educators. Representatives advised the OECD team that capacity for test development amounts to roughly two to three full-time equivalent staff.

The initial mechanisms established by the Department of Education and Training seek to engage a broad range of technical expertise, as embodied in the University Centre, and a multi-stakeholder group to steer the development and administration of the tests. During the discussions with stakeholders, the OECD team noted strong confidence in the scientific expertise in the consortium of the University Centre. There is appreciation for the potential synergies that a consortium of expertise can bring for test development capacity. Stakeholders voiced expectations for scientific rigour and independence of standardised tests.

When introducing new assessments, many countries have chosen to establish specific bodies with responsibilities in this area. A series of reviews of evaluation and assessment in OECD countries pointed to the importance of ensuring adequate system capacity for the design, implementation, reporting and feedback of results in national assessment systems (OECD, 2013[2]). Considerable investment is required to develop capacity and expertise in standardised test development, and the necessary expertise takes time to build. A central body with specific responsibility for educational assessment can provide technical autonomy from the education authorities, with the necessary distance from political decision making. Central capacity, in whatever form, can both symbolise a greater focus on the importance of assessment and influence perceptions throughout the system of the reliability of the tests used. As the OECD reviews documented, to varying degrees among countries and over different political cycles, national evaluation bodies may come under various pressures, including limited resources for their activities, restructuring and, in some cases, closure.

OECD data collected as part of the PISA 2018 survey show Flemish secondary schools in a positive position in terms of perceived school capacity related to digital devices. At least four out of five Flemish students participating in PISA 2018 were in schools whose leader reported a sufficient number of digital devices, connected to the Internet with sufficient bandwidth or speed (Figure 5.1). Two-thirds of students were reportedly in schools with an effective online learning support platform. However, these data indicate that not all secondary school leaders thought digital capacity was sufficient at their school. Note that these reports predate the COVID-19 health crisis, which put much greater focus on the use of such platforms. In the spring of 2021, the Flemish government provided schools with extra budget to buy digital devices and to strengthen connectivity. From Grade 5 onwards, schools receive funding to buy a device for every single student. For students in Grade 4 and below, the school receives a budget to buy computers that students can share.

During some interviews, comments were made regarding the current capacity within Flemish schools to administer digital tests at scale. Parental representatives were doubtful that all schools would be ready to do so. Such concerns need to be addressed with authoritative data on the availability and adequacy of digital resources in Flemish schools. Notably, the umbrella organisation KOV administers some of its tests online and pays attention to capacity with a trial run in May before administering the real tests in June (Table 5.1). In June 2021, 40% of schools participating in the OVSG tests administered these online, as this option was offered for the first time. An OVSG press release points to feedback that barriers for other schools include limited ICT infrastructure, teachers’ fear of digital testing and trust in the digital system, but that participating schools will continue to administer online tests (OVSG, 2021[3]). AHOVOKS provided a concrete illustration of varying capacity in schools to run digital tests. Due to the health crisis, the entrance examination to study medicine or dentistry in 2020 had to be organised differently. Instead of being held centrally in Brussels, it was held in several secondary schools, and the experience showed that not all schools had the requisite hardware; some had to borrow computers to administer the examination. These are high-stakes examinations for students, so simultaneous administration for all students is an important requirement. Regardless, these points indicate the need for a careful review of, and feedback from schools on, such points of logistical implementation.

During interviews, many stakeholders referred to the variation in school quality assurance processes. All school network representatives noted this from their work with schools and their offer of support and development work within their respective networks. Evidence from inspections in Flemish primary and secondary schools backs this up and is widely communicated among stakeholders.

The greater focus in the inspection approach on school quality assurance processes provides regular insight into the maturity of the evaluation culture in Flemish schools. Evidence from school inspections conducted in the school years 2018/19 and 2019/20 shows variation in school quality assurance processes at both primary and secondary level. The Quality reference framework includes a set of six criteria that collectively capture and set expectations for a school’s capacity to assure its quality (Table 5.2).

Among the six criteria used to evaluate school quality assurance processes, ‘K5 Reliable evaluation of the quality’ addresses the use of student test results. While serious concerns were identified in only a small number of schools inspected, the reliability of evaluations of school quality was judged not to fully meet expectations in 47.2% of primary schools and 37.0% of secondary schools (Figure 5.2). This means that:

The school evaluated its quality in a limited and targeted way based on the available qualitative and quantitative sources. It misses out on opportunities to involve the expertise of relevant partners in its evaluations. It does not yet succeed in using the results and effects for the pupils in its evaluations. This puts the reliability of the evaluations at risk (Onderwijsinspectie, 2019[5]).

Reliability here refers to a lack of externality or objectivity in evaluations. The judgement also indicates that the school does not make use of the results and does not benefit from the expertise of ‘relevant partners’, which would include the pedagogical advisory support services available in most networks.

The majority of secondary schools inspected do not fully meet expectations on the criteria ‘K4 Systematic evaluation of the quality’ and ‘K3 Educational policy’. Together these point to fragmented and unsystematic approaches to school evaluation and related improvement actions.

Student representatives commented that with the introduction of standardised tests there would need to be a review of the overall workload and time commitment for pupils. Concerns were raised in respect to the existing set of tests and whether standardised tests would simply add to these or replace some. The Flemish Student Association (VSK) underlines a concern on current workload for students and a request for greater co-ordination of teacher planning on homework and assessment (VSK, n.d.[6]). Student concerns about a lack of co-ordination are mirrored in the official inspection evaluations finding fragmented practices in many Flemish schools, particularly at the secondary level (Figure 5.2).

The VSK also appeals for less summative assessment (grading, sorting of students into specific groups, etc.) (VSK, n.d.[6]). This is directly linked with the notion of workload, as higher stakes mean more preparation time for students. During the discussion with the OECD team, students reported anecdotally very different approaches to how existing tests (such as paralleltoetsen) are used in schools, and that some teachers/schools attach stakes to these, which adds test preparation time for students. Data from PISA 2018 reflect the different emphases placed on academic performance and placement tests among Flemish schools, with 28% of participating 15-year-olds in schools where such information was always considered for admission to the school and, conversely, 43% where this was never considered (OECD, 2020[4]). An evaluation of the umbrella organisation tests by KOV and OVSG did not look into how schools administer and use the results of tests, but noted that this is an area that would need further investigation (Janssen et al., 2017[1]).

In the OECD strategic education governance framework, knowledge governance is closely linked with capacity. It goes beyond the supply side to consider the more comprehensive and complex nature of knowledge and its flow and use within an education system. Clearly, the supply of and access to knowledge are important, but so too are stakeholders’ motivations and capabilities to consult and act on it. Taking this comprehensive approach, the OECD uses a research-based framework to promote the use of evidence by educational decision makers (Figure 5.3). This draws on the work of Langer, Tripney and Gough (2016[7]), where evidence pertains to the product of any “systematic investigative process employed to increase or revise current knowledge”. This includes formal research, for example as carried out by research institutions, government agencies or think tanks; systematically gathered understandings from education practice and the practice of policy making, implementation, and evaluation; as well as factual administrative and achievement data (Langer, Tripney and Gough, 2016[7]). Results from standardised tests constitute one important form of evidence.

Effective knowledge governance addresses three factors that promote the use of evidence in decision making: opportunity, capability and motivation (Figure 5.3). These are based on a theory of behavioural change that identifies opportunity as those factors external to the individual that may prompt a change in behaviour, including the availability of evidence, e.g. in the form of access to a data warehouse or indeed the results of standardised tests, and the time to consult and use the evidence. The two other aspects relate to the individual concerned: capability includes having the necessary knowledge and skills to engage in the activity, and motivation includes analytical decision making, habits and emotional responses (Michie, van Stralen and West, 2011[8]). At the core of this is a recognition that while making different forms of evidence available is a fundamental requirement, this is by no means sufficient to translate into its active use and integration in daily work and practices. It is of equal importance to consider the capabilities and motivations of those involved in the daily work of student learning and the organisational and support processes that surround them.

The OECD team asked stakeholders about the opportunities the standardised tests could bring for their work or learning (Box 5.1). This was also the focus of working group discussions in the stakeholder reflection seminar in June (see an overview of key points in Table 5.4).

There is already recognition within the Flemish system of the importance of school-level motivation for data collection. Drawing on experience with other administrative data collection, representatives from AGODI commented that it is difficult to get good quality data from schools if schools do not see the value in it. Schools are more motivated and engaged when they know why the data are collected and get something in return, learning something from the process. This is entirely in line with the theory of behavioural change at the core of the OECD framework (Figure 5.3). The OECD team did not interview school leader representatives; understanding their motivations will be critical in the further development of standardised tests.

Based on stakeholder feedback, the OECD team identifies the major motivations below.

During the discussion with student representatives, the OECD team noted their expectations for “better quality tests with a scientific basis”. The greater reliability and objectivity of standardised tests and results was also pointed to by other stakeholders at the reflection seminar, specifically the networks and school leaders (Table 5.4). This echoes previous research that established different scenarios and investigated social support for introducing standardised tests in Flanders, finding strong consensus on the expectation that such tests would contribute to more reliable assessment of students (Penninckx et al., 2017[9]).

Student representatives expressed strong support for the role that standardised tests could play in promoting a more rigorous approach to grading in schools. They raised prominent concerns regarding current assessment practices and the reliability of many tests used in schools. Although the standardised tests would be limited to Dutch and mathematics, there is hope that these would promote a more rigorous approach to grading equivalency at the school level. This was echoed by academics in the stakeholder reflection seminar (Table 5.4).

Certainly, at the network level, discussions with the OECD team highlighted an understanding that standardised tests would bring additional value for schools and school development. Again, this echoes previous research finding stakeholder support for the introduction of standardised tests to support self-evaluation in schools (Penninckx et al., 2017[9]). All network representatives noted that the new standardised tests would provide reliable and comparative feedback to schools. Even if some schools had chosen to administer the existing national tests (paralleltoetsen), the advantage of the new standardised tests would be that all schools would get feedback, providing richer information for the network as a whole. At the stakeholder reflection seminar, network representatives also commented that the standardised tests would allow them to rebalance their discussions with schools to include a greater focus on learning results (Table 5.4). A degree of external challenge supports the developmental function of school self-evaluation (OECD, 2013[2]).

Among the networks currently developing and administering their own validated tests, representatives from Catholic Education Flanders used the analogy that standardised tests would provide ‘a reliable mirror’: schools will gain an idea of how they relate to other schools or the expected standard. Similarly, representatives from OVSG agreed that the value of standardised tests would be in pointing out strengths and weaknesses in a comparative light. Moreover, neither network currently develops or provides tests in secondary education, so this is of particular added value to them and their work with schools.

Above all, students voiced their major motivation that the new tests would promote greater focus on feedback from teachers (“What to do and how to improve once the results are in?”). Their hope is that the new tests would provide useful feedback for teachers to work on with students and generally stimulate a more feedback-driven culture (Table 5.4). A series of OECD reviews found that teachers in several countries were positive about formative assessments as a tool to help decide the focus of improvement plans for individual students and also for greater collaboration with colleagues. However, the timeliness of results coming back to teachers and the granularity of feedback was critical to their perceived usefulness (OECD, 2013[2]).

Information gathered during the PISA 2018 survey reflects that a lack of feedback from teachers is a frustration for Flemish students. In fact, they report the second lowest levels of teacher feedback across OECD countries. The survey captures whether students receive feedback from their teachers on their strengths and areas for improvement in Dutch language lessons: a quarter of students report that teachers never or almost never give feedback on areas they need to improve and how to do so (Table 5.3).

A recent study looked at the relationship between different aspects reported by teachers and school leaders in TALIS 2018 and the performance of students in PISA 2018 in nine education systems (OECD, 2021[11]). It found a positive relationship between the time teachers reported spending on marking and correcting student work and both student performance and their educational expectations. The researchers note that this may reflect regularity of feedback to students and/or a culture or greater use of testing in higher performing schools. Feedback based on school and classroom results (e.g. performance, results, project results, test scores) was also associated with better performance in the PISA 2018 reading assessment.

Representatives from teacher unions see the introduction of standardised tests as an opportunity to stimulate professional development in data use (Table 5.4). In primary education, this can build on teacher experiences with using the current validated tests. In PISA 2018, Flemish school leaders report comparatively lower participation rates of teachers in professional development programmes (36% of teachers in Flanders had attended a programme in the past 3 months, compared to an OECD average of 53%) (OECD, 2020[4]). A recent OECD study found that the amount of time Flemish teachers spent engaged in continuous professional learning was ‘critically low’. The perception of the role of data in relation to teachers’ professional learning seemed to be often neutral or even negative, with few schools asking for the results of national tests and many considering that engaging with data remains challenging (OECD, 2021[12]). A small study found little or no systematic use of data in decisions on grade repetition. Among randomly selected first grade primary school teachers, a recent decision on grade repetition was ‘largely affected by intuitive expertise and feelings of knowing’ (Vanlommel et al., 2017[13]).

Networks with pedagogical advisory services providing support to schools see an opportunity to strengthen their collaborations and work with schools, based on the regular availability of results from standardised tests. During discussions with the OECD team, representatives from all networks underlined the importance of building capacity at the school level to work effectively with the results of standardised tests. They are motivated to mobilise support for schools to develop action plans for improvement, drawing on their established relationships with schools and familiarity with the different contexts. These points were echoed by network representatives in working groups at the stakeholder reflection seminar (Table 5.4). Feedback from Flemish school leaders in TALIS 2018, in line with that from their counterparts in other countries, points to data use and teacher collaboration as priority professional development needs. However, the demand is comparatively high in Flemish lower secondary schools: 42% of school leaders report a high need for professional development to develop collaboration among teachers (compared to an average of 24%) and 40% to use data for improving the quality of the school (compared to an average of 26%) (OECD, 2019[14]).

Discussions with academics and officials underlined the benefit that system-wide information on outcomes would bring for research and policy. There are clear expectations that such information will provide an evidence base for better policy evaluation and inform more effective and efficient policy making. Comparable information for all Flemish schools at different educational levels would also strengthen the focus on the Flemish education system as a whole. At the stakeholder reflection seminar, the point was raised that feedback from the standardised tests could be used to evaluate the attainment targets (Table 5.4).

The Flemish Inspectorate sees the regular school-level data that standardised tests will provide as an opportunity to augment its evidence base for school inspections system wide and also to implement its more differentiated approach (Table 5.4).

At the stakeholder reflection seminar, participants were asked to think about and identify the preparations necessary to support the effective development, introduction and use of standardised tests, as planned for May 2024. The following points were underlined as necessary to collectively build the engagement and motivation of schools (Table 5.5).

All stakeholders again repeated their pleas for clarity on the agreed purposes for the standardised tests. Academics require clarity in order to design and develop fit-for-purpose tests. For other stakeholders, their necessary preparations will need to align with the agreed purposes. Several stakeholders reiterated their wish to see the introduction of standardised tests as tools to support school development and to stimulate in earnest a professional culture of evidence use. They expressed the concern that the use of the standardised tests for accountability purposes would undermine their use for school development.

With these motivations in mind, stakeholders note the need for careful preparations and considerable attention to a communication strategy. Parental representatives call for clear and uniform communication on the nature and appropriate use of results from the standardised tests. Communication should be timely and accessible and address key questions such as:

How will the standardised tests improve student learning?

Will results of standardised tests provide better guidance?

What will change once we have the standardised tests?

Many of the above-mentioned motivations explicitly relate to expectations to improve capabilities at the school level for using data for development. Teacher union representatives underline the need for careful planning of resources to allow adequate time both to administer standardised tests and to analyse and use the results. In preparation for May 2024, paying attention to these planning aspects and allowing the space for educators to build the skills to work with these new tools will enhance their use. In a similar vein, students would wish to see more coherence and planning of student testing, with less reliance on summative tests.

Academics note the need to prepare explanatory materials on what the standardised tests can and cannot measure and examples of how to place the results in a broader perspective. School networks and the Inspectorate see a role in helping schools interpret the results in a proportionate and informative way for school development. School leaders underline the need for guidance for schools on how to use the results. This is in recognition of the important role they will play in introducing the tests in their schools, working with teachers to clarify the goals of the tests and to note and address any criticisms they may have.

Academics, the network pedagogical advisory services and the Inspectorate all see a role in supporting schools in using the results of standardised tests for school development. This can build on the established quality framework as a basis to ensure a proportionate and accurate interpretation of the results. The networks can also work with academics to build on their experiences with developing feedback reports for schools and how schools best interpret these in their context. It will be critical to work on the “data literacy” of teachers; otherwise the opportunities the standardised tests offer will be lost.

These preparatory points noted by stakeholders collectively recognise the reality of differing existing capacity within the system (see section on Capacity).


[1] Janssen, R. et al. (2017), Validering van IDP en de OVSG-toetsen (Final report on validation of the IDP and OVSG tests), KU Leuven, https://data-onderwijs.vlaanderen.be/documenten/bestand.ashx?id=7760.

[7] Langer, L., J. Tripney and D. Gough (2016), The Science of Using Science - Researching the Use of Research Evidence in Decision-Making, EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London, http://eppi.ioe.ac.uk/cms/Default. (accessed on 15 January 2018).

[8] Michie, S., M. van Stralen and R. West (2011), “The behaviour change wheel: A new method for characterising and designing behaviour change interventions”, Implementation Science, Vol. 6/1, http://dx.doi.org/10.1186/1748-5908-6-42.

[11] OECD (2021), Positive, High-achieving Students?: What Schools and Teachers Can Do, TALIS, OECD Publishing, Paris, https://dx.doi.org/10.1787/3b9551db-en.

[12] OECD (2021), “Teachers’ professional learning study: Diagnostic report for the Flemish Community of Belgium”, OECD Education Policy Perspectives, No. 31, OECD Publishing, Paris, https://dx.doi.org/10.1787/7a6d6736-en.

[4] OECD (2020), PISA 2018 Results (Volume V): Effective Policies, Successful Schools, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/ca768d40-en.

[10] OECD (2019), PISA 2018 Results (Volume III): What School Life Means for Students’ Lives, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/acd78851-en.

[14] OECD (2019), TALIS 2018 Results (Volume I): Teachers and School Leaders as Lifelong Learners, TALIS, OECD Publishing, Paris, https://dx.doi.org/10.1787/1d0bc92a-en.

[2] OECD (2013), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264190658-en.

[5] Onderwijsinspectie (2019), Developmental scales for quality development, http://www.onderwijsinspectie.be.

[3] OVSG (2021), “OVSG-toets kent hoogste aantal deelnemers ooit (OVSG test has highest number of participants ever)”, Press Release, https://www.ovsg.be/pers/ovsg-toets-kent-hoogste-aantal-deelnemers-ooit.

[9] Penninckx, M. et al. (2017), “Delphi study on standardized systems to monitor student learning outcomes in Flanders: mechanisms for building trust and/or control?”, Studia paedagogica, Vol. 22/2, pp. 9-31, http://dx.doi.org/10.5817/sp2017-2-2.

[13] Vanlommel, K. et al. (2017), “Teachers’ decision-making: Data based or intuition driven?”, International Journal of Educational Research, Vol. 83, pp. 75-83, http://dx.doi.org/10.1016/j.ijer.2017.02.013.

[6] VSK (n.d.), “Kick af van de puntenverslaving (Kick the addiction to grades)”, https://www.stemvanscholieren.be/talent-krijgt-alle-kansen.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2021

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.