Chapter 2. How can we ensure an evidence-informed, self-improving initial teacher preparation system?

This chapter discusses three key challenges of ensuring an evidence-informed, self-improving initial teacher preparation (ITP) system. First, it notes the lack of rigorous research that could underpin ITP policies and practices, describing the available evidence as well as major research gaps. Second, it explores difficulties related to the use of evidence, in particular in mediating knowledge and in accessing and analysing available data. Third, it discusses barriers to designing ITP in an evidence-based manner that arise from often conflicting institutional contexts and timescales. The second and third sections of the chapter propose strategies to address these challenges. In particular, they discuss how different stakeholders can support the building of evidence, embed continuous improvement in existing processes such as accreditation, and more effectively disseminate and use evidence across the system.


A key challenge in many initial teacher preparation (ITP) systems is the production and use of evidence to foster evidence-informed policy making at the system level and evidence-informed design, delivery and improvement of ITP programmes. The lack of rigorous research on ITP practices and on the implementation of policy leaves room for a myriad of approaches in ITP, with little means to evaluate their potential. A 2014 review of the international research evidence on high-quality teacher education found that the best programmes are underpinned by a clear understanding of how beginning teachers learn to teach, and that the programmes themselves are the subject of ongoing research and development for improvement (BERA, 2014[1]).

Effective knowledge production and use is an important part of a coherent, self-improving ITP system (Roberts-Hull, Jensen and Cooper, 2015[2]). Knowledge can consist of formal research knowledge, indicators, and the professional knowledge of teachers and practitioners as well as broader education stakeholders and policy makers. The production and use of knowledge is interconnected with a system’s governance mechanisms including policy design and implementation, accountability and priority setting (Burns, Köster and Fuster, 2016[3]).

Evidence – “the available knowledge and information indicating whether a belief or proposition is true or valid” 1 – is an important form of knowledge in an ITP system. What constitutes evidence is debated in many systems and is often subject to a country's research traditions. Specific research questions and robust research methodologies suited to that type of question can help convert data and knowledge into evidence (OECD, 2007[4]).

This chapter of the report discusses the production and utilisation of evidence about ITP policies, programmes and practices relevant for the overall design of ITP systems. Chapter 4 addresses the production and utilisation of evidence about teaching and learning that are used within ITP programme content and enacted by teachers in schools.

2.1. Why is this a challenge?

2.1.1. Building rigorous evidence about ITP policies and practices

Despite growing research interest in teacher preparation, there are very few systematic reviews on ITP programmes and practices (du Plooy et al., 2016[5]), only a small number of large-scale multi-programme research studies (Cochran-Smith and Zeichner, 2005[6]; Cochran-Smith et al., 2015[7]), and little research on policy implementation in ITP (Peck, Gallucci and Sloan, 2010[8]).

A review of more than 1 500 empirical, peer-reviewed studies on teacher preparation published in the United States and in major international sources between 2000 and 2012 categorised the research into three clusters: teacher preparation accountability, effectiveness and policies; teacher preparation for the knowledge society; and teacher preparation for diversity and equity (Cochran-Smith and Villegas, 2015[9]), as described in Table 2.1.

Table 2.1. Major programmes and clusters of research on teacher preparation

Research Program A: Teacher Preparation Accountability, Effectiveness, and Policies
A1  Alternative certification and pathways
A2  Policy responses and trends
A3  Testing and assessment
A4  Program evaluation

Research Program B: Teacher Preparation for the Knowledge Society
B1  Preparing teachers to teach science subject matter
B2  The influence of coursework on learning to teach
B3  The influence of fieldwork on learning to teach
B4  Content, structures, and pedagogy of teacher preparation for the knowledge society
B5  Teacher educators as teachers and learners
B6  Teacher preparation and learning to teach over time

Research Program C: Teacher Preparation for Diversity and Equity
C1  The influence of coursework and fieldwork on learning to teach diverse student populations
C2  Recruiting and preparing a diverse teaching force
C3  Content, structures, and pedagogies of teacher preparation for diversity
C4  Teacher educator learning for/experiences with diversity

Source: Cochran-Smith, M. and A. Villegas (2015[9]), “Framing teacher preparation research: An overview of the field, part I”, Journal of Teacher Education, Vol. 66/1, pp. 7-20.

Two key – but separate – research spaces in teacher preparation have developed in recent years. One large research space generates knowledge on teacher candidate learning and involves primarily small-scale, single site studies conducted by researchers who are also teacher educators. Although fewer in number, there are also some large-scale comparative research projects that look into teacher candidates’ learning opportunities and how these relate to their knowledge (König et al., 2011[10]; König et al., 2017[11]). The second smaller space is related to research on teacher preparation policies such as human capital policies, personnel practices of school systems and teacher preparation providers (Cochran-Smith et al., 2015[7]). Both these research spaces produce knowledge that can inform teacher education policies including at the national and at the institutional levels.

Key gaps in the knowledge on teacher preparation include: effective practices across institutions; the relationship between specific ITP programme components and students' learning, as opposed to a sole focus on teacher learning; how teacher preparation influences candidates' practice in specific teaching tasks and techniques in the classroom, as opposed to candidates' general beliefs, understandings and reflective practices; deep research on equity and access, and the underlying impact of social, cultural and institutional factors; evaluation measures that are sensitive to programme content and quality; and effects over time, i.e. longitudinal research (Wilson, Floden and Ferrini-Mundy, 2001[12]; Cochran-Smith et al., 2015[7]). The challenge for ITP systems is to support increasingly rigorous research on emerging practices to understand the interaction of different factors that constitute effective practices.

Some systems implement accountability policies to collect and publish data about ITP programmes as a means to build evidence and support improvement across the system (Darling-Hammond and Lieberman, 2012[13]). These data can be input or process measures such as number of enrolments and number of courses offered by the university, or they can be output measures such as certification results, employment outcomes, and candidate and principal feedback surveys (Toon, Jensen and Cooper, 2017[14]).

While there is some research evidence to suggest that increased accountability measures may contribute to improving the quality and outcomes of initial teacher preparation, the conditions under which this happens are not straightforward (Tatto et al., 2016[15]). The ultimate assessment of the effectiveness of ITP is the impact that graduates have on learner outcomes. Some researchers have found links between ITP programme quality and learner achievement (Boyd et al., 2009[16]), but others have found that measuring programme effectiveness through learner achievement rarely produces enough variability to distinguish between programmes (Gansle, Noell and Burns, 2012[17]; Koedel et al., 2015[18]). Causality is difficult to ascertain for many ITP outcomes. Using data like employment outcomes, for example, may not be the best measure of programme quality because many factors beyond the programme influence the employment outcomes of initial teacher education (ITE) graduates (Tatto et al., 2016[15]). An effective and fair means to collect evidence on ITP programme impact is important to improve ITP systems.

2.1.2. Supporting the use of evidence across the ITP system

Evidence is of little worth if stakeholders do not use it in the system. A strong evidence ecosystem supports the creation of practice-based evidence and drives the generation of evidence-informed practice and policy making (Deeble and Vaughan, 2018[19]). This requires research that is relevant to challenges teachers, teacher educators and policy makers face and evidence that is shared in meaningful and practical ways.

The use of evidence about ITP policies and practices in ITP policy making and programmes is inconsistent in many systems (Burns, Köster and Fuster, 2016[3]). The key factors affecting the use of research in general are: the nature of the research including quality and timeliness; personal characteristics of the researchers and research users including attitudes towards change; access to research either directly or through knowledge brokers or contacts; and, the context for the use of research such as organisational culture (Davies, 2007[20]).

Some systems have set up knowledge mediators or brokerage agencies to aid education knowledge dissemination, translation and ultimately utilisation (Burns, Köster and Fuster, 2016[3]). For example, the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) in the United Kingdom, the What Works Clearinghouse in the United States, the Knowledge Chamber of the Netherlands, the Danish Knowledge Clearinghouse and New Zealand’s Best Evidence Synthesis Programme are different types of agencies which aim at facilitating information sharing and ensuring quality control (OECD, 2007[4]). These brokers can generate, source, synthesise, manage and promote the utilisation of evidence to benefit researchers, practitioners, policy makers and commentators (Clinton, Aston and Quach, 2018[21]). Despite the need to improve knowledge dissemination and translation, there is little empirical record of the effectiveness and impact of knowledge mediation efforts (Burns, Köster and Fuster, 2016[3]). There is limited evidence from the ITP reviews that programmes are systematically using national knowledge brokers to inform their work.

Many ITP systems use accreditation processes to compel programmes to use evidence-based ITP practices, but systems often have a difficult time enforcing this requirement. Accreditation is traditionally focused on achieving compliance to a set of minimum standards. In systems that use this form of accreditation, most ITP programmes need only pass the standard and there are few incentives for them to improve beyond the minimum benchmark (Toon, Jensen and Cooper, 2017[14]).

Some systems fail to implement ongoing feedback mechanisms between policy makers, researchers, teacher educators, school leaders and teachers. Not all systems in the OECD ITP study, for example, routinely survey graduate teachers for feedback on their preparation, or evaluate the implementation of all major ITP policy reforms.

A survey on education information systems conducted as part of the OECD Centre for Educational Research and Innovation (CERI) Innovation Strategy for Education and Training project (OECD, 2018[22]), covering 64 systems in 30 countries, identified a number of challenges for leveraging data for educational innovation and improvement (González-Sancho and Vincent-Lancrin, 2016[23]). While collecting and using data to improve education systems is becoming a prominent strategy in many countries, the limited integration of data management tools often constitutes a barrier to accessing and analysing the wealth of data generated in educational institutions. Regarding ITP, systems are rarely capable of linking data on teacher candidates, new teachers, ITE and school-based teacher educators collected in teacher education institutions, school boards and schools over time. A major challenge for many countries is to develop national or regional longitudinal education information systems that can facilitate data sharing and integration across institutions and levels of education, and that make information easy for different stakeholders to use. A central feature of advanced longitudinal information systems is the provision of visualisation, analysis and reporting tools that facilitate their use for purposes such as forecasting workforce needs.

Findings of this study also suggest that while many countries are building such longitudinal information systems, most still lack key features needed to exploit data effectively. It remains a challenge to integrate flexible tools that allow faster feedback and provide suggestions for action. Moreover, such systems are not yet accessible to a large number of stakeholders, partly due to a lack of training opportunities in their use (González-Sancho and Vincent-Lancrin, 2016[23]).

2.1.3. Designing ITP in an evidence-informed and effective way

ITP programmes that use research as part of their approach to learning to teach and for programme improvement are generally more effective than those that do not (Tatto, 2015[24]).

The design of ITP must draw deeply on the specialist knowledge domains that underpin teacher education. This involves the growing evidence on the effectiveness of teacher education and continuous professional development programmes (Cordingley, 2015[25]; Cordingley and Bell, 2012[26]; Timperley et al., 2007[27]), as well as research on curriculum review and refinement. Evidence- and scholarship-based development in any given field of knowledge proceeds gradually and incrementally through research and testing. However, teacher education institutions often have limited space to test new designs.

Designing ITP in an evidence-informed way is also challenging because it requires accommodating a range of very different timescales and organisational priorities. Governments work to demanding political timescales and their rapid reforms can impose strong demands on ITP institutions. Changes have to be implemented across complex policy boundaries and responsibilities, which each have their own, often rather slower cycles (Burns and Köster, 2016[28]).

Developments in accredited, degree-level academic programmes often have to work through higher education accreditation and quality assurance arrangements, which in many education systems work to longer-term cycles of three to five years for programme review and accreditation (OECD, 2008[29]). Teacher education design, which has to be fit for the purpose of educating large numbers of teachers, thus has to be accommodated within multi-year cycles of teacher preparation, the rhythm of the university or college degree, and higher education quality assurance cycles.

2.2. What strategies can address the challenge?

2.2.1. Supporting rigorous and relevant research on ITP

An evidence-informed, self-improving ITP system supports the production of rigorous and relevant knowledge on ITP policies and practices. Knowledge can originate from research, i.e. a rigorous analysis of data and implications based on specific research questions and methodologies, or from system data sources such as quantitative data and qualitative information from teacher candidates, teachers and teacher educators (European Commission/EACEA/Eurydice, 2017[30]). Knowledge may be produced by higher education institutes, government bodies or other organisations via research projects, comprehensive evaluations and stakeholder consultations. Systems can facilitate knowledge production by others as well as directly produce knowledge through the state.

Systems can steer the type of knowledge produced by others through establishing grants, creating government-affiliated or independent research centres, and directly commissioning research (Burns, Köster and Fuster, 2016[3]).

A national research strategy is a key component to steer the effective production of knowledge about ITP policies and practices. National reviews on ITP in countries such as Australia and Wales have raised concerns about the lack of a co-ordinated national research strategy to build research on teacher preparation where it is currently lacking (BERA, 2014[1]; TEMAG, 2014[31]). A national research strategy helps support teacher preparation research across every level of the system from the individual school, through local and regional networks, to the wider research community based in universities and other research organisations.

Systems can directly produce knowledge through government organisations such as statistical offices, oversight committees or independent evaluations of government-initiated pilots in policy implementation (Burns, Köster and Fuster, 2016[3]). The OECD ITP study found that OECD countries are increasingly collecting ITE programme data across their systems and making these publicly available for stakeholders. National data strategies are being developed in countries such as Australia (Australian Teacher Workforce Data Strategy) and the United States (Deans for Impact, Table 2.3/7) to collect ITE programme data for workforce planning, policy and programme evaluation, and research. Other OECD countries, such as the Netherlands, implement national surveys to seek feedback from beginning teachers and publish reports analysing the findings to identify system-wide strengths and areas for improvement in relation to ITE. Some countries undertake and publish comprehensive evaluations of selected ITP policy implementation.

2.2.2. Introducing accreditation that incentivises ITP institutions to build their own evidence and implement a continuous improvement approach

With an increasing need for monitoring and controlling the quality of services, accountability, systemic evaluation and assessment, and different forms of audit have been on the rise in OECD countries (OECD, 2008[29]; OECD, 2013[32]). Quality assurance in higher education has two main, sometimes conflicting, purposes: 1) accountability, i.e. an objective measurement to demonstrate quality; and 2) improvement, i.e. a formative approach to understand how performance can be improved in the future (OECD, 2008[29]). For example, many countries have introduced accreditation systems for higher education programmes to monitor and ensure their quality (OECD, 2008[29]). Where teacher education takes place in universities, ITE programmes often fall under the general accreditation processes, as in the Netherlands, Norway and Japan among countries participating in the ITP study. Other countries, such as Australia and some states in the United States, have additional accreditation processes specifically for ITE programmes (Table 2.3/1,8). While positioning requirements for ITE within the overall higher education quality assurance frameworks creates coherence in the system, addressing the challenges described in this section also requires processes specific to ITE programmes in addition to generic higher education requirements.

Systems can compel ITE programmes to use evidence through compliance-focussed accountability mechanisms. However, this often results in accreditation and quality assurance processes that mostly focus on outcome measures and ensuring minimal benchmarks (Toon, Jensen and Cooper, 2017[14]), which was also noted in the OECD ITP study. Moreover, too much central control and strongly prescribed processes can stifle innovation and the ability for institutions to act on feedback from schools and teacher candidates (Peck, Gallucci and Sloan, 2010[8]). Part of the challenge is to incorporate the latest evidence on effective ITP practices in a timely manner, when accreditation and quality assurance arrangements in many education systems function in longer term cycles (across three to five years).

A more productive approach is to encourage the development of organisational policies and practices related to continuous programme improvement (Peck and Davis, 2018[33]). To achieve this and accommodate innovation in ITP programmes through accountability, quality assurance systems should also focus on processes and improvement (Toon, Jensen and Cooper, 2017[14]). It is therefore important for any quality assurance system to not only allow a certain degree of flexibility for ITE institutions, but also specifically incentivise them to continuously update and adapt their programme to integrate emerging evidence on ITP practices.

Figure 2.1. Massachusetts continuous improvement cycle

Source: MA DESE (2016[34]), Program Approval Guidelines.

In addition, quality assurance processes in some systems impose a burdensome administration on institutions, and there is a risk that satisfying criteria becomes mostly an administrative procedure rather than driving real improvement (OECD, 2008[29]). Institutions may also be unwilling to share data in the way continuous improvement requires if they feel the data will be used to judge them. Such processes should therefore be designed to encourage institutions to engage in self-reflection and to implement a continuous improvement approach. The questions underlying such an approach focus on what and how an ITE institution is learning in order to improve its programmes, and less on the production of artefacts that document data collection and improvement processes (Peck and Davis, 2018[33]).

An example for an approach to promoting the use of evidence in continuous improvement is outlined in the programme approval guidelines for ITE providers in the state of Massachusetts (US) (Table 2.3/8). ITE providers assess their programmes on a yearly basis following a set of pre-defined criteria (MA DESE, 2016[35]). They identify areas for improvement based on the assessment, set annual goals, and develop and implement an action plan for achieving them (Figure 2.1) (MA DESE, 2016[34]).

2.2.3. Fostering the dissemination and utilisation of evidence throughout the system

An evidence-informed, self-improving ITP system supports the dissemination and utilisation of knowledge about ITP policies and practices to stakeholders across the system.

Individuals, organisations and processes are key in knowledge dissemination. Strategies for knowledge mediation at the level of the individual include training, interactions with other stakeholders, and personal movement around a system (Burns, Köster and Fuster, 2016[3]). Teacher educators, in particular, play a key role in mediating research evidence for candidate teachers (Sonmark et al., 2017[36]).

Strategies at the organisational and process levels can be more easily spread and sustained than strategies that target individuals. Knowledge brokers are organisations that are created specifically to disseminate knowledge across a system (Burns, Köster and Fuster, 2016[3]). Stakeholders interviewed for the OECD ITP Study rarely mentioned the use of knowledge brokers, despite these intermediaries existing in several review countries including Australia (Evidence for Learning), the United Kingdom (EPPI-Centre, Centre for the Use of Research and Evidence in Education [CUREE] and the Sutton Trust-Education Endowment Foundation and its Teaching and Learning Toolkit), Norway (the Knowledge Centre) and the United States (The Institute of Education Science and its What Works Clearinghouse). While this does not mean that systems do not benefit from such organisations, it might be worth exploring how teacher education institutions and ITP programmes can better draw on knowledge brokers. At the same time, systems could also use other processes to help support stakeholders to use knowledge. Processes of knowledge mediation may involve interaction with key stakeholders, stakeholder involvement in the production of knowledge, and technology platforms and communication channels to regularly disseminate knowledge (Burns, Köster and Fuster, 2016[3]).

The use of research and evidence to inform policy is an increasing focus in OECD countries (OECD, 2007[4]). A 2014 British review of research in teacher education argued that in a “research-rich, self-improving education system, policy makers of all persuasions – and those who seek to influence policy – encourage, and are responsive to, the findings of educational research, both in policy formulation and in implementation strategies” (BERA, 2014, p. 25[1]). Systems that use evidence in their policy design, implementation and evaluation collect extensive policy and programme data, conduct numerous stakeholder interviews and review international literature to inform ITP policies.

Embedding continually evolving knowledge of effective ITP practice into programmes is a challenging, yet important aspect of utilising evidence in a system. Evidence-informed review and refinement of programme design, structure and pedagogies should be incorporated into programme improvement processes. Both the design and the review process for ITP programmes should be underpinned by strong partnerships among all stakeholders, including accreditation authorities, ITP leaders, schools and teachers. Such principles help ensure rapid flows of newly emerging knowledge and evidence.

2.3. How can the different actors apply these strategies?

2.3.1. What can policy makers do?

Creating a national research strategy and supporting research partnerships and centres of excellence

Policy makers can work with ITP stakeholders to create a national strategy on priority areas for research in teacher education and help co-ordinate research activities and funding across the system. For example, as part of its education reform programme, the Welsh Government has been investing in strengthening the relationship between research and teacher education (Table 2.3/10) to improve Welsh education and to meet the aspirations of the new Welsh Curriculum (Welsh Government, 2017[37]). Similarly, the 2014 review of teacher preparation in Australia recommended that the national teaching and school leadership body (the Australian Institute for Teaching and School Leadership, AITSL) extend its functions to provide a national focus on research into teacher education, including the effectiveness of teacher preparation and the promotion of innovative practice (Table 2.3/1) (TEMAG, 2014[38]).

Policy makers should involve various ITP and related system stakeholders in the development and implementation of their national research strategy, including researchers, teacher educators, school leaders and teachers. The 2014 review of teacher preparation in Australia recommended that the Australian Government work closely with higher education institutions and other agencies, such as the Australian Research Council, to ensure research grants related to teacher preparation support the development of a strong evidence base (TEMAG, 2014[38]). The same report also highlighted the opportunity for schools and ITE providers to establish mutually beneficial partnerships on research that can increase the quality of initial teacher education (Table 2.3/1) (TEMAG, 2014[38]).

In other systems, policy makers have established policies to support research partnerships and centres of excellence to build and share research evidence on teacher preparation. The Norwegian Government established the Centre for Professional Learning in Teacher Education (ProTed), which is a partnership between two universities. In addition to running innovative teacher preparation programmes, ProTed conducts research projects and disseminates research findings on what constitutes excellent teacher education (see Table 2.3/5).

Box 2.1. Norway’s Centre of Excellence for Professional Learning in Teacher Education

The Norwegian Ministry of Education and Research established “Centres of Excellence in Higher Education” (SFU) in 2010 as a prestigious scheme for educational activities in higher education (Table 2.3/5).

ProTed, Norway's Centre for Professional Learning in Teacher Education, is a joint venture between the universities in Oslo and Tromsø to develop modes of collaboration between universities and schools, carry out systematic experimentation with teaching, learning and supervision, and contribute to the knowledge base about what constitutes excellent teacher education.

ProTed's research and development activities on teacher education are organised into five areas:

1. (Innovations) Progression and coherence

2. (Innovations) University schools and professional practice

3. (Innovations) Teacher education for the digital future

4. (Dissemination and implementation) Building teacher education communities

5. (Dissemination and implementation) Knowledge base for integrated study design

Creating a national ITP data strategy and supporting the collection and use of ITP programme data across the system

OECD member countries in the ITP study are increasingly using ITP programme data across their systems. These data can be input or process measures, such as the number of enrolments and the number of courses offered by a university, or output measures, such as certification results, employment outcomes, and candidate and principal feedback surveys. The effective collection and use of data by stakeholders in an ITP system can support the continuous improvement of ITP programmes and practices, though this is often not straightforward (Tatto et al., 2016[15]).

Countries use various mechanisms to support the collection and use of data, including national data strategies, accountability mechanisms that require the publication of programme effectiveness data, and candidate performance assessments.

Australia is implementing the collection of a national teaching workforce dataset (Australian Teacher Workforce Data, ATWD) to help understand the teacher workforce on a national scale and to facilitate robust modelling of Australia’s approximately 400 000 practising and pre-service teachers for use by employers, policy makers and providers (see also Section 3.3.1 in Chapter 3). Other initiatives in Australia, such as the National Schools Interoperability Programme (NSIP) and the Learning Services Architecture (LSA), also have the potential to support integrated data for different stakeholders over time.

All universities in Japan are required to publish an annual report that contains data about ITE programmes, financial information, student enrolment and graduates’ employment destinations for each ITE programme (Table 2.3/3).

A number of states in the US are developing state-wide data systems for accountability and evidence-informed programme improvement (Table 2.3/7). For example, Louisiana, Massachusetts, and Rhode Island collect and report data on ITE graduates (Rhode Island Department of Education, n.d.[39]; Louisiana Board of Regents, n.d.[40]), such as their persistence in teaching in public school and their students’ performance on state-wide examinations.

Establishing flexible accreditation systems and guidelines that focus on continuous improvement

Effective ITE programmes collect and analyse data not as a form of compliance but as part of internal improvement (Peck, Gallucci and Sloan, 2010[8]). These programmes use data and evidence in structured improvement processes – to identify areas for improvement, create and execute a plan informed by evidence to address those areas, then evaluate the impact of their actions (Toon, Jensen and Cooper, 2017[14]). A system encourages evidence use in all programmes by supporting and recognising ITP institutions that implement formal improvement processes and foster a culture of improvement that involves the ongoing collection and analysis of data and evidence.

Policy makers need to ensure that accreditation systems for ITE programmes have a clear focus on improvement processes and allow for flexibility. An accreditation system exists in almost all of the countries participating in the OECD ITP study: Australia, Japan, the Netherlands, Norway, the US and Wales. The Norwegian Agency for Quality Assurance in Education – an independent expert body under the Ministry of Education and Research – implements an evaluation and accreditation approach that focuses on continuous improvement, self-accreditation and building capacity, while also entailing tough consequences for non-compliance (Table 2.3/6). The Massachusetts accreditation system builds on the principle of continuous improvement, manifested in a regular review and reaccreditation of all programmes (see Section 2.2.2 and Table 2.3/8).

Australia has also made efforts to consolidate the multiple purposes of quality assurance by including continuous improvement, flexibility, diversity and innovation in its accreditation principles (Table 2.2 and Table 2.3/1).

Table 2.2. Principles for national accreditation of teacher education programmes in Australia

1. Impact

The accreditation process relies on evidence about the programme’s impact. Evidence of impact is drawn from both pre-service teacher performance and graduate outcomes.

2. Evidence-based

Evidence must underpin all elements of initial teacher education, from the design and delivery of programmes to the teaching practices taught within programmes. Evidence is the basis on which panels make accreditation recommendations.

3. Rigour

A relentless focus on rigour across all elements of the accreditation process is vital in assuring robust and nationally consistent decisions, as well as the quality of programmes and their graduates.

4. Continuous improvement

Accreditation contributes to the improvement of the quality of initial teacher education and consequently of teaching and learning in Australia. The ongoing cycle of review and reaccreditation will provide assurance of graduate teacher quality and build public confidence in the profession.

5. Flexibility, diversity and innovation

Accreditation encourages the capacity of providers to be innovative in the delivery of programmes to meet the diverse needs of students and the profession, as long as the programme can demonstrate a positive impact.

6. Partnerships

National accreditation is built around partnerships involving shared responsibilities and obligations among initial teacher education providers, education settings, teachers, employers and Authorities, as well as a shared commitment to improve initial teacher education and work in partnership to positively affect student learning and graduate outcomes.

7. Transparency

The accreditation process requires transparency across all elements of initial teacher education, from entrant selection to programme outcomes. This results in publicly available data that is valid and comparable, as well as clarity for pre-service teachers about what to expect from initial teacher education and, in turn, what is expected of them throughout their course.

8. Research

Accreditation generates and relies upon a strong research base that informs programme design and delivery, and informs the continual improvement of teacher education programmes by providers.

Source: Adapted from AITSL (2015[41]), Accreditation of Initial Teacher Education Programs: Standards and Procedures.

Accreditation is, however, not the only way to ensure continuous improvement in ITE. Softer measures can include, for example, guidelines and peer-learning processes. The Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) is preparing guidelines for an “integrative teacher curriculum reform” to help teachers meet the demands of the national school curriculum and adopt modern approaches to teaching, such as active learning (Table 2.3/2). In Norway, national teacher preparation guidelines are used by teacher educators to frame the teacher education curriculum and its delivery (Table 2.3/5).

Box 2.2. The Role of the Norwegian Agency for Quality Assurance in Education

The Norwegian Agency for Quality Assurance in Education (NOKUT) reviews institutions every eight years as part of the higher education quality assurance process. All evaluations and accreditations are conducted by expert panels appointed and organised by NOKUT, with tailored rules and composition depending on the kind of audit, accreditation or evaluation activity. When evaluating institutions, NOKUT reviews the institutions’ internal quality assurance work and culture and aims to strike a good balance between accountability and improvement. To this end, NOKUT can provide recommendations for how an institution should enhance the quality of its educational provision and quality assurance system, or sanction poor-performing institutions by rescinding accreditation for specific programmes, de-accrediting an entire institution or taking self-accreditation powers away from an institution (Table 2.3/6).

Supporting the use of evidence across the system through capability building, networks and convening

Policy makers can use various means to support the dissemination and use of evidence across the system. Evidence summaries and policy networks are two examples of how countries in the OECD ITP study support the dissemination and use of ITP evidence.

AITSL compiles and publishes evidence summaries on important topics for education professionals, including teacher educators. Summaries are available, for example, on attrition rates for early career teachers and on what early career teachers say about induction.

In the US, the Council of Chief State School Officers (CCSSO) – composed of the highest ranking education official in each state – provides a space where senior state policy makers can exchange ideas to improve education (Table 2.3/7). CCSSO convened a Network for Transforming Educator Preparation consisting of representatives from nine states. The network aimed to mobilise stakeholders, build a shared understanding of the system and the key issues to address, develop consensus around a common vision, design and align transformation strategies, and provide support to implement those strategies. The network focused on teacher certification policies and systems; programme approval policies, systems and standards; data systems to support continuous improvement; and stakeholder engagement. While support to the network concluded in 2017, the nine participating states are now sharing lessons learned to help other states prepare their teachers (CCSSO, 2017[42]).

Monitoring and evaluating ITP policy implementation

Regular data collection, monitoring and evaluation are key components of effective policy implementation (Viennet and Pont, 2017[43]). Data collected throughout the policy implementation process, for example, allow policy makers to update their policy or implementation strategy if needed, or to better tailor the implementation to local needs. Feedback loops are an important part of monitoring and evaluation – often the weakest link in the policy cycle and frequently skipped – and should involve a diverse set of stakeholders in the system (Burns, Köster and Fuster, 2016[3]).

The Dutch Ministry of Education, for example, conducts a survey of all newly qualified teachers, collects other information from schools and reviews this information with various stakeholders to identify national trends and make policy recommendations (Nusche et al., 2014[44]).

2.3.2. What can teacher education institutions and the teacher educator profession do?

Conducting large-scale research studies

There is not enough large-scale, longitudinal, cross-institutional research in teacher preparation (Wilson, Floden and Ferrini-Mundy, 2001[12]; Cochran-Smith et al., 2015[7]). Teacher education institutions and teacher educators can build this kind of evidence by participating in large research projects that span institutions, years and countries. Researchers in Norway and the United States, for example, are collaborating on a study on student teacher experience that involves five different programmes based in five different countries (Canrinus et al., 2017[45]).

Collecting, sharing and using evidence from ITP practice across institutions

Some teacher education institutions collaborate with each other to collect, share and use data and evidence for improvement. Deans for Impact in the US is currently developing a Common Indicators System (CIS) to gather evidence of teacher-candidate knowledge and skills, and programme performance across institutions (see Box 2.3 and Table 2.3/7). In another US example, the University of Michigan has established TeachingWorks to identify and share high-impact practices in teacher education. TeachingWorks collaborates with researchers, practitioners, policy makers, schools and teacher preparation providers across the US and offers professional development, training, seminars and consultations to support teacher educators (Table 2.3/9). In Japan, a consortium of four ITP institutions – the Centre to Support Partnership in the Advancement of Teacher Education – is working on a model ITE programme to share best practice for coursework and practical training across the system (Table 2.3/4).

Box 2.3. Deans for Impact’s Common Indicators System

Deans for Impact, established in 2014, is a relatively new organisation. It addresses core issues in ITP, such as the wide diversity of ITE programmes, the validity of data collected on ITE programmes and the low status of colleges of education. The organisation works on three major initiatives: empowering leaders through a year-long fellowship for deans of ITE programmes; gathering common evidence and data through its Common Indicators System; and influencing policy through research and advocacy.

A network of thirteen diverse ITP institutions, convened by Deans for Impact, is participating in a prototype of the system to gather common evidence and data on teacher candidates’ knowledge and skills and on programme performance. Data collected through the tool enable the institutions to engage in cross-institutional learning and contribute to the evidence base on teacher preparation (Table 2.3/7).

References

[41] AITSL (2015), Accreditation of Initial Teacher Education Programs: Standards and Procedures, https://www.aitsl.edu.au/docs/default-source/initial-teacher-education-resources/accreditation-of-ite-programs-in-australia.pdf (accessed on 30 December 2017).

[1] BERA (2014), Research and the teaching profession: Building the capacity for a self-improving education system. Final Report of the BERA-RSA Inquiry into the Role of Research in Teacher Education, British Educational Research Association, https://www.bera.ac.uk/wp-content/uploads/2013/12/BERA-RSA-Research-Teaching-Profession-FULL-REPORT-for-web.pdf.

[16] Boyd, D. et al. (2009), “Teacher Preparation and Student Achievement”, Educational Evaluation and Policy Analysis, Vol. 31/4, pp. 416-440, http://dx.doi.org/10.3102/0162373709353129.

[28] Burns, T. and F. Köster (eds.) (2016), Governing Education in a Complex World, Educational Research and Innovation, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264255364-en.

[3] Burns, T., F. Köster and M. Fuster (2016), Education Governance in Action: Lessons from Case Studies, Educational Research and Innovation, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264262829-en.

[45] Canrinus, E. et al. (2017), “Coherent teacher education programmes: Taking a student perspective”, Journal of Curriculum Studies, Vol. 49/3, pp. 313-333.

[42] CCSSO (2017), Transforming Educator Preparation: Lessons Learned from Leading States, Council of Chief State School Officers, http://www.ccsso.org (accessed on 5 February 2019).

[21] Clinton, J., R. Aston and J. Quach (2018), Promoting evidence uptake in schools: A review of the key features of research and evidence institutions, University of Melbourne, http://dx.doi.org/10.4225/49/5aa61c6c75a9e.

[9] Cochran-Smith, M. and A. Villegas (2015), “Framing teacher preparation research: An overview of the field, part I”, Journal of Teacher Education, Vol. 66/1, pp. 7-20.

[7] Cochran-Smith, M. et al. (2015), “Critiquing teacher preparation research: An overview of the field, part II”, Journal of Teacher Education, Vol. 66/2, pp. 109-121, https://doi.org/10.1177/0022487114558268.

[6] Cochran-Smith, M. and K. Zeichner (2005), Studying Teacher Education: The report of the AERA Panel on Research and Teacher Education, American Educational Research Association, published by Lawrence Erlbaum Associates, https://books.google.fr/books?hl=en&lr=&id=hbiLAgAAQBAJ&oi=fnd&pg=PP1&dq=Studying+Teacher+Education:+The+Report+of+the+AERA+Panel+on+Research+and+Teacher+Education.+Lawrence+Erlbaum+Associates.+&ots=kvcbq8M6DT&sig=hrAHfWD_o5TPEUKJAicby2UxLwE (accessed on 1 October 2018).

[25] Cordingley, P. (2015), “The contribution of research to teachers’ professional learning and development”, Oxford Review of Education, Vol. 41/2, pp. 234-252, http://dx.doi.org/10.1080/03054985.2015.1020105.

[26] Cordingley, P. and M. Bell (2012), Understanding What Enables High Quality Professional Learning, CUREE and Pearson School Improvement, http://www.curee.co.uk (accessed on 18 February 2018).

[13] Darling-Hammond, L. and A. Lieberman (2012), Teacher Education Around the World : Changing Policies and Practices, Routledge, Abingdon, https://www.routledge.com/Teacher-Education-Around-the-World-Changing-Policies-and-Practices/Darling-Hammond-LIEBERMAN/p/book/9780415577014 (accessed on 4 October 2018).

[20] Davies, H. (2007), “Academic advice to practitioners—the role and use of research-based evidence”, Public Money and Management, Vol. 27/4, pp. 232-235.

[19] Deeble, M. and T. Vaughan (2018), “An evidence broker for Australian schools”, Centre for Strategic Education, Vol. 150.

[5] du Plooy, L. et al. (2016), “Searching for research results to inform the design of an initial professional teacher education programme for the foundation phase: A systematic review”, South African Journal of Childhood Education, Vol. 6/1, pp. 1-8, http://dx.doi.org/10.4102/sajce.v6i1.285.

[30] European Commission/EACEA/Eurydice (2017), Support Mechanisms for Evidence-based Policy-Making in Education (Eurydice Report), Publications Office of the European Union.

[17] Gansle, K., G. Noell and J. Burns (2012), “Do Student Achievement Outcomes Differ Across Teacher Preparation Programs? An Analysis of Teacher Education in Louisiana”, Journal of Teacher Education, Vol. 63/5, pp. 304-317, http://dx.doi.org/10.1177/0022487112439894.

[23] González-Sancho, C. and S. Vincent-Lancrin (2016), “Transforming education by using a new generation of information systems”, Policy Futures in Education, Vol. 14/6, pp. 741-758, http://dx.doi.org/10.1177/1478210316649287.

[18] Koedel, C. et al. (2015), “Teacher Preparation Programs and Teacher Quality: Are There Real Differences Across Programs”, Education Finance and Policy, Vol. 10/4, pp. 508-534, http://dx.doi.org/10.1162/edfp_a_00172.

[10] König, J. et al. (2011), “General pedagogical knowledge of future middle school teachers: on the complex ecology of teacher education in the United States, Germany, and Taiwan”, Journal of Teacher Education, Vol. 62/2, pp. 188-201, http://dx.doi.org/10.1177/0022487110388664.

[11] König, J. et al. (2017), “Effects of opportunities to learn in teacher preparation on future teachers’ general pedagogical knowledge: Analyzing program characteristics and outcomes”, Studies in Educational Evaluation, pp. 122-133, http://dx.doi.org/10.1016/J.STUEDUC.2017.03.001.

[40] Louisiana Board of Regents (n.d.), Teacher Preparation Data Dashboards & Fact Book, https://regents.la.gov/divisions/planning-research-and-academic-affairs/academic-affairs/teacher-education-initiatives/teacher-preparation-data-dashboards-fact-book/ (accessed on 5 February 2019).

[35] MA DESE (2016), Educator Preparation: Program Approval Criteria List, http://www.doe.mass.edu/edprep/evaltool/2017CriteriaList.pdf.

[34] MA DESE (2016), Program Approval Guidelines, http://www.doe.mass.edu/edprep/ProgramApproval.pdf.

[33] Mandinach, E. and E. Gummer (eds.) (2018), Building Capacity and Commitment for Data Use in Teacher Education Programs, Routledge.

[44] Nusche, D. et al. (2014), OECD Reviews of Evaluation and Assessment in Education: Netherlands 2014, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264211940-en.

[22] OECD (2018), Innovation Strategy for Education and Training, http://www.oecd.org/education/ceri/innovationstrategyforeducationandtraining.htm (accessed on 5 December 2018).

[32] OECD (2013), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264190658-en.

[29] OECD (2008), Tertiary Education for the Knowledge Society: Volume 1 and Volume 2, OECD Reviews of Tertiary Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264046535-en.

[4] OECD (2007), Evidence in Education: Linking Research and Policy, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264033672-en.

[8] Peck, C., C. Gallucci and T. Sloan (2010), “Negotiating implementation of high-stakes performance assessment policies in teacher education: from compliance to inquiry”, Journal of Teacher Education, Vol. 61/5, pp. 451-463, http://dx.doi.org/10.1177/0022487109354520.

[39] Rhode Island Department of Education (n.d.), Rhode Island Educator Preparation Index, http://www3.ride.ri.gov/RIEdPrepIndex/ (accessed on 5 February 2019).

[2] Roberts-Hull, K., B. Jensen and S. Cooper (2015), A New Approach: Reforming Teacher Education, Learning First, Melbourne, Australia, http://www.learningfirst.org.au (accessed on 2 October 2018).

[36] Sonmark, K. et al. (2017), “Understanding teachers’ pedagogical knowledge: report on an international pilot study”, OECD Education Working Papers, No. 159, OECD Publishing, Paris, https://dx.doi.org/10.1787/43332ebd-en.

[24] Tatto, M. (2015), “The role of research in the policy and practice of quality teacher education: an international review”, Oxford Review of Education, Vol. 41/2, pp. 171-201, http://dx.doi.org/10.1080/03054985.2015.1017405.

[15] Tatto, M. et al. (2016), “The emergence of high-stakes accountability policies in teacher preparation: an examination of the U.S. Department of Education’s proposed regulations”, Education Policy Analysis Archives, Vol. 24/0, p. 21, http://dx.doi.org/10.14507/epaa.24.2322.

[38] TEMAG (2014), Action Now: Classroom Ready Teachers, Teacher Education Ministerial Advisory Group, https://docs.education.gov.au/system/files/doc/other/action_now_classroom_ready_teachers_print.pdf.

[31] TEMAG (2014), Action Now: Classroom Ready Teachers (Teacher Education Ministerial Advisory Group Final Report), Australian Department of Education, https://docs.education.gov.au/system/files/doc/other/action_now_classroom_ready_teachers_accessible.pdf.

[27] Timperley, H. et al. (2007), Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES], Ministry of Education, Wellington, New Zealand, http://educationcounts.edcentre.govt.nz/goto/BES (accessed on 11 October 2018).

[14] Toon, D., B. Jensen and S. Cooper (2017), Teaching Our Teachers: a Better Way - Continuous Improvement in Teacher Preparation, Learning First, Melbourne, http://www.learningfirst.com (accessed on 4 October 2018).

[43] Viennet, R. and B. Pont (2017), “Education policy implementation: A literature review and proposed framework”, OECD Education Working Papers, No. 162, OECD Publishing, Paris, https://dx.doi.org/10.1787/fc467a64-en.

[37] Welsh Government (2017), Connecting Research and Teacher Education: Quality Enhancement for ITE Partnerships, Education Directorate, Welsh Government, http://learning.gov.wales/resources/browse-all/connecting-research-and-teacher-education/?lang=en.

[12] Wilson, S., R. Floden and J. Ferrini-Mundy (2001), Teacher Preparation Research: Current Knowledge, Gaps, and Recommendations, Center for the Study of Teaching and Policy, Washington DC, https://www.education.uw.edu/ctp/content/teacher-preparation-research-current-knowledge-gaps-and-recommendations (accessed on 4 October 2018).

Note

1. en.oxforddictionaries.com/definition/evidence
