1. The rationale of the Study

V. Darleen Opfer

Teaching quality is the most important school-level variable in determining the success of an education system (OECD, 2005[1]; OECD, 2019[2]). Over many decades, a rich body of evidence has developed showing the significant and unparalleled influence that teachers have on students in schools (Darling-Hammond, 1999[3]; Day et al., 2006[4]; Hattie, 2008[5]; Hanushek, 2011[6]). Understanding the relationship between teaching and learning can provide insights to help raise student outcomes and to promote high-quality, impactful teaching practices that ensure every child has the learning tools needed to succeed in school and life.

Empirical research has long tried to understand how to improve student outcomes. The relationship between student learning and a host of school and home variables has been investigated, with debate centred for many decades on disentangling the effect that schools play in student achievement compared to students’ home environment and interaction with their peers (Coleman, 1966[7]; Good, Wiley and Florez, 2009[8]; Hanushek, 1972[9]; Muijs et al., 2014[10]; Murnane, 1975[11]; Reynolds et al., 2014[12]; Summers and Wolfe, 1977[13]). It has taken time for the central role of teachers to emerge (Good, Wiley and Florez, 2009[8]; Hanushek and Rivkin, 2010[14]; Rivkin, Hanushek and Kain, 2005[15]) and to shift the focus to the malleable characteristics of successful teaching practices.

A particularly important contribution to the debate has been the examination of teachers’ impacts on students’ test scores through so-called “value-added models” (Hanushek, 1972[9]; Rockoff, 2004[16]; Rivkin, Hanushek and Kain, 2005[15]; Kane and Staiger, 2012[17]; Kane and Staiger, 2008[18]). These models use statistical methods to measure changes over time in student achievement in standardised assessments, taking into consideration student characteristics and other variables that could influence student outcomes.

Value-added models have provided strong evidence that substantial variations in student outcomes are partly due to differences in teaching quality, with some teachers, so-called “high-quality teachers”, contributing substantially more to student achievement than others (Chetty, Friedman and Rockoff, 2014[19]; Hanushek, 1992[20]; Rockoff, 2004[16]). For example, they have found that a high-quality teacher may yield as much as 1.5 years of gain in student achievement in a single academic year compared with students taught by lower-quality teachers (Hanushek, 1992[20]; Hanushek and Rivkin, 2010[14]).

However, these models also have limitations. One limitation is that they do not fully or sufficiently capture the impact teachers have on students. Researchers have suggested that looking at teachers’ effects using only test scores may be too narrow and that the full contribution of teachers to students may be partly missed (Chamberlain, 2013[21]; Jackson, 2018[22]). For example, more recent research has considered how teaching is associated with students’ acquisition of valuable non-cognitive outcomes such as social-emotional competencies (Jackson, 2018[22]).

A further and even more important limitation is that value-added models do not explain the processes by which value was added (Goe and Stickler, 2008[23]). They indicate which teachers yielded higher-than-predicted increases in student achievement scores, but do not provide the education community with information on what to improve (Fenstermacher and Richardson, 2005[24]; Pianta and Hamre, 2009[25]). Finally, both educational researchers and statisticians continue to urge caution about the use of such measures, given ongoing concerns about their precision and validity.

Moreover, research has not been able to identify a single, clear and conclusive profile for a high-quality teacher. Teachers’ observable characteristics – education, experience and certification – have not proven to be strong predictors of student outcomes. Rivkin et al. (2005[15]), for example, find strong evidence that characteristics such as a master's degree seem to have no relationship with student outcomes. Even though experience has been widely shown to be important, studies also consistently find that the returns from additional years of experience diminish significantly after the first few years of teaching (Boonen et al., 2014[26]; Chetty, Friedman and Rockoff, 2014[19]; Clotfelter, Ladd and Vigdor, 2007[27]; Hanushek, 2011[6]; Rivkin, Hanushek and Kain, 2005[15]). Teacher certification is associated with higher student achievement (Carr, 2006[28]; Darling-Hammond and Bransford, 2005[29]), yet the relationship between the level of qualification and teaching is not simply cumulative. These limitations are compounded by the fact that transforming teachers’ attributes and characteristics remains, at best, difficult and costly.

Students enter education with a host of background characteristics that will shape their learning trajectory, but when they leave, some students will have higher learning outcomes than others because their schools, teachers and classroom experience have enhanced these learning trajectories in different ways (Chetty, Friedman and Rockoff, 2014[19]; Rivkin, Hanushek and Kain, 2005[15]). While earlier research failed to identify teacher attributes and characteristics which sufficiently explain this effect, an increasing body of research is looking into what occurs in the classroom, thus shifting the research question from why a particular teacher is a high-quality one to how teachers interact with students in high-quality classrooms.

The complex and context-based nature of teaching makes the understanding of what works in the classroom challenging. Teaching is dynamic and complex. A teacher must make a host of rapid decisions in the classroom, analysing and evaluating each specific learning episode and connecting their interpretation with their technical knowledge of how learning unfolds (OECD, 2019[30]). Teaching is never a linear process; instead, many teaching practices typically occur simultaneously, each of which is hard to disentangle and isolate individually (Leinhardt and Greeno, 1986[31]; Pollard, 2010[32]; Reynolds, 1999[33]).

The intrinsic complexity of teaching is further compounded by the context in which it takes place. Teaching is situated in a specific temporal, social and cultural context. In the classroom, the process of instruction can vary tremendously from day to day (Rowan and Correnti, 2009[34]; Schweig, Kaufman and Opfer, 2020[35]) and the quality of student-teacher interactions can vary within groups of students, from student to student (Reinholz and Shah, 2018[36]; Schweig, 2016[37]). Teachers need to accommodate the diversity of the class in terms of student learning pace, prior achievement and socio-economic background, among other factors. Similarly, their teaching choices need to align with a range of system-level factors such as curriculum content, instructional goals and pedagogical models (Guerriero, 2017[38]; Shulman, 1987[39]).

Efforts thus far to explain teaching have often rested upon indirect measures of teaching, such as questionnaires where teachers and students report on the presence or frequency of different teaching practices (Goldhaber, Gratz and Theobald, 2017[40]; Hill, Kapitula and Umland, 2011[41]). These have a distinct practical value in terms of implementation and cost-effectiveness. For example, large-scale surveys such as the Programme for International Student Assessment (PISA) or the OECD’s Teaching and Learning International Survey (TALIS) have allowed policymakers and researchers to develop a reasonable sense of what teaching practices are typically deployed in different classrooms (OECD, 2019[42]; OECD, 2019[2]).

A growing consensus on what constitutes ‘quality’ or ‘effective’ teaching has steadily emerged over the years (Ball and Forzani, 2011[43]; Kyriakides and Creemers, 2008[44]; Good and Lavigne, 2017[45]; Good, Wiley and Florez, 2009[8]). For instance, researchers have shown, using large-scale international assessments of student achievement and teacher reports of classroom practice, that in several countries there is a positive relationship between student achievement and teaching practices such as an orderly environment, clarity of instruction and formative assessment (Le Donné, Fraser and Bousquet, 2016[46]; Martin, 2013[47]; Kyriakides and Creemers, 2008[44]; Wang and Degol, 2016[48]). Similarly, meta-analyses have attempted to identify patterns in terms of what teaching practices may be consistently impactful and thus worth increasing in terms of frequency and quality (Hattie, 2008[5]).

Another line of inquiry to understand teaching has been analysing the frequency and quality of learning experiences that are presented to students in the classroom. Learning takes place through interactions between teachers and students, but also when there is interaction between students and the learning content. Such experiences, often referred to as students’ opportunity to learn, have been highlighted by studies as having large impacts on student achievement, both within and between countries (Cogan and Schmidt, 2015[49]; Kurz, 2011[50]; OECD, 2010[51]; Schmidt, Cogan and Houang, 2011[52]; Schmidt, Zoido and Cogan, 2014[53]). Furthermore, following measures developed in PISA, the presentation of content in the classroom has also been a valuable lens for examining equity in education (OECD, 2010[51]). Researchers have shown that unequal opportunities to learn subject matter may exacerbate achievement gaps between students, for instance (Kuger, 2016[54]; Patall, Cooper and Allen, 2010[55]; OECD, 2010[51]).

Nevertheless, indirect classroom measures can only go so far towards building a deep understanding of teaching and learning. Researchers’ efforts to generalise them conclusively across a range of educational contexts and settings have continued to encounter challenges, such as the lack of a shared technical language, the difficulty of finding an appropriate and consistent grain size when articulating practices, and questions over the contextual specificity of particular practices.

Additionally, indirect measures do not yield a complete picture of classroom teaching. Using teachers’ self-reports to measure teaching can be challenging because these reports frequently reflect responses that teachers consider socially desirable (Little, Goe and Bell, 2009[56]; Van de Vijver and He, 2014[57]; OECD, 2018[58]). It is also not possible to guarantee that different teachers and students will interpret the teaching practices described in questionnaires in the same way, particularly when working across cultures or contexts (Braeken and Blömeke, 2016[59]; Fischer, Praetorius and Klieme, 2019[60]; Scherer, Nilsen and Jansen, 2016[61]). Vieluf et al. (2012[62]), for example, have found that questionnaire scales were non-equivalent across some countries and settings. Goe and Stickler (2008[23]) also note that there may be variation in how some teaching practices are defined as a measure on paper, how teachers interpret them conceptually and how they are actually operationalised in classrooms. More generally, teachers find it difficult to talk about pedagogies, methods and practices (Mesiti and Clarke, 2017[63]; Pollard, 2010[32]). Teaching is a complex act that requires both conscious and unconscious actions, and thus offering a detailed and accurate account of the processes of teaching is not straightforward (OECD, 2019[2]).

Exploring direct measures of teaching has a lot of potential for building a more detailed understanding of teaching and learning. Researchers have drawn upon classroom observation as a measurement tool for decades, with a particular focus on specific teacher behaviours (Brophy, 1986[64]; Brophy and Good, 1986[65]; Cochran-Smith and Lytle, 1990[66]) and qualitative analysis of interactions (Gudmundsdottir, 1991[67]).

A particularly strong interest in recent years in the potential of observation to support teacher evaluations (Darling-Hammond et al., 2012[68]; Isoré, 2009[69]) and to facilitate deeper professional learning (Allen et al., 2011[70]; Archer et al., 2016[71]; Blazar, 2015[72]; Downer et al., 2012[73]; Guerriero, 2017[38]; OECD, 2019[30]) has led to a significant growth in research studies and the codification of new observational methodologies (Pianta and Hamre, 2009[25]; Taut and Rakoczy, 2016[74]).

Researchers have developed different frameworks that facilitate the observation of teaching and evaluation of its quality (Danielson, 2007[75]; Pianta and Hamre, 2009[25]; Praetorius et al., 2018[76]; Taut et al., 2016[77]). Empirical studies have taken these frameworks and investigated their effectiveness, with a reasonable consensus emerging that three broad dimensions covering classroom management, social-emotional support, and processes for engaging and supporting learners can be helpful lenses for understanding the quality of classroom teaching (Klieme, Pauli and Reusser, 2009[78]; Lipowsky et al., 2009[79]; Pianta and Hamre, 2009[80]; Tschannen-Moran and Woolfolk Hoy, 2001[81]).

One area of particular interest for researchers has been trying to develop standardised classroom observation protocols that can yield a more accurate and complete picture of teaching whilst avoiding other sources of variance. Driven by the widespread interest in observational measures for policy decisions, the validity, reliability and generalisability of different frameworks of teaching quality have been subject to much debate (Pianta and Hamre, 2009[80]). Whilst it is challenging to isolate the quality of teaching from other sources of variation, such as the observer, researchers have identified strategies that can help to obtain reliable measurements of teaching quality (Taut and Rakoczy, 2016[74]). In particular, the Measures of Effective Teaching (MET) project demonstrated that it was possible to use a combination of measures to identify more effective teaching (Kane et al., 2013[82]).

Researchers have drawn particular attention to the benefits of observation by video. Video recordings are detailed in what they capture but also flexible in terms of analysis, facilitating multiple viewings or slow-motion analysis, for instance (Fischer and Neumann, 2012[83]; Hiebert et al., 2003[84]; Janik and Seidel, 2009[85]; Tschida, 2017[86]). At the national level, several video studies have been conducted in different forms. A research study in Chile using lessons teachers had submitted for their teacher evaluations showed moderate effect sizes for correlations between observed variables and growth of student outcomes (Taut and Sun, 2014[87]). Norway implemented a video study linked to PISA, analysing lessons in reading, mathematics and science (Klette, 2009[88]), whilst several video studies have been implemented in Germany, including studies with a focus on specific units of content in mathematics and science (Klieme, Pauli and Reusser, 2009[89]; Seidel, Prenzel and Kobarg, 2005[90]). In the United States, video-based research has gained increasing attention as well, as exemplified by the aforementioned Measures of Effective Teaching (MET) project, which worked with over 3 000 teachers (Kane et al., 2013[82]; Bill & Melinda Gates Foundation, 2010[91]).

Nevertheless, looking directly into classrooms remains generally complex, costly and intrusive. Research methods to refine observation systems and to isolate teacher variation from other sources are hindered by how difficult it is to gain access to the classroom. While there is variation between countries, there is not an international culture of observation in education. Even with the advent of new technologies, filming the classroom remains costly and logistically challenging. Further, researchers must be sensitive to working with children and to concerns around data protection. The classroom space remains primarily unseen and thus unknown on an international scale.

Teaching materials (e.g. lesson plans, in-class assignments and student homework assignments) can provide a window into classroom practice without being as intrusive and labour-intensive as observation. There has been promising recent research on the possibilities of making reliable judgments about instructional quality in several subjects based on these materials (Borko, Stecher and Kuffner, 2007[92]; Martínez, Borko and Stecher, 2012[93]; Matsumura et al., 2002[94]; Matsumura et al., 2006[95]; Stein and Lane, 1996[96]).

If measuring something as complex and context-based as teaching is challenging, doing so through direct classroom measures at an international scale is even harder. There is evidence that a great deal of variation in teaching is likely to exist across countries (Hanushek, 2011[97]; Rozman and Klieme, 2017[98]; OECD, 2014[99]; OECD, 2016[100]; OECD, 2019[2]). For instance, there is considerable variation in the way formative assessment is implemented in mathematics classrooms and how it relates to teacher-directed and student-oriented instruction (Fischer, He. and Klieme, 2020[101]).

The most extensive effort to understand international variation in teaching through direct measures has been the Trends in International Mathematics and Science Study (TIMSS) Video Study. The initial TIMSS Video Study, conducted in 1995, was the first time teaching was investigated and compared across countries using video technology (Stigler et al., 1999[102]). The second study in 1999 built on the first, expanding to seven countries and working with randomly selected and nationally representative classrooms (Hiebert et al., 2003[84]).

The direct measures of classroom practice in the first and second TIMSS Video Studies gave researchers a unique window into the classroom. They allowed researchers to observe commonalities across different classrooms, such as the instructional organisation of the classroom for seatwork or discussions, as well as differences in terms of how time was allocated to new content or to reviewing content (Roth et al., 2006[103]). In turn, the TIMSS studies stimulated a rich body of research exploring patterns of teaching at a national and international level (Givvin et al., 2005[104]). Researchers have also debated different mechanisms that may exist behind such patterns, such as national cultures that produce “scripts” (Stigler and Hiebert, 2000[105]), which were questioned in later studies, or global dynamics that cut across countries and promote a degree of uniformity in teaching (LeTendre et al., 2001[106]).

Video methodology has also received some continued interest at an international level (Praetorius et al., 2019[107]). For instance, the Learner’s Perspective Study (LPS) combined video recordings with extensive interviews of teachers and students and pointed to considerable cross-country differences in mathematics teaching practices (Clarke, Emanuelsson and Jablonka, 2006[108]). The LPS also explored teaching practices, many of which are also considered in the present Study, with more depth than the TIMSS Video Study or other video studies. However, the LPS looked at a very small, select sample of mathematics classrooms and is thus unable to provide the valuable comparative breadth that TIMSS offered. Indeed, most studies that have drawn upon video measures tend to have the limitation of being either confined to one country or small in scale (Table 1.1).

How classroom teaching around the world varies remains largely unexplored. Looking outwards can, therefore, allow education systems to consider themselves in the light of classrooms elsewhere, showing what is possible and fostering a better understanding of how different teachers address similar problems. Most importantly, by providing an opportunity for policy makers and practitioners to look beyond the classroom practices that are evident, and thus to reflect on some of the paradigms and beliefs underlying them, international comparisons hold out the promise of facilitating educational improvement.

The OECD Global Teaching InSights: A video study of teaching report (hereafter referred to as “the Study” or “GTI”) aims to move the education community towards a more detailed and robust understanding of teaching and learning. The Study’s overarching goal and rationale is to trial new methodologies to deepen understanding of teaching and learning at an international scale. Concretely, the Study is designed to:

  • understand which aspects of teaching are related to student learning and student non-cognitive outcomes

  • observe and document how the teachers from participating countries and economies teach

  • explore how various teaching practices are inter-related, and how contextual aspects of teaching are related to student and teacher characteristics.

The Study covers eight countries and economies and involves around 700 teachers and 17 500 students from across these participating systems. The Study offers a window into the classrooms of Biobío, Metropolitana and Valparaíso (Chile), Colombia, England (United Kingdom), Germany*1, Kumagaya, Shizuoka and Toda (Japan), Madrid (Spain), Mexico and Shanghai (China). These eight countries/economies feature a rich variety of classroom settings, pedagogical traditions, system-level policies and student achievement levels, helping the Study build a deeper understanding of teaching at a global level.

This innovative OECD study is a significant international effort. It involves national experts in each participating system from different fields, including pedagogy, mathematics, survey methods and video observation. Moreover, an international consortium of research organisations has implemented the Study, led by the RAND Corporation, and including Educational Testing Service (ETS) and the Leibniz Institute for Research and Information in Education (DIPF). A Technical Advisory Group of 16 leading international experts in the measurement of teaching has also supported the work throughout its four-year span.

The Study serves as a unique opportunity for policymakers, researchers and teachers themselves to understand teaching within different countries, and compare patterns between them. It is able to describe classroom teaching in considerable detail across a range of settings thanks to its design and methodology, providing a rich learning opportunity for the education community at scale.

Through its detailed and multi-faceted analysis of teaching across these participating countries and economies, the Study is able to identify both common and differing patterns in teaching and learning. It offers a point of comparison for the findings from international surveys such as TALIS and PISA. The Study’s findings can stimulate an increasingly nuanced discussion around teaching practices and student outcomes among educators, policymakers, researchers and the general public.

The Study is not a global assessment of teachers or a ranking of countries’ teaching quality. Nor is it a comprehensive study of the “state of teaching”. Rather, the Study is focused on capturing, through new research methods, the richness, the complexity and the variety of teaching around the world to better understand teaching and learning. Its design and application of new ways of measuring teaching and learning will allow the Study to make a valuable contribution to research across multiple contexts at scale.

Opening up the “black box” of teaching demands innovative and pioneering methods to navigate its innate complexity. The Study therefore builds on the important work of previous video studies and, as illustrated in Table 1.1, at the same time also breaks new ground in its methodology.

A particularly important element of the Study is a pre/post design aimed to measure the impact of the teaching practices observed on student outcomes. International studies of teaching and learning such as the TIMSS Video Studies and the LPS were unable to connect teaching practices to student learning as they did not measure student outcomes (Table 1.1). Before and after the focal content is taught, the Study captures student outcome measures – including non-cognitive outcomes, such as students’ self-efficacy and interest in mathematics – and surveys both teachers and students on their contexts and perceptions.

The Study’s use of a focal content is also significant. A limitation of the TIMSS Video Studies was that teachers were free to follow any goal and curricula, meaning that content varied greatly and thus comprehending differences in teaching methods was far harder (Table 1.1). The Study focuses on a single common secondary mathematics topic (quadratic equations) to enhance the comparability across countries and its potential to capture the relationship between teaching and learning.

Notably, the Study also draws on multiple measures of teaching, such as the collection and scoring of classroom teaching materials as well as videos, to help yield a more rounded, detailed picture of teaching. In particular, common observation codes and common teaching material codes are used across participating countries/economies. Each participating country/economy was responsible for training raters through a train-the-trainer model and scoring videos and materials. In doing so, the Study showed that an observational study can be implemented at a truly international scale.

Chapter 2 provides further details on the design and methodological features of the Study. Overall, it contributes to the research community’s understanding of how to use multiple measures at an international level. It investigates the feasibility of various procedures for capturing teaching practices, such as video and teaching material-based observations, across a diverse range of countries and economies. In addition, the Study can provide important perspectives on the validity of self-reported data because it collects both survey and video data, offering a rich opportunity for triangulation.


[70] Allen, J. et al. (2011), “An Interaction-Based Approach to Enhancing Secondary School Instruction and Student Achievement”, Science, Vol. 333, pp. 1034-7, http://dx.doi.org/10.1126/science.1207998.

[71] Archer, J. et al. (2016), Better Feedback for Better Teaching: A Practical Guide to Improving Classroom Observations.

[43] Ball, D. and F. Forzani (2011), “Building a common core for learning to teach, and connecting professional learning to practice”, American Educator, Vol. 35/2, pp. 17-21, 38-39, https://files.eric.ed.gov/fulltext/EJ931211.pdf.

[91] Bill & Melinda Gates Foundation (2010), Learning About Teaching: Initial Findings from the Measures of Effective Teaching Project, https://docs.gatesfoundation.org/documents/preliminary-findings-research-paper.pdf.

[72] Blazar, D. (2015), “Effective teaching in elementary mathematics: Identifying classroom practices that support student achievement”, Economics of Education Review, Vol. 48, pp. 16-29.

[26] Boonen, A. et al. (2014), “The role of visual representation type, spatial ability, and reading comprehension in word problem solving: An item-level analysis in elementary school children.”, International Journal of Educational Research, Vol. 68, pp. 15-26, http://dx.doi.org/10.1016/j.ijer.2014.08.001.

[92] Borko, H., B. Stecher and K. Kuffner (2007), “Using artifacts to characterize reform-oriented instruction: The scoop notebook and rating guide”, in CSE Technical Report 707, National Center for Research on Evaluation, Standards, and Student Testing (CRESST), Los Angeles, CA., https://eric.ed.gov/?id=ED495853 (accessed on 4 November 2019).

[59] Braeken, J. and S. Blömeke (2016), “Comparing future teachers’ beliefs across countries: approximate measurement invariance with Bayesian elastic constraints for local item dependence and differential item functioning”, Assessment and Evaluation in Higher Education, Vol. 41/5, http://dx.doi.org/10.1080/02602938.2016.1161005.

[64] Brophy, J. (1986), “Teacher influences on student achievement”, American Psychologist, Vol. 41/10, pp. 1069–1077, https://doi.org/10.1037/0003-066X.41.10.1069.

[65] Brophy, J. and T. Good (1986), “Teacher behavior and student achievement”, in Wittrock, M. (ed.), Handbook of research on teaching, McMillan, New York, NY.

[28] Carr, W. (2006), “Education without theory”, British Journal of Educational Studies, Vol. 54/2, pp. 136–159, http://dx.doi.org/10.1111/j.1467-8527.2006.00344.x.

[21] Chamberlain, G. (2013), “Predictive effects of teachers and schools on test scores, college attendance, and earnings”, Proceedings of the National Academy of Sciences, Vol. 110/43, pp. 17176-17182.

[19] Chetty, R., J. Friedman and J. Rockoff (2014), “Measuring the Impacts of Teachers I: Evaluating Bias in Teacher Value-Added Estimates”, American Economic Review, Vol. 104/9, pp. 2593-2632.

[108] Clarke, D., J. Emanuelsson and E. Jablonka (eds.) (2006), Making Connections: Comparing Mathematics Classrooms Around the World.

[27] Clotfelter, C., H. Ladd and J. Vigdor (2007), “Teacher Credentials and Student Achievement in High School: A Cross-Subject Analysis with Student Fixed Effects”, Journal of Human Resources, Vol. 45, http://dx.doi.org/10.1353/jhr.2010.0023.

[66] Cochran-Smith, M. and S. Lytle (1990), “Research on Teaching and Teacher Research: The Issues That Divide”, Educational Researcher, Vol. 19/2-11, http://dx.doi.org/10.2307/1176596.

[49] Cogan, L. and W. Schmidt (2015), The Concept of Opportunity to Learn (OTL) in International Comparisons of Education, http://dx.doi.org/10.1007/978-3-319-10121-7.

[7] Coleman, J. (1966), Equality of educational opportunity, U. S. Government Printing Office, Washington DC.

[75] Danielson, C. (2007), “Enhancing professional practice: A framework for teaching”, in Association for Supervision and Curriculum Development.

[3] Darling-Hammond, L. (1999), Teacher quality and student achievement, Centre for the study of teaching and policy, University of Washington.

[68] Darling-Hammond, L. et al. (2012), “Evaluating Teacher Evaluation”, Phi Delta Kappan, Vol. 93/6, pp. 8-15, https://doi.org/10.1177/003172171209300603.

[29] Darling-Hammond, L. and J. Bransford (2005), Preparing teachers for a changing world: What teachers should learn and be able to do, John Wiley & Sons., San Francisco, CA.

[4] Day, C. et al. (2006), “Variations in the work and lives of teachers: relative and relational effectiveness”, Teachers and Teaching, Vol. 12/2, pp. 169-192, http://dx.doi.org/10.1080/13450600500467381.

[73] Downer, J. et al. (2012), “Observations of teacher-child interactions in classrooms serving Latinos and dual language learners: Applicability of the Classroom Assessment Scoring System in diverse settings.”, Early Childhood Research Quarterly, Vol. 27/21-32, http://dx.doi.org/10.1016/j.ecresq.2011.07.005.

[24] Fenstermacher, G. and V. Richardson (2005), “On Making Determinations of Quality in Teaching”, The Teachers College Record, Vol. 107, pp. 186–213, http://dx.doi.org/10.1111/j.1467-9620.2005.00462.x.

[83] Fischer, H. and K. Neumann (2012), “Video analysis as a tool for understanding science instruction”, in Science Education Research and Practice in Europe: Retrospective and Prospective, http://dx.doi.org/10.1007/978-94-6091-900-8.

[101] Fischer, J., J. He. and E. Klieme (2020), “The Structure of Teaching Practices across Countries: A Combination of Factor Analysis and Network Analysis”, Studies in Educational Evaluation, Vol. 65, https://doi.org/10.1016/j.stueduc.2020.100861.

[60] Fischer, J., A. Praetorius and E. Klieme (2019), “The impact of linguistic similarity on cross-cultural comparability of students’ perceptions of teaching quality”, Educational Assessment, Evaluation and Accountability, Vol. 2/31, pp. 201-220.

[104] Givvin, K. et al. (2005), “Are There National Patterns of Teaching? Evidence from the TIMSS 1999 Video Study.”, Comparative Education Review, Vol. 49, pp. 311-343, http://dx.doi.org/10.1086/430260.

[23] Goe, L. and L. Stickler (2008), Teacher Quality and Student Achievement: Making the Most of Recent Research, https://files.eric.ed.gov/fulltext/ED520769.pdf.

[40] Goldhaber, D., T. Gratz and R. Theobald (2017), “What’s in a teacher test? Assessing the relationship between teacher licensure test scores and student STEM achievement and course-taking”, Economics of Education Review, Elsevier, Vol. 61/C, pp. 112-129.

[45] Good, T. and A. Lavigne (2017), Looking in Classrooms, Routledge, New York, https://doi.org/10.4324/9781315627519.

[8] Good, T., C. Wiley and I. Florez (2009), “Effective Teaching: an Emerging Synthesis”, in Saha L.J. and Dworkin A.G. (eds.), International Handbook of Research on Teachers and Teaching, Springer, Boston, MA, https://doi.org/10.1007/978-0-387-73317-3_51.

[67] Gudmundsdottir, S. (1991), “Ways of seeing are ways of knowing. The pedagogical content knowledge of an expert English teacher”, Journal of Curriculum Studies, Vol. 23/5, pp. 409-421, http://dx.doi.org/10.1080/0022027910230503.

[38] Guerriero, S. (ed.) (2017), Pedagogical Knowledge and the Changing Nature of the Teaching Profession, Educational Research and Innovation, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264270695-en.

[97] Hanushek, E. (2011), “The economic value of higher teacher quality”, Economics of Education Review, Vol. 30/3, pp. 466-479.

[6] Hanushek, E. (2011), “The economic value of higher teacher quality”, Economics of Education Review, Vol. 30, pp. 466-479, https://hanushek.stanford.edu/sites/default/files/publications/Hanushek%202011%20EER%2030%283%29.pdf.

[20] Hanushek, E. (1992), “The trade-off between child quantity and quality”, Journal of Political Economy, Vol. 100/1, pp. 84-117.

[9] Hanushek, E. (1972), Education and Race: An Analysis of the Educational Production Process, D.C. Heath, Lexington, MA.

[14] Hanushek, E. and S. Rivkin (2010), “Generalizations about Using Value-Added Measures of Teacher Quality”, American Economic Review, Vol. 100/2, pp. 267-271.

[5] Hattie, J. (2008), Visible learning: A synthesis of over 800 meta-analyses relating to achievement, http://dx.doi.org/10.4324/9780203887332.

[84] Hiebert, J. et al. (2003), “Teaching mathematics in seven countries: Results from the TIMSS 1999 video study.”, Education Statistics Quarterly, Vol. 5/1, pp. 7-15, Washington, DC.

[41] Hill, H., L. Kapitula and K. Umland (2011), “A Validity Argument Approach to Evaluating Teacher Value-Added Scores”, American Educational Research Journal, Vol. 48/3, pp. 794–831, https://doi.org/10.3102/0002831210387916.

[69] Isoré, M. (2009), “Teacher Evaluation: Current Practices in OECD Countries and a Literature Review”, OECD Education Working Papers, No. 23, OECD Publishing, Paris, https://dx.doi.org/10.1787/223283631428.

[22] Jackson, C. (2018), “What Do Test Scores Miss? The Importance of Teacher Effects on Non-Test Score Outcomes”, Journal of Political Economy, Vol. 126/5, pp. 2072-2107.

[85] Janik, T. and T. Seidel (eds.) (2009), The Power of Video Studies in Investigating Teaching and Learning in the Classroom, Waxmann, Münster.

[82] Kane, T. et al. (2013), Have We Identified Effective Teachers? Validating Measures of Effective Teaching Using Random Assignment, Bill & Melinda Gates Foundation.

[17] Kane, T. and D. Staiger (2012), “Gathering Feedback for Teaching: Combining High-Quality Observations with Student Surveys and Achievement Gains. Research Paper. MET Project.”, Bill & Melinda Gates Foundation.

[18] Kane, T. and D. Staiger (2008), “Estimating Teacher Impacts on Student Achievement: An Experimental Evaluation”, No. 14607, The National Bureau of Economic Research, http://dx.doi.org/10.3386/w14607.

[88] Klette, K. (2009), “Challenges in Strategies for Complexity Reduction in Video Studies. Experiences from the PISA+ Study: A video study of teaching and learning in Norway”, in The Power of Video Studies in Investigating Teaching and Learning in the Classroom, Waxmann, Münster.

[89] Klieme, E., C. Pauli and K. Reusser (2009), “The Pythagoras study: Investigating effects of teaching and learning in Swiss and German mathematics classrooms”, in Janik, T. (ed.), The Power of Video Studies in Investigating Teaching and Learning in the Classroom, Waxmann, Münster, Germany, https://www.recherche-portal.ch/zbz/action/display.do?fn=display&vid=ZAD&doc=ebi01_prod006351395.

[78] Klieme, E., C. Pauli and K. Reusser (2009), “The Pythagoras Study: Investigating effects of teaching and learning in Swiss and German mathematics classrooms”, in Janik, T. and T. Seidel (eds.), The power of video studies in investigating teaching and learning in the classroom, Waxmann, Münster.

[54] Kuger, S. (2016), “Curriculum and learning time in international school achievement studies”, in Kuger, S. et al. (eds.), Assessing contexts of learning. An international perspective.

[50] Kurz, A. (2011), “Access to what should be taught and will be tested: Students’ opportunity to learn the intended curriculum”, in Elliott, S. et al. (eds.), Handbook of accessible achievement tests for all students: Bridging the gaps between research, practice, and policy, Springer, New York, NY.

[44] Kyriakides, L. and B. Creemers (2008), “Using a multidimensional approach to measure the impact of classroom-level factors upon student achievement: a study testing the validity of the dynamic model”, School Effectiveness and School Improvement, Vol. 19/2, pp. 183-205, http://dx.doi.org/10.1080/09243450802047873.

[46] Le Donné, N., P. Fraser and G. Bousquet (2016), “Teaching Strategies for Instructional Quality: Insights from the TALIS-PISA Link Data”, OECD Education Working Papers, No. 148, OECD Publishing, Paris, https://dx.doi.org/10.1787/5jln1hlsr0lr-en.

[31] Leinhardt, G. and J. Greeno (1986), “The Cognitive Skill of Teaching”, Journal of Educational Psychology, Vol. 78/2.

[106] LeTendre, G. et al. (2001), “Teachers’ Work: Institutional Isomorphism and Cultural Variation in the U.S., Germany, and Japan”, Educational Researcher, Vol. 30/6, pp. 3-15, https://doi.org/10.3102/0013189X030006003.

[79] Lipowsky, F. et al. (2009), “Quality of geometry instruction and its short-term impact on students’ understanding of the Pythagorean Theorem”, Learning and Instruction, http://dx.doi.org/10.1016/j.learninstruc.2008.11.001.

[56] Little, O., L. Goe and C. Bell (2009), “A practical guide to evaluating teacher effectiveness”, National Comprehensive Center for Teacher Quality, Washington, DC., https://eric.ed.gov/?id=ED543776 (accessed on 4 November 2019).

[93] Martínez, J., H. Borko and B. Stecher (2012), “Measuring instructional practice in science using classroom artifacts: Lessons learned from two validation studies”, Journal of Research in Science Teaching, Vol. 49/1, http://dx.doi.org/10.1002/tea.20447.

[47] Martin, M. (2013), “Effective schools in reading, mathematics, and science at the fourth grade”, in Martin, M. and I. Mullis (eds.), TIMSS and PIRLS 2011: Relationships Among Reading, Mathematics, and Science Achievement at the Fourth Grade - Implications for Early Learning, TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College and International Association for the Evaluation of Educational Achievement (IEA), https://timssandpirls.bc.edu/timsspirls2011/downloads/TP11_Chapter_3.pdf.

[94] Matsumura, L. et al. (2002), “Teacher feedback, writing assignment quality, and third-grade students’ revision in lower-and higher-achieving urban schools”, The Elementary School Journal, Vol. 103/1, pp. 3-25, http://dx.doi.org/10.1086/499713.

[95] Matsumura, L. et al. (2006), Measuring Reading Comprehension and Mathematics Instruction in Urban Middle Schools: A Pilot Study of the Instructional Quality Assessment. CSE Technical Report 681, https://eric.ed.gov/?id=ED492885.

[63] Mesiti, C. and D. Clarke (2017), “The international lexicon project: Giving a name to what we do.”, in R. Seah, M. (ed.), Proceedings of the Mathematical Association of Victoria annual conference, Brunswick, Australia.

[10] Muijs, D. et al. (2014), “State of the art – teacher effectiveness and professional learning”, School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, Vol. 25/2, pp. 231-256.

[11] Murnane, R. (1975), The impact of school resources on the learning of inner city children, Ballinger Pub. Co, Cambridge, Mass.

[30] OECD (2019), A Flying Start: Improving Initial Teacher Preparation Systems, OECD Publishing, Paris, https://dx.doi.org/10.1787/cf74e549-en.

[42] OECD (2019), PISA 2018 Results (Volume I): What Students Know and Can Do, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/5f07c754-en.

[2] OECD (2019), TALIS 2018 Results (Volume I): Teachers and School Leaders as Lifelong Learners, TALIS, OECD Publishing, Paris, https://dx.doi.org/10.1787/1d0bc92a-en.

[58] OECD (2018), “The Future We Want”, The Future of Education and Skills 2030, https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf.

[100] OECD (2016), PISA 2015 Results (Volume II): Policies and Practices for Successful Schools, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264267510-en.

[99] OECD (2014), New Insights from TALIS 2013: Teaching and Learning in Primary and Upper Secondary Education, TALIS, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264226319-en.

[51] OECD (2010), PISA 2009 Results: Overcoming Social Background: Equity in Learning Opportunities and Outcomes (Volume II), PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264091504-en.

[1] OECD (2005), Teachers Matter: Attracting, Developing and Retaining Effective Teachers, OECD Publishing, Paris, http://dx.doi.org/10.1787/19901496.

[55] Patall, E., H. Cooper and A. Allen (2010), “Extending the school day or school year: A systematic review of research (1985-2009)”, Review of Educational Research, http://dx.doi.org/10.3102/0034654310377086.

[25] Pianta, R. and B. Hamre (2009), “Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity”, Educational Researcher, Vol. 38/2, http://dx.doi.org/10.3102/0013189X09332374.

[80] Pianta, R. and B. Hamre (2009), “Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity”, Educational Researcher, Vol. 38/2, pp. 109-119, http://dx.doi.org/10.3102/0013189X09332374.

[32] Pollard, A. (2010), Professionalism and Pedagogy: a contemporary opportunity, Teaching and Learning Research Programme, London.

[76] Praetorius, A. et al. (2018), “Generic dimensions of teaching quality”, ZDM Mathematics Education, Vol. 50/3, pp. 407-426.

[107] Praetorius, A. et al. (2019), “Methodological challenges in conducting international research on teaching quality using standardized observations”, in Suter, L., E. Smith and B. Denman (eds.), The SAGE Handbook of Comparative Studies in Education, SAGE Publishing, Thousand Oaks, CA.

[36] Reinholz, D. and N. Shah (2018), “Equity analytics: A methodological approach for quantifying participation patterns in mathematics classroom discourse”, Journal for Research in Mathematics Education, Vol. 49/2.

[12] Reynolds, D. et al. (2014), “Educational effectiveness research (EER): a state-of-the-art review”, School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, Vol. 25/2, pp. 197-230.

[33] Reynolds, M. (1999), “Critical Reflection and Management Education: Rehabilitating Less Hierarchical Approaches”, Journal of Management Education, Vol. 23/5, pp. 537–553, https://doi.org/10.1177/105256299902300506.

[15] Rivkin, S., E. Hanushek and J. Kain (2005), “Teachers, Schools, and Academic Achievement”, Econometrica, Vol. 73/2, pp. 417-458.

[16] Rockoff, J. (2004), “The Impact of Individual Teachers on Student Achievement: Evidence from Panel Data”, American Economic Review, Vol. 94/2, pp. 247-252, http://dx.doi.org/10.1257/0002828041302244.

[103] Roth, K. et al. (2006), Teaching Science in Five Countries: Results From the TIMSS 1999 Video Study Statistical Analysis Report, US Department of Education, National Center for Education Statistics.

[34] Rowan, B. and R. Correnti (2009), “Studying reading instruction with teacher logs: Lessons from the study of instructional improvement”, Educational Researcher, Vol. 38/2, http://dx.doi.org/10.3102/0013189X09332375.

[98] Rozman, M. and E. Klieme (2017), Exploring cross-national changes in instructional practices: Evidence from four cycles of TIMSS, Policy Brief Vol. 13, International Association for the Evaluation of Educational Achievement, Amsterdam.

[61] Scherer, R., T. Nilsen and M. Jansen (2016), “Evaluating Individual Students’ Perceptions of Instructional Quality: An Investigation of their Factor Structure, Measurement Invariance, and Relations to Educational Outcomes”, Frontiers in Psychology, Vol. 7, http://dx.doi.org/10.3389/fpsyg.2016.00110.

[52] Schmidt, W., L. Cogan and R. Houang (2011), “The Role of Opportunity to Learn in Teacher Preparation: An International Context”, Journal of Teacher Education, Vol. 62/2, pp. 138-153, https://doi.org/10.1177/0022487110391987.

[53] Schmidt, W., P. Zoido and L. Cogan (2014), “Schooling Matters: Opportunity to Learn in PISA 2012”, OECD Education Working Papers, No. 95, OECD Publishing, Paris, https://dx.doi.org/10.1787/5k3v0hldmchl-en.

[37] Schweig, J. (2016), “Moving beyond means: revealing features of the learning environment by investigating the consensus among student ratings”, Learning Environments Research, Vol. 19/3, http://dx.doi.org/10.1007/s10984-016-9216-7.

[35] Schweig, J., J. Kaufman and D. Opfer (2020), “Day by Day: Investigating Variation in Elementary Mathematics Instruction That Supports the Common Core.”, Educational Researcher, Vol. 49/3, pp. 176–187, http://dx.doi.org/10.3102/0013189X20909812.

[90] Seidel, T., M. Prenzel and M. Kobarg (2005), How to run a video study: Technical report of the IPN Video Study.

[39] Shulman, L. (1987), “Knowledge and Teaching: Foundations of the New Reform”, Harvard Educational Review, Vol. 57/1, http://dx.doi.org/10.17763/haer.57.1.j463w79r56455411.

[96] Stein, M. and S. Lane (1996), “Instructional tasks and the development of student capacity to think and reason: An analysis of the relationship between teaching and learning in a reform mathematics project”, Educational Research and Evaluation, Vol. 2/1, pp. 50-80, http://dx.doi.org/10.1080/1380361960020103.

[102] Stigler, J. et al. (1999), The TIMSS Videotape Classroom Study: Methods and Findings from an Exploratory Research Project on Eighth-Grade Mathematics Instruction in Germany, Japan, and the United States, National Center for Education Statistics (NCES), Washington DC.

[105] Stigler, J. and J. Hiebert (2000), “The Teaching Gap: Best Ideas from the World’s Teachers for Improving Education in the Classroom”, Journal of Curriculum Studies, http://dx.doi.org/10.1080/00220270050167215.

[13] Summers, A. and B. Wolfe (1977), “Do Schools Make a Difference?”, The American Economic Review, Vol. 67/4, pp. 639-652.

[74] Taut, S. and K. Rakoczy (2016), “Observing instructional quality in the context of school evaluation”, Learning and Instruction, Vol. 46, pp. 45-60, http://dx.doi.org/10.1016/j.learninstruc.2016.08.003.

[87] Taut, S. and Y. Sun (2014), “The Development and Implementation of a National, Standards-based, Multi-method Teacher Performance Assessment System in Chile”, Education Policy Analysis Archives, Vol. 22/71, http://dx.doi.org/10.14507/epaa.v22n71.2014.

[77] Taut, S. et al. (2016), “Teacher performance and student learning: Linking evidence from two national assessment programs.”, Assessment in Education: Principles, Policy & Practice, Vol. 23/1, pp. 53-76, http://dx.doi.org/10.1080/0969594X.2014.961406.

[81] Tschannen-Moran, M. and A. Woolfolk Hoy (2001), “Teacher Efficacy: Capturing an Elusive Construct”, Teaching and Teacher Education, Vol. 17, pp. 783-805, http://dx.doi.org/10.1016/S0742-051X(01)00036-1.

[86] Tschida, C. (2017), “Partnering Principal and Teacher Candidates: A Virtual Coaching Model”, Journal of Technology and Teacher Education, Vol. 25/4, pp. 495-519.

[57] Van de Vijver, F. and J. He (2014), “Report on Social Desirability, Midpoint and Extreme Responding in TALIS 2013”, OECD Education Working Papers, No. 107, OECD Publishing, Paris, https://dx.doi.org/10.1787/5jxswcfwt76h-en.

[62] Vieluf, S. et al. (2012), Profiles of Teaching Practices and Insights and Innovation: Results from TALIS 2008.

[48] Wang, M. and J. Degol (2016), “School Climate: a Review of the Construct, Measurement, and Impact on Student Outcomes”, Educational Psychology Review, Vol. 28, pp. 315-352, http://dx.doi.org/10.1007/s10648-015-9319-1.


1. Germany* refers to a convenience sample of volunteer schools.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2020

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.