# 5. Instruction

Courtney A. Bell
Jonathan Schweig
Katherine E. Castellano
Eckhard Klieme
Brian M. Stecher

Instruction is critical to students’ academic learning and the development of positive dispositions toward mathematics and themselves. Instruction is made up of the practices teachers use to help their students learn the curriculum, building from students’ current understandings of mathematics to the instructional goals in the quadratic equations unit.1 Using instructional practices to help students learn is at the heart of the teaching job.

Instruction is a rich domain of teaching. In this study it was broken into sub-domain practices related to subject matter (e.g. clarity of learning goals, types of mathematical representations, connections and patterns), the depth of students’ cognitive engagement (e.g. level of cognitive demand, understanding of rationales, opportunities to practice, metacognition), teachers’ assessments of and responses to students’ understandings (e.g. eliciting of student thinking, providing feedback to students on their thinking and aligning instruction to that thinking), and students’ participation in the classroom’s discourse (e.g. who speaks, the nature of questions, the type of explanations).

This chapter reports findings on the quality of instruction from four perspectives: observations of lessons, analyses of lesson materials (including lesson plans, visual aids, handouts and student assignments), teacher questionnaires and student questionnaires.

To measure the overall quality of instruction, observers holistically rated the quality of 2-3 teaching practices for each of the four sub-domains of instruction on a four-point scale. Those practices included:

• Subject matter: explicit connections and explicit patterns and generalisations.

• Assessment of and response to students: eliciting student thinking, teacher feedback and aligning instruction to present student thinking.

• Cognitive engagement of students: engagement in cognitively demanding subject matter, multiple approaches to/perspectives on reasoning, understanding of subject matter procedures and processes.

• Discourse: nature of discourse, questioning and explanations.

Ratings were aggregated into an overall instructional sub-domain score on the 1-4 scale (see Annex 5.A, Tables 5.A.1 and 5.A.2 for details about each sub-domain’s aggregated scores).

With a range of mean scores from 1.74 to 2.24 on a four-point scale, the overall quality of instruction observed was lower than that of classroom management (range of 3.49 to 3.81) and social-emotional support (range of 2.62 to 3.26) in all countries/economies (Figure 5.1). Instruction mean domain scores suggest there is room for improvement in some classrooms in every country/economy (K-S-T [Japan] [2.24], England [UK] [2.23], Germany* [2.20], Shanghai [China] [2.15], Madrid [Spain] [1.96], Mexico [1.92], B-M-V [Chile] [1.85] and Colombia [1.74]) (see Annex 5.A, Tables 5.A.1 and 5.A.2).

In B-M-V (Chile), England (UK), Germany*, Madrid (Spain) and Shanghai (China), the overall quality of instruction was roughly similar across classrooms, as many classroom means were concentrated within a narrow score range (Figure 5.1). In contrast, Colombia, K-S-T (Japan) and Mexico had flatter distributions, suggesting larger differences in the quality of instruction across classrooms within these countries/economies.

A closer look at the practices that compose the overall instructional domain provides a richer and more complex picture. See Chapter 2 for the specific practices and aggregation strategy used for the instruction domain and sub-domains. Classrooms’ mean instructional sub-domain scores are plotted in Figure 5.2 in a density curve, ranging from the lowest score of 1 to the highest score of 4. The lowest rated practices were quality of subject matter (ranging from 1.36 to 1.97) and cognitive engagement (ranging from 1.48 to 2.07) in every country/economy. The mean classroom scores for discourse (ranging from 1.85 to 2.54) and assessment of and response to students’ thinking (ranging from 2.11 to 2.70) were around the middle of the four-point scale but still well below those for classroom management and below the social-emotional support scores.2

The differences among sub-domain scores within countries yield important insights. Figure 5.2 shows that some classrooms’ mean scores were above the midpoint of the four-point scale, most frequently for the discourse and/or assessment practices. This means that observers in every country saw some classrooms with moderate or strong practices in at least one sub-domain.

It is worthwhile to highlight that the quality of practices observed in Shanghai (China) classrooms was quite similar. Classrooms were concentrated around the mean score for each sub-domain in Figure 5.2. This suggests that teachers and students engaged in similar practices at a similar level of quality no matter the classroom in which they participated.

This is not the case in other countries/economies as revealed by the variety of shapes of density curves in Figure 5.2. In England (UK), many classrooms were quite similar to one another on assessment and discourse, but differed considerably on cognitive engagement and quality of subject matter. In Colombia and Germany*, classrooms were similar to one another on quality of subject matter and discourse respectively, but differed on the other two sub-domains. The lack of clear patterns suggests significant differences within countries.

The foundation of learning quadratic equations is the quality of the mathematics itself. It is possible to teach most mathematics topics as a series of disconnected facts, theorems, procedures and processes. When this occurs, students may simply memorise how to do problems or memorise which procedures and processes are and are not allowed. But in this portrayal of mathematics, few students will have the opportunity to see the patterns and connections within and across topics in mathematics. Thus, a foundational aspect of high-quality instruction is providing students with access to high-quality mathematics that has explicit learning goals and systematic opportunities to learn how to solve mathematical problems and how to think mathematically.

High-quality subject matter is not solely the province of teachers and students. Each country/economy’s curriculum plays an important role in the nature and quality of quadratic equations taught and learnt (see Chapter 6 for more details). Depending on the country/economy’s policies, teachers may have some freedom to modify and/or enrich the curriculum provided to them.

The quality of subject matter sub-domain is made up of two components – explicit connections and explicit patterns and generalisations. Observers also rated the explicitness of learning goals, the degree to which students made real-world connections to quadratic equations and the use of different types of representations (e.g. pictures, graphs, etc.). As Figure 5.2 shows, observers found that in general the quality of subject matter was the lowest or second lowest scoring sub-domain, with mean classroom scores lower than 2 on the four-point scale for all countries/economies (Shanghai [China] [1.97], England [UK] [1.76], K-S-T [Japan] [1.70], Madrid [Spain] [1.53], Mexico [1.53], Germany* [1.51], Colombia [1.41] and B-M-V [Chile] [1.36]). This section provides additional detail regarding the specific teaching practices measured in the quality of subject matter sub-domain.

It is helpful for students to know what they are expected to learn. Teachers can make explicit the activity or lesson’s learning goals. If a teacher states “Today you will learn to solve quadratic equations using factorisation”, the teacher is describing a specific learning goal for the lesson. If a teacher says, “Today we will work together in pairs on the homework from yesterday” or “Today we will continue our study of quadratic equations”, she is specifying an activity, but not a specific mathematics learning goal. A learning activity and a learning goal are not the same thing. Activities are used to help students achieve goals.

Observers documented whether there was no explicit learning goal or activity stated (score 1), an explicit statement of a topic or activity (score 2) or an explicit statement of a learning goal (score 3). This was noted for every 8 minutes of the lesson. Once a learning activity or goal was made explicit – whenever this occurred in the lesson – the observer assigned the same rating as the previous segment unless there was a new explicitly stated activity or goal. Classroom averages were calculated as described in Chapter 2.
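The carry-forward rule described above can be sketched as follows. This is a minimal illustration, not the study’s actual scoring code; the function name and segment data are invented for the example.

```python
# Carry-forward scoring sketch: each 8-minute segment keeps the previous
# segment's rating unless a new activity or goal is explicitly stated.

def score_segments(statements):
    """statements: per-segment value of 0 (nothing new stated),
    2 (explicit topic/activity stated) or 3 (explicit learning goal stated)."""
    scores = []
    current = 1  # score 1: no explicit activity or goal stated yet
    for stated in statements:
        if stated:              # a new explicit statement resets the rating
            current = stated
        scores.append(current)  # otherwise the previous rating carries forward
    return scores

# A goal stated in segment 1, then an activity stated in segment 4:
print(score_segments([3, 0, 0, 2, 0]))  # → [3, 3, 3, 2, 2]
```

Classroom averages would then be computed over these segment scores, as described in Chapter 2.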

All teachers in B-M-V (Chile), K-S-T (Japan) and Shanghai (China), and over 90% of teachers in the other countries/economies stated an explicit learning activity or goal, scores 2 and 3 (Figure 5.3). There was wide variation across countries in the degree to which an explicit learning goal – not just an activity – was stated. In B-M-V (Chile) and Shanghai (China), more than three-quarters of teachers identified a clear learning goal, while in Colombia, K-S-T (Japan) and Madrid (Spain), less than one-quarter did (see Annex 5.A, Tables 5.A.5 and 5.A.6 for additional details).

Teachers can also make learning goals explicit in teaching materials (including lesson plans, visual aids, handouts and student assignments). These were assigned a score of 3 if they provided evidence of explicit learning goals, a score of 2 if they only provided evidence of topics to be covered or activities to be undertaken, and a score of 1 if neither of these goal statements were present. Classroom averages were calculated as described in Chapter 2. Additionally, classroom maximum scores were calculated as the highest score achieved for each teacher on any given lesson.
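The two material-level summaries described above – the classroom average over collected lessons and the classroom maximum (the highest score on any single lesson) – can be sketched as below. The scores are invented for illustration.

```python
# Sketch of the two classroom-level summaries of teaching-material scores:
# the mean across lessons and the maximum score achieved on any lesson.

def classroom_summary(lesson_scores):
    avg = sum(lesson_scores) / len(lesson_scores)
    return avg, max(lesson_scores)

# Four lessons: goals explicit (3) in one lesson, topics only (2) in the rest
print(classroom_summary([2, 3, 2, 2]))  # → (2.25, 3)
```

The maximum indicates whether a teacher made goals explicit in at least one lesson, even if not consistently.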

In Shanghai (China), teachers had specified the learning goals for nearly all of the materials collected (see Annex 5.B, Teaching Materials [TM] Table 5.B.1). In other countries/economies, most teaching materials only indicated topics to be covered or activities to be undertaken, with mean scores ranging from 1.7 in Madrid (Spain) to 2.4 in Mexico. Teachers did not consistently specify learning goals in materials across lessons. The majority of teachers in B-M-V (Chile), Colombia, England (UK), Mexico and Shanghai (China) made learning goals explicit in at least one of the four lessons for which teaching materials were collected. The majority of teachers in K-S-T (Japan) made learning goals explicit in at least one of the two lessons for which teaching materials were collected (see Annex 5.B, TM Table 5.B.2).

Within the limitations of the curricula, teachers made choices about what mathematical representations would be taught and how they would be taught.3 Of course, equations can be expected to be the most prominent type of representation used in teaching the focal topic of quadratic equations. However, other types of representations can be used either sequentially or at the same time, to enhance students’ learning opportunities (see Chapter 6).

For example, when helping students understand the meaning of an expression such as 2x², the teacher might ask students to graph y = 2x². In that case, the graph and the equation would have been used at the same time. To determine what representations were available to students, observers identified the representations that occurred in each eight-minute segment of the lesson. The number of segments containing a given representation – e.g. a graph, table, etc. – was counted and divided by the total number of segments in the lesson, yielding the percentage of lesson segments in which that type of representation was used. These percentages were then averaged over observers and lessons to obtain, for each classroom, the percentage of segments in which a representation was used on average.
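The segment-counting calculation can be sketched as follows. The function names and segment data are hypothetical; each segment is represented here as the set of representation types observed in it.

```python
# Sketch of the representation-use measure: count the 8-minute segments
# containing a representation type, divide by the total number of segments,
# then average the resulting percentages over lessons (and observers).

def pct_segments_with(rep, lesson_segments):
    """lesson_segments: list of sets of representation types, one per segment."""
    hits = sum(1 for seg in lesson_segments if rep in seg)
    return 100.0 * hits / len(lesson_segments)

def classroom_average(rep, lessons):
    """lessons: list of lessons, each a list of per-segment representation sets."""
    pcts = [pct_segments_with(rep, lesson) for lesson in lessons]
    return sum(pcts) / len(pcts)

lessons = [
    [{"equation"}, {"equation", "graph"}, {"equation"}, {"graph"}],   # 50%
    [{"equation"}, {"equation"}, {"table"}, {"equation", "graph"}],   # 25%
]
print(classroom_average("graph", lessons))  # → 37.5
```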

Equations were present in over 88% of lesson segments of the average classroom in all participating countries/economies (see Annex 5.A, Table 5.A.7).

Equations were used with graphs in a smaller proportion of lesson segments in Germany* (34%), Colombia (28%), England (UK) (22%), B-M-V (Chile) (15%) and Mexico (11%). Graphs were almost never used during quadratic equations lessons in K-S-T (Japan) (1%) and Shanghai (China) (0%).

Drawings, representations that provide information required to solve the problem, were used in a considerable proportion of lesson segments in K-S-T (Japan) (38%) and Mexico (24%), but more rarely used in other countries/economies (ranging from 5 to 14% of lesson segments).

Other types of representations such as objects (physical objects such as a replica Eiffel Tower, a sheet of paper or dice) and tables (an arrangement of numbers, signs, or words that exhibits a set of facts or relations in a definite, compact and comprehensive form) were rarely used in any participating country/economy.

When students are able to make clear, explicit and specific connections between different aspects of the mathematics, they can develop deeper understanding (Stigler and Hiebert, 1999[1]). This does not mean simply learning different types of representations (e.g. equations, graphs, drawings) but rather, it means learning how to connect representations and understand how they are related. Other kinds of connections are also important, such as connections between types of equations and ways of solving those equations. An example is students making a connection between the nature of a, b and c in the quadratic equation ax² + bx + c = 0 and the solution method that is most efficient (e.g. factoring, completing the square). Observers noted connections between two representations or any other two aspects of the mathematics, such as a connection between a mathematical rule and an equation or between two procedures.
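To make the kind of coefficient-to-method connection described above concrete, consider a hypothetical worked example (not drawn from the observed lessons). When c = 6 = 2 × 3 and b = −5 = −(2 + 3), factorisation is quick; change c to 3 and the quadratic formula becomes the natural choice:

```latex
x^2 - 5x + 6 = (x - 2)(x - 3) = 0 \;\Rightarrow\; x = 2 \ \text{or}\ x = 3,
\qquad\text{whereas}\qquad
x^2 - 5x + 3 = 0 \;\Rightarrow\; x = \frac{5 \pm \sqrt{5^2 - 4 \cdot 3}}{2} = \frac{5 \pm \sqrt{13}}{2}.
```

Noticing when and why each method applies is precisely the sort of connection observers were looking for.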

Observers rated the holistic quality of connections – their explicitness, the number present and the nature of the connections on a 1 to 4 scale, aggregated as described in Chapter 2. A rating of 1 indicated that the observers saw no connections and a rating of 4 represented frequent conceptually complex connections.

Connections were not common, and were often implicit and/or vague when present (Figure 5.4). The average country/economy’s classroom scored between 1.5 and 2 (England [UK] [1.93], K-S-T [Japan] [1.91], Germany* [1.77], Mexico [1.76], Madrid [Spain] [1.72], Colombia [1.57], B-M-V [Chile] [1.54] and Shanghai [China] [1.52]). Scores in this range mean that, on average, lessons contained no connections or only a single, often vague, connection. This is noteworthy given that multiple representations were frequently present and there are many potential connections between aspects of the mathematics.

Only teachers and students of a few classrooms in Mexico (14%), England (UK) (13%), Madrid (Spain) (6%) and K-S-T (Japan) (6%) were observed making at least two instructional connections between ideas, procedures, perspectives, representations or equations.

Teaching materials can provide opportunities to make connections between the different representations (including textual, symbolic, visual and physical representations) of the same mathematical idea. The scoring of teaching materials was based on whether students were responsible for making these connections (score 3), whether the teacher or teaching materials made the connections explicit (e.g. an activity provides both an equation and a graph of the same quadratic function) (score 2), or whether each mathematical idea was represented in a single manner (e.g. a table, graph, diagram or equation) or multiple representations were present but never connected (score 1). Classroom averages were calculated as described in Chapter 2. Classroom maximum scores were calculated as the highest score achieved for each teacher on any given lesson.

Teachers tended to use materials that make explicit connections among multiple representations as indicated by mean scores around 2 in Germany* (2.40), K-S-T (Japan) (2.28), Mexico (2.28), England (UK) (2.14), B-M-V (Chile) (1.86), Colombia (1.81) and Madrid (Spain) (1.65) (see Annex 5.B, TM Table 5.B.3). In Shanghai (China) (1.29), students did not typically have opportunities to explore connections among representations; most teaching materials had mathematical ideas represented in a single manner or multiple representations were unconnected.

There was variation in the extent to which teaching materials asked students to make connections between different representations within countries/economies and across lessons. The majority of teachers in B-M-V (Chile), Colombia, England (UK), Germany*, K-S-T (Japan), Madrid (Spain) and Mexico used materials with evidence that students were provided opportunities to make connections by themselves in at least one lesson (see Annex 5.B, TM Table 5.B.4).

Connections between mathematical representations might be easier to foster in the context of certain subtopics. For example, in Germany*, though opportunities to make connections are rare, these opportunities show strong positive associations with the exploration of graphs and functions (r = 0.68). When mathematical connections were made in Shanghai (China), these were often related to the use of real-world situations to translate verbal, textual, or figural information (other than graphs of functions) into equations or appropriate mathematical expressions (r = 0.88).

Opportunities for students to connect classroom mathematics to real-world contexts are important, as they can enhance conceptual understanding (Blum, 2002[2]; De Lange, 1996[3]; Gravemeijer et al., 2000[4]; Perry and Dockett, 2015[5]; Boaler, 2000[6]). The ability to apply mathematics to problems arising in everyday life and the workplace is considered fundamental to mathematical literacy and proficiency (CCSSI, 2010[7]; National Council Of Teachers Of Mathematics, 2000[8]; OECD, 2019[9]). All mathematics topics have some potential connections to students’ lives, but the ease with which these may be made will vary by the age of the students and the specific topic at hand. Quadratic equations, for example, can be used to maximise profit, to find the area of a room or a field, to model the motion of a ball or another projectile, or to find the stopping distance of a car traveling at a given velocity.
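The projectile example can be made concrete with a short, hypothetical worked calculation (the numbers are invented for illustration, not drawn from the observed lessons). Setting the height h(t) = −(g/2)t² + v₀t + h₀ to zero and applying the quadratic formula gives the landing time:

```python
import math

# Hypothetical projectile example: height h(t) = -(g/2)*t^2 + v0*t + h0.
# Solving h(t) = 0 with the quadratic formula gives the landing time.

def landing_time(v0, h0, g=9.8):
    a, b, c = -g / 2, v0, h0
    disc = b * b - 4 * a * c               # discriminant b^2 - 4ac
    t1 = (-b + math.sqrt(disc)) / (2 * a)
    t2 = (-b - math.sqrt(disc)) / (2 * a)
    return max(t1, t2)                     # the physically meaningful, later root

# A ball thrown upward at 14.7 m/s from 19.6 m above the ground:
print(round(landing_time(14.7, 19.6), 2))  # → 4.0
```

The negative root (here t = −1) is discarded because it lies before the throw.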

There are many real-world connections to quadratic functions, but fewer to quadratic equations. Even though some countries/economies taught equations and functions in an integrated way, students across countries had few or no opportunities to work with real-world connections in the lessons observed (see Global Teaching InSights Technical Report).

Teaching materials showed differences across participating countries/economies in fostering connections to real-world contexts outside of mathematics, such as classic “word problems”, real data analysis (e.g. finding the maximum height achieved by a projectile), “hands-on” representations using props or physical materials (e.g. using circles of varying diameter to measure pasta) and modelling real phenomena (e.g. modelling the elevation angle of the sun over the course of a day). The highest rating (3) occurred if students were responsible for making the connection and the lowest (1) if there was no evidence of real-world connections. Classroom averages were calculated as described in Chapter 2.

The teaching materials tended to show that either the real-world connections were made explicit to the students, or that the real-world context of a task might not have been necessary to carry out the activity, on average in Mexico (1.92), K-S-T (Japan) (1.91), Germany* (1.73) and B-M-V (Chile) (1.51) (see Annex 5.B, TM Table 5.B.22). In contrast, teaching materials contained on average no connections between mathematics and real-world contexts, including students’ life experience, in England (UK) (1.31), Shanghai (China) (1.36), Colombia (1.45) and Madrid (Spain) (1.45). In spite of this, the majority of teachers in all countries/economies used materials that had some connections between mathematics and real-world contexts in at least one lesson (see Annex 5.B, TM Table 5.B.23).

There was variation in the extent to which teaching materials had evidence of students playing a role in making real-world connections (see Annex 5.B, TM Table 5.B.23). Students in the majority of classrooms in Germany*, K-S-T (Japan), Madrid (Spain) and Mexico were asked to figure out how to connect or apply mathematics to a real-world context, develop a mathematical model that is appropriate to a situation or explain mathematical relationships using contextual information in at least one lesson.

One way for students to develop strong understandings of mathematics is for them to look for and notice patterns in, or to make generalisations across, aspects of the mathematics. A teacher might have the class work on six problems, share their solution strategies (e.g. completing the square, using the quadratic formula, or factorising) and then ask them to try to notice when and why factoring works on some problems but not on others.

Observers holistically rated the quality of patterns and generalisations on a four-point scale (see Chapter 2 for aggregation method), noting who was noticing the pattern or generalisation (i.e. the teacher or the students) and whether the pattern concerned surface or deeper features of the mathematics. A 1 indicates students were not asked to look for patterns or make generalisations and a 4 indicates that either the students or teachers looked for mathematical patterns about deeper features of the mathematics or made explicit generalisations about deeper mathematics. The example of noticing why factoring works on some problems and not others would be considered a deeper feature of the mathematics.

Neither teachers nor students engaged substantially in looking for patterns and making generalisations in the mathematical work on average in England (UK) (1.59), K-S-T (Japan) (1.49), Madrid (Spain) (1.34), Mexico (1.29), Germany* (1.25), Colombia (1.24) and B-M-V (Chile) (1.18). In contrast, their peers in Shanghai (China) (2.41) did look for patterns in surface aspects of mathematics and developed generalisations focused on nomenclature or algorithmic processes (Figure 5.5).

It is worth noting that the type of patterns or generalisations that can best support the development of strong understandings of mathematics was not observed in any classroom: patterns in the deeper features of the mathematics, with generalisations focused on foundational concepts, ideas and/or definitions made in a clear and explicit way. On the contrary, in most classrooms in B-M-V (Chile) (96%), Germany* (88%), Colombia (87%), Madrid (Spain) (74%) and Mexico (71%), students engaged with almost no patterns or generalisations at all (scores below 1.5). Fewer classrooms in K-S-T (Japan) (58%) and England (UK) (41%), and no Shanghai (China) classrooms (0%), fell in this lowest category.

Teaching materials can also provide students with opportunities to notice explicit patterns and make generalisations. Evidence that students were asked to use patterns or repeated reasoning to understand quantitative relationships, make conjectures, make predictions, or derive general methods or rules, was rated more highly (score 3) than evidence that patterns and generalisations were already presented to students in the teaching materials (score 2). No attention to patterns and generalisations was rated 1. Classroom averages were calculated as described in Chapter 2 and classroom maximum scores were calculated as the highest score achieved for each teacher on any submitted lesson.

There were generally few opportunities to notice patterns and make generalisations in teaching materials. The majority of teachers in B-M-V (Chile), Colombia and Madrid (Spain) had no such opportunities in any lessons (see Annex 5.B, TM Table 5.B.14). In K-S-T (Japan), the majority of teachers typically used materials where patterns and generalisations were presented to students, and in Shanghai (China), nine out of ten teachers typically provided such opportunities (see Annex 5.B, TM Table 5.B.13). The majority of the teachers in England (UK), Germany* and Mexico used materials where patterns and generalisations were presented to students in at least one lesson (see Annex 5.B, TM Table 5.B.14), although such opportunities were not systematic as indicated by mean scores across lessons of 1.32, 1.44 and 1.33 respectively.

Much has been written about the importance of engaging students cognitively. When students are cognitively engaged, they tend to be more interested (Fauth et al., 2014[10]) and their learning outcomes improve (Baumert et al., 2010[11]; Lipowsky et al., 2009[12]). While researchers agree that students must be cognitively engaged for them to develop strong understandings of mathematics, it is often challenging to discern from observations of students’ behaviour whether or not students are cognitively engaged. By the time students reach secondary school, they have generally learnt how to behave, so methods for discerning students’ cognitive engagement must go beyond observing whether students are moving their pencils or listening attentively to whoever is speaking. Observers drew on two aspects of classroom interactions to judge students’ cognitive engagement – the nature of the mathematics at hand and the actions students took with that mathematics.

As Figure 5.2 shows, the cognitive demand on students was low – generally as low as the sub-domain scores on the quality of subject matter. It is sensible that if subject matter provides few learning opportunities to develop understanding, the frequency and depth of students’ cognitive engagement may also be low. Country/economy mean student cognitive engagement scores were as follows: K-S-T (Japan) (2.07), England (UK) (1.86), Germany* (1.81), Shanghai (China) (1.71), Mexico (1.61), Madrid (Spain) (1.53), Colombia (1.49) and B-M-V (Chile) (1.48). This section provides additional detail regarding the specific teaching practices measured in the cognitive engagement sub-domain.

One way to engage students cognitively is to ask them to carry out cognitively demanding work. Many common tasks in the teaching of quadratic equations can be taught in ways requiring little cognitive demand and would not be rated highly on this scale. For example, the task below would be rated low because it does not require analysis, creation or evaluation.

Observers rated the frequency of cognitively demanding work holistically on a four-point scale, aggregated as described in Chapter 2. Tasks that required thoughtful analysis, creation or evaluation were considered more cognitively demanding. If work with such tasks was frequent, observers assigned a rating of 4; if such work did not occur, observers assigned that segment of the lesson a rating of 1. Figure 5.7 shows the percentage of classrooms that had a mean cognitive engagement score in the specified score range.

Students were not given cognitively challenging work on a regular basis. The average level of cognitive demand observed was generally low, with mean scores ranging from 2.52 in K-S-T (Japan) to 1.36 in B-M-V (Chile) (see Annex 5.A, Tables 5.A.8 and 5.A.9). On average, students only engaged occasionally in analysis, creation or evaluation work that is cognitively rich and requires thoughtfulness (K-S-T [Japan] [2.52], England [UK] [1.96], Germany* [1.93], Mexico [1.83], Madrid [Spain] [1.63], Shanghai [China] [1.63], Colombia [1.50] and B-M-V [Chile] [1.36]).

There was also variation within countries/economies. In K-S-T (Japan), students in 53% of classrooms were sometimes or frequently asked to do cognitively demanding tasks, whilst 46% were only asked to do so occasionally (Figure 5.7). Students in a small number of classrooms in Germany* (12%), Mexico (9%) and England (UK) (8%) were also sometimes or frequently cognitively engaged.

Students in varying proportions of classrooms were never engaged in cognitively demanding tasks: B-M-V (Chile) (71%), Colombia (51%), Madrid (Spain) (37%), Mexico (25%), Germany* (24%) and England (UK) (9%). This low level of cognitive challenge raises the need to consider whether and how students can engage with the subject matter in more cognitively demanding ways, more frequently. The example of the Japanese classrooms regularly engaging students in cognitively demanding work suggests that it is possible to have consistently high levels of cognitive demand with the topic of quadratic equations.

Cognitive demand can be thought of as the cognitive requirements – the richness or difficulty – of a task. Cognitive engagement – how students and teachers interact cognitively with the task – is a different aspect of teaching and learning. Students may be asked to work on cognitively demanding tasks; however, when the tasks are enacted, students’ cognitive engagement may not match the level of demand in the task (Henningsen and Stein, 1997[13]). One way to improve cognitive engagement is to require students to engage in sense-making strategies when carrying out tasks of any level of cognitive demand. Sense-making strategies support students in understanding why procedures and processes are logical. On average, students did this type of sense-making more often than they engaged with cognitively demanding tasks.

Observers rated on a four-point scale the degree to which students tried to understand the rationale for procedures and processes – one way to make sense of (or understand) mathematical procedures or processes – by stating the goals or properties of the procedure, stating why a procedure or solution is correct or incorrect, or visually designating the steps or elements of a procedure. A score of 1 meant that students never engaged with the rationales for procedures and processes, while a score of 4 meant that they frequently did so. Chapter 2 details the aggregation method used to create classroom mean scores.

On average, students either did not engage with, or only occasionally tried to understand, the rationale for procedures and processes, as shown by mean scores around 2 in K-S-T (Japan) (2.22), England (UK) (2.18), Germany* (2.03), Shanghai (China) (1.95), B-M-V (Chile) (1.87), Mexico (1.81), Colombia (1.76) and Madrid (Spain) (1.69) (see Annex 5.A, Tables 5.A.8 and 5.A.9). This means that students may frequently follow procedures and processes without really understanding the underlying mathematics.

Understanding rationales for procedures and processes is not enough; students need to be able to apply them. When students go over the same computations and processes through repetitive practice, they develop the fluency to perform them quickly and effectively.

Teachers can devote lesson time to provide students with opportunities to practice. This can be useful when teachers want to make sure that students practice certain procedures or computations, support those who might not be able to complete them on their own, or want to build upon them to explain their rationales further. To estimate the time spent practicing in a lesson, observers noted the amount of time spent practicing in each eight-minute segment of the lesson: more than half of the time observed (rating 3), half of the time or less (rating 2) or no time at all (rating 1). The highest rating across all lesson segments was then assigned as the lesson score. A rating of 3 can thus be interpreted as at least four to eight minutes per lesson spent practicing; because the highest segment score was taken, there could have been more practice time.
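The take-the-maximum rule for lesson-level practice scores can be sketched as below. The function name and ratings are invented for illustration.

```python
# Sketch of the lesson-level practice score: each 8-minute segment is rated
# 1 (no practice), 2 (up to half the segment) or 3 (more than half), and the
# lesson is assigned the highest rating across its segments.

def lesson_practice_score(segment_ratings):
    return max(segment_ratings)

# A lesson whose third segment spent more than half its time on practice:
print(lesson_practice_score([1, 2, 3, 1]))  # → 3
```

Because only the maximum is kept, a lesson score of 3 guarantees at least one practice-heavy segment but says nothing about how many.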

On average, students had at least four minutes of practice time per lesson in England (UK) (2.96), Shanghai (China) (2.69), B-M-V (Chile) (2.59), Germany* (2.45), Madrid (Spain) (2.41), Mexico (2.25), K-S-T (Japan) (2.05) and Colombia (1.96).

There are important differences, however, both between and within countries/economies, as shown in Figure 5.8. In England (UK), virtually all classrooms had between four and eight minutes of practice time in every lesson. In contrast, students in some classrooms spent less than four minutes practicing per lesson on average (Colombia [24%], K-S-T [Japan] [17%], Mexico [7%] and B-M-V [Chile] [4%]).

The teaching materials can also provide an indication of the number of opportunities students had to practice and develop fluency with specific skills or procedures (refer to Chapter 2 for details on the calculation of classroom averages). These materials include opportunities to practice outside the lesson through homework assignments. The number of opportunities to practice is divided into three levels: more than five opportunities (score 3), one to five (score 2), or no opportunities at all (score 1).

Teaching materials contain evidence of opportunities to practice in all eight countries/economies (see Annex 5.B, TM Table 5.B.17). The mean scores suggest that materials typically provide at least one opportunity to practice (all means are at or above 2). In B-M-V (Chile), England (UK), Germany*, Madrid (Spain) and Shanghai (China), the majority of teachers typically provide more than five opportunities to practice.

Another way to engage students cognitively is to ask them to use more than one approach to solving the same problem. Sometimes teachers do this sequentially – in one lesson they teach how to factorise a quadratic equation and in the next lesson they teach how to use the quadratic formula to solve the same problem. Other times teachers may ask students to use multiple approaches in the same lesson or to choose an efficient problem-solving approach from the approaches they have learnt.

Observers noted the nature and frequency of solving problems with multiple approaches holistically on a four-point scale (see Chapter 2 for the aggregation method). Score 1 indicates that students did not use multiple solution strategies, while score 4 means that at least one student used more than two procedures or reasoning approaches to solve the problem or type of problem in some depth.

Across countries/economies, students in the average classroom used multiple solution strategies rarely, if at all, with means at the bottom of the four-point scale, between 1.2 and 1.5 (see Annex 5.A, Tables 5.A.8 and 5.A.9). There was a brief use of a second solution strategy by at least one student in one in ten classrooms in England (UK), Germany*, K-S-T (Japan) and Shanghai (China).

Students can also be asked to use more than one method or approach to complete a single mathematical task or activity and to understand the relationships between different methods in the teaching materials. A score of 1 was assigned if students were not asked to use more than one method or approach, a score of 2 was assigned if students could use different methods and a score of 3 was assigned if students were required to use more than one method or to compare and contrast different approaches. In addition to the classroom averages (described in Chapter 2), classroom maximum scores were calculated as the highest score achieved for each teacher on any given lesson.
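As a hypothetical sketch (the function name and data shape are assumed for illustration, not taken from the Study's actual pipeline), the two classroom-level summaries described above – the average over a classroom's rated lessons and the maximum over those lessons – could be computed as:

```python
# Illustrative aggregation of per-lesson teaching-material ratings (1-3 scale)
# into the two classroom-level summaries described in the text: the classroom
# average across lessons and the classroom maximum (highest score on any lesson).

def classroom_summary(lesson_scores):
    """Return (mean, max) of a classroom's per-lesson material ratings."""
    if not lesson_scores:
        raise ValueError("no rated lessons for this classroom")
    return sum(lesson_scores) / len(lesson_scores), max(lesson_scores)

# A classroom whose materials mostly allowed a single method (score 1) but
# required multiple approaches in one lesson (score 3):
mean_score, max_score = classroom_summary([1, 1, 2, 3])
print(mean_score, max_score)  # → 1.75 3
```

The two summaries answer different questions: the mean describes typical practice across the unit, while the maximum captures whether a requirement ever appeared in at least one lesson.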

When considering all teaching materials collected, students typically were not required to use more than one mathematical approach to complete a task or to compare and contrast different approaches, as shown by mean scores ranging from 1 to 2 in all participating countries/economies (see Annex 5.B, TM Table 5.B.18). In at least one lesson, however, students in the majority of classrooms were able to use multiple approaches and the majority of those in Shanghai (China) were required to do so (see Annex 5.B, TM Table 5.B.19).

The education research community has long studied and advocated for specific teaching strategies that support cognitive engagement. But do teachers use these strategies in the classroom? The Study measured the presence of three of these well-researched strategies in classrooms: idea-based discussion, metacognitive prompting and self-assessment. In short, there was little use of any of these three strategies in lessons.

Objective-focused discussions – the type of discussion strategy measured in the Study – centre on a learning objective and make extensive use of students’ ideas publicly. Such discussions have been used to deepen students’ understandings and improve cognitive engagement (Leinhardt and Steele, 2005[14]). There were very few or no classrooms where the teacher guided the discussion towards a learning goal and students did much of the talking based on their own ideas (see Annex 5.A, Tables 5.A.10 and 5.A.11).

Metacognition is often understood to refer to a broad category of thinking in which students think about their thinking. There are many teaching strategies that can engage students in this type of thought; asking students to find the step in which they made a mistake or selecting the most efficient solution strategy from among alternative strategies are just two of the many strategies observers noted.

The focus was on whether and how frequently students reflected on their own thinking. A teacher might ask “Can you look at question 7 and think about why you did what you did?”, “Why did you think it was a good idea to use the complete-the-square approach?” or “Can you think about which approach would be easier – using the quadratic formula or factorising?” Teachers asking the question was not enough to earn a high score; students had to respond to the question. This is a narrowly defined version of metacognition.

While clearly important for students to practice, this type of metacognition was observed in very few classrooms across the eight countries/economies (see Annex 5.A, Tables 5.A.12 and 5.A.13). The exception to this was that one in five teachers in K-S-T (Japan) asked their students to engage in metacognition briefly and/or superficially at least once in a lesson.

Self-assessment is another well-researched strategy that was largely unseen. In all countries/economies, the typical teacher used materials that did not ask students to assess their own understanding of the content or to reflect on their own learning (see Annex 5.B, TM Table 5.B.20).

Yet, one in five teachers in K-S-T (Japan) and Shanghai (China) used teaching materials with self-assessment opportunities regularly. When such opportunities were recorded, they occurred in broad and general ways (e.g. “how confident are you about your understanding of this topic?”) rather than in specific terms that were connected to the lesson content.

Technology holds the promise of engaging students in higher order thinking and cognitively demanding tasks. Yet, more technology use does not necessarily translate into better teaching and learning. Studies suggest it matters less whether students and teachers use technology and more what students and teachers do with the technology (Fishman and Dede, 2016[15]).

Students almost never used technology (i.e. non-graphing calculators, graphing calculators, tablets, cell phones and computers) in the average classroom of the Study. Students did not use technology during the lessons observed in four out of five classrooms in all countries/economies but Germany* (56%) (see Annex 5.A, Table 5.A.14). Software designed to carry out simulations, instructional games or interactive graphing tasks was similarly rare: only a small proportion of classrooms in Germany* (20%), Colombia (10%), B-M-V (Chile) (8%), England (UK) (8%) and Madrid (Spain) (8%) ever used such software (see Annex 5.A, Tables 5.A.12 and 5.A.13). It may be the case that such software was unavailable in other classrooms.

Most teachers used technology, but frequently for communication purposes – as a whiteboard or blackboard might be used. Only a small proportion of classrooms used technology not just for communication purposes, but also to help students develop conceptual understanding (England [UK] [21%], Mexico [19%], Colombia [14%], K-S-T [Japan] [12%], Madrid [Spain] [11%], Germany* [10%], Shanghai [China] [8%] and B-M-V [Chile] [5%]) (Table 5.1).

Similarly, there was very little evidence of the use of technology as a tool to develop understanding of mathematical concepts and relationships in the teaching materials (see Annex 5.B, TM Table 5.B.21). In B-M-V (Chile), Colombia, England (UK), K-S-T (Japan), Mexico and Shanghai (China), nine in ten teachers used materials that did not include technology or used technology only to make communication more efficient (e.g. students view projected slides). However, one in five teachers in Germany* and one in ten teachers in Madrid (Spain) made some use of technology in teaching materials as a tool to make computation or graphing more efficient (e.g. calculators), to reinforce teaching (e.g. internet instructional videos), for practice, assessment or feedback to the teacher (e.g. online practice problems, quizzes and/or reporting), or for checking correctness (e.g. students are told to use a calculator to confirm their solutions).

Many studies have shown the importance of determining what students know and are able to do and then connecting that to the learning goal (Black and Wiliam, 2009[16]; Rakoczy et al., 2019[17]). This is known as formative assessment. Teachers do this repeatedly minute-by-minute and day-by-day in order to provide instruction that is sensitive to the specific needs and ideas in the classroom.

The strategies teachers use to elicit student thinking, provide feedback and align instruction with that thinking vary, but the goal is the same – to provide appropriate instruction that guides students to new understandings. While one teacher may use the strategy of asking students to raise their hands with questions and another teacher might look over students’ shoulders and review the students’ work as it occurs, all teachers have strategies for learning what students are thinking and providing feedback on that thinking.

As Figure 5.2 shows, teachers engaged in assessing and responding to student thinking regularly. Mean scores fell in the middle of the four-point scoring scale in England (UK) (2.70), Germany* (2.70), Shanghai (China) (2.62), K-S-T (Japan) (2.49), Madrid (Spain) (2.38), Mexico (2.29), B-M-V (Chile) (2.29) and Colombia (2.11) (see Annex 5.A, Tables 5.A.1 and 5.A.2). This section provides additional detail regarding the specific teaching practices measured in the assessment of and response to student thinking sub-domain.

In order to know what students are thinking, teachers must elicit that thinking in some way. For example, they might assign students a problem set to work on, ask questions of them or invite them to explain the procedure they used. It is difficult for teachers to know and address student understandings if students are not required to show their thinking – in writing or by speaking. Being unaware of student understandings may result in less tailored instruction than is needed for all students to learn.

The amount and type of student thinking elicited – e.g. answers, procedures, ideas – was rated holistically on a four-point scale (see Chapter 2 for the aggregation method). A rating of 1 meant that no student thinking was present; for example, the teacher may have been at the front of the room going through a new problem type while only the teacher talked and wrote. A rating of 4 meant that there were many student contributions regarding answers, procedures and steps to solve problems, as well as contributions in which students thought about ideas and concepts.

In each country/economy, on average, teachers used questions, tasks and prompts to elicit a moderate amount of student thinking that concerned the answers, procedures and the steps necessary for solving a problem. The average classroom was rated in the middle of the scale on eliciting student thinking: Shanghai (China) (3.12), Germany* (2.90), England (UK) (2.83), K-S-T (Japan) (2.68), Madrid (Spain) (2.49), Mexico (2.45), B-M-V (Chile) (2.40) and Colombia (2.28) (see Annex 5.A, Tables 5.A.8 and 5.A.9). This means that on average, students provided contributions that ranged from perfunctory (rating 2) to more detailed (rating 3) regarding answers, procedures and steps to solve problems.

In virtually all classrooms in Shanghai (China) (100%), England (UK) (93%) and Germany* (90%), teachers frequently elicited moderate to large amounts of student thinking concerning answers, procedures, the steps necessary for solving a problem, ideas and/or concepts (i.e. teachers had average classroom scores between 2.5 and 4.0). Many of K-S-T (Japan)’s teachers (71%) also carried out this type of eliciting.

This amount and type of student thinking was elicited in half or fewer classrooms in Madrid (Spain) (52%), Mexico (46%), B-M-V (Chile) (43%) and Colombia (28%). This means that on average, the prompts, questions and tasks given in many classrooms provided little access to student thinking.

Student thinking can be a powerful instructional resource. After eliciting students’ thinking, teachers can choose to carry on with their lesson as planned or embark into unknown territory by using students’ thinking instructionally. This can take different forms.

For example, teachers may choose to ignore student thinking momentarily, guided by the knowledge that students will eventually understand the lesson’s objective with a bit more practice. If the teacher is trying to determine a pattern of thinking across students, she might ask multiple students to share their thinking before assessing what the class understands or does not understand and addressing those understandings through instruction.

When teachers select students to share their solution methods on the board, they may ignore a correct solution process in order to show where an incorrect process went amiss. Teachers’ decisions about how to instructionally use students’ thinking are often sensitive to the students’ specific thinking and the relationship of that thinking to the learning goal.

When rating the “aligning instruction to present student thinking” component, observers considered four types of actions when determining the degree to which teachers used students’ thinking instructionally:

• drawing attention to the students’ contribution

• asking a question in response to a student’s contribution

• requesting students provide the next step in the procedure or process

• acknowledging patterns in student contributions.

Observers also considered the degree to which teachers provided students with hints or cues. Cues and hints are comments or questions that are intended to move students’ thinking forward and are said in response to written or spoken student thinking.

The degree to which teachers used student thinking to inform instruction was rated holistically by observers on a four-point scale and aggregated as described in Chapter 2. A rating of 1 indicated that teachers did not use students’ contributions and did not provide cues or hints, while a rating of 4 indicated that the teacher frequently used students’ contributions and provided frequent cues and hints to support students’ understandings.

On average, teachers across countries/economies sometimes used students’ thinking and provided some cues or hints. Means in all countries/economies clustered between 2.65 and 3.25 (see Annex 5.A, Tables 5.A.8 and 5.A.9), except for Colombia, with a mean score of 2.38.

In all but one country/economy, more than two-thirds of teachers used student thinking – on average – sometimes or frequently to adapt instruction: Germany* (98%), England (UK) (98%), K-S-T (Japan) (90%), Shanghai (China) (84%), B-M-V (Chile) (73%), Mexico (73%), Madrid (Spain) (68%) and Colombia (41%). This suggests that at a general level teachers’ words and actions aimed to connect to students’ thinking.

When teachers use student thinking instructionally, they may or may not provide explicit feedback about why that thinking is correct or incorrect. There is a large and persuasive body of research on the importance of such feedback to student learning (Black and Wiliam, 2009[16]). Therefore, it is important to understand the degree to which teachers address student thinking by providing feedback.

Feedback was defined as the back-and-forth exchanges between the teacher and students that focus on why the students’ understandings are correct or incorrect, and the degree to which teacher and student exchanges address the mathematics in a complete manner. This definition implies that when a teacher asks a student what the solution to $x^2 + 3x + 2 = 0$ is, the student responds that $x$ can be either $-1$ or $-2$, and the teacher says “correct”, this is not feedback about “why”. Saying “correct” does not help students understand why that answer is correct. Such an exchange can, of course, be valuable and is a back-and-forth exchange – simply not one that focuses on “why”.
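For concreteness, feedback about “why” in such an exchange would point at the factoring that produces those roots (taking the monic quadratic with roots $-1$ and $-2$ as an illustrative equation):

$$x^2 + 3x + 2 = (x+1)(x+2) = 0 \quad\Rightarrow\quad x = -1 \ \text{or}\ x = -2$$

A “why”-focused response might note that a product is zero only when one of its factors is zero, which is what makes each root correct.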

Observers assigned feedback ratings holistically on a four-point scale (see Chapter 2 for the aggregation method). A rating of 1 indicated there was no feedback and/or teacher and student interactions addressed the mathematics in a limited manner. A rating of 4 was assigned when there was frequent feedback and teacher and student interactions addressed the mathematics in a complete manner.

In all eight countries/economies, the typical classroom had a small number of feedback interactions focused on why. The mean score was around 2 in all participating countries/economies: Germany* (2.12), England (UK) (2.01), Madrid (Spain) (1.97), Shanghai (China) (1.94), K-S-T (Japan) (1.83), Mexico (1.78), B-M-V (Chile) (1.73) and Colombia (1.67). The vast majority of teachers – between 82% and 98% per country/economy – rarely provided feedback focused on why the mathematics was correct or incorrect; instead they had exchanges with students that were limited in the degree to which they addressed the mathematics at hand.

Only a few teachers sometimes provided feedback about why answers were correct or incorrect in Germany* (18%), Madrid (Spain) (16%), England (UK) (8%), K-S-T (Japan) (8%), Mexico (7%) and Shanghai (China) (5%) (see Annex 5.A, Tables 5.A.8 and 5.A.9).

Classroom discourse that regularly requires students to make their detailed thinking visible to themselves, other students or the teacher supports students’ learning (Resnick, Asterhan and Clarke, 2018[18]). Discourse comes in both written and spoken form and in all types of activity structures – written work on the board for the whole class to see, spoken explanations in front of the whole class or in student pairs, or even written work on their own papers at their desks. The discourse required of students – its level of detail, its cognitive requirements and the degree to which it demands that students explain why the mathematics works the way it does – is a critical feature of high-quality instruction. Discourse patterns – who speaks and writes at what moment in the lesson – are related to the cultural norms of schools and classrooms. There were differences in these patterns by country/economy.

As Figure 5.2 showed, classroom discourse was somewhat superficial. Mean classroom discourse scores fell below or in the middle of the four-point scale: Germany* (2.54), K-S-T (Japan) (2.52), England (UK) (2.44), Madrid (Spain) (2.27), Shanghai (China) (2.27), Mexico (2.11), B-M-V (Chile) (2.10) and Colombia (1.85). This section provides additional detail regarding the specific teaching practices measured in the discourse sub-domain.

The nature of the classroom discourse was rated holistically on a four-point scale (the aggregation method is detailed in Chapter 2). Observers rated the nature of classroom discourse, taking account of who was speaking or writing (teachers, students or both) and the level of detail in students’ contributions. At the low end of the scale, a 1 indicates that the teacher did much of the talking and writing and students did not offer detailed contributions. At the high end, a 4 indicates that students participated in the discourse and that students frequently contributed detailed thinking.

Approximately half or more of the classrooms were characterised by students initiating discourse and sometimes making detailed contributions to the discourse of their lessons as shown by mean scores around the middle of the four-point scale in Germany* (2.82), England (UK) (2.54), Madrid (Spain) (2.53) and K-S-T (Japan) (2.46) (see Annex 5.A, Tables 5.A.8 and 5.A.9).

In contrast, only 3% to 22% of classrooms in B-M-V (Chile), Colombia, Mexico and Shanghai (China) required students to participate in these ways. In these countries/economies, on average, classrooms rarely had students contribute their detailed thinking and the teacher initiated most of the classroom discourse. The mean scores were closer to 2 on the four-point scale in Mexico (2.15), Shanghai (China) (2.06), B-M-V (Chile) (2.02) and Colombia (1.72).

Another feature of classroom discourse relates to the kinds of questions that prompt students’ responses. Students may be asked to participate in the classroom discourse in different ways, from recalling what has occurred to higher order tasks like analysing or synthesising. The types of questions to which students respond also shape the classroom discourse.

Questioning of students was rated by observers on a four-point scale; Figure 5.9 shows the distribution of scores over the four score ranges. Questions in rating 1 emphasise recalling and reporting answers while questions in rating 4 emphasise analysis, synthesis, justification and conjecture. Questions in the middle are mixtures of these emphases.

The mean classroom’s questioning score was around the middle of the four-point scale in Germany* (2.64), K-S-T (Japan) (2.62), England (UK) (2.51), Shanghai (China) (2.24), B-M-V (Chile) (2.19), Mexico (2.19), Madrid (Spain) (2.16) and lower in Colombia (1.73). This implies that the average classroom in every country/economy had a mixture of questions that asked students to recall and report answers, and questions that asked students to summarise and apply rules and procedures, but few questions that emphasised analysis, synthesis and other more complex mathematical actions.

Figure 5.9 shows the extent of within-country/economy differences in the types of questions students engaged with. In Germany* (70%), K-S-T (Japan) (64%) and England (UK) (54%), the majority of classrooms tended to emphasise questions requiring summarisation and the application of rules and procedures or analysis-type questions. In Colombia (99%), Mexico (82%), Shanghai (China) (81%), Madrid (Spain) (80%) and B-M-V (Chile) (79%), the majority of classrooms tended to emphasise questions that exclusively focused on recalling and reporting answers or generally focused on those types of questions with some summarisation and application of rules and procedures.

Teaching materials can also provide students opportunities to explain or justify their thinking. Evidence that students were asked to communicate their mathematical thinking by explaining both how to perform procedures and why those procedures work was rated more highly (score 3) than evidence that students needed to offer either a how or a why explanation (score 2). Evidence that students were only asked to recall facts or definitions or to follow algorithms was assigned the lowest rating (score 1). Classroom averages were calculated as described in Chapter 2 and classroom maximum scores were calculated as the highest score achieved for each teacher on any given lesson.

In the majority of countries/economies, the typical teaching materials asked students to recall facts, recall definitions or follow algorithms (mean scores were close to 1) (see Annex 5.B, TM Table 5.B.15). Only in Shanghai (China), which had the highest mean score, did the majority of teachers use materials that offered opportunities for students to explain both how to perform procedures and to justify why procedures work in at least one lesson. In England (UK), Germany*, K-S-T (Japan) and Mexico, there was evidence that the majority of teachers used teaching materials that asked students to explain their thinking (either how or why) in at least one lesson (see Annex 5.B, TM Table 5.B.16).

In contrast, the majority of teachers in B-M-V (Chile), Colombia and Madrid (Spain) did not use teaching materials that asked students to explain their thinking in any lessons. In only a few cases were students required to explain how to perform procedures, describe how one thing is related to another or justify why procedures work (see Annex 5.B, TM Table 5.B.16).

Teaching materials may afford different opportunities for students to communicate their thinking. The nature of the tasks, the lesson goals, the specific subtopics covered and the didactical approach (as described in Chapter 6) all play a role in whether such opportunities are present (Boaler and Staples, 2008[19]; Cohen and Lotan, 1994[20]; Horn, 2006[21]; Schoenfeld, Kilpatrick and Wood, 2008[22]). Teaching materials that focus on solving equations often focus more on recall and memorisation (Henningsen and Stein, 1997[13]; Porter et al., 2011[23]; Rivera and Becker, 2009[24]).

While the actual content to be taught may more naturally afford opportunities for students to explain their thinking, there was evidence that teachers have ways to embed such opportunities into any content. Opportunities for student explanation were associated with a variety of subtopics. For example, opportunities to explain occurred most frequently in the context of exploring real-world applications in Colombia (r = 0.35) and K-S-T (Japan) (r = 0.25), completing the square in Shanghai (China) (r = 0.31) and exploring functions in Germany* (r = 0.52) (see Annex 5.B, TM Tables 5.B.5 to 5.B.12).

Sometimes questions posed to students naturally require quite simple or straightforward responses. For example, in the middle of the quadratic equations unit when students are practicing the application of a single procedure, it may not be necessary to provide a detailed explanation about why the procedure works the way it does. But at other times in a lesson or unit, it may be critical for students to understand why a procedure works and perhaps how the procedure differs from the one they learnt the day before. Explanations – descriptions of why ideas or procedures are the way they are – are an important part of supporting the development of a robust understanding of quadratic equations.

The explanations teachers or students offered were observed and rated on a four-point scale. The highest rating (rating 4) indicates that written or spoken explanations focused on deeper features of the mathematics and/or they were lengthier, more detailed explanations. The lowest rating (rating 1) indicates there were no explanations of why ideas or procedures are correct or incorrect or the explanations were brief or superficial.

The average classroom in each country/economy had explanations present; however, they were generally brief or superficial, with mean scores between 2 and 2.5 in Shanghai (China) (2.49), K-S-T (Japan) (2.47), England (UK) (2.29), Germany* (2.15), Madrid (Spain) (2.12), Colombia (2.09), B-M-V (Chile) (2.08) and Mexico (2.00).

Yet, over half of the classrooms focused on lengthier explanations and explanations of deeper mathematics in Shanghai (China) (56%) and K-S-T (Japan) (55%), with classroom mean scores above 2.5 (Figure 5.10). This was only the case in a markedly smaller proportion of classrooms in England (UK) (24%), Madrid (Spain) (19%), Germany* (18%), B-M-V (Chile) (13%), Colombia (8%) and Mexico (7%).

The views of teachers and students on instructional practices matter. They can reveal their perceptions or awareness of what occurs in the classroom, and thus provide valuable information to complement video observation and teaching material-based findings. For example, observers can determine whether the class as a group is engaged in cognitively demanding subject matter, while student reports can help infer to what extent each individual student is engaged in thinking about mathematics through questions on how they felt and behaved during the lessons.

At the end of the unit on quadratic equations, teachers and students were asked about several instructional practices, representing all four sub-domains of instruction:

• Quality of subject matter: setting goals at the beginning of lessons.

• Cognitive engagement of students: self-reported cognitive engagement during the unit on quadratic equations.

• Assessment of and response to students: feedback given by teachers.

• Discourse: student participation in discourse and teachers’ explanations of mathematical procedures.

Both students and teachers were asked how often the teacher had “set goals at the beginning of instruction” during the unit on quadratic equations. Response options were never or almost never (1), occasionally (2), frequently (3) and always (4) (see Annex 5.A, Table 5.A.24).

Setting goals seems to be a routine executed in most classrooms. Over two-thirds of students in B-M-V (Chile), Colombia, England (UK), K-S-T (Japan), Madrid (Spain), Mexico and Shanghai (China) reported that their teacher always or frequently set goals at the beginning of lessons. In Germany*, however, more than two-thirds of students reported that goals were never or only occasionally stated at the beginning of the lesson. These country differences were mirrored in teachers’ reports, although teachers reported higher frequencies of goal setting at the beginning of instruction than students.

Students were asked to indicate their level of cognitive engagement at the end of the unit on quadratic equations. This can complement observation data on whether the class as a group is engaged in cognitively demanding subject matter.

Across participating countries/economies, more students reported engaging with mathematical tasks than developing their own ideas on mathematics (Table 5.2). This difference was especially large in Germany* and in England (UK). Students in Shanghai (China) seemed exceptionally engaged in understanding tasks, thinking about the mathematical content and developing their own ideas. These mental activities keep students engaged in learning the mathematics and help them thrive in the classroom.

Students were asked how often the teacher would “tell you about how well you were doing in your mathematics class” and “give you feedback on your strengths and weaknesses”. Table 5.3 presents the percentage of students who chose the response “most lessons” or “every lesson” for these two items (rather than “some lessons” or “never or hardly ever”).

Across countries/economies, at most half of the students reported either type of feedback to be given regularly. Feedback was exceptionally rare in Germany* and K-S-T (Japan) with only 15% and 23% of students respectively reporting its regular use.

Shanghai (China)’s students primarily experienced feedback based on their individual strengths and weaknesses. This kind of feedback can help students understand what exactly they need to improve (OECD, 2014[25]; Klieme, 2020[26]). In B-M-V (Chile), Madrid (Spain) and Mexico, on the other hand, students more often reported being told how well they were doing in comparison to their classmates, which might be both less informative and less supportive for students.

Teachers and students were asked whether teachers explained why procedures work (Explaining Procedures) and how many opportunities students had to engage actively in questioning, argumentation and discussion (Student Participation in Discourse) (Table 5.4).

Whole-class instruction is the most prominent setting across all countries/economies. It usually includes frequent teacher questions answered by students either voluntarily or upon being named by the teacher. It is therefore no surprise that more than two-thirds of the students in all participating countries/economies reported frequent or routine opportunities to “explain their own ideas”.

Opportunities for students to exchange mathematical arguments among themselves varied by country/economy. In K-S-T (Japan) and Shanghai (China), 71% of students reported frequently or always being required to discuss among themselves, compared with only about half of students in Colombia, England (UK) and Mexico, and less than a third in Madrid (Spain), Germany* and B-M-V (Chile).

Secondary school algebra, including the topic of quadratic equations, involves a good deal of work with mathematical procedures, such as transforming equations or applying binomial rules. High-quality mathematical discourse in algebra lessons is characterised, among other features, by a focus on understanding why mathematical procedures work in addition to, or instead of, practising procedures in a repetitive way. In all participating countries/economies except Germany*, most students reported that teachers frequently or always explained why mathematical procedures work; in Germany*, only 43% of students did so.

When comparing teachers’ explaining of procedures with students’ engagement in discussions, the former practice was reported to be more frequent than the latter in most countries/economies, with the exception of K-S-T (Japan) and Shanghai (China). This pattern may reflect a greater focus on student participation in discourse in the East Asian countries/economies and a stronger focus on teacher-directed instruction elsewhere.

Teachers are consistently more positive than students on the three statements related to classroom discourse in all participating countries/economies (Table 5.4), except for K-S-T (Japan), where the opposite is true. The largest gap between teacher reports and student reports was found in Germany*.

How much do teachers’ views of classroom discourse align with their students’ views on the classroom level? To check the strength of this relationship, the indices of Explaining Procedures and Student Participation in Discourse were created.45

Teacher and student reports of Explaining Procedures were not well aligned within countries/economies: correlations at the classroom level were low in Mexico (0.23) and England (UK) (0.29), and essentially zero everywhere else.

Correlations between teacher and student views on Student Participation in Discourse, however, were significant at the classroom level in five countries/economies, ranging from 0.27 in Shanghai (China) to 0.50 in Germany*. In four of these countries/economies, either teacher judgements or student judgements were also significantly correlated with the video score for “nature of discourse”.
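The alignment check described above can be sketched as follows: student reports of an index are averaged within each classroom, and the classroom means are then correlated with the corresponding teacher reports across classrooms. All data values and classroom identifiers below are invented for illustration only; they are not taken from the study, and the actual indices were constructed from multiple questionnaire items.

```python
# Hypothetical sketch of a classroom-level alignment check.
# All classroom IDs and index values are invented for illustration.
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Each classroom pairs the teacher's self-reported index value with the
# student-reported values for the same construct in that classroom.
classrooms = {
    "c1": {"teacher": 3.5, "students": [3.0, 3.5, 2.5]},
    "c2": {"teacher": 2.0, "students": [2.5, 2.0, 1.5]},
    "c3": {"teacher": 3.0, "students": [3.5, 3.0, 4.0]},
    "c4": {"teacher": 1.5, "students": [2.0, 1.0, 1.5]},
}

# Aggregate student reports to classroom means, then correlate those
# means with the teacher reports across classrooms.
teacher_scores = [c["teacher"] for c in classrooms.values()]
student_means = [mean(c["students"]) for c in classrooms.values()]
r = pearson(teacher_scores, student_means)
print(round(r, 2))  # correlation across the four invented classrooms
```

A correlation near zero, as found for Explaining Procedures, would mean that classrooms where the teacher reports the practice as frequent are not the same classrooms where students do.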

The alignment between the teacher, student and observer perspectives when assessing different aspects of classroom discourse is slightly stronger than for social-emotional support (see Chapter 4), but much weaker than for classroom management (see Chapter 3).

## References

[11] Baumert, J. et al. (2010), “Teachers’ mathematical knowledge, cognitive activation in the classroom, and student progress”, American Educational Research Journal, Vol. 47/1, http://dx.doi.org/10.3102/0002831209345157.

[16] Black, P. and D. Wiliam (2009), “Developing the theory of formative assessment”, Educational Assessment, Evaluation and Accountability, http://dx.doi.org/10.1007/s11092-008-9068-5.

[2] Blum, W. (2002), “ICMI study 14: Applications and modelling in mathematics education - discussion document”, ZDM - International Journal on Mathematics Education, Vol. 34/5, http://dx.doi.org/10.1007/BF02655826.

[6] Boaler, J. (2000), “Mathematics from Another World: Traditional Communities and the Alienation of Learners”, Journal of Mathematical Behavior, Vol. 18/4, http://dx.doi.org/10.1016/S0732-3123(00)00026-2.

[19] Boaler, J. and M. Staples (2008), “Creating mathematical futures through an equitable teaching approach: The case of Railside school”, Teachers College Record, Vol. 110/3.

[7] CCSSI (2010), “Common Core State Standards for Mathematics”, Common Core State Standards Initiative.

[20] Cohen, E. and R. Lotan (1994), “Complex instruction: Higher-order thinking in heterogeneous classrooms”, Handbook of cooperative learning Methods.

[3] De Lange, J. (1996), “Chapter 2: Using and Applying Mathematics in Education”, in International handbook of mathematics education.

[10] Fauth, B. et al. (2014), “Student ratings of teaching quality in primary school: Dimensions and prediction of student outcomes”, Learning and Instruction, http://dx.doi.org/10.1016/j.learninstruc.2013.07.001.

[15] Fishman, B. and C. Dede (2016), “Teaching and Technology: New Tools for New Times”, in Handbook of Research on Teaching, http://dx.doi.org/10.3102/978-0-935302-48-6_21.

[4] Gravemeijer, K. et al. (2000), “Symbolizing, modeling, and instructional design”, in Symbolizing and Communicating in Mathematics Classrooms Perspectives on Discourse Tools and Instructional Design.

[13] Henningsen, M. and M. Stein (1997), “Mathematical Tasks and Student Cognition: Classroom-Based Factors That Support and Inhibit High-Level Mathematical Thinking and Reasoning”, Journal for Research in Mathematics Education, Vol. 28/5, p. 524, http://dx.doi.org/10.2307/749690.

[21] Horn, I. (2006), “Lessons learned from detracked mathematics departments”, Theory into Practice, Vol. 45/1, http://dx.doi.org/10.1207/s15430421tip4501_10.

[26] Klieme, E. (2020), “Policies and Practices of Assessment: A Showcase for the Use (and Misuse) of International Large Scale Assessments in Educational Effectiveness Research”, in Hall, J., A. Lindorff and P. Sammons (eds.), International Perspectives in Educational Effectiveness Research, Springer Nature Publishers, https://doi.org/10.1007/978-3-030-44810-3_7.

[14] Leinhardt, G. and M. Steele (2005), “Seeing the complexity of standing to the side: Instructional dialogues”, Cognition and Instruction, http://dx.doi.org/10.1207/s1532690xci2301_4.

[12] Lipowsky, F. et al. (2009), “Quality of geometry instruction and its short-term impact on students’ understanding of the Pythagorean Theorem”, Learning and Instruction, Vol. 19/6, pp. 527-537, http://dx.doi.org/10.1016/j.learninstruc.2008.11.001.

[8] National Council of Teachers of Mathematics (2000), “Principles and Standards for School Mathematics”, School Science and Mathematics, Vol. 47/8, http://dx.doi.org/10.1111/j.1949-8594.2001.tb17957.x.

[9] OECD (2019), PISA 2018 Assessment and Analytical Framework, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/b25efab8-en.

[25] OECD (2014), New Insights from TALIS 2013: Teaching and Learning in Primary and Upper Secondary Education, TALIS, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264226319-en.

[5] Perry, B. and S. Dockett (2015), “Young children’s access to powerful mathematical ideas”, in Handbook of International Research in Mathematics Education, http://dx.doi.org/10.4324/9780203930236.ch5.

[23] Porter, A. et al. (2011), “Common core standards: The new U.S. intended curriculum”, Educational Researcher, http://dx.doi.org/10.3102/0013189X11405038.

[17] Rakoczy, K. et al. (2019), “Formative assessment in mathematics: Mediated by feedback’s perceived usefulness and students’ self-efficacy”, Learning and Instruction, http://dx.doi.org/10.1016/j.learninstruc.2018.01.004.

[18] Resnick, L., C. Asterhan and S. Clarke (2018), “Accountable Talk: Instructional dialogue that builds the mind”, Educational Practices Series 29, The International Academy of Education (IAE) and the International Bureau of Education (IBE) of UNESCO, Geneva, Switzerland, http://www.ibe.unesco.org/sites/default/files/resources/educational_practices_29-v7_002.pdf.

[24] Rivera, F. and J. Becker (2009), “Algebraic Reasoning through Patterns”, Mathematics Teaching in the Middle School.

[22] Schoenfeld, A., J. Kilpatrick and T. Wood (2008), “Toward a Theory of Proficiency in Teaching Mathematics”, International Handbook of Mathematics Teacher Education: Vol. 2, Tools and Processes in Mathematics Teacher Education.

[1] Stigler, J. and J. Hiebert (1999), The Teaching Gap: Best Ideas from the World’s Teachers for Improving Education in the Classroom, New York: The Free Press, http://dx.doi.org/10.1080/00220270050167215.