Chapter 10. Teaching, learning and assessing 21st century skills

James W Pellegrino
Learning Sciences Research Institute, University of Illinois, Chicago

This chapter draws upon the report Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century (Pellegrino and Hilton, 2012) to address questions about why the ideas labelled “deeper learning” and “21st century skills” have achieved prominence in the thinking and actions of multiple stakeholder groups, and what we know from research that can help us think productively about their educational and social implications.

In particular, the chapter considers issues of construct definition and identifies three important domains of competence – cognitive, intrapersonal and interpersonal. It then considers research evidence related to these domains including their importance for success in education and work, their representation in disciplinary standards, and the design of instruction in areas such as reading, mathematics and science to promote their development. It concludes with implications for curriculum, instruction, assessment, teacher learning and professional development.

  

Introduction

Across the globe there is substantial common interest in changing the landscape of education by promoting ideas that have come to be labelled as “deeper learning” and “21st century skills”. But what do these terms mean, why have they achieved such a degree of prominence in the thinking, writing and actions of stakeholder groups ranging from parents, to professional educators, to educational foundations, to state departments of education and the federal government, and what do we know from research that can help us think productively about their educational and social implications? These were the questions posed by various stakeholder groups to a committee established by the U.S. National Research Council (NRC) in 2010. The response was a report entitled Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century (Pellegrino and Hilton, 2012). This chapter draws upon that work as a way of introducing some of the key ideas underpinning the teaching, learning and assessment of 21st century skills, especially as they relate to teacher education and teacher professional development.

Key rhetoric

Many countries, including members of the Organisation for Economic Co-operation and Development (OECD), have long recognised that investments in public education can contribute to the common good (namely by enhancing national prosperity and supporting stable families, neighbourhoods and communities). Likewise, current economic, environmental and social challenges illustrate that education is even more critical today than it has been in the past. Today’s children can only meet future challenges if they have opportunities to prepare for their future roles as citizens, employees, managers, parents, volunteers and entrepreneurs. This calls for methods of learning that support not only retention, but also the use and application of skills and knowledge—a process called “transfer” in cognitive psychology – and herein lies the challenge.

As the Programme for International Student Assessment (PISA) has demonstrated, teaching students to apply their knowledge is no easy feat. In the 2009 PISA reading and science tests – which measured students’ ability to analyse, reason and communicate effectively while posing, interpreting and solving problems – the scores of 15-year-olds in the U.S. were merely average when compared to students from the other industrialised nations comprising the OECD. In mathematics, their scores were below the OECD average (OECD, 2010). Part of the reason for the weak performance of American students is uneven learning and achievement among different groups of students. Disparities in the relative educational achievement of children from high-income versus low-income families have grown enormously since the 1970s (Duncan and Murnane, 2011). In a related trend, the gap between average incomes of the wealthiest and poorest families has also grown.

Business leaders, educational organisations and researchers have begun to call for new education policies that target the development of broad, transferable skills and knowledge, often referred to as “21st century skills” (e.g. see Bellanca, 2014). For example, the US-based Partnership for 21st Century Skills argues that student success in college and careers requires four essential skills: critical thinking and problem solving, communication, collaboration, and creativity and innovation (Partnership for 21st Century Skills, 2010: 2). The 2012 NRC Report argued that the various sets of terms associated with the “21st century skills” label reflect important dimensions of human competence that have been valuable for many centuries, rather than skills that are suddenly new, unique and valuable today. The important difference across time may lie in society’s desire for all students to attain levels of mastery—across multiple areas of skill and knowledge—that were previously unnecessary for individual success in education and the workplace. At the same time, the pervasive use of new digital technologies has increased the pace of communication and information exchange throughout society with the consequence that all individuals may need to be competent in processing multiple forms of information to accomplish tasks that may be distributed across contexts that include home, school, the workplace and social networks.

Although these skills have long been valuable, they are particularly salient today, and are increasingly becoming a priority for education officials. In the U.S., multiple states have joined the Partnership for 21st Century Skills, based on a commitment to fuse 21st century skills with academic content (Partnership for 21st Century Skills, 2011) in their standards, assessments, curriculum and teacher professional development. Some state and local high school reform efforts have begun to focus on a four-dimensional framework of college and career readiness that includes not only academic content but also cognitive strategies, academic behaviours and contextual skills and awareness (Conley, 2011). At the international level, the U.S. Secretary of Education participates on the executive board of the Assessment and Teaching of 21st Century Skills (ATC21S) project, along with the education ministers of five other nations and the vice presidents of Cisco, Intel and Microsoft. This project aims to expand the teaching and learning of 21st century skills globally, especially by improving assessment of these skills. In a separate effort, a large majority of the 16 OECD nations surveyed in 2009 reported that they are incorporating 21st century skills in their education policies, for example, in regulations and guidelines (Ananiadou and Claro, 2009). Thus, it is clear that multiple stakeholder groups have been energised and mobilised to consider the problem as well as potential solutions.

In the next two sections we consider what might be meant by “21st century skills” – including how to organise these in ways that can be productive for educational practice and research – as well as how we might conceptualise the construct of “deeper learning”.

Three domains of competence

As a way to organise the various terms for “21st century skills” and provide a starting point for considering empirical evidence as to their meaning and value, the NRC Report (Pellegrino and Hilton, 2012) identified three broad domains of competence: cognitive, intrapersonal and interpersonal. The cognitive domain involves reasoning and memory; the intrapersonal domain involves executive functioning (metacognition) and emotion; and the interpersonal domain involves expressing ideas, and interpreting and responding to messages from others.

A content analysis was conducted, aligning several lists of 21st century skills proposed by various groups and individuals with the skills included in existing, research-based taxonomies of cognitive, intrapersonal and interpersonal skills and abilities. Through this process, various 21st century skills were assigned to clusters of competencies within each domain. Recognising areas of overlap between and among the skills and skill clusters, the committee developed the following initial classification scheme (see Chapter 2 of Pellegrino and Hilton, 2012 for additional details of the elements within each cluster):

  • The Cognitive Domain includes three clusters of competencies: cognitive processes and strategies; knowledge; and creativity. These clusters include skills such as critical thinking, reasoning and argumentation, and innovation.

  • The Intrapersonal Domain includes three clusters of competencies: intellectual openness; work ethic and conscientiousness; and self-regulation. These clusters include skills such as flexibility, initiative, appreciation for diversity and metacognition.

  • The Interpersonal Domain includes two clusters of competencies: teamwork and collaboration; and leadership. These clusters include skills such as co-operation and communication, conflict resolution and negotiation.

These three domains represent distinct facets of human thinking and build on previous efforts to identify and organise dimensions of human behaviour. For example, Bloom’s (1956) taxonomy of learning objectives included three broad domains: cognitive, affective and psychomotor. Following Bloom, the cognitive domain is viewed as involving thinking and related abilities, such as reasoning, problem solving and memory. The intrapersonal domain, like Bloom’s affective domain, involves emotions and feelings and includes self-regulation—the ability to manage one’s emotions and set and achieve one’s goals (Hoyle and Davisson, 2011). The interpersonal domain is not included in Bloom’s taxonomy but rather is based partly on an NRC workshop that clustered various 21st century skills into the cognitive, intrapersonal and interpersonal domains (National Research Council, 2011a). In that workshop, Bedwell, Salas and Fiore (2011) proposed that interpersonal competencies are those used to express information to others and to interpret others’ messages (both verbal and nonverbal) and respond appropriately.

Distinctions among the three domains are reflected in how they are delineated, studied and measured. In the cognitive domain, knowledge and skills are typically measured with tests of general cognitive ability (also referred to as g or IQ) or with more specific tests focusing on school subjects or work-related content. Research on interpersonal and intrapersonal competencies often uses measures of broad personality traits (discussed further below) or of child temperament (general behavioural tendencies, such as attention or shyness). Psychiatrists and clinical psychologists studying mental disorders use various measures to understand the negative dimensions of the interpersonal and intrapersonal domains (Almlund et al., 2011).

Although the three domains are differentiated for the purpose of understanding and organising 21st century skills, it is recognised that they are intertwined in human development and learning. Research on teaching and learning has begun to illuminate how interpersonal and intrapersonal skills support learning of academic content (e.g. National Research Council, 1999) and how to develop these valuable supporting skills (e.g. Yeager and Walton, 2011). For example, we now know that learning is enhanced by the intrapersonal skills used to reflect on one’s learning and adjust learning strategies accordingly—a process called “metacognition” (Pellegrino, Chudowsky and Glaser, 2001; Hoyle and Davisson, 2011). At the same time, research has shown that development of cognitive skills, such as the ability to stop and think objectively about a disagreement with another person, can increase positive interpersonal skills and reduce anti-social behaviour (Durlak et al., 2011). The interpersonal skill of effective communication is supported by the cognitive skills used to process and interpret complex verbal and nonverbal messages and formulate and express appropriate responses (Salas, Bedwell and Fiore, 2011).

In many respects, the foregoing use of “competencies” reflects terminology used by the OECD in its extensive project to identify key competencies required for life and work in the current era. According to the OECD (2005), a competency is:

“more than just knowledge and skills. It involves the ability to meet complex demands, by drawing on and mobilising psychosocial resources (including skills and attitudes) in a particular context. For example, the ability to communicate effectively is a competency that may draw on an individual’s knowledge of language, practical IT skills, and attitudes towards those with whom he or she is communicating” (OECD, 2005: 4).

Although research on how these 21st century competencies are related to desired outcomes in education, work and other areas of life has been limited, there are some promising findings. Cognitive competencies, which have been the most extensively studied, show consistent, positive correlations of modest size with students’ achieving higher levels of education, higher earnings and better health. Among intrapersonal competencies, conscientiousness, which includes such characteristics as being organised, responsible and hard-working, shows the strongest relationship with the same desirable outcomes. Conversely, antisocial behaviour, which reflects deficits in both intrapersonal skills (such as self-regulation) and interpersonal skills (such as communication) is related to poorer outcomes.

More research is needed to increase our understanding of the relationships between particular 21st century competencies and desired adult outcomes – and especially to establish whether the competencies actually cause the desired outcomes rather than simply being correlated with them. This much is known, however: mastery of academic subject matter is not possible without deeper learning. The next section considers the process of deeper learning and how 21st century competencies develop.

In summary, while many lists of 21st century skills have been proposed, there is considerable overlap among them. Many of the constructs included in such lists trace back to the original SCANS report (Secretary’s Commission on Achieving Necessary Skills, 1991), and some now appear in the O*NET Content database. Aligning the various competencies with extant, research-based personality and cognitive ability taxonomies illuminates the relationships between them and suggests a preliminary, new taxonomy of 21st century competencies. Much further research is needed to more clearly define the competencies at each level of the proposed taxonomy, to understand the extent to which various competencies and competency clusters may be malleable, to elucidate the relationships among the competencies and desired educational and workplace outcomes, and to identify the most effective ways to teach and learn these competencies.

Deeper learning and 21st century competencies

The broad call for “deeper learning” and “21st century skills” reflects a long-standing issue in education and training – the difficult task of equipping individuals with transferable knowledge and skills. Associated with this is the challenge of creating learning environments that support development of the cognitive, interpersonal and intrapersonal competencies that enable learners to transfer what they have learned to new situations and new problems. These competencies include both knowledge in a domain and the understanding of how, why and when to apply this knowledge to answer more complex questions and solve problems – integrated forms of knowledge that we refer to as “21st century competencies” and discuss further below.

If the goal of instruction is to prepare students to accomplish tasks or solve problems exactly like the ones addressed during instruction, then deeper learning is not needed. For example, if someone’s job calls for adding lists of numbers accurately, that individual needs to become proficient in using the addition procedure, but does not need deeper learning about the nature of number and number theory that would allow transfer to new situations involving the application of mathematical principles. Today’s technology has reduced demand for such routine skills (e.g. Autor, Levy and Murnane, 2003). For success in work and life in the 21st century, individuals must instead be able to adapt effectively to changing situations rather than rely solely on well-worn procedures. If the goal is to prepare students to succeed in solving new problems and adapting to new situations, then deeper learning is called for. Calls for such “21st century skills” as innovation, creativity and creative problem-solving can thus also be seen as calls for deeper learning – helping students develop transferable knowledge that can be applied to solve new problems or respond effectively to new situations.

To clarify the meaning of “deeper learning” and illuminate its relationship to 21st century competencies, it is critical to consider two important strands of research and theory on the nature of human thinking and learning – the cognitive perspective and the socio-cultural perspective (also referred to as the situated perspective [Greeno, Pearson and Schoenfeld, 1996]).

The cognitive perspective focuses on types of knowledge and how they are structured in an individual’s mind, including the processes that govern perception, learning, memory and human performance. Research from the cognitive perspective investigates the mechanisms of learning and the nature of the products – the types of knowledge and skill – that result from those mechanisms, as well as how that knowledge and skill is drawn upon to perform a range of simple to complex tasks. The goal is theory and models that apply to all individuals, accepting the fact that there will be variation across individuals in execution of the processes and in the resultant products.

The socio-cultural perspective emerged in response to the perception that research and theory within the cognitive perspective was too narrowly focused on individual thinking and learning. In the socio-cultural perspective, learning takes place as individuals participate in the practices of a community, using the tools, language and other cultural artefacts of the community. From this perspective, learning is “situated” within, and emerges from, the practices in different settings and communities. A community may be large or small and may be located inside or outside of a traditional school context. It might range, for example, from colleagues in a company’s Information Technology department to a single elementary school classroom, or a global society of plant biologists.

Such research has important implications for how academic disciplines are taught in school. From the socio-cultural perspective, the disciplines are distinct communities that engage in shared practices of ongoing knowledge creation, understanding and revision. It is now widely recognised that science is both a body of established knowledge and a social process through which individual scientists and communities of scientists continually create, revise, and elaborate scientific theories and ideas (National Research Council, 2007; Polanyi, 1958). In one illustration of the social dimensions of science, Dunbar (2000) found that scientists’ interactions with their peers, particularly how they responded to questions from other scientists, influenced their success in making discoveries.

The idea that each discipline is a community with its own culture, language, tools and modes of discourse, has influenced teaching and learning. For example, Moje (2008) has called for re-conceptualising high school literacy instruction to develop disciplinary literacy programmes, based on research into what it means to write and read in mathematics, history and science, and what constitutes knowledge in these subjects. Moje (2008) argues that students’ understanding of how knowledge is produced in the subject areas is more important than the knowledge itself.

Socio-cultural perspectives are reflected in new disciplinary frameworks and standards for primary and secondary education. For example, the NRC Framework for K-12 Science Education (NRC, 2012) calls for integrated development of science practices, crosscutting concepts and core ideas. The Common Core State Standards in English language arts (Common Core State Standards Initiative, 2010a) reflect an integrated view of reading, writing, speaking/listening, and language and also respond to Moje’s (2008) call for disciplinary literacy by providing separate English language arts standards for history and science. Based on the view of each discipline as a community engaged in ongoing discourse and knowledge creation, the science framework and the standards in mathematics and English language arts include expectations for learning of interpersonal and intrapersonal knowledge and skills along with cognitive knowledge and skills (see the discussion of disciplinary learning and standards below).

The link between deeper learning and 21st century competencies lies in the classic concept of transfer—the ability to use prior learning to support new learning or problem solving in culturally relevant contexts. We define “deeper learning” not as a “product”, but rather as the process through which transferable knowledge (i.e. 21st century competencies) develops. Through deeper learning, individuals not only develop expertise in a particular discipline, they also understand when, how and why to apply what they know. They recognise when new problems or situations are related to what they have previously learned, and they can apply their knowledge and skills to solve them.

The history of research on transfer suggests that there are limits to how far the knowledge and skills developed through deeper learning can transfer. Firstly, transfer is possible within a subject area or domain of knowledge when effective instructional methods are used. Secondly, research on expertise suggests that deeper learning involves the development of well-organised knowledge in a domain that can be readily retrieved and applied (transferred) to new problems in that domain. Thirdly, research suggests that deeper learning requires extensive practice, aided by explanatory feedback that helps learners correct errors and practice correct procedures, and that multimedia learning environments can provide such feedback. Fourthly, the work of psychologists allows us to distinguish between rote learning and meaningful learning (or deeper learning): meaningful learning, which develops a deeper understanding of the structure of the problem and the solution method, leads to transfer, while rote learning does not (Mayer, 2010).

We can also distinguish between different types of tests and the learning they measure. Retention tests are designed to assess learners’ memory for the presented material using recall tasks (e.g. “What is the definition of deeper learning?”) or recognition tasks (e.g. “Which of the following is not part of the definition of deeper learning? A. learning that facilitates future learning, B. learning that facilitates future problem solving, C. learning that promotes transfer, D. learning that is fun.”). While retention tests of this kind are often used in educational settings, experimental psychologists use transfer tests to assess learners’ ability to use what they learned in new situations to solve problems or to learn something new (e.g. “Write a transfer test item to evaluate someone’s knowledge of deeper learning.”).

Although using the senses to attend to relevant information may be all that is required for success on retention tasks, success on transfer tasks requires deeper processing that includes organising new information and integrating it with prior knowledge. Results from the two different types of assessments can be used to distinguish between three different types of learning outcomes—no learning, rote learning and meaningful learning (see Table 10.1; also Mayer, 2010). No learning is indicated by poor performance on retention and transfer tests. Rote learning is indicated by good retention performance and poor transfer performance. Meaningful learning (which also could be called deeper learning) is indicated by good retention performance and good transfer performance. Thus, the distinguishing feature of meaningful learning (or deeper learning) is the learner’s ability to transfer what was learned to new situations.

Table 10.1. Three types of learning outcomes

Type of outcome                     Retention performance     Transfer performance
No learning                         Poor                      Poor
Rote learning                       Good                      Poor
Meaningful (deeper) learning        Good                      Good

Source: Mayer, R.E. (2010), Applying the Science of Learning, Pearson.
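
Read as a simple decision rule, Table 10.1 maps the two kinds of test performance onto a single outcome label. The short sketch below is purely illustrative – the function name and labels are ours, not Mayer’s – and simply restates the table in executable form:

    def classify_outcome(retention_good: bool, transfer_good: bool) -> str:
        """Label a learning outcome from retention and transfer performance (Table 10.1)."""
        if retention_good and transfer_good:
            return "meaningful (deeper) learning"  # good retention, good transfer
        if retention_good:
            return "rote learning"                 # good retention, poor transfer
        if not transfer_good:
            return "no learning"                   # poor retention, poor transfer
        return "not covered by Table 10.1"         # poor retention, good transfer

    # Example: good retention but poor transfer indicates rote learning.
    print(classify_outcome(retention_good=True, transfer_good=False))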

Mayer (2010) suggests that deeper learning involves developing an interconnected network of five types of knowledge:

  • facts, which are statements about the characteristics or relationships of elements in the universe

  • concepts, which are categories, schemas, models or principles

  • procedures, or step-by-step processes

  • strategies (general methods)

  • beliefs about one’s own learning.

Mentally organising knowledge helps an individual to quickly identify and retrieve the relevant knowledge when trying to solve a novel problem (i.e. when trying to transfer the knowledge). According to Mayer (2010), the way in which a learner organises these five types of knowledge influences whether the knowledge leads to deeper learning and transfer. For example, factual knowledge is more likely to transfer if it is integrated, rather than existing as isolated bits of information, and conceptual knowledge is more likely to transfer if it is mentally organised around schemas, models, or general principles. As the research on expertise and the power law of practice would indicate, procedures that have been practiced until they become automatic and embedded within long-term memory are more readily transferred to new problems than those that require much thought and effort. In addition, specific cognitive and metacognitive strategies, discussed later in this chapter, promote transfer. Finally, development of transferable 21st century skills is more likely if the learner has productive beliefs about his or her ability to learn and about the value of learning. Table 10.2 outlines the cognitive processing of the five types of integrated knowledge and dispositions that, working closely together, support deeper learning and transfer.

Table 10.2. Transferable knowledge

Type of knowledge     Format or cognitive processing
Factual               Integrated, rather than separate facts
Conceptual            Schemas, models, principles
Procedures            Automated, rather than effortful
Strategies            Specific cognitive and metacognitive strategies
Beliefs               Productive beliefs about learning

Source: Adapted from Mayer, R.E. (2010), Applying the Science of Learning, Pearson.

Deeper learning involves co-ordinating all five types of knowledge. The learner acquires an interconnected network of specific facts, automates procedures, refines schemas and mental models, and refines cognitive and metacognitive strategies, while at the same time developing productive beliefs about learning. Through this process the learner develops transferable knowledge, which encompasses not only the facts and procedures that support retention but also the concepts, strategies, and beliefs needed for success in transfer tasks. We view these concepts, thinking strategies, and beliefs as 21st century skills.

This proposed model of transferable knowledge reflects research on the development of expertise, which has documented differences in the knowledge of experts and novices in academic domains such as physics, as well as in other domains of knowledge and skill such as chess and medicine (see Table 10.3). Novices tend to store facts as isolated units, whereas experts store them in an interconnected network. Novices tend to create categories based on surface features, whereas experts create categories based on structural features. Novices need to expend conscious effort in applying procedures, whereas experts have automated basic procedures, thereby freeing them of the need to expend conscious effort in applying them. Novices tend to use general problem-solving strategies such as means–ends analysis, which require a backwards strategy starting from the goal, whereas experts tend to use specific problem-solving strategies tailored to particular kinds of problems in a domain, which involve a forward strategy starting from what is given. Finally, novices may hold unproductive beliefs, such as the idea that their performance depends on ability, whereas experts may hold productive beliefs, such as the idea that if they try hard enough they can solve the problem. In short, analysis of learning outcomes in terms of five types of knowledge has proven helpful in addressing the question of what expert problem solvers know that novice problem solvers do not.

Table 10.3. Expert–novice differences on five kinds of knowledge

Knowledge       Novices          Experts
Facts           Fragmented       Integrated
Concepts        Surface          Structural
Procedures      Effortful        Automated
Strategies      General          Specific
Beliefs         Unproductive     Productive

Source: Adapted from Mayer, R.E. (2010), Applying the Science of Learning, Pearson.

Findings from a vast array of research have important implications for how to organise teaching and learning to facilitate deeper learning and the development of transferable 21st century competencies. As summarised in another NRC report (Pellegrino et al., 2001), research conducted over the past century has:

…clarified the principles for structuring learning so that people will be better able to use what they have learned in new settings. If knowledge is to be transferred successfully, practice and feedback need to take a certain form. Learners must develop an understanding of when (under what conditions) it is appropriate to apply what they have learned. Recognition plays an important role here. Indeed, one of the major differences between novices and experts is that experts can recognise novel situations as minor variants of situations to which they already know how to apply strong methods (p. 87).

For example, we know that experts’ ability to recognise familiar elements in novel problems allows them to apply (or transfer) their knowledge to solve such problems. The research has also clarified that transfer is more likely to occur when the person understands the underlying principles of what was learned. The models children develop to represent a problem mentally, and the fluency with which they can move back and forth among representations, are other important dimensions of transfer that can be enhanced through instruction. The main challenge in designing instruction for transfer is to create learning experiences that will prime appropriate cognitive processing during learning without overloading the learner’s information-processing system.

The connection to disciplinary learning and standards

Deeper learning and the development of 21st century competencies do not happen separately from learning academic content. Rather, deeper learning enables students to thoroughly understand academic content and to recognise when, how and why to apply that content knowledge to solve new problems. Thus, it is important to consider the relationship between concepts of deeper learning and 21st century competencies and the disciplinary standards documents that have been introduced in recent years for English language arts, mathematics and science (CCSS, 2010a, b; Achieve, 2013). Given that these standards will likely shape curriculum and instruction in the United States for many years to come, the 2012 NRC Report considered how each of the disciplinary standards documents aligns with concepts of deeper learning and 21st century competencies as described earlier. What follows is a glimpse of that alignment for the areas of English language arts, mathematics and science learning.

Deeper learning in English language arts

Discussions of how to teach reading and writing in the United States are often contentious, as reflected in the military metaphors used to describe them, such as “the reading wars”. These “wars” reflect the two ends of a wide spectrum of opinions about how to develop reading for understanding. One approach, which can be called the “simple view of reading”, holds that reading comprehension is the product of listening comprehension and decoding. Its proponents argue that students in the early grades should learn all of the letters of the alphabet and their corresponding sounds to a high degree of accuracy, until they are automatic. Once the code is mastered, students will further their understanding of the written word through wide reading of literature, which allows them to gather new ideas about the world.

The opposite position, which might best be called the utilitarian view of reading and writing, instead starts with the ultimate goal of reading in order to motivate children to learn the basic elements of reading. Proponents argue that, beginning in kindergarten, educators should engage children in a quest to make sense of their world through deep engagement with the big ideas that have puzzled humankind for centuries. Then, as they seek new information to understand and shape their world, students will need to use and refine their reading and writing skills. Once students feel the need to learn to read, proponents say, it will be much easier to teach them the decoding and other basic skills they need to transform print into meaning.

Rather than solidly favouring either of these approaches, the research consistently supports a balanced position that includes both approaches. This balance strongly stresses the basic skills of phonemic awareness, alphabet knowledge and decoding for accurate word learning in the early stages of reading development, but places an equal emphasis on reading for meaning at all stages of learning to read. Although there is strong support for emphasising the basics in the all-important early stages of reading, this emphasis need not preclude monitoring one’s reading and writing to see if it makes sense or transferring the reading competencies to disciplinary learning tasks. As students mature and the demands of school curriculum focus more on acquiring disciplinary knowledge, the emphasis on reading for meaning increases.

The Four Resources Model

The Four Resources Model, developed by Australian scholars Freebody and Luke in the 1990s, can be useful in understanding the meaning of deeper learning in the context of English language arts. The model is a set of four different stances that readers can take toward a text, each of which approaches reading differently. A reader can assume any one of these four stances in the quest to make meaning in response to a text.

  1. The reader as decoder asks: What does the text say? In the process, the reader builds a coherent understanding of the text by testing each idea encountered for its coherence with all of the previous ideas in the text.

  2. The reader as meaning maker asks: What does the text mean? In answering that question, the reader seeks to develop meaning based on a) the ideas in the text itself, and b) the reader’s prior knowledge.

  3. The reader as text analyst asks: What tools does the author use to achieve his or her goals and purposes? The text analyst considers how the author’s choice of words, form, and structure shape our regard for different characters or our stance towards an issue, a person, or a group. The reader goes beyond the words and tries to evaluate the validity of the arguments, ideas, and images that the author presents.

  4. The reader as text critic asks questions about intentions, subtexts, and political motives. The text critic assumes that no texts are ideologically neutral, asking such questions as: Whose interests are served or not served by this text? Who is privileged, marginalised or simply absent? What are the political, economic, epistemological or ethical goals of the author?

Reading and writing simultaneously consist of code breaking, meaning making, analysing and critiquing. The stance a reader takes can change from text to text, situation to situation, and even moment to moment when reading a text. Which stance dominates at a particular moment depends on many factors, including the reader’s level of knowledge about and interest in the topic and the purpose of the particular reading task.

Drawing on the four resources model, deeper learning in English language arts can be defined from two perspectives: (1) as favouring activities that are successively higher on the list – those in which the reader acts as meaning maker, text analyst or text critic; or (2) as favouring the management of all four stances based on the reader’s assessment of the difficulty of the text or task and the purpose of the task. In other words, deeper learning means that a student understands when and why it is appropriate to use each stance, as well as how to do so. These two approaches are not mutually exclusive: deeper learning could involve selecting the stance that elicits the skills and processes best fitting the situation or problem a reader faces at a given moment, while also favouring the higher levels – those of the text analyst and critic – whenever it is possible and appropriate to do so.

Deeper learning in the English language arts common core

The widely adopted Common Core State Standards in English language arts are highly supportive of deeper learning, as reflected in the four resources model. For example, the ten college and career readiness “anchor standards”, which represent what high school graduates should know and be able to do, require students to be able to take all four stances toward a text: decoder, meaning maker, analyst and critic. The standards address the basics – including phonemic awareness, phonics and fluency – primarily in the foundational skills addendum to the standards for kindergarten through grade 5 (K-5). The standards also ask students to apply their developing reading skills to acquire disciplinary knowledge in literature, science, and history, especially in grades 6 through 12 – a significant shift away from treating reading as a separate subject.

The domain of cognitive competencies – including such skills as non-routine problem solving and critical thinking – is well represented in the standards, as the figure below shows. In contrast, serious consideration of the interpersonal and intrapersonal domains is missing. However, recent research in English language arts demonstrates the potential for developing competencies in these domains. Such work also illustrates the way in which the standards engage students in using reading, writing and language practice to acquire knowledge of the disciplines. These opportunities for additional practice of English language arts support deeper learning and transfer.

Figure 10.1. English language arts

Deeper learning in mathematics

Research studies provide a clear, consistent picture of typical school mathematics instruction in the United States. What we know is largely derived from two kinds of data and associated research analyses. One type of study that has been carried out over several decades has involved direct observation of classroom teaching (e.g. Hiebert et al., 2005; Stake and Easley, 1978; Stigler et al., 1999; Stodolsky, 1988), and another has used teacher self-reported data from surveys (e.g. Grouws, Smith and Sztajn, 2004; Weiss et al., 2001).

These studies present a remarkably consistent characterisation of mathematics teaching in upper elementary school and middle-grade classrooms in the United States: Students generally work alone and in silence, with little opportunity for discussion and collaboration and little to no access to suitable computational or visualisation tools. They focus on low-level tasks that require memorising and recalling facts and procedures rather than tasks requiring high-level cognitive processes, such as reasoning and connecting ideas or solving complex problems. The curriculum includes a narrow band of mathematics content (e.g. arithmetic in the elementary and middle grades) that is disconnected from real-world situations, and a primary goal for students is to produce answers quickly and efficiently without much attention to explanation, justification, or the development of meaning (e.g. Stigler and Hiebert, 1999; Stodolsky, 1988). Research evidence regarding how people learn best when the goal is developing understanding (National Research Council, 1999) strongly indicates that such pedagogy is at odds with goals aimed at deeper learning and transfer.

Although this pervasive approach to mathematics teaching has not been directly established as the cause of the generally low levels of student achievement, it is difficult to deny the plausibility of such a connection. In response, an array of reform initiatives has been aimed at changing what and how mathematics is taught and learned in American schools. While the reformers disagree over some issues, they share the goal of giving students more opportunities to learn what is called “mathematics with understanding”. As summarised by Silver and Mesa (2011: 69), teaching mathematics for understanding is sometimes referred to as:

…authentic instruction, ambitious instruction, higher order instruction, problem-solving instruction, and sense-making instruction (e.g. Brownell and Moser, 1949; Brownell and Sims, 1946; Carpenter, Fennema, and Franke, 1996; Carpenter et al., 1989; Cohen, 1990; Cohen, McLaughlin, and Talbert, 1993; Fuson and Briars, 1990; Hiebert and Wearne, 1993; Hiebert et al., 1996; Newmann and Associates, 1996). Although there are many unanswered questions about precisely how teaching practices are linked to students’ learning with understanding (see Hiebert and Grouws, 2007), the mathematics education community has begun to emphasize teaching that aims for this goal.

Studies over the past 60 years provide a solid body of evidence about the benefits of teaching mathematics in this way. Hallmarks of teaching mathematics for understanding include using:

  1. Cognitively demanding mathematical tasks drawn from a broad array of content areas. Although research has shown that it is not easy for teachers to use cognitively demanding tasks well in classrooms, those tasks can lead to increased student understanding, the development of problem solving and reasoning, and greater overall student achievement.

  2. Teaching practices that support collaboration and mathematical discourse among students and that engage them in mathematical reasoning and explanation, consideration of real-world applications, and use of technology or physical models.

The latest reform effort in the United States targeting mathematics for understanding has been the Common Core State Standards for Mathematics. If widely implemented, the new standards would enable a giant leap forward in the learning of mathematics with understanding.

Deeper learning in the Common Core mathematics standards

The new Common Core standards emphasise deeper learning of mathematics – learning with understanding – and the development of usable, transferable mathematical competencies. They do so by identifying several important learning goals: critical thinking, problem solving, constructing and evaluating evidence-based arguments, systems thinking and complex communication.

As shown in the figure below, these standards correspond most strongly with 21st century competencies in the cognitive domain. The two most prominent areas of overlap are in the themes of argumentation/reasoning and problem solving. These themes are central to mathematics and have long been viewed as key leverage points in efforts to teach mathematics for understanding. The theme of argumentation/reasoning is explicitly stated in two of the standards for mathematical practice: “Reason abstractly and quantitatively” and “Construct viable arguments and critique the reasoning of others”. The standards also deal explicitly with problem solving; the first standard in the category of mathematical practice is “make sense of problems and persevere in solving them”.

Unlike competencies in the cognitive domain, those in the intrapersonal and interpersonal domains are not particularly prominent in the standards. However, the standards for mathematical practice give some attention to the intrapersonal competencies of self-regulation, persistence and the development of an identity as someone who can do mathematics.

Figure 10.2. Mathematics

Deeper learning in science

As with the English language arts and mathematics, how best to teach science has often been a matter of controversy. Conflicts over science education have traditionally been about the relative importance of content (facts, formulas, concepts and theories) versus process (scientific method, inquiry and discourse). Historically, science teaching in American classrooms has placed a heavy emphasis on content – generally in the form of memorising isolated facts. In an attempt to correct this overemphasis, reformers in the 1990s shifted the focus to “inquiry”. This reform effort, however, led to unintended consequences due to insufficient understanding of the nature of scientific inquiry, which came to be associated primarily with hands-on science. While hands-on activities can be effective if they are designed with clear learning goals and are thoughtfully integrated with the learning of science content, such integration is not typical in American high schools. Instead, overemphasis on hands-on activities has led to the neglect of other aspects of scientific inquiry such as critical reasoning, analysis of evidence, development of models and written and oral discourse.

In addition, some advocates for hands-on science have tended to treat scientific methodology as divorced from content. Many students, for instance, are introduced to a generic “scientific method”, which is presented as a fixed linear sequence of steps that students are often asked to apply in a superficial or scripted way, designed to produce a particular result. This approach to the scientific method often distorts the processes of inquiry as they are actually practiced by scientists. In the work of scientists, content and process are not disconnected. Rather, they are deeply intertwined: Scientists view science as both a body of established knowledge and an ongoing process of discovery that can lead to revisions in that body of knowledge. Sophisticated science learning involves students’ learning both content knowledge and process skills in a simultaneous, mutually reinforcing way.

Science in current classrooms

As with mathematics, today’s science classrooms generally do not reflect the research on how students learn science. The standard curriculum has been criticised as being “a mile wide and an inch deep”. Large science textbooks cover many topics with little depth, providing little guidance on how to place the learning of science concepts and processes in the context of meaningful real-world problems. As teachers try to cover the broad curriculum, they give insufficient attention to students’ understanding and instead focus on superficial recall-level questions.

Similarly, at the high school level, laboratory activities that typically take up about one science class period each week are disconnected from the flow of science instruction. Instead of focusing on clear learning objectives, laboratory manuals and teachers often emphasise procedures, leaving students uncertain about what they are supposed to learn. Furthermore, these activities are rarely designed to integrate the learning of science content and processes. During the rest of the week, students spend time listening to lectures, reading textbooks and preparing for tests that emphasise recall of disparate facts.

Making matters worse, during the past decade, time and resources for science education in the US have often been cut back because of the No Child Left Behind law. Since this legislation does not count science test scores when measuring the yearly progress of schools, the emphasis has been on English and mathematics.

Deeper learning in the K-12 science education framework

An attempt to better integrate scientific content and processes and to focus on depth rather than breadth of knowledge began with the 2012 release of the National Research Council’s Framework for K-12 Science Education. The framework explains in detail what all students should know and be able to do in science by the end of high school. Standards based on the framework have been developed by a group of states, coordinated by the non-profit organisation “Achieve”. An overarching goal expressed in the framework is to ensure that all students—whether or not they pursue careers in the fields of science, technology, engineering and mathematics (STEM)—have “sufficient knowledge of science and engineering to engage in public discussions on related issues, are careful consumers of scientific and technological information related to their everyday lives, and are able to continue to learn about science outside of school”. In other words, the goal is the development of transferable science knowledge.

The framework has three dimensions, which are conceptually distinct but are integrated in practice in the teaching, learning, and doing of science and engineering:

  1. Disciplinary core ideas. By identifying and focusing on a small set of core ideas in each discipline, the framework attempts to reduce the long and often disconnected catalogue of factual knowledge that students currently must learn. Core ideas in physics include energy and matter, for example, and core ideas in the life sciences include ecosystems and biological evolution. Students encounter these core ideas over the course of their school years at increasing levels of sophistication, deepening their knowledge over time.

  2. Cross-cutting concepts. The framework identifies seven cross-cutting concepts that have importance across many disciplines, such as patterns, cause and effect, and stability and change.

  3. Practices. Eight key science and engineering practices are identified, such as asking questions (for science) and defining problems (for engineering); planning and carrying out investigations; and engaging in argument from evidence.

The framework emphasises that disciplinary knowledge and scientific practices are intertwined and must be coordinated in science and engineering education. By engaging in the practices of science and engineering, students gain new knowledge about the disciplinary core ideas and come to understand the nature of how scientific knowledge develops.

The figure below shows areas of overlap between the framework and 21st century skills. Cognitive skills—especially critical thinking, non-routine problem solving, and constructing and evaluating evidence-based arguments—are all strongly supported in the framework, as is complex communication. In the domain of interpersonal skills, the framework provides strong support for collaboration and teamwork; a prominent theme is the importance of understanding science and engineering as a social enterprise conducted in a community, requiring well-developed skills for collaborating and communicating. The framework also supports adaptability, in the form of the ability and inclination to revise one’s thinking or strategy in response to evidence and review by one’s peers.

In terms of intrapersonal skills, the framework gives explicit support to metacognitive reasoning about one’s own thinking and working processes, as well as the capacity to engage in self-directed learning about science and engineering throughout one’s lifetime. Support for motivation and persistence, attitudes, identity and value issues, and self-regulation is weaker or more indirect.

Figure 10.3. Science and engineering

The foregoing summary of the alignment of the Common Core standards in English language arts and mathematics and the National Research Council Framework for K-12 Science Education with concepts of deeper learning and 21st century competencies highlights that, in all three cases, the standards focus on key disciplinary ideas and practices of the kind that promote deeper learning, which in turn can support transfer. While they have a decided bias towards cognitive competencies, as one might expect given the disciplinary focus, they neither ignore nor contradict an emphasis on integrating the cognitive competencies with those in the interpersonal and intrapersonal domains.

Teaching for transfer

While the evidence indicates that various cognitive competencies are teachable and learnable in ways that promote transfer, such instruction remains rare in American classrooms; few effective strategies and programmes to foster deeper learning exist. Research and theory suggest a set of principles that can guide the development of such strategies and programmes, as discussed below. It is important to note that the principles are derived from research that has focused primarily on transfer of knowledge and skills within a single topic area or domain of knowledge.

How can teachers aid students’ deeper learning of subject matter and promote transfer? Addressing this seemingly simple question has been a central task of researchers for more than a century, and in the past several decades they have made progress toward evidence-based answers. Applying the instructional principles below will aid students’ deeper learning of subject-matter content in any discipline. Because deeper learning takes time and repeated practice, instruction aligned with these principles should begin in preschool and continue across all levels of learning, from kindergarten through college and beyond. Teaching in these ways will make it more likely that students will come to understand the general principles underlying the specific content they are learning and be able to transfer their knowledge to solve new problems in the same subject area. These principles and practices are based on research in the cognitive domain. They have not been studied in terms of developing transferable competencies in the interpersonal and intrapersonal domains, but it is plausible that they are applicable.

Use multiple and varied representations of concepts and tasks, and help students understand how different representations of the same concept are “mapped” or related to one another. Research has shown that adding diagrams to a text or adding animation to a narration that describes how a mechanical or biological system works can increase students’ performance on a subsequent problem-solving transfer test. In addition, allowing students to use concrete objects to represent arithmetic procedures has been shown to increase their performance on transfer tests. This finding has been shown both in classic studies in which bundles of sticks are used to represent two-column subtraction and in an interactive, computer-based lesson in which students move a bunny along a number line to represent addition and subtraction of numbers.
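
As a purely hypothetical illustration of pairing two representations of the same concept – it does not describe any particular lesson or study cited above – the following sketch prints an addition fact both symbolically and as a hop along a text-based number line, making the mapping between the two representations explicit:

    def symbolic(a: int, b: int) -> str:
        """Symbolic representation of an addition fact."""
        return f"{a} + {b} = {a + b}"

    def number_line(a: int, b: int, length: int = 10) -> str:
        """Number-line representation: start at a (S) and land on a + b (E)."""
        assert 0 <= a and 0 <= b and a + b <= length
        ticks = " ".join(f"{n:2d}" for n in range(length + 1))
        marks = [" ."] * (length + 1)
        marks[a] = " S"        # starting point
        marks[a + b] = " E"    # landing point after hopping b steps
        return ticks + "\n" + " ".join(marks)

    # The same fact, shown in two linked representations.
    print(symbolic(3, 4))
    print(number_line(3, 4))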

Encourage elaboration, questioning and self-explanation. The techniques of elaboration, questioning, and self-explanation require students to actively engage with the material—going beyond memorising to process the content in their own words. Some specific techniques that have been shown to aid deeper learning include:

  • prompting students who are reading a text to explain the material to themselves aloud, in their own words, as they read

  • asking students certain questions about material they have just read or been taught—such as why, how, what if, what if not, and so what

  • using teaching practices that establish classroom norms of students’ questioning each other and justifying their answers

  • asking learners to summarise what they have learned in writing

  • having students test themselves without external feedback, for example, by asking themselves questions about material they have just read.

Engage learners in challenging tasks, with supportive guidance and feedback. Over 40 years of research has shown that asking students to solve challenging problems in science and other disciplines without appropriate guidance and support is ineffective at promoting deeper learning. In contrast, asking students to solve challenging problems while providing specific cognitive guidance along the way does promote deeper learning. For example, there is no compelling evidence that beginners deeply learn science concepts or processes simply by freely exploring a science simulation or game; however, if they receive guidance in the form of advice, feedback, prompts or the completion of some parts of the task for them, they are more likely to learn the content deeply.

Teach with examples and cases. Using examples and cases can help students see how a general principle or method is relevant to a variety of situations and problems. One approach is a worked-out example, in which a teacher models how to carry out a procedure—for example, solving probability problems—while explaining it step by step. Offering worked-out examples to students as they begin to learn a new procedural skill can help them develop deeper understanding of the skill. In particular, deeper learning is facilitated when the problem is broken down into conceptually meaningful steps that are clearly explained; the explanations are gradually taken away with increasing practice.

Prime student motivation. Another way to promote deeper learning is to prime students’ motivation so that they are willing to exert the effort to learn. Research shows that students learn more deeply when they:

  • attribute their performance to effort rather than to ability

  • have the goal of mastering the material rather than the goal of performing well or not performing poorly

  • expect to succeed on a learning task and value the learning task

  • believe they are capable of achieving the task at hand

  • believe that intelligence is changeable rather than fixed

  • are interested in the task.

There is promising evidence that these kinds of motivational approaches can be fostered in learners through such techniques as peer modelling. For example, elementary school students showed increased self-confidence (an intrapersonal competency) for solving subtraction problems and increased test performance after watching a peer demonstrate how to solve subtraction problems while exhibiting high self-efficacy (such as saying “I can do that one” or “I like doing these”).

Use formative assessment. A formative assessment is one that is used throughout the learning process to monitor students’ progress and adjust instruction when needed, in order to continually improve student learning. It is different from traditional “summative” assessment, which focuses on measuring what a student has learned at the end of a set period of time. Deeper learning is enhanced when formative assessment is used to:

  • make learning goals clear to students

  • continuously monitor, provide feedback, and respond to students’ learning progress

  • involve students in peer- and self-assessment.

These uses of formative assessment are grounded in research demonstrating that practice is essential for deeper learning and skill development, while practice without feedback yields little learning. Formative assessment also involves a change in instructional practice: it is not a regular part of most teachers' practice, and limitations in teachers' pedagogical content knowledge may impede its realisation (Heritage et al., 2009; Herman, Osmundson and Silver, 2010).

Learning goals and targets of assessment

Educational interventions may reflect different theoretical perspectives on learning and may target different skills or domains of competence. In all cases, however, the design of instruction for transfer should start with a clear delineation of the learning goals and a well-defined model of how learning is expected to develop (NRC, 2001). The model—which may be hypothesised or established by research—provides a solid foundation for the coordinated design of instruction and assessment aimed at supporting students’ acquisition and transfer of targeted competencies.

Designing measures to evaluate students' accomplishment of the learning goals can be an important starting point for the development process: outcome measures provide a concrete representation of the ultimate learning performances expected of students and of the key junctures along the way, which in turn enables close coordination of intended goals, learning environment characteristics, programmatic strategies and performance outcomes. Such assessments also communicate to educators and learners, as well as to designers, what knowledge, skills and capabilities are valued (Resnick and Resnick, 1992; Herman, 2008). An evidence-based approach to assessment rests on three pillars that need to be closely synchronised (Pellegrino et al., 2001: 44); a schematic sketch of how these pillars fit together follows the list:

  • A model of how students represent knowledge and develop competence in a domain

  • Tasks or situations that allow one to observe student performance relative to the model

  • An interpretation framework for drawing inferences from student performance.
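
To make the relationship among the three pillars concrete, the sketch below represents them as linked components of a hypothetical assessment design. The class names, competency levels and inference rule are illustrative assumptions only; they are not elements of Pellegrino et al. (2001) or of any operational system described in this chapter.

```python
from dataclasses import dataclass

@dataclass
class CompetencyModel:
    """Pillar 1: a model of how students represent knowledge and develop competence."""
    domain: str
    levels: list[str]  # ordered from least to most sophisticated

@dataclass
class ObservationTask:
    """Pillar 2: a task or situation that elicits performance relevant to the model."""
    prompt: str
    evidence_rules: dict[str, str]  # observable feature -> model level it supports

class InterpretationFramework:
    """Pillar 3: rules for drawing inferences about a learner from observed evidence."""

    def infer_level(self, model: CompetencyModel, task: ObservationTask,
                    observed: list[str]) -> str:
        # Map each observed feature to a model level, then report the highest
        # level for which supporting evidence was observed (a deliberately crude rule).
        supported = {task.evidence_rules[f] for f in observed if f in task.evidence_rules}
        for level in reversed(model.levels):
            if level in supported:
                return level
        return model.levels[0]

# Minimal usage example (all names and levels are invented for illustration).
model = CompetencyModel(
    domain="proportional reasoning",
    levels=["additive strategies", "partial proportional reasoning",
            "full proportional reasoning"])
task = ObservationTask(
    prompt="Scale a recipe from 4 servings to 10 servings.",
    evidence_rules={"uses unit rate": "full proportional reasoning",
                    "adds a fixed amount": "additive strategies"})
print(InterpretationFramework().infer_level(model, task, observed=["uses unit rate"]))
# -> full proportional reasoning
```

The point of the sketch is simply that the three pillars must be designed together: the tasks only yield interpretable evidence because their scoring rules refer back to the levels defined in the competency model.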

Developing that first pillar—a model of the learning outcomes to be assessed—offers a first challenge in the assessment of cognitive, intrapersonal, and interpersonal competencies. Within each of these three broad domains, theorists have defined and conducted research on a wealth of individual constructs. In the previous sections we noted that the research literature on cognitive and non-cognitive competencies has used a wide variety of definitions, particularly in the intrapersonal and interpersonal domains. Questions remain, however, about the implications of these definitions. For example, the range of contexts and situations across which the learning of these competencies should transfer remains unclear.

A second challenge arises from the existing models and methodologies for observing and interpreting students’ responses relative to these constructs. It is widely acknowledged that most current large-scale measures of educational achievement do a poor job of reflecting deeper learning goals in part because of constraints on testing formats and testing time (Webb, 1999). While a variety of well-developed exemplars exist for constructs in the cognitive domain, those for inter- and intrapersonal competencies are less well developed. Below, we briefly discuss examples of measures for each domain of competence (for a fuller discussion of this topic see NRC 2011a).

Measures of cognitive competence. Promising examples of measures focused on important cognitive competencies can be found in national and international assessments, in training and licensing tests, and in initiatives currently underway in American grades K–12. One example is the computerised problem-solving component of the Programme for International Student Assessment (PISA), which was operationally administered in 2012 (National Research Council, 2011b). In this 40-minute test, items are grouped in units around a common problem, which keeps reading and numeracy demands to a minimum. The problems are presented within realistic, everyday contexts, such as refuelling a moped, playing on a handball team, mixing a perfume, feeding cats, mixing elements in a chemistry lab, and taking care of a pet. The difficulty of the items is manipulated by increasing the number of variables or the number of relationships that the test taker has to deal with.

Scoring of the items reflects the PISA 2012 framework, which defines four processes that are components of problem solving: (1) information retrieval, (2) model building, (3) forecasting, and (4) monitoring and reflecting. Points are awarded for information retrieval, based on whether the test taker recognises the need to collect baseline data and uses the method of manipulating one variable at a time. Scoring for the process of model building reflects whether the test taker generates a correct model of the problem. Scoring of forecasting is based on the extent to which responses to the items indicate that the test taker has set and achieved target goals. Finally, points are awarded for monitoring and reflecting, which includes checking the goal at each stage, detecting unexpected events, and taking remedial action if necessary.
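
As a purely illustrative sketch of process-based scoring of this kind, the fragment below awards points for each of the four processes on the basis of simple indicator flags. The indicator names, point values and aggregation rule are assumptions made for illustration and do not reproduce the actual PISA 2012 scoring rules.

```python
def score_problem_solving(log: dict) -> dict:
    """Award illustrative points for four problem-solving processes.

    `log` is a hypothetical record of a test taker's interaction with one unit.
    """
    scores = {}
    # (1) Information retrieval: baseline data collected; variables varied one at a time.
    scores["information_retrieval"] = (int(log.get("collected_baseline", False))
                                       + int(log.get("varied_one_at_a_time", False)))
    # (2) Model building: credit for generating a correct model of the problem.
    scores["model_building"] = 2 if log.get("model_correct", False) else 0
    # (3) Forecasting: partial credit for the share of target goals set and achieved.
    total = max(log.get("targets_total", 0), 1)
    scores["forecasting"] = round(2 * log.get("targets_reached", 0) / total)
    # (4) Monitoring and reflecting: checking goals and remediating unexpected events.
    scores["monitoring_reflecting"] = (int(log.get("checked_goal_each_stage", False))
                                       + int(log.get("remediated_errors", False)))
    return scores

# Usage with a fully successful (hypothetical) interaction log.
print(score_problem_solving({"collected_baseline": True, "varied_one_at_a_time": True,
                             "model_correct": True, "targets_reached": 3, "targets_total": 3,
                             "checked_goal_each_stage": True, "remediated_errors": True}))
```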

Another promising example of assessment of complex cognitive competencies, created by the National Conference of Bar Examiners, consists of three multi-state examinations that jurisdictions may use as one step in the process of licensing lawyers. The three examinations are the Multistate Bar Examination (MBE), the Multistate Essay Examination (MEE) and the Multistate Performance Test (MPT). All are paper-and-pencil tests that are designed to measure the knowledge and skills necessary to be licensed in the profession and to ensure that the newly licensed professional knows what he or she needs to know to practice. These overarching goals reflect an assumption that law students need to have developed transferable knowledge that they will be able to apply when they become lawyers.

These and other promising examples each start with a strong model of the competencies to be assessed; use simulated cases and scenarios to pose problems that require extended analysis, evaluation, and problem solving; and apply sophisticated scoring models to support inferences about student learning. The PISA example, in addition, demonstrates the dynamic and interactive potential of technology to simulate authentic problem-solving situations.

The PISA problem-solving test is one of a growing set of examples that use technology to simultaneously engage students in problem solving and assess their problem-solving skills. Another example is “SimScientists”, a simulation-based curriculum unit that includes a sequence of assessments designed to measure student understanding of ecosystems (Quellmalz, Timms and Buckley, 2010). The SimScientists summative assessment is designed to measure middle-school students’ understanding of ecosystems and scientific inquiry. Students are presented with the overarching task of describing an Australian grassland ecosystem for an interpretive centre and respond by drawing food webs and conducting investigations with the simulation. Finally, they are asked to present their findings about the grasslands ecosystem. SimScientists also includes elements focusing on transfer of learning, as described in a previous NRC report (National Research Council, 2011b: 94).

To assess transfer of learning, the curriculum unit engages students with a companion simulation focusing on a different ecosystem (a mountain lake). Formative assessment tasks embedded in both simulations identify the types of errors individual students make, and the system follows up with graduated feedback and coaching. The levels of feedback and coaching progress from notifying the student that an error has occurred and asking him or her to try again, to showing the results of investigations that met the specifications.

Students use this targeted, individual feedback to engage with the tasks in ways that improve their performance. Practice is essential for deeper learning, but knowledge is acquired much more rapidly if learners receive information about the correctness of their results and the nature of their mistakes.
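
The escalation from minimal notification to fuller support can be summarised schematically. The sketch below is a hypothetical simplification of such graduated feedback, not the actual SimScientists implementation; the messages, error labels and thresholds are assumed for illustration.

```python
def graduated_feedback(error_type: str, attempt: int) -> str:
    """Return progressively more supportive coaching as repeated errors accumulate.

    Hypothetical escalation inspired by the description above: notify first,
    then hint, then show the results of an investigation that met the specifications.
    """
    if attempt == 1:
        return f"An error occurred ({error_type}). Please try again."
    if attempt == 2:
        return f"Hint: re-examine the part of your investigation related to {error_type}."
    # After repeated errors, show a worked result rather than another prompt.
    return ("Here are the results of an investigation that met the specifications; "
            "compare them with your own.")

# Usage: a simulation could call this each time the same error type recurs.
for attempt in range(1, 4):
    print(graduated_feedback("incomplete food web", attempt))
```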

Combining expertise in content, measurement, learning, and technology, these assessment examples employ evidence-centred design and are developing full validity arguments. They reflect the emerging consensus that problem solving must be assessed as well as developed within specific content domains (as discussed in the previous section; also see National Research Council, 2011a). In contrast, many other current technology-based projects designed to impact student learning lack a firm assessment or measurement basis (National Research Council, 2011b).

Project and problem-based learning and performance assessments that require students to engage with novel, authentic problems and to create complex, extended responses in a variety of media would seem to be prime vehicles for measuring important cognitive competencies related to transfer. What remains to be seen, however, is whether such assessments are valid for their intended uses and whether the reliability of scoring and the generalisability of results can achieve acceptable levels of rigour, thereby avoiding the validity and reliability problems that have beset complex performance assessments in the past (e.g. Linn, Baker and Dunbar, 1991; Shavelson, Baxter and Gao, 1993).

Measures of interpersonal and intrapersonal competence. There are few well-established practical assessments for interpersonal competencies that are suitable for use in schools, with the exception of tests designed to measure those skills related to formal written and oral communication. Some large-scale measures of collaboration were developed as part of performance assessments during the 1990s, but the technical quality of such measures was never firmly established. The development of those assessments revealed an essential tension between the nature of group work and the need to assign valid scores to individual students. Today there are examples of teacher-developed assessments of teamwork and collaboration being used in classrooms, but technical details are lacking.

Most well-established instruments for measuring interpersonal competencies have been developed for research and theory-building or for employee selection purposes, rather than for use in schools. These instruments tend to be one of four types: surveys (self-reports and informant reports), social network analysis, situational judgment tests, or behavioural observations (Salas, Bedwell and Fiore, 2011). Potential problems arise when applying any of these methods to large-scale educational assessment, to which stakes are often attached. Stakes are high when significant positive or negative consequences are applied to individuals or organisations based on their test performance – consequences such as high school graduation, grade to grade promotion, specific rewards or penalties and special programme placement. Stakes attached to large-scale assessment results heighten the need for reliability and validity as well as attention to concerns such as security and feasibility in terms of cost and administration conditions. Each of the instrument types has limitations relative to these criteria. Self-report, social network analysis, and situational judgment tests, which can provide relatively efficient, reliable, and cost-effective measures, are all subject to social desirability bias, the tendency to give socially desirable and socially rewarded rather than honest responses to assessment items or tasks. Some situational judgment tests used for employee selection are carefully designed to correct for social desirability bias. However, if any of these three types of assessment instruments were used for educational purposes, where high stakes consequences were attached to the results, social desirability bias would likely be heightened.

Behavioural ratings, in contrast, present challenges in assuring reliability and cost feasibility. For example, if students’ interpersonal skills are assessed on the basis of self, peer or teacher ratings of student presentations of portfolios of their past work (including work as part of a team), a number of factors may limit the reliability and validity of the scores, including differences in the nature of the interactions reflected in the portfolios for different students or at different times, differences in raters’ application of the scoring rubric, and differences in the groups with whom individual students have interacted. This lack of uniformity in the sample of interpersonal skills included in the portfolio poses a threat to both validity and reliability (National Research Council, 2011a). Dealing with these threats takes additional time and money, beyond that required simply to present and score the student presentations.

Collaborative problem-solving tasks currently being evaluated by PISA offer one of the few examples today of a direct, large-scale assessment targeting social and collaboration competencies; other prototypes are under development by the ATC21S project and by the military. The quality and practical feasibility of any of these measures are not yet fully documented. However, like many of the promising cognitive measures, these rely on the abilities of technology to engage students in interaction, to simulate others with whom students can interact, to track students’ ongoing responses, and to draw inferences from those responses.

As is the case with interpersonal skills, many of the existing instruments for measuring intrapersonal skills have been designed for research and theory-development purposes and thus have the same limitations for large-scale educational use as the instruments for measuring interpersonal skills. These instruments include surveys (self-reports and informant reports), situational judgment tests and behavioural observations. As with the assessment of interpersonal competencies, it is possible that evidence of intrapersonal competencies could be elicited from the process and products of student work on suitably designed complex tasks. For example, project- or problem-based performance assessments could, in principle, be designed to include opportunities for students to demonstrate metacognitive strategies or persistence in the face of obstacles. Student products could be systematically observed or scored for evidence of the targeted competencies, and these scores could then count towards student grades or scores on end-of-year accountability assessments. To date, however, strong design methodologies, interpretive frameworks and approaches to assuring score reliability, validity and fairness have not been developed for such project- or problem-based performance assessments.

In summary, there are a variety of constructs and definitions of cognitive, intrapersonal and interpersonal competencies, and a paucity of high-quality measures for assessing them. All of the examples discussed above are measures of maximum performance rather than of typical performance (see Cronbach, 1970): they measure what students can do rather than what they are likely to do in a given situation or class of situations. While the cognitive domain usually focuses on measures of maximum performance, typical performance (i.e. what students are likely to do) may be the primary focus of measures for some inter- and intrapersonal competencies. For example, measures of dispositions and attitudes related to conscientiousness, multicultural sensitivity and persistence could be designed to assess typical performance. In comparison to measures of maximum performance, measures of typical performance require more complex designs and tend to be less stable and reliable (Patry, 2011).

Taken together, the variety of definitions of constructs across the three domains of competence and the lack of high-quality measures pose challenges for the teaching, learning and assessment of 21st century competencies. Some of these challenges are considered further in the next section.

Implications and challenges

Current teaching practices in many classrooms in the US and elsewhere across the globe do not encourage deeper learning of subject matter. Helping students develop the full range of 21st century competencies— including those in the interpersonal and intrapersonal domains—will require changes across many elements of the education system, including curriculum, instruction, assessment, and teacher education and professional development.

In the area of curriculum and instruction, further research and development is needed to create more specific instructional materials and strategies that can help develop transferable competencies. Future curricula, inspired by the concept of deeper learning, should integrate learning across the cognitive, interpersonal and intrapersonal domains in whatever ways are most appropriate for the targeted learning goals. Multiple stakeholder groups should actively support the development and use of curriculum and instructional programmes that include research-based teaching methods to support deeper learning, such as those discussed earlier in this chapter.

In the area of assessment, research has shown that assessment and feedback play an essential role in the deeper learning of cognitive competencies. In particular, ongoing formative assessment by teachers can provide guidance to students that supports and extends their learning, encouraging deeper learning and development of transferable competencies. Current educational policies, however, focus on summative assessments that measure mastery of content and often hold schools and districts accountable for improving student scores on such assessments. Although this focus on summative assessment poses a challenge to the wider teaching and learning of 21st century competencies, recent policy developments do appear to open the window for a wider diffusion of interventions to develop such competencies. For example, a previous section of this chapter noted that the new Common Core State Standards and the Framework for K–12 Science Standards include facets of 21st century competencies.

While new national goals that encompass 21st century competencies have been articulated in the Common Core State Standards for English language arts and mathematics, and in the NRC’s science standards framework, the extent to which these goals are realised in educational settings will be strongly influenced by their inclusion in district, state and national assessments. Because educational policy remains focused on outcomes from summative assessments that are part of accountability systems, teachers and administrators will focus instruction on whatever is included in state assessments. Thus, as new assessment systems are developed to reflect the new standards in English language arts, mathematics and science, significant attention will need to be given to the design of tasks and situations that call upon a range of important 21st century competencies as applied in each of the major content areas.

Although improved assessments would facilitate a wider focus on teaching approaches that support the development of 21st century competencies, there are a number of challenges to developing such assessments. First, research to date has focused on a wide variety of different constructs in the cognitive, intrapersonal, and interpersonal domains. Although the taxonomy presented earlier offers a useful starting point, further research is needed to more carefully organise, align and define these constructs. There are also psychometric challenges. Although progress has been made in assessing cognitive skills, much further research is needed to develop assessments of intrapersonal and interpersonal skills that are suitable for both formative and summative assessment uses in educational settings. Experiences during the 1980s and 1990s in the development and implementation of performance assessments and assessments with open-ended tasks can offer valuable insights, but assessments must be reliable, valid, and fair if they are to be widely used in formal and informal learning environments.

A third challenge involves political and economic forces influencing assessment development and use. Traditionally, policy makers have favoured the use of standardised, on-demand, end-of-year tests for purposes of accountability. Composed largely of selected response items, these tests are relatively cheap to develop, administer and score; have sound psychometric properties; and provide easily quantifiable and comparable scores for assessing individuals and institutions. However, such standardised tests have not been conducive to measuring or supporting the process of deeper learning nor to the development of 21st century competencies. In the face of current fiscal constraints at the federal and state levels, policymakers in the US may seek to minimise assessment costs by maintaining lower-cost, traditional test formats, rather than incorporating into their systems relatively more expensive, richer performance- and curriculum-based assessments that may better measure 21st century competencies. The fourth challenge involves teacher and administrator capacity to understand and interpret the new assessments. The features of instruction and assessment discussed earlier in this chapter are not well known to teachers, students or school administrators.

In the areas of teacher education and professional development, current systems and programmes will require major changes if they are to support teaching that encourages deeper learning and the development of transferable knowledge and skills. Changes will need to be made not only in the conceptions of what constitutes effective professional practice but also in the purposes, structure and organisation of pre-service and professional learning opportunities (Darling-Hammond, 2006; Garrick and Rhodes, 2000; Lampert, 2010; Webster-Wright, 2009). For example, Windschitl (2009) proposed that developing 21st century competencies in the context of science will require ambitious new teaching approaches that will be unlike the science instruction that most teachers have participated in or even witnessed.

To address these teacher learning challenges, Wilson (2011), Windschitl (2009) and others have recommended replacing current, disjointed teacher learning opportunities with more integrated continuums of teacher preparation, induction, support and ongoing professional development. Within such a continuum, Windschitl (2009) proposed that teacher preparation programmes should centre on a common core curriculum grounded in substantial knowledge of child or adolescent development, learning and subject-specific pedagogy. He also suggested that such programmes provide future teachers with extended opportunities to practice under the guidance of mentors (student teaching), lasting at least 30 weeks, that reflect the programme’s vision of good teaching and that are interwoven with coursework.

Research to date has identified other characteristics of effective teacher preparation programmes, including extensive use of case study methods, teacher research, performance assessments and portfolio examinations that are used to relate teachers’ learning to classroom practice (Darling-Hammond, 1999). Deeper learning and the acquisition of 21st century skills—for both teachers and their students—might also be supported through preparation programmes that help new teachers make effective use of study groups, peer learning, managed classroom discussions and disciplined discourse routines (Ghousseini, 2009; Monk and King, 1994). Wilson (2011) and others have noted that one of the most promising practices for both induction and professional development involves bringing teachers together to analyse samples of student work, such as drawings, explanations or essays, or to observe videotaped classroom dialogues. Working from principled analyses of how the students are responding to the instruction, the teachers can then change their instructional approaches accordingly.

Windschitl (2009) identified a number of features of professional development that could help science teachers implement new teaching approaches to cultivate students’ 21st century skills in the context of science. These features are:

  • active learning opportunities focusing on science content, scientific practice and evidence of student learning (Desimone et al., 2002)

  • coherence of the professional development with teachers’ existing knowledge, with other development activities, with existing curriculum and with standards in local contexts (Garet et al., 2001; Desimone et al., 2002)

  • the collective development of an evidence-based “inquiry stance” by participants towards their practice (Blumenfeld et al., 1991; Kubitskey and Fishman, 2006)

  • the collective participation by teachers from the same school, grade or subject area (Desimone et al., 2002)

  • adequate time, both for planning and enacting new teaching practices.

More broadly, across the disciplines, pre-service teachers and in-service teachers will need opportunities to engage in the kinds of teaching and learning environments envisioned in this chapter and in the 2012 NRC report. Experiencing instruction designed to support transfer will help them to design and implement such environments in their own classrooms. Teachers will also need opportunities to learn about different approaches to assessment and the purposes of these different approaches. As noted earlier, most teachers are not familiar with formative assessment and do not regularly incorporate it in their teaching practice (Heritage et al., 2009; Herman, Osmundson and Silver, 2010).

In thinking about the implications of the work discussed in this chapter, it is worth reminding ourselves that 21st century competencies support learning of school subjects in particular and educational attainment more generally. Thus, more explicit attention to the development of these skills in school curricula could potentially reduce disparities in educational attainment and allow a broader swathe of young people to enjoy the fruits of workplace success, improved health and greater civic participation. However, important challenges remain for attaining such outcomes. For educational interventions focused on developing transferable cognitive, intrapersonal and interpersonal competencies to move beyond isolated promising examples and to flourish more widely, larger systemic issues and policies will need to be addressed, including new types of assessment systems, new curricula that incorporate research-based features such as those described above and new approaches to teacher preparation and professional development.

References

Achieve (2013), Next Generation Science Standards, www.nextgenscience.org/.

Almlund, M. et al. (2011), “Personality psychology and economics”, in E.A. Hanushek, S. Machin, and L. Wossmann (eds.), Handbook Of The Economics Of Education, pp. 1-181, Elsevier, Amsterdam.

Ananiadou, K. and M. Claro (2009), “21st Century Skills and Competences for New Millennium Learners in OECD Countries”, OECD Education Working Papers, No. 41, OECD Publishing, Paris, http://dx.doi.org/10.1787/218525261154.

Autor, D., F. Levy and R. Murnane (2003), “The skill content of recent technological change: An empirical exploration”, Quarterly Journal of Economics, Vol. 118/4, pp. 1279-1333.

Ball, D.L. and D.K. Cohen (1999), “Developing practice, developing practitioners: Toward a practice-based theory of professional education”, in G. Sykes and L. Darling-Hammond (eds.), Teaching as the Learning Profession: Handbook of Policy and Practice, pp. 3-32, Jossey Bass, San Francisco.

Bedwell, W.L., E. Salas and S.M. Fiore (2011), “Developing the 21st century (and beyond) workforce: A review of interpersonal skills and measurement strategies”, Paper prepared for the NRC Workshop on Assessing 21st Century Skills.

Bellanca, J. (2014), Deeper Learning: Beyond 21st Century Skills, Solution Tree Press, Bloomington, IN.

Bloom, B.S. (1956), Taxonomy Of Educational Objectives, Handbook I: The Cognitive Domain, David McKay, New York.

Blumenfeld, P. et al. (1991), “Motivating project-based learning: Sustaining the doing, supporting the learning”, Educational Psychologist, Vol. 26/3-4, pp. 369-398.

Brownell, W.A. and H.E. Moser (1949), “Meaningful vs. mechanical learning: A study on grade 3 subtraction”, in Duke University Research Studies in Education, No. 8, Duke University Press, Durham, NC.

Brownell, W.A. and V.M. Sims (1946), “The nature of understanding”, in N.B. Henry (ed.), The Measurement of Understanding: Forty-Fifth Yearbook of the National Society for the Study of Education. Part I, pp. 27-43, University of Chicago Press, Chicago.

Carpenter, T.P., E. Fennema and M. Franke (1996), “Cognitively guided instruction: A knowledge base for reform in primary mathematics instruction”, Elementary School Journal, Vol. 97/1, pp. 3-20.

Cohen, D.K. (1990), “A revolution in one classroom: The case of Mrs. Oublier”, Educational Evaluation and Policy Analysis, Vol. 12, pp. 327-345.

Cohen, D.K., M. McLaughlin and J. Talbert (eds.) (1993), Teaching For Understanding: Challenges For Policy And Practice, Jossey-Bass, San Francisco.

Common Core State Standards Initiative (2010a), “English language arts standards”, Washington, DC: National Governors Association and Council of Chief State School Officers, www.corestandards.org/the-standards/english-language-artsstandards, (accessed February 2012).

Common Core State Standards Initiative (2010b), “Mathematics standards”, Washington, DC: National Governors Association and Council of Chief State School Officers, www.corestandards.org/assets/CCSSI_Math%20Standards.pdf., (accessed April 2012).

Conley, D. (2011), “Crosswalk analysis of deeper learning skills to common core state standards”, Prepared for the William H. and Flora Hewlett Foundation by the Educational Policy Improvement Center (EPIC), Unpublished manuscript.

Cronbach, L.J. (1970), Essentials Of Psychological Testing (3rd Ed), Harper and Row, New York.

Darling-Hammond, L. (2006), “Constructing 21st-century teacher education”, Journal of Teacher Education, Vol. 57, pp. 1-15.

Desimone, L.M. et al. (2002), “Effects of professional development on teachers’ instruction: Results from a three-year longitudinal study”, Educational Evaluation and Policy Analysis, Vol. 24/2, pp. 81-112.

Dunbar, K. (2000), “How scientists think in the real world: Implications for science education”, Journal of Applied Developmental Psychology, Vol. 21/1, pp. 49-58.

Duncan, G.J. and R.J. Murnane (eds.) (2011), Whither Opportunity? Rising Inequality, Schools, And Children’s Life Chances, Russell Sage Foundation, New York.

Durlak, J.A. et al. (2011), “The impact of enhancing students’ social and emotional learning: A meta-analysis of school based universal interventions”, Child Development, Vol. 82/1, pp. 405-432.

Fennema, E. and T.A. Romberg (eds.) (1999), Mathematics Classrooms That Promote Understanding, Erlbaum, Mahwah, NJ.

Fuson, K.C. and D.J. Briars (1990), “Using a base-ten blocks learning/teaching approach for first- and second-grade place-value and multidigit addition and subtraction”, Journal for Research in Mathematics Education, Vol. 21, pp. 180-206.

Garet, M.S. et al. (2001), “What makes professional development effective? Results from a national sample of teachers”, American Educational Research Journal, Vol. 38/4, pp. 915-945.

Greeno, J.G., P.D. Pearson and A.H. Schoenfeld (1996), “Implications for NAEP of research on learning and cognition. Report of a study commissioned by the National Academy of Education”, Panel on the NAEP Trial State Assessment, Conducted by the Institute for Research on Learning, National Academy of Education, Stanford, CA.

Grouws, D.A., M.S. Smith and P. Sztajn (2004), “The preparation and teaching practices of United States mathematics teachers: Grades 4 and 8”, in P. Kloosterman and F.K. Lester (eds.), Results And Interpretations Of The 1990-2000 Mathematics Assessments Of The National Assessment Of Educational Progress, pp. 221-267, National Council of Teachers of Mathematics, Reston, VA.

Heritage, M. (2010), Formative Assessment: Making It Happen In The Classroom, Corwin Press, Thousand Oaks, CA.

Heritage, M. et al. (2009), “From evidence to action: A seamless process in formative assessment?”, Educational Measurement: Issues and Practice, Vol. 28/3, pp. 24-31.

Herman, J.L. (2008), “Accountability and assessment in the service of learning: Is public interest in K-12 education being served?”, in L. Shepard and K. Ryan (eds.), The Future of Test Based Educational Accountability, pp. 211-232, Taylor and Francis, New York.

Herman, J.L., E. Osmundson and D. Silver (2010), “Capturing quality in formative assessment practice: Measurement challenges”, CRESST Technical Report #770, CRESST, Los Angeles, CA.

Hiebert, J. and D.A. Grouws (2007), “The effects of classroom mathematics teaching on students’ learning”, in F.K. Lester (ed.), Second Handbook Of Research On Mathematics Teaching And Learning, pp. 371-404, Information Age, Charlotte, NC.

Hiebert, J. and D. Wearne (1993), “Instructional tasks, classroom discourse, and students’ learning in second-grade arithmetic”, American Educational Research Journal, Vol. 30, pp. 393-425.

Hiebert, J. et al. (1996), “Problem solving as a basis for reform in curriculum and instruction: The case of mathematics”, Educational Researcher, Vol. 25/4, pp. 12-21.

Hiebert, J. et al. (2005), “Mathematics teaching in the United States today (and tomorrow): Results from the TIMSS 1999 video study”, Educational Evaluation and Policy Analysis, Vol. 27, pp. 111-132.

Hoyle, R.H. and E.L. Davisson (2011), “Assessment of self-regulation and related constructs: Prospects and challenges”, Paper prepared for the NRC Workshop on Assessment of 21st Century Skills.

Kubitskey, B. and B.J. Fishman (2006), “A role for professional development in sustainability: Linking the written curriculum to enactment”, in S.A. Barab, K.E. Hay, and D.T. Hickey, (eds.), Proceedings Of The 7th International Conference Of The Learning Sciences, Vol. 1, pp. 363-369, Erlbaum, Mahwah, NJ.

Lampert, M. (2010), “Learning teaching in, from, and for practice: What do we mean?”, Journal of Teacher Education, Vol. 61, pp. 1-2.

Linn, R.L., E.L. Baker and S.B. Dunbar (1991), “Complex, performance-based assessment: Expectations and validation criteria”, Educational Researcher, Vol. 20/8, pp. 15-21.

Luke, A. and P. Freebody (1997), “The social practices of reading”, in S. Muspratt, A. Luke, and P. Freebody (eds.), Constructing Critical Literacies: Teaching And Learning Textual Practices, pp. 185-226, Allen and Unwin, St Leonards, New South Wales.

Mayer, R.E. (2010), Applying The Science Of Learning, Pearson, Upper Saddle River, NJ.

Moje, E.B. (2008), “Foregrounding the disciplines in secondary literacy teaching and learning: A call for change”, Journal of Adolescent and Adult Literacy, Vol. 52/2, pp. 96-107.

Monk, D.H. and J. King (1994), “Multi-level teacher resource effects on pupil performance in secondary mathematics and science: The role of teacher subject matter preparation in contemporary policy issues: Choices and consequences in education”, in R. Ehrenberg (ed.), Contemporary Policy Issues: Choices and Consequences In Education, pp. 29-58, ILR Press, Ithaca, NY.

National Research Council (2012), “A framework for K-12 science education: Practices, crosscutting concepts, and core ideas”, Committee on a Conceptual Framework for New K-12 Science Education Standards. Board on Science Education, Division of Behavioral and Social Sciences and Education, The National Academies Press, Washington, DC.

National Research Council (2011a), “Assessing 21st century skills: Summary of a workshop”, J.A. Koenig, Rapporteur, Committee on the Assessment of 21st Century Skills, Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education, The National Academies Press, Washington, DC.

National Research Council (2011b), “Learning science through computer games and simulations. Committee on science learning: computer games, simulations, and education”, M.A. Honey and M.L. Hilton (eds.), Board on Science Education, Division of Behavioral and Social Sciences and Education, The National Academies Press, Washington, DC.

National Research Council (2007), “Taking science to school: Learning and teaching science in grades K-8”, R.A. Duschl, H.A. Schweingruber, and A.W. Shouse (eds.), Committee on Science Learning, Kindergarten through Eighth Grade. Board on Science Education, Center for Education. Division of Behavioural and Social Sciences and Education, The National Academies Press, Washington, DC.

National Research Council (1999), “How people learn: Brain, mind, experience, and school”, J.D. Bransford, A.L. Brown, and R.R. Cocking (eds.), Committee on Developments in the Science of Learning. Commission on Behavioral and Social Sciences and Education, National Academy Press, Washington, DC.

Newmann, F.M. and associates (1996), Authentic Achievement: Restructuring Schools For Intellectual Quality, Jossey-Bass, San Francisco, CA.

OECD (2005), Definition and selection of key competencies: Executive summary, OECD Publishing, Paris.

OECD (2010), PISA 2009 Results: What Students Know and Can Do – Student Performance in Reading, Mathematics and Science (Volume I), OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264091450-en.

Partnership for 21st Century Skills (2010), 21st century readiness for every student: A policymaker’s guide, Tucson, AZ, www.p21.org/documents/policymakersguide_final.pdf.

Partnership for 21st Century Skills (2011), Overview of state leadership initiative, www.p21.org/index.php?option=com_content&task=view&id=505&Itemid=189.

Patry, J.L. (2011), “Methodological consequences of situation specificity: Biases in assessments”, Frontiers in Psychology, Vol. 2/18.

Pellegrino, J.W. and M. Hilton (eds.) (2012), Education For Life and Work: Developing Transferable Knowledge and Skills in the 21st Century, National Academies Press, Washington, DC.

Polanyi, M. (1958), Personal Knowledge: Towards a Post-Critical Philosophy, University of Chicago Press, Chicago.

Quellmalz, E.S., M.J. Timms and B.C. Buckley (2010), “The promise of simulation-based science assessment: The Calipers Project”, International Journal of Learning Technologies, Vol. 5/3, pp. 243-263.

Resnick, L. and D. Resnick (1992), “Assessing the thinking curriculum: New tools for educational reform”, in B.R. Gifford and M.C. O’Connor (eds.), Changing Assessments: Alternative Views Of Aptitude, Achievement And Instruction. pp. 37-75, Kluwer Academic, Boston, MA.

Secretary’s Commission on Achieving Necessary Skills (1991), “What work requires of schools: A SCANS report for America 2000”, Washington, DC: U.S. Department of Labor, http://wdr.doleta.gov/SCANS/whatwork/, (accessed April 2012).

Shavelson, R.J., G.P. Baxter and X. Gao (1993), “Sampling variability of performance assessments”, Journal of Educational Measurement, Vol. 30/3, pp. 215-232.

Silver, E.A. and V. Mesa (2011), “Coordination characterizations of high-quality mathematics teaching: Probing the intersection”, in Y. Li and G. Kaiser (eds.), Expertise in Mathematics Instruction: An International Perspective, pp. 63-84, Springer, New York.

Stake, R.E. and J. Easley (1978), Case Studies In Science Education, University of Illinois, Urbana, IL.

Stigler, J.W. and J. Hiebert (1999), The Teaching Gap, The Free Press, New York.

Stodolsky, S.S. (1988), The Subject Matters: Classroom Activities In Math and Social Sciences, University of Chicago, Chicago.

Webb, N.L. (1999), “Alignment of science and mathematics standards and assessments in four states”, Research monograph #18, National Institute for Science Education and Council of Chief State School Officers. Madison: Wisconsin Center for Education Research, www.wcer.wisc.edu/archive/nise/publications/Research_Monographs/vol18.pdf., (accessed February 2012).

Webster-Wright, A. (2009), “Reframing professional development through understanding authentic professional learning”, Review of Educational Research, Vol. 79/2, pp. 702-739.

Wilson, S. (2011), “Effective STEM teacher preparation, induction, and professional development”, Paper presented at the NRC Workshop on Highly Successful STEM Schools or Programs.

Windschitl, M. (2009), “Cultivating 21st century skills in science learners: How systems of teacher preparation and professional development will have to evolve”, Paper commissioned for the NRC Workshop on Exploring the Intersection between Science Education and the Development of 21st Century Skills.

Yeager, D.S. and G.M. Walton (2011), “Social-psychological interventions in education: They’re not magic”, Review of Educational Research, Vol. 81, pp. 267-301.

Notes

1. This non-profit organisation includes business, education, community, and governmental groups.

2. We use English language arts to exemplify developing competency in language arts in whatever is an individual’s first language and their primary language of learning and instruction.