2. PISA 2018 Reading Framework

Reading is the major domain of assessment of the 2018 cycle of the Programme for International Student Assessment (PISA). This chapter defines reading literacy as it is assessed in PISA 2018. It describes the types of processes and scenarios exhibited in the tasks that PISA uses to assess reading literacy. Moreover, it describes how the nature of reading literacy has changed over the past two decades, notably through the growing presence of digital texts. The chapter also explains how PISA assesses the ease and efficiency with which a student reads, and how it measures various metacognitive aspects of students’ reading practices. It then discusses how student performance in reading is measured and reported. Various sample items from the reading assessment are included at the end of this chapter.

    

Introduction

Reading as the major domain

PISA 2018 marks the third time that reading is a major domain and the second time that the reading literacy framework receives a major revision. Such a revision must reflect the changing definition of reading literacy as well as the changing contexts in which reading is used in citizens’ lives. Thus, the present revision of the framework builds on contemporary and comprehensive theories of reading literacy and considers how students acquire and use information in a variety of contexts.

We live in a rapidly changing world in which both the quantity and variety of written materials are increasing and where people are expected to use these materials in new and increasingly complex ways. It is now generally accepted that our understanding of reading literacy evolves as society and culture themselves change. The reading literacy skills needed for individual growth, educational success, economic participation and citizenship 20 years ago are different from those required today, and it is likely that in 20 years’ time they will change further still.

The goal of education has continued to shift its emphasis from the collection and memorisation of information to a broader concept of knowledge: “whether a technician or a professional person, success lies in being able to communicate, share, and use information to solve complex problems, in being able to adapt and innovate in response to new demands and changing circumstances, in being able to marshal and expand the power of technology to create new knowledge and expand human capacity and productivity” (Binkley et al., 2011[1]). The ability to locate, access, understand and reflect on all kinds of information is essential if individuals are to be able to participate fully in our knowledge-based society. Reading literacy is not only a foundation for achievement in other subject areas within the educational system but also a prerequisite for successful participation in most areas of adult life (Cunningham and Stanovich, 1997[2]; OECD, 2013[3]; Smith et al., 2000[4]). The PISA framework for assessing the reading literacy of students towards the end of compulsory education, therefore, must focus on reading literacy skills that include finding, selecting, interpreting, integrating and evaluating information from the full range of texts associated with situations that extend beyond the classroom.

Changes in the nature of reading literacy

Evolving technologies have rapidly changed the ways in which people read and exchange information, both at home and in the workplace. The automation of routine jobs has created a demand for people who can adapt to quickly changing contexts and who can find and learn from diverse sources of information. In 1997, when the first PISA framework for reading began to be discussed, just 1.7% of the world’s population used the Internet. By 2014, the number had grown to a global penetration rate of 40.4%, representing almost three billion people (International Telecommunications Union, 2014[5]). Between 2007 and 2013, the number of mobile phone subscriptions doubled: in 2013, there were almost as many active subscriptions as people on earth (95.5 subscriptions per 100 people) and the number of mobile broadband subscriptions had increased to almost two billion worldwide (International Telecommunications Union, 2014[6]). The Internet increasingly pervades the life of all citizens, from learning in and out of school, to working in real or virtual workplaces, to dealing with personal matters such as taxes, health care or holiday planning. Personal and professional development is a lifelong endeavour and the students of tomorrow will need to be skilled with digital tools in order to successfully manage the increased complexity and quantity of information available.

In the past, the primary interest when evaluating student reading literacy proficiency was the ability to understand, interpret and reflect upon single texts. While these skills remain important, greater emphasis on the integration of information technologies into citizens’ social and work lives requires that the definition of reading literacy be updated and extended. It must reflect the broad range of newer skills associated with literacy tasks required in the 21st century (Ananiadou and Claro, 2009[7]; Kirsch et al., 2002[8]; Rouet, 2006[9]; Spiro et al., 2015[10]). This necessitates an expanded definition of reading literacy encompassing both basic reading processes and higher-level digital reading skills while recognising that what constitutes literacy will continue to change due to the influence of new technologies and changing social contexts (Leu et al., 2013[11]; 2015[12]).

As the medium through which we access textual information moves from print to computer screens to smartphones, the structure and formats of texts have changed. This in turn requires readers to develop new cognitive strategies and clearer goals in purposeful reading. Therefore, success in reading literacy should no longer be defined by just being able to read and comprehend a single text. Although the ability to comprehend and interpret extended pieces of continuous texts – including literary texts – remains valuable, success will also require the deployment of complex information-processing strategies, including the analysis, synthesis, integration and interpretation of relevant information from multiple text (or information) sources. In addition, successful and productive citizens will need to use information from across domains, such as science and mathematics, and employ technologies to effectively search, organise and filter a wealth of information. These will be the key skills necessary for full participation in the labour market, in further education as well as in social and civic life in the 21st century (OECD, 2013[13]).

Continuity and change in the framework from 2000 to 2015

With the changes in the nature of reading literacy, the framework also has changed. Reading literacy was the major domain assessed during the first PISA cycle (PISA 2000), and in the fourth cycle (PISA 2009) it became the first domain to be revisited as a major domain, a process that required a full review of the framework and the development of new instruments to represent it. For the seventh PISA cycle (PISA 2018), the framework is once again being revised.

The original reading literacy framework for PISA was developed for the PISA 2000 cycle (from 1998 to 2001) through a consensus-building process involving experts in reading selected by the participating countries to form the PISA 2000 reading expert group (REG). The definition of reading literacy evolved in part from the IEA Reading Literacy Study (1992) and the International Adult Literacy Survey (IALS, 1994, 1997 and 1998). In particular, it reflected the IALS emphasis on the importance of reading skills for active participation in society. It was also influenced by contemporary – and still current – theories of reading, which emphasise the multiple linguistic-cognitive processes involved in reading and their interactive nature (Britt, Goldman and Rouet, 2013[14]; Kamil et al., 2000[15]; Perfetti, 1985[16]; 2007[17]; Snow and the RAND Corporation, 2002[18]; Rayner and Reichle, 2010[19]), models of discourse comprehension (Kintsch, 1998[20]; Zwaan and Singer, 2003[21]) and theories of performance in solving information problems (Kirsch, 2001[22]; Kirsch and Mosenthal, 1990[23]; Rouet, 2006[9]).

Much of the substance of the PISA 2000 framework was retained in the PISA 2009 framework, respecting one of the central purposes of PISA: to collect and report trend information about performance in reading, mathematics and science. However, the PISA domain frameworks are designed to be evolving documents that adapt to and integrate new developments in theory and practice, reflecting both an expansion in our understanding of the nature of reading and changes in the world. This evolution is shown in greater detail in Appendix A, which provides an overview of the primary changes in the reading framework from 2000 to 2015.

Changes in our concept of reading since 2000 have led to an expanded definition of reading literacy, which recognises the motivational and behavioural characteristics of reading alongside the cognitive characteristics. Both reading engagement and metacognition – an awareness and understanding of how one develops an understanding of text and uses reading strategies – were referred to briefly at the end of the first PISA framework for reading under “Other issues” (OECD, 2000[24]). In the light of recent research, reading engagement and metacognition were featured more prominently in the PISA 2009 and 2015 reading frameworks as elements that can be developed and fostered as components of reading literacy.

A second major modification of the framework from PISA 2000 to PISA 2009 was the inclusion of digital texts, in recognition of the increasing role of such texts in both individual growth and active participation in society (OECD, 2011[25]). This modification was concomitant with the new computer-based format of the assessment and thus involved the presentation of texts on a computer screen. PISA 2009 was the first large-scale international study to assess the reading of digital texts.

During PISA 2015, reading was a minor domain and the description and illustration of reading literacy developed for PISA 2009 were kept. However, PISA 2015 involved important changes in the test administration procedures, some of which required adjustments in the wording of the reading framework. For example, the reading assessment in the 2015 cycle was administered primarily on computer. As a result, the “environment” and “medium” dimensions were revisited and further elaborated with the inclusion of the terms “fixed” and “dynamic”.

Revising the framework for PISA 2018

The PISA 2018 reading literacy framework retains aspects of the 2009/2015 frameworks that are still relevant to PISA 2018. However, the framework has been enhanced and revised in the following ways:

  • The framework fully integrates reading in a traditional sense together with the new forms of reading that have emerged over the past decades and that continue to emerge due to the spread of digital devices and digital texts.

  • The framework incorporates constructs involved in basic reading processes. These constructs, such as fluent reading, literal interpretation, inter-sentence integration, extraction of the central themes and drawing inferences, are critical skills for processing complex or multiple texts for specific purposes. When students fail at higher-level text-processing tasks, it is critical to know whether the failure stems from difficulties with these basic skills, so that appropriate support can be provided to these students.

  • The framework revisits the way in which the domain is organised to incorporate reading processes such as evaluating the veracity of texts, seeking information, reading from multiple sources and integrating/synthesising information across sources. The revision rebalances the prominence of different reading processes to reflect the global importance of the different constructs, while ensuring there is a link to the prior frameworks in order to be able to measure trends in achievement.

  • The revision considers how new technology options and the use of scenarios involving print and digital text can be harnessed to achieve a more authentic assessment of reading, consistent with the current use of texts around the world.

The importance of digital reading literacy

Reading in today’s world is very different from what it was just 20 years ago. Up to the mid-1990s, reading was mostly performed on paper. Printed matter existed and continues to exist in many different forms, shapes and textures, from children’s books to lengthy novels, from leaflets to encyclopaedias, from newspapers and magazines to scholarly journals, from administrative forms to notes on billboards.

In the early 1990s, only a small percentage of people owned computers, and most such computers were mainframes or desktop PCs. Very few people owned laptops for their personal use, while digital tablets and smartphones had yet to become popular. Computer-based reading was limited to specific users and uses, typically a specialised worker dealing with technical or scientific information. In addition, due to mediocre display quality, reading on the computer was slower, more error-prone and more tiring than reading on paper (Dillon, 1994[26]). Initially acclaimed as a means to “free” the reader from the “straitjacket” of the printed text, emerging hypertext technology, such as the linking of digital information pages that allowed each reader to dynamically construct their own route through chunks of information (Conklin, 1987[27]), also led to disorientation and cognitive overload, as the Web was still in its infancy (Foltz, 1996[28]; Rouet and Levonen, 1996[29]). But at that time, only a very small fraction of the world’s population had access to the newly-born World Wide Web.

In less than 20 years, the number of computers in use worldwide grew to an estimated two billion in 2015 (International Telecommunications Union, 2014[6]). In 2013, 40% of the world’s population had access to the Internet at home, with a sharp contrast between developed countries, where access reached 80% of the population, and some less developed countries, where access lagged below 20% (International Telecommunications Union, 2014[6]). The last decade has witnessed a dramatic expansion of portable digital devices, with wireless Internet access overtaking fixed broadband subscriptions in 2009 (OECD, 2012[30]). By 2015, computer sales were slowing, while sales of digital pads, readers and cell phones were still growing at double-digit rates (Gartner, 2014[31]).

As a notable consequence of the spread of information and communication technology (ICT) among the general public, reading is massively shifting from print to digital texts. For example, computers have become the second most-used source of news for American citizens, after TV and before radio and printed newspapers and magazines (American Press Institute, 2014[32]). Similarly, British children and teenagers prefer to read digital rather than printed texts (Clark, 2014[33]), and a recent UNESCO report showed that two thirds of users of a phone-based reader across five developing countries indicated that their interest in reading and time spent reading increased once it was possible to read on their phones (UNESCO, 2014[34]). This shift has important consequences for the definition of reading as a skill. Firstly, the texts that people read online are different from traditional printed texts. In order to enjoy the wealth of information, communication and other services offered through digital devices, online readers have to cope with smaller displays, cluttered screens and challenging networks of pages. In addition, new genres of digital-based communication have appeared, such as e-mail, short messaging, forums and social networking applications. It is important to stress that the rise of digital technology means that people need to be selective in what they read, even as they must also read more, more often and for a broader range of purposes. Reading and writing are even replacing speech in some everyday communication acts, such as using chat systems rather than telephoning help desks. A consequence is that readers have to understand these new text-based genres and socio-cultural practices.

Readers in the digital age also have to master several new skills. They have to be minimally ICT literate in order to understand and operate devices and applications. They also have to search for and access the texts they need through the use of search engines, menus, links, tabs and other paging and scrolling functions. Due to the uncontrolled profusion of information on the Internet, readers also have to be discerning in their choice of information sources and must assess the quality and credibility of information. Finally, readers have to read across texts to corroborate information, to detect potential discrepancies and conflicts and to resolve them. The importance of these new skills was clearly illustrated in the OECD’s PISA 2009 digital reading study, whose report noted the following:

Navigation is a key component of digital reading, as readers “construct” their text through navigation. Thus, navigational choices directly influence what kind of text is eventually processed. Stronger readers tend to choose strategies that are suited to the demands of the individual tasks. Better readers tend to minimise their visits to irrelevant pages and locate necessary pages efficiently. (OECD, 2011, p. 20[25])

In addition, a 2015 study of student use of computers in the classroom (OECD, 2015, p. 119[35]) shows, for instance, that “students’ average navigation behaviour explains a significant part of the differences in digital reading performance between countries/economies that is not accounted for by differences in print-reading performance”; see also Naumann (2015[36]).

Thus, in many parts of the world, skilful digital reading literacy is now key to one’s ability to achieve one’s goals and participate in society. The 2018 PISA reading framework has been revised and expanded so as to encompass those skills that are essential for reading and interacting with digital texts.

Reading motivation, practices and metacognition

Individuals’ reading practices, motivation and attitudes towards reading, as well as their awareness of effective reading strategies, play a prominent role in reading. Students who read more frequently, whether in print or on screen, who are interested in reading, who feel confident in their reading abilities and who know which strategies to use (for instance, to summarise a text or to search for information on the Internet) tend to be more proficient in reading.

Moreover, practices, motivation, and metacognition deserve close attention not only because they are potential predictors of reading achievement and growth but also because they can be considered important goals or outcomes of education, potentially driving life-long learning (Snow and the RAND Corporation, 2002[18]). Furthermore, they are malleable variables, amenable to change. For instance, there is strong evidence that reading engagement and metacognition (awareness of strategies) can be enhanced through teaching and supportive classroom practices (Brozo and Simpson, 2007[37]; Guthrie, Wigfield and You, 2012[38]; Guthrie, Klauda and Ho, 2013[39]; Reeve, 2012[40]). Reading motivation, practices and metacognition are briefly discussed in the reading literacy framework since they are critical factors of reading. However, they are assessed in the questionnaire and are thus covered in more detail in the questionnaire framework.

The structure of the reading literacy framework

Having addressed what is meant by the term “reading literacy” in PISA and introduced the importance of reading literacy in today’s society in this introduction, the remainder of the framework is organised as follows. The second section defines reading literacy and elaborates on various phrases that are used in the reading framework, along with the assumptions underlying the use of these words. The third section focuses on the organisation of the domain of reading literacy and discusses the characteristics that will be represented in the tasks included in the PISA 2018 assessment. The fourth section discusses some of the operational aspects of the assessment and how reading literacy will be measured, and presents sample items. Finally, the last section describes how the reading literacy data will be summarised and outlines plans for reporting the results.

Defining reading literacy

Definitions of reading and reading literacy have changed over time to reflect changes in society, economy, culture and technology. Reading is no longer considered an ability acquired only in childhood during the early years of schooling. Instead, it is viewed as an expanding set of knowledge, skills and strategies that individuals build on throughout life in various contexts, through interaction with their peers and the wider community. Thus, reading must be considered in light of the various ways in which citizens interact with text-based artefacts and of its role in lifelong learning.

Cognitively-based theories of reading emphasise the constructive nature of comprehension, the diversity of cognitive processes involved in reading and their interactive nature (Binkley, Rust and Williams, 1997[41]; Kintsch, 1998[20]; McNamara and Magliano, 2009[42]; Oakhill, Cain and Bryant, 2003[43]; Snow and the RAND Corporation, 2002[18]; Zwaan and Singer, 2003[21]). The reader generates meaning in response to text by using previous knowledge and a range of text and situational cues that are often socially and culturally derived. When constructing meaning, competent readers use various processes, skills and strategies to locate information, to monitor and maintain understanding (van den Broek, Risden and Husbye-Hartmann, 1995[44]) and to critically assess the relevance and validity of the information (Richter and Rapp, 2014[45]). These processes and strategies are expected to vary with context and purpose as readers interact with multiple continuous and non-continuous texts both in print and when using digital technologies (Britt and Rouet, 2012[46]; Coiro et al., 2008[47]).

Box 2.1. The definition of reading literacy in earlier PISA cycles

The PISA 2000 definition of reading literacy was as follows:

Reading literacy is understanding, using and reflecting on written texts, in order to achieve one’s goals, to develop one’s knowledge and potential, and to participate in society.

The PISA 2009 definition of reading literacy, also used in 2012 and 2015, added engagement in reading as part of reading literacy:

Reading literacy is understanding, using, reflecting on and engaging with written texts, in order to achieve one’s goals, to develop one’s knowledge and potential, and to participate in society.

For 2018, the definition of reading literacy includes the evaluation of texts as an integral part of reading literacy and removes the word “written”.

Box 2.2. The 2018 definition of reading literacy

Reading literacy is understanding, using, evaluating, reflecting on and engaging with texts in order to achieve one’s goals, to develop one’s knowledge and potential and to participate in society.

Each part of the definition is considered in turn below, taking into account the original elaboration and some important developments in the definition of the domain that use evidence from PISA and other empirical studies and that take into account theoretical advances and the changing nature of the world.

Reading literacy...

The term “reading literacy” is used instead of the term “reading” because it is likely to convey to a non-expert audience more precisely what the survey is measuring. “Reading” is often understood as simply decoding (e.g., converting written text into sounds), or even reading aloud, whereas the intention of this assessment is to measure much broader and more encompassing constructs. Reading literacy includes a wide range of cognitive and linguistic competencies, from basic decoding to knowledge of words, grammar and the larger linguistic and textual structures needed for comprehension, as well as integration of meaning with one’s knowledge about the world. It also includes metacognitive competencies: the awareness of and ability to use a variety of appropriate strategies when processing texts. Metacognitive competencies are activated when readers think about, monitor and adjust their reading activity for a particular goal.

The term “literacy” typically refers to an individual’s knowledge of a subject or field, although it has been most closely associated with an individual’s ability to learn, use and communicate written and printed information. This definition seems close to the notion that the term “reading literacy” is intended to express in this framework: the active, purposeful and functional application of reading in a range of situations and for various purposes. PISA assesses a wide range of students. Some of these students will go on to university, possibly to pursue an academic or professional career; some will pursue further studies in preparation for joining the labour force; and some will enter the workforce directly upon completion of secondary schooling. Regardless of their academic or labour-force aspirations, reading literacy will be important to students’ active participation in their community and in their economic and personal lives.

... is understanding, using, evaluating, reflecting on...

The word “understanding” is readily connected with the widely accepted concept of “reading comprehension”, which states that all reading involves some level of integrating information from the text with the reader’s pre-existing knowledge. Even at the earliest stages of reading, readers must draw on their knowledge of symbols (e.g., letters) to decode texts and must use their knowledge of vocabulary to generate meaning. However, this process of integration can also be much broader, including, for instance, the development of mental models of how texts relate to the world. The word “using” refers to the notions of application and function – doing something with what we read. The term “evaluating” was added for PISA 2018 to incorporate the notion that reading is often goal-directed, and consequently the reader must weigh such factors as the veracity of the arguments in the text, the point of view of the author and the relevance of a text to the reader’s goals. “Reflecting on” is added to “understanding”, “using” and “evaluating” to emphasise the notion that reading is interactive: readers draw on their own thoughts and experiences when engaging with text. Every act of reading requires some reflection, in which readers review and relate information within the text to information from outside the text. As readers develop their stores of information, experience and beliefs, they constantly test what they read against outside knowledge, thereby continually reviewing and revising their sense of the text. Reflecting on texts can include weighing the author’s claim(s) and use of rhetorical and other means of discourse, as well as inferring the author’s perspective. At the same time, incrementally and perhaps imperceptibly, readers’ reflections on texts may alter their sense of the world. Reflection might also require readers to consider the content of the text, apply their previous knowledge or understanding or think about the structure or form of the text.
Each of the skills in the definition – “understanding”, “using”, “evaluating” and “reflecting on” – is necessary, but none alone is sufficient for successful reading literacy.

...and engaging with...

A person who is literate in reading not only has the skills and knowledge to read well, but also values and uses reading for a variety of purposes. It is therefore a goal of education to cultivate not only proficiency but also engagement with reading. Engagement in this context implies the motivation to read and comprises a cluster of affective and behavioural characteristics that include an interest in and enjoyment of reading, a sense of control over what one reads, involvement in the social dimension of reading and diverse and frequent reading practices.

...texts...

The term “texts” is meant to include all language as used in its graphic form: handwritten, printed or screen-based. This definition excludes purely aural language artefacts, such as voice recordings, as well as film, TV, animated visuals and pictures without words. Texts do include visual displays such as diagrams, pictures, maps, tables, graphs and comic strips that incorporate some written language (for example, captions). These visual texts can exist either independently or embedded within larger texts.

Dynamic texts, which give the reader some level of decision-making power as to how to read them, differ from fixed texts in a number of respects: the lack of physical clues that allow readers to estimate the length and quantity of text (e.g. the dimensions of paper-based documents are hidden in virtual space); the way different parts of a text, and different texts, are connected with one another through hypertext links; and the way multiple summarised texts are presented as the results of a search. As a result of these differences, readers also typically engage differently with dynamic texts. To a much greater extent than with printed text, readers need to construct their own pathways to complete any reading activity associated with dynamic texts.

The term “texts” was chosen instead of the term “information” because of its association with written language and because it more readily connotes literary as well as information-focused reading.

...in order to achieve one’s goals, to develop one’s knowledge and potential and to participate in society.

This phrase is meant to capture the full scope of situations in which reading literacy plays a role, from private to public, from school to work, from formal education to lifelong learning and active citizenship. “To achieve one’s goals” and “to develop one’s knowledge and potential” both spell out the long-held idea that reading literacy enables the fulfilment of individual aspirations – both defined ones such as graduating or getting a job, and those less defined and less immediate that enrich and extend one’s personal life and that contribute to lifelong education (Gray and Rogers, 1956[48]). The PISA definition of reading literacy also embraces the new types of reading in the 21st century. It conceives of reading literacy as the foundation for full participation in the economic, political, communal and cultural life of contemporary society. The word “participate” is used because it implies that reading literacy allows people to contribute to society as well as to meet their own needs: “participating” includes social, cultural and political engagement (Hofstetter, Sticht and Hofstetter, 1999[49]). For instance, literate people have greater access to employment and more positive attitudes toward institutions (OECD, 2013[3]). Higher levels of reading literacy have been found to be related to better health and reduced crime (Morrisroe, 2014[50]). Participation may also include taking a critical stance, a step toward personal liberation, emancipation and empowerment (Lundberg, 1991[51]).

Organising the domain

Reading as it occurs in everyday life is a pervasive and highly diverse activity. In order to design an assessment that adequately represents the many facets of reading literacy, the domain is organised according to a set of dimensions. The dimensions will in turn determine the test design and, ultimately, the evidence about student proficiencies that can be collected and reported.

Snow and the RAND Reading Study Group’s (2002[18]) influential framework defined reading comprehension as the joint outcome of three combined sources of influence: the reader, the text and the activity, task or purpose for reading. Reader, text and task dimensions interact within a broad sociocultural context, which can be thought of as the diverse range of situations in which reading occurs. PISA adopts a similar view of the dimensions of reading literacy, as illustrated in Figure 2.1. A reader brings a number of reader factors to reading, which can include motivation, prior knowledge, and other cognitive abilities. The reading activity is a function of text factors (i.e. the text or texts that are available to the reader at a given place and time). These factors include the format of the text, the complexity of the language used, and the number of pieces of text a reader encounters. The reading activity is also a function of task factors (i.e. the requirements or reasons that motivate the reader's engagement with text). Task factors include the potential time and other practical constraints, the goals of the task (e.g. whether reading for pleasure, reading for deep understanding or skimming for information) and the complexity or number of tasks to be completed. Based on their individual characteristics and their perception of text and task factors, readers apply a set of reading literacy processes in order to locate and extract information and construct meaning from texts to achieve tasks.

Figure 2.1. Factors that contribute to reading literacy

The PISA cognitive assessment measures reading literacy by manipulating task and text factors. An additional questionnaire assesses some of the reader factors, such as motivation, disposition and experience.

In designing the PISA reading literacy assessment, the two most important considerations are, first, to ensure broad coverage of what students read and for what purposes they read, both in and outside of school, and, second, to represent a natural range of difficulty in texts and tasks. The PISA reading literacy assessment is built on three major characteristics: text – the range of material that is read; processes – the cognitive approach that determines how readers engage with a text; and scenarios – the range of broad contexts or purposes for which reading takes place. Within scenarios are tasks – the assigned goals that readers must achieve in order to succeed. All three contribute to ensuring broad coverage of the domain. In PISA, task difficulty can be varied by manipulating text features and task goals, which then require deployment of different cognitive processes. Thus, the PISA reading literacy assessment aims to measure students’ mastery of reading processes (the possible cognitive approaches of readers to a text) by varying the dimensions of text (the range of material that is read) and scenarios (the range of broad contexts or purposes for which reading takes place) with one or more thematically related texts. While there may be individual differences in reader factors based on the skills and background of each reader, these are not manipulated in the cognitive instrument but are captured through the assessment in the questionnaire.

These three characteristics must be operationalised in order to use them to design the assessment. That is, the various values that each of these characteristics can take on must be specified. This allows test developers to categorise the materials they work with and the tasks they construct so that they can then be used to organise the reporting of the data and to interpret results.

Processes

The PISA typology of the cognitive aspects involved in reading literacy was designed at the turn of the 21st century (OECD, 2000[24]). A revision of these aspects in the 2018 PISA reading literacy framework is needed for at least three reasons:

  a) A definition of reading literacy must reflect contemporary developments in school and societal literacy demands, namely, the increasing amount of text information available in print and digital forms and the increasing diversity and complexity of situations involving text and reading. These developments are partly driven by the spread of digital information technology and in particular by increased access to the Internet worldwide.

  b) The PISA 2018 framework should also reflect recent developments in the scientific conceptualisation of reading and be as consistent as possible with the terminology used in current theories. There is a need to update the vocabulary that was used to designate the cognitive processes involved in reading, taking into account progress in the research literature.

  c) Finally, a revision is needed to reassess the necessary trade-off between the desire to stay faithful to the precise definition of the aspects as described in the framework and the limited possibility to account for each of these individual aspects in a large-scale international assessment. Such a reassessment of the reading framework is particularly relevant in the context of PISA 2018, in which reading literacy is the main domain.

The 2018 framework replaces the phrase “cognitive aspects”, used in previous versions of the framework, with the phrase “cognitive processes” (not to be confused with the reading literacy processes described above). The phrase “cognitive processes” aligns with the terminology used in reading psychology research and is more consistent with a description of reader skills and proficiencies. The term “aspects” tended to confound the reader's actual cognitive processes with the requirements of various types of tasks (i.e. the demands of specific types of questions). A description of reading processes permits the 2018 framework to map these processes to a typology of tasks.

Recent theories of reading literacy emphasise the fact that "reading does not take place in a vacuum" (Snow and the RAND Reading Study Group, 2002[18]; McCrudden and Schraw, 2007[52]; Rouet and Britt, 2011[53]). Indeed, most reading activities in people's daily lives are motivated by specific purposes and goals (White, Chen and Forsyth, 2010[54]). Reading as a cognitive skill involves a set of specific reading processes that competent readers use when engaging with texts in order to achieve their goals. Goal setting and goal achievement drive not only readers' decisions to engage with texts, their selection of texts and passages of text, but also their decisions to disengage from a particular text, to re-engage with a different text, to compare, and to integrate information across multiple texts (Britt and Rouet, 2012[46]; Goldman, 2004[55]; Perfetti, Rouet and Britt, 1999[56]).

To achieve reading literacy as defined in this framework, an individual needs to be able to execute a wide range of processes. Effective execution of these processes, in turn, requires that the reader have the cognitive skills, strategies and motivation that support the processes.

The PISA 2018 reading framework acknowledges the goal-driven, critical and intertextual nature of reading literacy (McCrudden and Schraw, 2007[52]; Vidal-Abarca, Mañá and Gil, 2010[57]). Consequently, the former typology of reading aspects (OECD, 2000[24]) has been revised and extended so as to explicitly represent the fuller range of processes from which skilled readers selectively draw depending on the particular task context and information environment.

More specifically, two broad categories of reading processes are defined for PISA 2018: text processing and task management (Figure 2.2). This distinction is consistent with current views of reading as a situated and purposeful activity (see, e.g., Snow and the RAND Reading Study Group, 2002[18]). The focus of the cognitive assessment is on processes identified in the text processing box.

Figure 2.2. PISA 2018 Reading framework processes

Text processing

The 2018 typology of reading processes specifically identifies the process of reading fluently as distinct from other processes associated with text comprehension.

Reading fluently

Reading fluency can be defined as an individual’s ability to read words and text accurately and automatically and to phrase and process these words and texts in order to comprehend the overall meaning of the text (Kuhn and Stahl, 2003[58]). In other words, fluency is the ease and efficiency of reading texts for understanding. There is considerable empirical evidence demonstrating a link between reading ease/efficiency/fluency and reading comprehension (Chard, Pikulski and McDonagh, 2006[59]; Kuhn and Stahl, 2003[58]; Wagner et al., 2010[60]; Wayman et al., 2007[61]; Woodcock, McGrew and Mather, 2001[62]; Jenkins et al., 2003[63]). The chief psychological mechanism proposed to explain this relationship is that the ease and efficiency of reading text is indicative of expertise in the foundational reading skills of decoding, word recognition and syntactic parsing of texts.

Fluent reading frees up attention and memory resources, which can be allocated to higher-level comprehension processes. Conversely, weaknesses in reading fluency divert resources from comprehension towards the lower-level processes necessary to process printed text, resulting in weaker performance in reading comprehension (Cain and Oakhill, 2008[64]; Perfetti, Marron and Foltz, 1996[65]). Acknowledging this strong link between fluency and comprehension, the National Reading Panel (2000) in the United States recommended fostering fluency in reading to enhance students’ comprehension skills.

Locating information

Competent readers can carefully read an entire piece of text in order to comprehend the main ideas and reflect on the text as a whole. On a daily basis, however, readers most often use texts for purposes that require the location of specific information, with little or no consideration for the rest of the text (White, Chen and Forsyth, 2010[54]). Furthermore, locating information is an obligatory component of reading when using complex digital information such as search engines and websites (Brand-Gruwel, Wopereis and Vermetten, 2005[66]; Leu et al., 2013[11]). The 2018 framework defines two processes whereby readers find information within and across texts:

Accessing and retrieving information within a piece of text. Locating information from tables, text chapters or whole books is a skill in and of itself (Dreher and Guthrie, 1990[67]; Moore, 1995[68]; Rouet and Coutelet, 2008[69]). Locating information draws on readers' understanding of the demands of the task, their knowledge of text organisers (e.g., headers, paragraphs) and their ability to assess the relevance of a piece of text. The ability to locate information depends on readers' strategic awareness of their information needs and their capacity to quickly disengage from irrelevant passages (McCrudden and Schraw, 2007[52]). In addition, readers sometimes have to skim through a series of paragraphs in order to retrieve specific pieces of information. This requires an ability to modulate one's reading speed and depth of processing and to know when to retain or dismiss the information in the text (Duggan and Payne, 2009[70]). Access and retrieval tasks in PISA 2018 require the reader to scan a single piece of text in order to retrieve target information composed of a few words, phrases or numerical values. There is little or no need to comprehend the text beyond the phrase level. The identification of target information is achieved through literal or close to literal matching of elements in the question and in the text, although some tasks may require inferences at the word or phrase level.

Searching for and selecting relevant text. Proficient readers are able to select information not only when faced with a single piece of text but also when faced with several pieces of text. In electronic environments, the amount of available information often largely exceeds the amount readers are able to actually process. In these multiple-text reading situations, readers have to make decisions as to which of the available pieces of text is the most important, relevant, accurate or truthful (Rouet and Britt, 2011[53]). These decisions are based on readers' assessment of the qualities of the pieces of text, which are made from partial and sometimes opaque indicators, such as the information contained in a web link (Gerjets, Kammerer and Werner, 2011[71]; Mason, Boldrin and Ariasi, 2010[72]; Naumann, 2015[36]; Rieh, 2002[73]). Thus, one's ability to search for and select a piece of text from among a set of texts is an integral component of reading literacy. In PISA 2018, text search and selection tasks involve the use of text descriptors such as headers, source information (e.g. author, medium, date), and embedded or explicit links such as search engine result pages.

Understanding

A large number of reading activities involve the parsing and integration of extended passages of text in order to form an understanding of the meaning conveyed in the passage. Text understanding (also called comprehension) may be seen as the construction by the reader of a mental representation of what the text is about, which Kintsch (1998[20]) defines as a “situation model”. A situation model is based on two core processes: the construction of a memory representation of the literal meaning of the text; and the integration of the contents of the text with one's prior knowledge through mapping and inference processes (McNamara and Magliano, 2009[42]; Zwaan and Singer, 2003[21]).

Acquiring a representation of the literal meaning of a text requires readers to comprehend sentences or short passages. Literal comprehension tasks involve a direct or paraphrased match between the question and target information within a passage. The reader may need to rank, prioritise or condense information at a local level. (Note that tasks requiring integration at the level of an entire passage, such as identifying the main idea, summarising the passage, or giving a title to the passage, are considered to be integration tasks; see below.)

Constructing an integrated text representation requires working from the level of individual sentences to the entire passage. The reader needs to generate various types of inferences, ranging from simple connecting inferences (such as the resolution of anaphora) to more complex coherence relationships (e.g. spatial, temporal, causal or claim-argument links) (van den Broek, Risden and Husbye-Hartmann, 1995[44]). Inferences might link different portions of the text together, or they may link the text to the question statement. Finally, the production of inferences is also needed in tasks where the reader must identify the implicit main idea of a given passage, possibly in order to produce a summary or a title for the passage.

When readers are faced with more than one text, integration and inference generation may need to be performed based on pieces of information located in different pieces of text (Perfetti, Rouet and Britt, 1999[56]). One specific problem that may arise when integrating information across multiple pieces of text is that they might provide inconsistent or conflicting information. In those cases, readers must engage in evaluation processes in order to acknowledge and handle the conflict (Bråten, Strømsø and Britt, 2009[74]; Stadtler and Bromme, 2014[75]) (see below).

Evaluating and reflecting

Competent readers can reason beyond the literal or inferred meaning of the text. They can reflect on the content and form of the text and critically assess the quality and validity of the information therein.

Assessing quality and credibility. Competent readers can evaluate the quality and credibility of the information in a piece of text: whether the information is valid, up-to-date, accurate and/or unbiased. Proficient evaluation sometimes requires the reader to identify and assess the source of the information: whether the author is competent, well-informed and benevolent.

Reflecting on content and form. Competent readers must also be able to reflect on the quality and style of the writing. This reflection involves being able to evaluate the form of the writing and how the content and form together relate to and express the author’s purposes and point of view. Reflecting also involves drawing upon one's knowledge, opinions or attitudes beyond the text in order to relate the information provided within the text to one’s own conceptual and experiential frames of reference. Reflection items may be thought of as those that require readers to consult their own experience or knowledge to compare, contrast or hypothesise different perspectives or viewpoints. Evaluation and reflection have arguably always been part of reading literacy, but their importance has grown with the amount and heterogeneity of information that readers face today.

Detecting and handling conflict. When facing multiple pieces of text that contradict each other, readers need to be aware of the conflict and to find ways to deal with it (Britt and Rouet, 2012[46]; Stadtler and Bromme, 2013[76]; 2014[75]). Handling conflict typically requires readers to assign discrepant claims to their respective sources and to assess the soundness of the claims and/or the credibility of the sources. As these skills underlie much of contemporary reading, it is critically important to measure the extent to which 15-year-olds can meet the new challenges of comprehending, comparing and integrating multiple pieces of text (Bråten et al., 2011[77]; Coiro et al., 2008[47]; Goldman, 2004[55]; Leu et al., 2015[12]; Mason, Boldrin and Ariasi, 2010[72]; Rouet and Britt, 2014[78]).

Task management processes

In the context of any assessment, but also in many everyday reading situations (White, Chen and Forsyth, 2010[54]), readers engage with texts because they receive some kind of assignment or external prompt to do so. Reading literacy involves one's ability to accurately represent the reading demands of a situation, to set up task-relevant reading goals, to monitor progress toward these goals, and to self-regulate these goals and strategies throughout the activity (see, e.g., Hacker (1998[79]) and Winne and Hadwin (1998[80]) for discussions of self-regulated reading).

Task-oriented goals fuel the reader's search for task-relevant texts and/or passages within a text (McCrudden and Schraw, 2007[52]; Rouet and Britt, 2011[53]; Vidal-Abarca, Mañá and Gil, 2010[57]). Monitoring (metacognitive) processes enable the dynamic updating of goals throughout the reading activity. Task management is represented in the background of text processing to emphasise the fact that it constitutes a different, metacognitive level of processing.

While readers’ own interpretation of a task’s requirements is an important component of the task management processes, the construction of reading goals extends beyond the explicit task instructions as goals may be self-generated based on one's own interests and initiative. However, the PISA reading literacy assessment only considers those goals that readers form upon receiving external prompts to accomplish a given task. In addition, due to implementation constraints, task management processes are represented but not directly and independently assessed as part of PISA 2018. However, portions of the background questionnaire will estimate readers' awareness of reading strategies. Future cycles may consider the use of computer-generated process indicators (such as how often and at what time intervals a student visits a particular page of text or the number of looks back at a question a student makes) as part of the assessment of task management skills.
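Process indicators of the kind mentioned above can be derived from simple interaction logs. The sketch below is purely illustrative (the log format, event names and page identifiers are hypothetical, not the actual PISA platform schema): it counts how often a student visits each page and accumulates the dwell time per page from timestamped events.

```python
from collections import defaultdict

# Hypothetical log of (timestamp in seconds, event type, target page).
# The schema is illustrative only, not the actual PISA log format.
log = [
    (0.0, "visit_page", "text_1"),
    (12.5, "visit_page", "question_1"),
    (20.0, "visit_page", "text_1"),      # look-back at the text
    (41.0, "visit_page", "question_1"),  # look-back at the question
    (55.0, "end_task", None),
]

def process_indicators(log):
    """Count visits and accumulate dwell time (seconds) per page.

    Each page visit lasts until the next logged event, so the log
    must end with a closing event such as "end_task".
    """
    visits = defaultdict(int)
    dwell = defaultdict(float)
    for (t, event, page), (t_next, _, _) in zip(log, log[1:]):
        if event == "visit_page":
            visits[page] += 1
            dwell[page] += t_next - t
    return dict(visits), dict(dwell)

visits, dwell = process_indicators(log)
print(visits)  # {'text_1': 2, 'question_1': 2}
print(dwell)   # {'text_1': 33.5, 'question_1': 21.5}
```

Here the second visit to `question_1` is the kind of "look back at a question" indicator the text describes; the number and timing of such events could feed a task-management measure in future cycles.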

Summary of reading processes

To summarise, the 2018 framework features a comprehensive and detailed typology of the cognitive processes involved in purposeful reading activities as they unfold in single or multiple text environments. Due to design constraints, it is not possible to distinguish each of these processes in a separate proficiency scale. Instead, the framework defines a smaller list of processes that will form the basis for scaling and reporting (Table 2.1).

It is worth noting that the 2018 process typology also permits an analysis of changes over time in students’ proficiency at the level of broad reading processes, as the former “cognitive aspects” featured in previous frameworks can be mapped onto specific processes in the new typology. Table 2.1 shows the correspondence between the 2018 typology and the 2009 typology (which was also used in 2012 and 2015). The distinction between single and multiple text processes is discussed in greater detail below.

Table 2.1. Mapping of the 2018 process typology to 2018 reporting scales and to 2009-2015 cognitive aspects

| 2018 cognitive processes | Superordinate category used for scaling in 2018 | 2009-2015 aspects |
| --- | --- | --- |
| Reading fluently | Reported on PISA scale¹ | Not assessed |
| Accessing and retrieving information within a text | Locating information | Accessing and retrieving |
| Searching for and selecting relevant text | Locating information | Accessing and retrieving |
| Representing literal meaning | Understanding | Integrating and interpreting |
| Integrating and generating inferences | Understanding | Integrating and interpreting |
| Assessing quality and credibility | Evaluating and reflecting | Reflecting and evaluating |
| Reflecting on content and form | Evaluating and reflecting | Reflecting and evaluating |
| Detecting and handling conflict | Evaluating and reflecting | Complex |

Note 1. Reading fluency items were scaled in three steps. First, only the (other) reading items were scaled. Second, these reading items were finalised and item fits were evaluated in a way that was not affected by reading fluency items. Third, reading fluency items were added to the scaling procedure and item fits were evaluated. As reading fluency items reflect the orthography of the test language, it was expected that such items would have stronger item-to-country/language associations than other items in the assessment.
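The correspondence in Table 2.1 can be written down as a simple lookup structure. The sketch below merely encodes the table's rows (the dictionary and function are illustrative; the category names themselves come from the framework):

```python
# Table 2.1 encoded as a mapping from each 2018 cognitive process
# to its superordinate category used for scaling and reporting.
PROCESS_TO_SCALE = {
    "Reading fluently": "Reported on PISA scale",
    "Accessing and retrieving information within a text": "Locating information",
    "Searching for and selecting relevant text": "Locating information",
    "Representing literal meaning": "Understanding",
    "Integrating and generating inferences": "Understanding",
    "Assessing quality and credibility": "Evaluating and reflecting",
    "Reflecting on content and form": "Evaluating and reflecting",
    "Detecting and handling conflict": "Evaluating and reflecting",
}

def scale_for(process: str) -> str:
    """Return the 2018 reporting scale for a given cognitive process."""
    return PROCESS_TO_SCALE[process]

print(scale_for("Searching for and selecting relevant text"))  # Locating information
```

This makes the aggregation explicit: eight cognitive processes collapse into three reporting scales plus the separately reported fluency measure, which is why individual processes cannot each receive their own proficiency scale.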

Texts

Reading necessarily requires material for the reader to read. In an assessment, that material – a piece of text or a set of texts related to a particular task – must include sufficient information for a proficient reader to engage in meaningful comprehension and resolve the problem posed by the task. Although it is obvious that there are many different kinds of text and that any assessment should include a broad range of texts, there has never been a single agreed-upon categorisation of the many different kinds of text that readers encounter. With the advent of digital media and the profusion of new text genres and text-based communication services – some of which may not survive the next decade, some of which may be newly created in the same time span – this issue becomes even more complex. Box 2.3 outlines a categorisation that was used between PISA 2009 and PISA 2015.

Box 2.3. Characteristics used to classify texts in the PISA 2009, 2012 and 2015 reading frameworks

The previous reference framework (2009) included four major dimensions to characterise texts:

  1) Medium: print or electronic

  2) Environment: authored or message-based

  3) Text format: continuous, non-continuous, mixed or multiple

  4) Text type: description, narration, exposition, argumentation, instruction or transaction

A Digital Reading Assessment was offered as an optional component in 2009 and 2012.

For the 2015 reading literacy assessment, only texts that had their origin as paper-based print documents were used, albeit presented on computer. For clarity, these were referred to as fixed and dynamic texts under the heading “text display space” instead of medium (in an attempt to clarify that while their origin was paper-based print, students were in fact reading them on a computer screen, hence on an electronic medium). Because reading literacy was a minor domain in 2015, no new tasks were designed and implemented. Consequently, dynamic texts, i.e. texts such as websites designed to take advantage of hyperlinks, menus, and other navigational features of an electronic medium, were not part of PISA 2015.1

Reading is the major domain in 2018 and with a revised framework, a broader range of texts can now be represented in the assessment. These include texts that are typical of the print medium but also the ever-expanding category of texts typical of the digital medium. Just like printed texts, some digital texts are "static" in that they come with a minimal set of tools for interaction (scrolling, paging and a find function). This describes, for instance, documents intended to be printed but displayed on a computer screen (e.g. word processing documents or PDF files). However, many digital texts come with innovative features that increase the possibilities for the reader to interact with the material, hence their characterisation as "dynamic texts". Features of dynamic text include embedded hyperlinks that take the reader to other sections, pages or websites; advanced search functions that provide ad hoc indexes of the searched keywords and/or highlight these words in the text; and social interaction as in interactive text-based communication media such as e-mail, forums and instant messaging services.

The 2018 framework defines four dimensions of texts: source (single, multiple); organisational and navigational structure (static, dynamic); format (continuous, non-continuous, mixed); and type (description, narration, exposition, argumentation, instruction, interaction, transaction). The design of test materials that vary along these four dimensions will ensure a broad coverage of the domain and a representation of traditional as well as emerging reading practices.

Source

In the PISA 2018 framework, a source is a unit of text. Single-source texts may be defined by having a definite author (or group of authors), time of writing or publication date, and reference title or number. Authors may be defined precisely, as in most traditional printed books, or more vaguely, as with the pseudonyms in a blog post or the sponsors of a website. A single-source text may also be construed as such because it is presented to the reader in isolation from other texts, even if it does not explicitly bear any source indication. Multiple-source texts are defined by having different authors, or by being published at different times, or by bearing different titles or reference numbers. Note that in the PISA framework, “title” is meant in the sense of a bibliographical catalogue unit. Lengthy texts that feature several sections with titles and subtitles are still single texts, to the extent that they were written by a definite author (or group of authors) at a given date. Likewise, multi-page websites are single-source texts as long as there is no explicit mention of a different author or date. Multiple-source texts may be represented on a single page. This is the case in printed newspapers and in many textbooks, but also in forums, customer reviews and question-and-answer websites. Finally, a single text may contain embedded sources, that is, references to other authors or texts (Rouet and Britt, 2014[78]; Strømsø et al., 2013[81]).

In sum, the multiple texts considered in previous versions of the framework correspond to multiple-source texts in the PISA 2018 framework as long as they involve several sources. All the other texts are subsumed under the category of single-source texts.

Organisational and navigational structure

Screen sizes vary dramatically in digital environments, from cell phone displays, which are smaller than a traditional index card, to large, multiple screen displays for simultaneously showing multiple screen windows of information. At the time of the drafting of this framework, however, the typical computer screen (such as the 15" or 17" screen that comes with ordinary desktop and laptop computers) features a display resolution of 1024x768 pixels. Assuming a typical font size, this is enough to display about half of an A4 or US Letter page, that is, a very short piece of text. Given the wide variation in the “landscape” available on screens to display text, digital texts come with a number of tools meant to let the user access and display specific passages. These tools range from generic tools, such as the scroll bar and tabs (also found in a number of other software applications like spreadsheets and word processors) and tools to resize or position the text on the screen, to more specific tools such as menus, tables of contents and embedded hyperlinks to move between text segments. There is growing evidence that navigation in digital text requires specific skills (OECD, 2011[25]; Rouet, Vörös and Pléh, 2012[82]). Therefore, it is important to assess readers' ability to handle texts featuring a high density of navigational tools. For reasons of simplicity, the PISA 2018 framework distinguishes “static” texts, with a simple organisation and low density of navigational tools (typically, one or several screen pages arranged linearly), from “dynamic” texts, which feature a more complex, non-linear organisation and a higher density of navigational devices. Note that the term “density” is preferred to “number” to mark the fact that dynamic texts do not have to be longer than static texts.

In order to ensure a broad coverage of the domain and to maintain consistency with past frameworks, the 2018 framework also retains two former dimensions of the classification of texts, “format” and “type”, that remain for the most part unchanged from the previous framework.

Text format

An important way to classify texts, and one at the heart of the organisation of the PISA 2000 framework and assessment, is to distinguish between continuous and non-continuous texts. Continuous texts are typically composed of sentences that are, in turn, organised into paragraphs. These may fit into even larger structures such as sections, chapters and books. Non-continuous texts are most frequently organised in matrix format, based on combinations of lists.

Texts in continuous and non-continuous formats can be either fixed or dynamic texts. Mixed format texts can also be fixed texts but are particularly often dynamic texts. Each of these formats is elaborated below.

Other non-text-formatted objects are also commonly used in conjunction with fixed texts and particularly with dynamic texts. Pictures and graphic images occur frequently in fixed texts and can legitimately be regarded as integral to such texts. Static images as well as videos, animations and audio files regularly accompany dynamic texts and can also be regarded as integral to those texts. As a reading literacy assessment, PISA does not include non-text-formatted objects in their own right, but any such objects may, in principle, appear in PISA as part of a (verbal) text. However, in practice, the use of video and animation is very limited in the current assessment. Audio is not used at all because of practical limitations such as the need for headphones and audio translation.

Continuous texts

Continuous texts are formed by sentences organised into paragraphs. Examples of continuous texts include newspaper reports, essays, novels, short stories, reviews and letters.

Graphically or visually, text is organised by its separation into sentences and paragraphs with spacing (e.g. indentation) and punctuation conventions. Texts also follow a hierarchical structure signalled by headings and content that helps readers recognise the organisation of the text. These markers also provide clues to text boundaries (showing section completion, for example). The location of information is often facilitated by the use of different font sizes, font types such as italic and boldface, and borders and patterns. The use of typographical and format clues is an essential subskill of effective reading.

Discourse markers also provide organisational information. For example, sequence markers (“first”, “second”, “third”, etc.) signal the relation of each of the units introduced to each other and indicate how the units relate to the larger surrounding text. Causal connectors (“therefore”, “for this reason”, “since”, etc.) signify cause-and-effect relationships between parts of a text.

Non-continuous texts

Non-continuous texts are organised differently to continuous texts and therefore require a different kind of reading approach. Most non-continuous texts are composed of a number of lists (Kirsch and Mosenthal, 1990[23]). Some are single, simple lists, but most consist of several simple lists possibly crossed with one another.

Examples of non-continuous text objects are lists, tables, graphs, diagrams, advertisements, schedules, catalogues, indices and forms. These text objects may be either fixed or dynamic.

Mixed texts

Many fixed and dynamic texts are single, coherent objects consisting of a set of elements in both continuous and non-continuous formats and are therefore known as mixed texts. Examples of mixed texts include a paragraph together with a picture, or a graph with an explanatory legend. If such mixed texts are well-constructed, the components (for example, a graph or table with an associated prose explanation) support one another through coherent and cohesive links both at local (e.g., locating a city on a map) and global (e.g., discussing the trend represented in a graph) levels.

Mixed text is a common format in fixed-text magazines, reference books and reports, where authors employ a variety of representations to communicate information. Among dynamic texts, authored web pages are typically mixed texts, with combinations of lists, paragraphs of prose and often graphics. Message-based texts, such as online forms, e-mail messages and forums, also combine texts that are continuous and non-continuous in format.

The “multiple” format defined in the previous versions of the framework is now represented as one modality of the new “source” dimension defined above.

Assessing reading literacy

The previous section outlined the conceptual framework for reading literacy. The concepts in the framework must in turn be represented in tasks and questions in order to measure students’ proficiencies in reading literacy.

In this section, we consider the use of scenarios, factors affecting item difficulty, dimensions ensuring coverage of the domain and some of the other major issues in constructing and operationalising the assessment.

Scenarios

Reading is a purposeful act that occurs within the context of particular goals. In many traditional reading assessments, test takers are presented with a series of unrelated passages on a range of general topics. Students answer a set of discrete items on each passage and then move on to the next unrelated passage. In this traditional design, students are effectively expected to “forget” what they have read previously when answering questions on later passages. Consequently, there is no overarching purpose for reading other than to answer discrete questions (Rupp, Ferne and Choi, 2006[83]). In contrast, a scenario-based assessment approach can enhance students' engagement with the tasks and thus enable a more accurate assessment of what they can do (Sabatini et al., 2014[84]; 2015[85]).

The PISA 2018 assessment will include scenarios in which students are provided with an overarching purpose for reading a collection of thematically related texts in order to complete a higher-level task (e.g. responding to some larger integrative question or writing a recommendation based on a set of texts), along with traditional standalone PISA reading units. The reading purpose sets up a collection of goals, or criteria, that students use to search for information, evaluate sources, read for comprehension and/or integrate across texts. The collection of sources can be diverse and may include a selection from literature, textbooks, e-mails, blogs, websites, policy documents, primary historical documents and so forth. Although the prompts and tasks that will evolve from this framework may not grant student test takers the freedom to choose their own purposes for reading and the texts related to those individual purposes, the goal of this assessment is to offer test takers some freedom in choosing the textual sources and paths they will use to respond to initial prompts. In this way, goal-driven reading can be assessed within the constraints of a large-scale assessment.

Tasks

Each scenario is made up of one or more tasks. In each task, students may be asked questions about the texts contained therein, ranging from traditional comprehension items (locating information, generating an inference) to more complex tasks such as the synthesis and integration of multiple texts, evaluating web search results or corroborating information across multiple texts. Each task is designed to assess one or more of the processes identified in the framework. Tasks in a scenario can be ordered from least difficult to most difficult to measure student abilities. For instance, a student might encounter an initial task in which he or she must locate a particular document based on a search result. In the second task, the student might have to answer a question about information that is specifically stated in the text. Finally, in the third task, the student might need to determine if the author’s point of view in the first text is the same as in a second text. In each case, these tasks can be scaffolded so that if a student fails to find the correct document in the first task, he or she is then provided with the correct document in order to complete the second task. In this way, complex multipart scenarios do not become an “all-or-none” activity, but are rather a way to triangulate the level of different student skills through a realistic set of tasks. Thus, scenarios and tasks in the PISA 2018 reading literacy assessment correspond to units and items in previous assessments.

A scenario-based assessment mimics the way an individual interacts with and uses literacy source material in a more authentic way than a traditional, decontextualised assessment would. It presents students with realistic problems and issues to solve, and it involves the use of both basic and higher-level reading and reasoning skills (O’Reilly and Sabatini, 2013[86]).

Scenarios represent a natural extension of the traditional, unit-based approach in PISA. A scenario-based approach was used in the PISA 2012 assessment of problem solving and the PISA 2015 assessment of collaborative problem solving. Tasks 2-4 in Appendix B illustrate a sample scenario with multiple items.

Distribution of tasks

Each task will primarily assess one of the three main categories of cognitive process defined earlier. As such, each task can be thought of as an individual assessment item. The approximate distribution of tasks for the 2018 reading literacy assessment is shown below in Table 2.2 and is contrasted with the distribution of tasks for the 2015 assessment.

Table 2.2. Approximate distribution of tasks by targeted process and text source

2015 framework: Accessing and retrieving 25%
    2018 framework, single text: Scanning and locating 15%
    2018 framework, multiple text: Searching for and selecting relevant text 10%

2015 framework: Integrating and interpreting 50%
    2018 framework, single text: Literal comprehension 15%; Inferential comprehension 15%
    2018 framework, multiple text: Multiple-text inferential comprehension 15%

2015 framework: Reflecting and evaluating 25%
    2018 framework, single text: Assessing quality and credibility; Reflecting on content and form 20%
    2018 framework, multiple text: Corroborating/handling conflict 10%

Items will be reused from previous PISA reading literacy assessments in order to allow for the measurement of trends. In order to achieve the desired proportion of tasks involving multiple pieces of text, and because prior PISA assessments focused on tasks involving only single texts, the development of new items will mostly require the creation of tasks involving multiple texts (e.g. searching for and selecting relevant text, multiple-text inferential comprehension and corroborating/handling conflict). At the same time, a sufficient number of single-text items need to be present to ensure that future trend items cover the entire framework.

Factors affecting item difficulty

The PISA reading literacy assessment is designed to monitor and report on the reading proficiency of 15-year-olds as they approach the end of compulsory education. Each task in the assessment is designed to gather a specific piece of evidence about that proficiency by simulating a reading activity that a reader might carry out either inside or outside school, as an adolescent or as an adult.

The PISA reading literacy tasks range from straightforward locating and comprehension activities to more sophisticated activities requiring the integration of information across multiple pieces of text. Drawing on Kirsch and Mosenthal’s work (Kirsch, 2001[22]; Kirsch and Mosenthal, 1990[23]), task difficulty can be manipulated through the process and text format variables. In Table 2.3 below, we outline the factors on which the difficulty of different types of tasks depends. Box 2.4 discusses how the availability of text – whether the student can see the text when answering questions about it – is related to their performance on comprehension questions.

Table 2.3. Item difficulty as a function of task and source dimensions

Single texts

In scanning and locating tasks, the difficulty depends on the number of pieces of information that the reader needs to locate, the number of inferences the reader must make, the amount and prominence of competing information and the length and complexity of the piece of text.

In literal and inferential comprehension tasks, the difficulty depends on the type of interpretation required (for example, making a comparison is easier than finding a contrast); the number of pieces of information to be considered and the distance among them; the degree and prominence of competing information in the text; and the nature of the text (the longer, less familiar and the more abstract the content and organisation of ideas, the more difficult the task is likely to be).

In reflecting on content and form tasks, the difficulty depends on the nature of the knowledge that the reader needs to bring to the piece of text (a task is more difficult if the reader needs to draw on narrow, specialised knowledge rather than broad and common knowledge); on the abstraction and length of the piece of text; and on the depth of understanding of the piece of text required to complete the task.

For assessing quality and credibility tasks, the difficulty depends on whether the credentials and intention of the author are explicit or left for the reader to guess, and whether the text genre (e.g., a commercial message vs. a public health statement) is clearly marked.

Multiple texts

The difficulty of searching through multiple pieces of text depends on the number of pieces of text, the complexity of the document hierarchy (depth and breadth), the reader’s familiarity with the hierarchy, the amount of non-hierarchical linking, the salience of target information, the relevance of the headers and the degree of similarity between different source texts.

In tasks involving multiple documents, the difficulty of making inferences depends on the number of pieces of text, the relevance of the headers, the similarity of the content between the pieces of text (e.g. between the arguments and points of view), and the similarity of the physical presentation/structure of the sources.

In tasks involving multiple documents, the difficulty of tasks requiring readers to corroborate or handle conflict is likely to increase with the number of pieces of text, the dissimilarity of the content or arguments across texts, and differences in the amount of information available about the sources, their physical presentation and their organisation.


Box 2.4. Text availability and its impact on comprehension

In the last decade, there has been some debate as to whether memory-based measures of reading comprehension, i.e. answering comprehension questions without the text being available after the initial reading, might be a better indicator of students’ reading comprehension skills than answering questions with the text available. Answering comprehension questions with the text by one’s side might be more ecologically valid because many reading settings (especially in the digital age) allow the reader to refer back to the text. In addition, if the text is not available to students, their performance on the comprehension questions might be confounded with their ability to remember the content of the text. On the other hand, answering comprehension questions when the text is no longer available is also a common situation (e.g. answering questions during a class session about a textbook chapter that was read the evening before). Empirical studies (Ozuru et al., 2007[87]; Schroeder, 2011[88]) provide some evidence that comprehension questions without text availability might be more sensitive to the quality of the processes that are executed while students are reading a text and the strength of the resulting memory representation. At the same time, however, both measures are highly correlated and are thus difficult to dissociate empirically. At present, therefore, there is not enough evidence to justify any major changes in the way the PISA reading assessment is administered. However, to further explore this issue, future cycles of PISA could consider measuring the time spent during the initial reading of a piece of text, the time spent re-reading the text when answering questions, and the total time spent on a task.

Factors improving the coverage of the domain

Situations

Scenarios can be developed to simulate a wide range of potential reading situations. The word “situation” is primarily used to define the contexts and uses for which the reader engages with the text. Most often, contexts of use match specific text genres and author purposes. For instance, textbooks are typically written for students and used by students in educational contexts. Therefore, the situation generally refers to both the context of use and the supposed audience and purpose of the text. Some situations, however, involve the use of texts that belong to various genres, such as when a history student works from both a first-hand account of an event (e.g., a personal diary, a court testimony) and a scholarly essay written long after the event (Wineburg, 1991[89]).

The framework categorises situations using a typology adapted from the Common European Framework of Reference (CEFR) developed for the Council of Europe. The situations may be personal, public, occupational or educational; these terms are defined in Box 2.5.

Box 2.5. Categorisation of situations

A personal situation is intended to satisfy an individual’s personal interests, both practical and intellectual. This category also includes leisure or recreational activities that are intended to maintain or develop personal connections with other people through a range of text genres such as personal letters, fiction, biography and informational texts (e.g., a gardening guide). In the electronic medium, such reading includes personal e-mails, instant messages and diary-style blogs.

A public situation is one that relates to the activities and concerns of the larger society. This category makes use of official documents as well as information about public events. In general, the texts associated with this category involve more or less anonymous contact with others; therefore, they also include message boards, news websites and public notices that are encountered both on line and in print.

Educational situations make use of texts designed specifically for the purpose of instruction. Printed textbooks, electronic textbooks and interactive learning software are typical examples of material generated for this kind of reading. Educational reading normally involves acquiring information as part of a larger learning task. The materials are often not chosen by the reader but are instead assigned by an instructor.

A typical occupational reading situation is one that involves the accomplishment of some immediate task. The task could be to find a job, either in a print newspaper’s classified advertisement section or online; or it could be following workplace directions. Texts written for these purposes, and the tasks based on them, are classified as occupational in PISA. While only some of the 15-year-olds who are assessed are currently working, it is important to include tasks based on work-related texts since the assessment of young people’s readiness for life beyond compulsory schooling and their ability to use their knowledge and skills to meet real-life challenges is a fundamental goal of PISA.

Many texts used in classrooms are not specifically designed for classroom use. For example, a piece of literary text may typically be read by a 15-year-old in a mother-tongue language or literature class, yet the text was written (presumably) for readers’ personal enjoyment and appreciation. Given its original purpose, such a text is classified as being of a personal situation in PISA. As Hubbard (1989[90]) has shown, some kinds of reading usually associated with out-of-school settings for children, such as rules for clubs and records of games, often take place informally at school as well. These are classified as public situations in PISA. Conversely, textbooks are read both in schools and at home, and the process and purpose probably differ little from one setting to another. These are classified as educational situations in PISA.

It should be further emphasised that many texts can be cross-classified as pertaining to different situations. In practice, for example, a piece of text may be intended both to delight and to instruct (personal and educational); or to provide professional advice, which is also general information (occupational and public). The intent of sampling texts of a variety of situations is to maximise the diversity of content that will be included in the PISA reading literacy test.

Text types

The construct of text type refers both to the intent and the internal organisation of a text. Major text types include: description, narration, exposition, argumentation, instruction and transaction (Meyer and Rice, 1984[91]).2 Real-world texts tend to cut across text type categories and are typically difficult to categorise. For example, a chapter in a textbook might include some definitions (exposition), some directions on how to solve particular problems (instruction), a brief historical account of the discovery of the solution (narration) and descriptions of some typical objects involved in the solution (description). Nevertheless, in an assessment like PISA, it is useful to categorise texts according to text type, based on the predominant characteristics of the text, in order to ensure that a range of types of reading is represented.

The classification of text types used in PISA 2018 is adapted from the work of Werlich (1976[92]) and is shown in Box 2.6. Again, many texts can be cross-classified as belonging to multiple text types.

Box 2.6. Classification of text types

Description texts are texts where the information refers to properties of objects in space. Such texts typically provide an answer to what questions. Descriptions can take several forms. Impressionistic descriptions present subjective impressions of relations, qualities and directions in space. Technical descriptions, on the other hand, are objective observations in space. Technical descriptions frequently use non-continuous text formats such as diagrams and illustrations. Examples of description-type text objects are a depiction of a particular place in a travelogue or diary, a catalogue, a geographical map, an online flight schedule and a description of a feature, function or process in a technical manual.

The information in narration texts refers to properties of objects in time. Narration texts typically answer questions relating to when, in what sequence, and why characters in stories behave as they do. Narration can take different forms. Narratives record actions and events from a subjective point of view. Reports record actions and events from an objective point of view, one which can be verified by others. News stories intend to enable readers to form their own independent opinion of facts and events without being influenced by the reporter’s own views. Examples of narration-type text objects are novels, short stories, plays, biographies, comic strips and newspaper reports of events.

Exposition texts present information as composite concepts or mental constructs, or those elements through which such concepts or mental constructs can be analysed. The text provides an explanation of how the different elements interrelate and form a meaningful whole and often answers questions about how. Expositions can take various forms. Expository essays provide a simple explanation of concepts, mental constructs or experiences from a subjective point of view. Definitions relate terms or names to mental concepts, thereby explaining their meaning. Explications explain how a mental concept can be linked with words or terms. The concept is treated as a composite whole that can be understood by breaking it down into its constituent elements and then listing the relationships between those elements. Summaries explain and communicate texts in a shorter form than the original text. Minutes are a record of the results of meetings or presentations. Text interpretations explain the abstract concepts that are discussed in a particular (fictional or non-fictional) piece of text or group of texts. Examples of exposition-type text objects are scholarly essays, diagrams showing a model of how a biological system (e.g. the heart) functions, graphs of population trends, concept maps and entries in an online encyclopaedia.

Argumentation texts present the relationship among concepts or propositions. Argumentative texts often answer why questions. An important subclass of argumentation texts is persuasive and opinionative texts, which refer to opinions and points of view. Comments relate events, objects and ideas to a private system of thoughts, values and beliefs. Scientific argumentation relates events, objects and ideas to systems of thought and knowledge so that the resulting propositions can be verified as valid or non-valid. Examples of argumentation-type text objects might be letters to the editor, poster advertisements, posts in an online forum and web-based reviews of books or films.

Instruction texts, also sometimes called injunction texts, provide directions on what to do. Instructions are the directions to complete a task. Rules, regulations and statutes specify certain behaviours. Examples of instruction-type text objects are recipes, a series of diagrams showing a first-aid procedure and guidelines for operating digital software.

Transaction texts aim to achieve a specific purpose, such as requesting that something be done, organising a meeting or making a social engagement with a friend. Before the spread of electronic communication, the act of transaction was a significant component of some kinds of letters and the principal purpose of many phone calls. Werlich’s categorisation (1976[92]), used until now in the PISA framework, did not include transaction texts.

The term “transaction” is used in PISA not to describe the general process of extracting meaning from texts (as in reader-response theory), but the type of text written for the kinds of purposes described here. Transactional texts are often personal in nature, rather than public, and this may help to explain why they do not appear to be represented in many text typologies. For example, this kind of text is not commonly found on websites (Santini, 2006[93]). With the ubiquity of e-mails, text messages, blogs and social networking websites today as means of personal communication, transactional text has become much more significant as a reading text type in recent years. Transactional texts often build on the possibly private knowledge and understanding common to those involved in the transaction – though clearly, such prior relationships are difficult to replicate in a large-scale assessment. Examples of transaction-type text objects are everyday e-mail and text message exchanges between colleagues or friends that request and confirm arrangements.

Narration-type texts occupy a prominent position in many national and international assessments. Some such texts are presented as accounts of the world as it is (or was) and therefore claim to be factual or non-fictional. Fictional accounts bear a more metaphorical relation to the world, presenting it as it might be or as it seems to be. In other large-scale reading studies, particularly those for school students (the National Assessment of Educational Progress [NAEP]; the IEA Reading Literacy Study [IEARLS] and the IEA Progress in International Reading Literacy Study [PIRLS]), the major classification of texts is between fictional or literary texts and non-fictional texts (reading for literary experience and reading for information or to perform a task in NAEP; for literary experience and to acquire and use information in PIRLS). This distinction is becoming increasingly blurred as authors use formats and structures typical of factual texts when writing fiction. The PISA reading literacy assessment includes both factual and fictional texts, as well as texts that may not be clearly one or the other. PISA, however, does not attempt to measure differences in reading proficiency between factual and fictional texts. In PISA, fictional texts are classified as narration-type texts.

Response formats

The form in which evidence of student ability is collected – the response format – varies depending on the kind of evidence being collected, and also according to the pragmatic constraints of a large-scale assessment. As in any large-scale assessment, the range of feasible item formats is limited. However, computer-based assessment makes possible response formats that involve interactions with text, such as highlighting and dragging-and-dropping. Computer-based assessments can also include multiple-choice and short constructed-response items (to which students write their own answer), just as paper-based assessments do.

Students with different characteristics might perform differently with different response formats. For example, closed and some multiple-choice items are typically more dependent on decoding skills than open constructed-response items, because readers have to decode distractors or items (Cain and Oakhill, 2006[94]). Several studies based on PISA data suggest that the response format has a significant effect on the performance of different groups: for example, students at different levels of proficiency (Routitsky and Turner, 2003[95]), students in different countries (Grisay and Monseur, 2007[96]), students with different levels of intrinsic reading motivation (Schwabe, McElvany and Trendtel, 2015[97]), and boys and girls (Lafontaine and Monseur, 2006[98]; 2006[99]; Schwabe, McElvany and Trendtel, 2015[97]). Given this variation, it is important to maintain a similar proportion of response formats in the items used in each PISA cycle so as to measure trends over time.

A further consideration in the reading literacy assessment is that open constructed-response items are particularly important to assess the reflecting and evaluating process, where the intent is often to assess the quality of a student’s thinking rather than the student’s final response itself. Nevertheless, because the focus of the assessment is on reading and not on writing, constructed response items should not be designed to put great emphasis on assessing writing skills such as spelling and grammar (see Box 2.7 for more on the place of writing skills in the reading literacy assessment). Finally, various response formats are not equally familiar to students in different countries. Including items in a variety of formats is therefore likely to provide all students, regardless of nationality, the opportunity to see both familiar and less familiar formats.

In summary, to ensure proper coverage of the ability ranges, to ensure fairness given the inter-country and gender differences observed and to ensure a valid assessment of the reflecting and evaluating process, both multiple-choice and open constructed-response items continue to be used in PISA reading literacy assessments regardless of the change in delivery mode. Any major change in the distribution of item types from that used in the paper-based reading assessment would also impact the measurement of trends.

Box 2.7. The status of writing skills in the PISA 2018 reading literacy assessment

Readers are often required to write comments, explanations or essays in response to questions, and they might choose to make notes, outlines and summaries, or simply write down their thoughts and reflections about texts, while achieving their reading goals. They also routinely engage in written communication with others (e.g. teachers, fellow students or acquaintances) for educational reasons (e.g. to e-mail an assignment to a teacher) or for social reasons (e.g. to chat with peers about text or in other school literacy contexts). The PISA 2018 reading framework considers writing to be an important correlate of reading literacy. However, test design and administration constraints prohibit the inclusion of an assessment of writing skills, where writing is in part defined as the quality and organisation of the production. Nevertheless, a significant proportion of test items require readers to articulate their thinking in written answers. Thus, the assessment of reading skills also draws on readers’ ability to communicate their understanding in writing, although such aspects as spelling, quality of writing and organisation are not measured in PISA.

Assessing ease and efficiency

The PISA 2018 reading literacy assessment will include an assessment of reading fluency, defined as the ease and efficiency with which students can read simple texts for understanding. This will provide a valuable indicator for describing and understanding differences between students, especially those in the lower reading proficiency levels. Students with low levels of foundational reading skills may be exerting so much attention and cognitive effort on lower-level skills of decoding, word recognition and sentence parsing that they have fewer resources left to perform higher-level comprehension tasks with single or multiple texts. This finding applies to young as well as to teenage readers (Rasinski et al., 2005[100]; Scammacca et al., 2007[101]).

The computerised administration and scoring in PISA 2018 allows for the measurement of the ease and efficiency with which 15-year-olds can read texts accurately with understanding. While not all slow reading is poor reading, as noted above, a large body of evidence documents how and why a lack of automaticity in one’s basic reading processes can be a bottleneck to higher-level reading proficiency and is associated with poor comprehension (Rayner et al., 2001[102]). Thus, it is valuable to have an indicator of ease and efficiency to better describe and interpret very low-level performance on PISA reading comprehension tasks. A basic indicator of reading rate under low-demand conditions can also be used for other purposes, such as investigating how much students regulate their reading rate or strategic processes in the face of more complex tasks or larger volumes of text.

It is further worth noting that with the exponential expansion of text content available on the Internet, there is an ever greater need for 21st century students to be not only proficient readers, but also efficient readers (OECD, 2011[103]). While there are many ways to define, operationalise and measure reading ease, efficiency or fluency, the most common approach with silent reading tasks (those where the reader does not read aloud) is to use indicators of accuracy and rate. Oral reading fluency measures, where the reader does read aloud, can also be used to estimate the prosody and expressiveness of the reader, but it is currently infeasible to implement and score oral reading fluency in all the languages in which PISA is administered. Furthermore, it is not yet established whether such oral reading fluency measures add value to the silent reading indicators of accuracy and rate (Eason et al., 2013[104]; Kuhn, Schwanenflugel and Meisinger, 2010[105]). Thus, a silent reading task design is most feasible for a PISA administration.
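As an illustration of how accuracy and rate can be combined into a single efficiency indicator, the sketch below scores a set of timed silent sentence judgments as correct responses per minute. The data structure, function name and scoring rule are assumptions for illustration only, not PISA's operational scoring:

```python
from dataclasses import dataclass

@dataclass
class SentenceResponse:
    correct: bool    # was the student's judgment of the sentence right?
    seconds: float   # time taken to read the sentence and respond

def efficiency_score(responses: list[SentenceResponse]) -> float:
    """Illustrative silent-reading efficiency indicator:
    correct judgments per minute of reading/response time.
    (Assumed scoring rule, not PISA's operational metric.)"""
    total_minutes = sum(r.seconds for r in responses) / 60.0
    n_correct = sum(r.correct for r in responses)
    return n_correct / total_minutes if total_minutes > 0 else 0.0
```

Combining accuracy and time in one rate-based number captures the intuition in the text: a reader who answers accurately but very slowly, and a reader who answers quickly but inaccurately, both receive a lower efficiency score than a reader who is both fast and accurate.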

In order to better understand the challenges facing 15-year-olds scoring at lower levels on the PISA reading literacy assessment, a specific task can be administered near the start of the assessment to measure reading ease and efficiency. Performance on this task can be scaled and reported independently from the main proficiency scale. As noted above, inefficient reading can be a symptom of low foundational skills. However, some individuals, such as non-native speakers of the assessment language, might be relatively slow readers and yet possess compensatory or strategic processes that permit them to be higher-level readers when given sufficient time to complete complex tasks. Thus, it seems most prudent to use the ease of reading indicator as a descriptive variable to help differentiate students who may have foundational skill deficits from those who are slow, but nonetheless proficient readers.

In addition, a measure of reading ease and efficiency could be, as one of several indicators, used to place students in a level for adaptive testing (see section below on “Considerations for adaptive testing”). However, for the reasons cited in the previous paragraph, the measure would not be suitable as the sole indicator of reading level.

One task that has been used effectively as an indicator of reading ease and efficiency in other surveys requires students to read a sentence and judge its plausibility based on general knowledge or on the internal logical consistency of the sentence. The measure takes into account both the accuracy of the student’s understanding of the text and the time it takes to read and respond. This task has been used in the Woodcock-Johnson Reading Fluency subtest (Woodcock, McGrew and Mather, 2001[62]) and the Test of Silent Reading Efficiency and Comprehension (TOSREC) (Wagner et al., 2010[60]). It is also used in the PIAAC Reading Components task set (OECD, 2013[13]; Sabatini and Bruce, 2009[106]). A similar task was used in the Austrian PISA 2000 assessment and was highly correlated (r = .64) with the final test score (Landerl and Reiter, 2002[107]). Task 1 in Appendix B shows a sample reading ease and efficiency item taken from the PIAAC Reading Components task set.

In PISA 2018, data from complex reading literacy tasks will not be used to measure reading fluency. The design and instructions accompanying reading fluency tasks should specifically target the reading fluency construct. The texts therefore need to be simple and short so that students do not make use of strategic or compensatory processes when responding to questions. In addition, the task demands should require minimal reasoning so as to not confound decision time with reading time. The more complex the task, the less likely it is that it evaluates solely reading fluency.

However, it is recommended that the log files from this cycle be analysed to evaluate whether there are indicators within the new PISA Reading Literacy task set that are strongly correlated with the sentence-level efficiency task.

Assessing students' reading motivation, reading practices and awareness of reading strategies

Since PISA 2000, the reading literacy framework has highlighted the importance of readers’ motivational attributes (such as their attitude toward reading) and reading practices (e.g. the reader factors in Figure 2.1); accordingly, items and scales have been developed to measure these constructs in the student questionnaire. It is important to note that reading motivation and reading strategies may vary with the context and type of text. Therefore, questionnaire items assessing motivation and reading strategies should refer to a range of situations that students may find themselves in. In addition to being more relevant, items referring to more specific and concrete situations are known to decrease the risk of response bias that comes with ratings and self-reports.

Intrinsic motivation and interest in reading

“While motivation refers to goals, values, and beliefs in a given area, such as reading, engagement refers to behavioural displays of effort, time, and persistence in attaining desired outcomes” (Klauda and Guthrie, 2015, p. 240[108]). A number of studies have shown that reading engagement, motivation and practices are strongly linked with reading proficiency (Becker, McElvany and Kortenbruck, 2010[109]; Guthrie et al., 1999[110]; Klauda and Guthrie, 2015[108]; Mol and Bus, 2011[111]; Morgan and Fuchs, 2007[112]; Pfost, Dörfler and Artelt, 2013[113]; Schaffner, Philipp and Schiefele, 2016[114]; Schiefele et al., 2012[115]). In PISA 2000, engagement in reading (comprising interest, intrinsic motivation, avoidance and practices) was strongly correlated with reading proficiency, even more so than socio-economic status was (OECD, 2002[116]; 2010[117]). In other studies, reading engagement has been shown to explain reading achievement more than any other variable besides previous reading achievement (Guthrie and Wigfield, 2000[118]). Furthermore, perseverance as a characteristic of engagement has also been linked to successful learning and achievement outside of school (Heckman and Kautz, 2012[119]). Thus, motivation and engagement are powerful variables and levers on which one can act in order to enhance reading proficiency and reduce gaps between groups of students.

During the previous PISA cycles in which reading literacy was the major domain (PISA 2000 and PISA 2009), the main motivational construct investigated was interest in reading and intrinsic motivation. The scale measuring interest and intrinsic motivation also captured reading avoidance, which is a lack of interest or motivation and is strongly and negatively associated with achievement, especially among struggling readers (Klauda and Guthrie, 2015[108]; Legault, Green-Demers and Pelletier, 2006[120]). For PISA 2018, in accordance with what was done in mathematics and science, two other motivational constructs will be investigated as part of the PISA questionnaire: self-efficacy, an individual’s perceived capacity to perform specific tasks, and self-concept, an individual’s perception of his or her own abilities in a domain.

Reading practices

Reading practices were previously measured as the self-reported frequencies of reading different types of texts in various media, including online reading. In PISA 2018, the list of online reading practices will be updated and extended in order to take into account emerging practices (such as e-books, online search, short messaging and social networking).

Awareness of reading strategies

Metacognition is an individual’s ability to think about and control his or her reading and comprehension strategies. A number of studies have found an association between reading proficiency and metacognitive strategies (Artelt, Schiefele and Schneider, 2001[121]; Brown, Palincsar and Armbruster, 1984[122]). Explicit or formal instruction of reading strategies leads to an improvement in text understanding and information use (Cantrell et al., 2010[123]). More specifically, it is assumed that once these strategies have been acquired, the reader becomes independent of the teacher and can apply them without much effort. By using these strategies, the reader can effectively interact with the text, conceiving of reading as a problem-solving task that requires strategic thinking to accomplish comprehension.

In previous PISA cycles, engagement and metacognition proved to be robust predictors of reading achievement, mediators of gender or socio-economic status (OECD, 2010[124]) and also potential levers to reduce achievement gaps. The measures of motivation, metacognition and reading practices have been updated and extended in the questionnaire in order to take into account recent and emerging practices in reading, as well as to better measure the teaching practices and classroom support that foster reading growth.

Skilled reading requires students to know and employ strategies in order to optimise the knowledge they gain from a piece of text given their purposes and goals. For instance, students must know when it is appropriate to scan a passage or when the task requires the sustained and complete reading of the passage. PISA 2009 collected information about reading strategies through two reading scenarios. In the first scenario, students were asked to evaluate the effectiveness of different reading and text comprehension strategies to fulfil the goal of summarising information; in the second, students had to evaluate the effectiveness of other strategies for understanding and remembering a text. In accordance with the new characterisation of reading processes (Figure 2.2), PISA 2018 will also collect information about knowledge of reading strategies specifically linked to the goal of assessing the quality and credibility of sources, which is particularly prominent in digital reading and when reading multiple pieces of text.

Teaching practices and classroom support for reading growth and engagement

There is strong research evidence showing that classroom practices, such as the direct teaching of reading strategies, contribute to growth in reading skills (Pressley, 2000[125]; Rosenshine and Meister, 1997[126]; Waters and Schneider, 2010[127]). In addition, teachers’ scaffolding and support for autonomy, competence and ownership of their tasks improve students’ reading proficiency, awareness of strategies, and engagement in reading (Guthrie, Klauda and Ho, 2013[39]; Guthrie, Wigfield and You, 2012[38]). While in most education systems, reading is no longer taught as a subject matter to 15-year-old students in the same way that mathematics and science are, some reading instruction may be explicitly or incidentally given in language lessons and in other disciplines (such as social science, science, foreign languages, civic education or ICT). The dispersed nature of reading instruction represents a challenge for articulating questionnaire items that measure the classroom practices and opportunities to learn reading strategies that best support the development of students’ reading skills, practices and motivation.

Considerations for adaptive testing

The deployment of computer-based assessment in PISA creates the opportunity to implement adaptive testing. Adaptive testing achieves greater measurement precision with fewer items per student. This is accomplished by presenting students with items that are aligned to their ability level.

Adaptive testing has the potential to increase the resolution and sensitivity of the assessment, most particularly at the lower end of the distribution of student performance. For example, students who perform poorly on items that assess their reading fluency will likely struggle on highly complex multiple text items. Future cycles of PISA could provide additional lower-level texts to those students to better assess specific aspects of their comprehension.
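A minimal sketch of the kind of routing this implies follows. The thresholds, testlet names and use of a fluency score as a routing input are hypothetical illustrations, not part of the PISA 2018 design:

```python
def route_second_stage(fluency_accuracy: float, core_score: float) -> str:
    """Route a student to a second-stage testlet based on first-stage
    performance. All thresholds and labels are illustrative assumptions.

    fluency_accuracy: proportion correct on a reading-fluency task (0-1)
    core_score: proportion correct on first-stage comprehension items (0-1)
    """
    if fluency_accuracy < 0.70 or core_score < 0.40:
        # Likely foundational-skill difficulties: give simpler, shorter
        # single texts to better describe lower-level comprehension.
        return "lower-level testlet"
    if core_score > 0.75:
        # Strong first-stage performance: give complex multiple-text tasks.
        return "higher-level testlet"
    return "medium testlet"
```

This illustrates the framework's point: students who struggle with fluency items would likely also struggle on highly complex multiple-text items, so routing them to lower-level texts yields more informative measurement at the bottom of the distribution.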

Reporting proficiency in reading

Reporting scales

PISA reports students’ results through proficiency scales that can be interpreted in educational policy terms. In PISA 2000, when reading was the major domain, the results of the reading literacy assessment were first summarised on a single composite reading literacy scale with a mean of 500 and a standard deviation of 100. In addition to the composite scale, student performance was also represented on five subscales: three process (aspect) subscales (retrieving information, interpreting texts, and reflection and evaluation) and two text format subscales (continuous and non-continuous) (Kirsch et al., 2002[8]). These five subscales made it possible to compare mean scores and distributions among subgroups and countries in each of the components of the reading literacy construct. Although these subscales were highly correlated, there were interesting differences across subscales. Such differences could be examined and linked to the curriculum and teaching methodology in the countries concerned. Reading was again the major domain in PISA 2009, which likewise reported a composite scale as well as subscales.

In PISA 2003, 2006, 2012 and 2015, when reading was a minor domain and fewer reading items were administered to participating students, a single reading literacy scale was reported based on the overall composite scale (OECD, 2004[128]; OECD, 2007[129]; OECD, 2014[130]). In 2018, reading is again the major domain, and reporting on subscales is again possible.

For PISA 2018, the reporting subscales will be (see also Table 2.1):

  1) Locating information, which is composed of tasks that require students to search for and select relevant texts, and access relevant information within texts;

  2) Understanding, which is composed of tasks that require students to represent the explicit meaning of texts as well as integrate information and generate inferences; and

  3) Evaluating and reflecting, which is composed of tasks that require the student to assess the quality and credibility of information, reflect on the content and form of a text and detect and handle conflict within and across texts.

Interpreting and using the scales

Just as students can be ordered from least to most proficient on a single scale, reading literacy tasks are arranged along a scale that indicates their level of difficulty and the level of skill required to answer the task correctly.

Reading literacy tasks used in PISA vary widely in situation, text format and task requirements, and they also vary in difficulty. This range is captured through what is known as an item map. The item map provides a visual representation of the reading literacy skills demonstrated by students at different points along the scale.

Difficulty is in part determined by the length, structure and complexity of the text itself. However, what the reader has to do with that text, as defined by the question or instruction, also affects the overall difficulty. A number of variables that can influence the difficulty of any reading literacy task have been identified, including the complexity and sophistication of the mental processes integral to the task process (retrieving, interpreting or reflecting), the amount of information the reader needs to assimilate and the familiarity or specificity of the knowledge that the reader must draw on both from within and from outside the text.

Defining levels of reading literacy proficiency

In an attempt to capture this progression of complexity and difficulty in PISA 2000, the composite reading literacy scale and each of the subscales were divided into six levels (Below Level 1, 1, 2, 3, 4 and 5). These levels as they were defined for PISA 2000 were kept for the composite scale used to measure trends in PISA 2009 and 2015. However, newly constructed items helped to improve descriptions of the existing levels of performance and to furnish descriptions of levels of performance above and below those established in PISA 2000. Thus, the scales were extended to Level 6, and Level 1 was divided into Levels 1a and 1b (OECD, 2012[30]).

The levels provide a useful way to explore the progression of reading literacy demands within the composite scale and each subscale. The scale summarises both the proficiency of a person in terms of his or her ability and the complexity of an item in terms of its difficulty. The mapping of students and items on one scale represents the idea that students are more likely to be able to successfully complete tasks mapped at the same level on the scale (or lower), and less likely to be able to successfully complete tasks mapped at a higher level on the scale.
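The placement of persons and items on a single scale is the core idea of item response modelling. Under the simplest such model, the Rasch model (shown here purely as an illustration; PISA's operational scaling is more elaborate), the probability of success depends only on the difference between student ability and item difficulty:

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability that a student at `ability` answers an
    item at `difficulty` correctly, with both on the same logit scale.
    Illustrative only; not PISA's operational scaling model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

A student matched to an item's difficulty succeeds about half the time; the same student is more likely to succeed on items mapped at a lower level and less likely to succeed on items mapped higher, which is exactly the interpretation of the proficiency levels described above.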

As an example, the reading proficiency scale for the PISA 2015 study is presented in Table 2.4. For each proficiency level, the table shows the lower score boundary (i.e. all students who score at or above this boundary perform at that level or higher), the percentage of students across OECD countries who are able to perform tasks at that level, and a description, adapted from OECD (2013[13]), of what students can do at that level.

Table 2.4. An overview of reading proficiency levels adapted from the descriptions in PISA 2015

Level 6 (lower score limit: 698; 1.1% of students can perform tasks at this level)

Readers at Level 6 typically can make multiple inferences, comparisons and contrasts that are both detailed and precise. They demonstrate a full and detailed understanding of one or more texts and may integrate information from more than one text. Tasks may require the reader to deal with unfamiliar ideas in the presence of prominent competing information, and to generate abstract categories for interpretations. Students can hypothesise about or critically evaluate a complex text on an unfamiliar topic, taking into account multiple criteria or perspectives and applying sophisticated understandings from beyond the text. A salient condition for accessing and retrieving tasks at this level is the precision of analysis and fine attention to detail that is inconspicuous in the texts.

Level 5 (lower score limit: 626; 8.4% of students can perform tasks at this level)

At Level 5, readers can locate and organise several pieces of deeply embedded information, inferring which information in the text is relevant. Reflective tasks require critical evaluation or hypothesis-making, drawing on specialised knowledge. Both interpreting and reflecting tasks require a full and detailed understanding of a text whose content or form is unfamiliar. For all aspects of reading, tasks at this level typically involve dealing with concepts that are contrary to expectations.

Level 4 (lower score limit: 553; 29.5% of students can perform tasks at this level)

At Level 4, readers can locate and organise several pieces of embedded information. They can also interpret the nuances of language in a section of text by taking into account the text as a whole. In other interpreting tasks, students demonstrate understanding and application of categories in an unfamiliar context. In addition, students at this level can use formal or public knowledge to hypothesise about or critically evaluate a text. Readers must demonstrate an accurate understanding of long or complex texts whose content or form may be unfamiliar.

Level 3 (lower score limit: 480; 58.6% of students can perform tasks at this level)

Readers at Level 3 can locate, and in some cases recognise the relationship between, several pieces of information that must meet multiple conditions. They can also integrate several parts of a text in order to identify a main idea, understand a relationship or construe the meaning of a word or phrase. They need to take into account many features in comparing, contrasting or categorising. Often the required information is not prominent or there is much competing information; or there are other text obstacles, such as ideas that are contrary to expectations or negatively worded. Reflecting tasks at this level may require connections, comparisons, and explanations, or they may require the reader to evaluate a feature of the text. Some reflecting tasks require readers to demonstrate a fine understanding of the text in relation to familiar, everyday knowledge. Other tasks do not require detailed text comprehension but require the reader to draw on less common knowledge.

Level 2 (lower score limit: 407; 82.0% of students can perform tasks at this level)

Readers at Level 2 can locate one or more pieces of information, which may need to be inferred and may need to meet several conditions. They can recognise the main idea in a text, understand relationships, or construe meaning within a limited part of the text when the information is not prominent and the reader must make low-level inferences. Tasks at this level may involve comparisons or contrasts based on a single feature in the text. Typical reflecting tasks at this level require readers to make a comparison or several connections between the text and outside knowledge, by drawing on personal experience and attitudes.

Level 1a (lower score limit: 335; 94.3% of students can perform tasks at this level)

Readers at Level 1a can locate one or more independent pieces of explicitly stated information; they can recognise the main theme or author’s purpose in a text about a familiar topic, or make a simple connection between information in the text and common, everyday knowledge. Typically, the required information in the text is prominent and there is little, if any, competing information. The student is explicitly directed to consider relevant factors in the task and in the text.

Level 1b (lower score limit: 262; 98.7% of students can perform tasks at this level)

Readers at Level 1b can locate a single piece of explicitly stated information in a prominent position in a short, syntactically simple text with a familiar context and text type, such as a narrative or a simple list. Texts in Level 1b tasks typically provide support to the reader, such as repetition of information, pictures or familiar symbols. There is minimal competing information. Level 1b readers can interpret texts by making simple connections between adjacent pieces of information.

Given that the top of the reading literacy scale currently has no bounds, there is arguably some uncertainty about the upper limits of proficiency of extremely high-performing students. However, such students can still be described by the scale as performing tasks at the highest level of proficiency. There is a greater issue for students at the bottom end of the reading literacy scale. Although it is possible to measure the reading proficiency of students performing below Level 1b, at this stage their proficiency – what they can do – cannot be described. In developing new material for PISA 2018, items were designed to measure reading skill and understanding located at or below the current Level 1b.
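The lower score boundaries in Table 2.4 define a straightforward mapping from a scale score to a proficiency level. The sketch below applies them directly; the "below 1b" label for scores under 262 is an assumption for illustration, reflecting the point above that proficiency in that range is not yet described:

```python
# Lower score boundaries from Table 2.4 (PISA 2015 reading scale).
LEVEL_BOUNDARIES = [
    (698, "6"), (626, "5"), (553, "4"),
    (480, "3"), (407, "2"), (335, "1a"), (262, "1b"),
]

def proficiency_level(score: float) -> str:
    """Return the highest proficiency level whose lower score
    boundary the given scale score meets or exceeds."""
    for boundary, level in LEVEL_BOUNDARIES:
        if score >= boundary:
            return level
    # Measurable but not yet describable (see discussion above).
    return "below 1b"
```

For example, a score of 500 falls at Level 3 (at or above 480 but below 553), consistent with the interpretation that a student at that score is likely to succeed on Level 3 tasks and below, and less likely to succeed on tasks mapped higher.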

References

[32] American Press Institute (2014), How Americans get their news, http://www.americanpressinstitute.org/wp-content/uploads/2014/03/The_Media_Insight_Project_The_Personal_News_Cycle_Final.pdf.

[7] Ananiadou, K. and M. Claro (2009), “21st Century Skills and Competences for New Millennium Learners in OECD Countries”, OECD Education Working Papers, No. 41, OECD Publishing, Paris, http://dx.doi.org/10.1787/218525261154.

[121] Artelt, C., U. Schiefele and W. Schneider (2001), “Predictors of reading literacy”, European Journal of Psychology of Education, Vol. XVI/3, pp. 363-383, http://dx.doi.org/10.1007/BF03173188.

[109] Becker, M., N. McElvany and M. Kortenbruck (2010), “Intrinsic and extrinsic reading motivation as predictors of reading literacy: A longitudinal study”, Journal of Educational Psychology, Vol. 102/4, pp. 773-785, http://dx.doi.org/10.1037/a0020084.

[1] Binkley, M. et al. (2011), “Defining Twenty-First Century Skills”, in Assessment and Teaching of 21st Century Skills, Springer Netherlands, Dordrecht, http://dx.doi.org/10.1007/978-94-007-2324-5_2.

[41] Binkley, M., K. Rust and T. Williams (eds.) (1997), Reading literacy in an international perspective, U.S. Department of Education, Washington D.C.

[66] Brand-Gruwel, S., I. Wopereis and Y. Vermetten (2005), “Information problem solving by experts and novices: Analysis of a complex cognitive skill”, Computers in Human Behavior, Vol. 21, pp. 487–508, http://dx.doi.org/10.1016/j.chb.2004.10.005.

[77] Bråten, I. et al. (2011), “The role of epistemic beliefs in the comprehension of multiple expository texts: Toward an integrated model”, Educational Psychologist, Vol. 46/1, pp. 48-70, http://dx.doi.org/10.1080/00461520.2011.538647.

[74] Bråten, I., H. Strømsø and M. Britt (2009), “Trust Matters: Examining the Role of Source Evaluation in Students’ Construction of Meaning within and across Multiple Texts”, Reading Research Quarterly, Vol. 44/1, pp. 6-28, http://dx.doi.org/10.1598/RRQ.41.1.1.

[14] Britt, M., S. Goldman and J. Rouet (eds.) (2013), Reading--From words to multiple texts, Routledge, New York.

[46] Britt, M. and J. Rouet (2012), “Learning with multiple documents: Component skills and their acquisition.”, in Kirby, J. and M. Lawson (eds.), Enhancing the Quality of Learning: Dispositions, Instruction, and Learning Processes, Cambridge University Press, New York, http://dx.doi.org/10.1017/CBO9781139048224.017.

[122] Brown, A., A. Palincsar and B. Armbruster (1984), “Instructing Comprehension-Fostering Activities in Interactive Learning Situations”, in Mandl, H., N. Stein and T. Trabasso (eds.), Learning and Comprehension of Text, Lawrence Erlbaum Associates, Hillsdale, NJ.

[37] Brozo, W. and M. Simpson (2007), Content Literacy for Today’s Adolescents: Honoring Diversity and Building Competence, Merrill/Prentice Hall, Upper Saddle River, NJ.

[64] Cain, K. and J. Oakhill (2008), Children’s Comprehension Problems in Oral and Written Language, Guilford Press, New York.

[94] Cain, K. and J. Oakhill (2006), “Assessment matters: Issues in the measurement of reading comprehension”, British Journal of Educational Psychology, Vol. 76, pp. 697–708, http://dx.doi.org/10.1348/000709905X69807.

[123] Cantrell, S. et al. (2010), “The impact of a strategy-based intervention on the comprehension and strategy use of struggling adolescent readers”, Journal of Educational Psychology, Vol. 102/2, pp. 257-280, http://dx.doi.org/10.1037/a0018212.

[59] Chard, D., J. Pikulski and S. McDonagh (2006), “Fluency: The Link between Decoding and Comprehension for Struggling Readers”, in Rasinski, T., C. Blachowicz and K. Lems (eds.), Fluency Instruction: Research-Based Best Practices, Guilford Press, New York.

[33] Clark, C. (2014), Children’s and Young People’s Writing in 2013: Findings from the National Literacy Trust’s Annual Literacy Survey, National Literacy Trust, London, http://www.literacytrust.org.uk.

[47] Coiro, J. et al. (2008), “Central Issues in New Literacies and New Literacies Research”, in Coiro, J. et al. (eds.), Handbook of Research on New Literacies, Lawrence Erlbaum Associates, New York.

[27] Conklin, J. (1987), “Hypertext: An Introduction and Survey”, Computer, Vol. 20, pp. 17-41, http://dx.doi.org/10.1109/MC.1987.1663693.

[2] Cunningham, A. and K. Stanovich (1997), “Early reading acquisition and its relation to reading experience and ability 10 years later”, Developmental Psychology, Vol. 33/6, pp. 934-945.

[26] Dillon, A. (1994), Designing Usable Electronic Text Ergonomics Aspects of Human Information Usage, Taylor & Francis, London.

[67] Dreher, M. and J. Guthrie (1990), “Cognitive processes in textbook chapter search tasks”, Reading Research Quarterly, Vol. 25/4, pp. 323-339, http://dx.doi.org/10.2307/747694.

[70] Duggan, G. and S. Payne (2009), “Text skimming: the process and effectiveness of foraging through text under time pressure”, Journal of Experimental Psychology: Applied, Vol. 15/3, pp. 228-242, http://dx.doi.org/10.1037/a0016995.

[104] Eason, S. et al. (2013), “Examining the Relationship Between Word Reading Efficiency and Oral Reading Rate in Predicting Comprehension Among Different Types of Readers”, Scientific Studies of Reading, Vol. 17/3, pp. 199-223, http://dx.doi.org/10.1080/10888438.2011.652722.

[28] Foltz, P. (1996), “Comprehension, Coherence and Strategies in Hypertext and Linear Text”, in Rouet, J. et al. (eds.), Hypertext and Cognition, Lawrence Erlbaum Associates, Hillsdale, NJ.

[31] Gartner (2014), Forecast: PCs, Ultramobiles and Mobile Phones, Worldwide, 2011-2018, 4Q14 Update, https://www.gartner.com/doc/2945917/forecast-pcs-ultramobiles-mobile-phones.

[71] Gerjets, P., Y. Kammerer and B. Werner (2011), “Measuring spontaneous and instructed evaluation processes during Web search: Integrating concurrent thinking-aloud protocols and eye-tracking data.”, Learning and Instruction, Vol. 21/2, pp. 220-231, http://dx.doi.org/10.1016/j.learninstruc.2010.02.005.

[55] Goldman, S. (2004), “Cognitive aspects of constructing meaning through and across multiple texts”, in Shuart-Faris, N. and D. Bloome (eds.), Uses of intertextuality in classroom and educational research, Information Age Publishing, Greenwich, CT.

[48] Gray, W. and B. Rogers (1956), Maturity in Reading: Its nature and appraisal, The University of Chicago Press, Chicago, IL.

[96] Grisay, A. and C. Monseur (2007), “Measuring the Equivalence of item difficulty in the various versions of an international test”, Studies in Educational Evaluation, Vol. 33, pp. 69-86, http://dx.doi.org/10.1016/j.stueduc.2007.01.006.

[39] Guthrie, J., S. Klauda and A. Ho (2013), “Modeling the relationships among reading instruction, motivation, engagement, and achievement for adolescents”, Reading Research Quarterly, Vol. 48/1, pp. 9-26, http://dx.doi.org/10.1002/rrq.035.

[118] Guthrie, J. and A. Wigfield (2000), “Engagement and motivation in reading.”, in Kamil, M. et al. (eds.), Handbook of reading research, Lawrence Erlbaum Associates, Mahwah, NJ.

[110] Guthrie, J. et al. (1999), “Motivational and Cognitive Predictors of Text Comprehension and Reading Amount”, Scientific Studies of Reading, http://dx.doi.org/10.1207/s1532799xssr0303_3.

[38] Guthrie, J., A. Wigfield and W. You (2012), “Instructional Contexts for Engagement and Achievement in Reading”, in Christenson, S., A. Reschly and C. Wylie (eds.), Handbook of Research on Student Engagement, Springer Science, New York, http://dx.doi.org/10.1007/978-1-4614-2018-7.

[79] Hacker, D. (1998), “Self-regulated comprehension during normal reading”, in Hacker, D., J. Dunlosky and A. Graesser (eds.), The Educational Psychology Series. Metacognition in Educational Theory and Practice, Lawrence Erlbaum Associates, Mahwah, NJ.

[119] Heckman, J. and T. Kautz (2012), “Hard Evidence on Soft Skills”, Discussion Paper Series, No. 6580, The Institute for the Study of Labor, Bonn, Germany.

[49] Hofstetter, C., T. Sticht and C. Hofstetter (1999), “Knowledge, Literacy, and Power”, Communication Research, Vol. 26/1, pp. 58-80, http://dx.doi.org/10.1177/009365099026001004.

[90] Hubbard, R. (1989), “Notes from the Underground: Unofficial Literacy in One Sixth Grade”, Anthropology & Education Quarterly, Vol. 20/4, pp. 291-307, http://www.jstor.org/stable/3195740.

[6] International Telecommunications Union (2014), Key 2005-2014 ICT data for the world, by geographic regions and by level of development [Excel file], http://www.itu.int/en/ITU-D/Statistics/Pages/publications/mis2014.aspx.

[5] International Telecommunications Union (2014), Measuring the Information Society Report 2014, International Telecommunication Union, Geneva.

[63] Jenkins, J. et al. (2003), “Sources of Individual Differences in Reading Comprehension and Reading Fluency.”, Journal of Educational Psychology, Vol. 95/4, pp. 719-729, http://dx.doi.org/10.1037/0022-0663.95.4.719.

[15] Kamil, M. et al. (eds.) (2000), Handbook of Reading Research. Volume III, Lawrence Erlbaum Associates, Mahwah, NJ.

[20] Kintsch, W. (1998), Comprehension: A paradigm for cognition, Cambridge University Press, Cambridge, MA.

[22] Kirsch, I. (2001), The International Adult Literacy Survey (IALS): Understanding What Was Measured, Educational Testing Service, Princeton, NJ.

[8] Kirsch, I. et al. (2002), Reading for Change: Performance and Engagement Across Countries: Results from PISA 2000, OECD, Paris.

[23] Kirsch, I. and P. Mosenthal (1990), “Exploring Document Literacy: Variables Underlying the Performance of Young Adults”, Reading Research Quarterly, Vol. 25/1, pp. 5-30, http://dx.doi.org/10.2307/747985.

[108] Klauda, S. and J. Guthrie (2015), “Comparing relations of motivation, engagement, and achievement among struggling and advanced adolescent readers”, Reading and Writing, Vol. 28, pp. 239–269, http://dx.doi.org/10.1007/s11145-014-9523-2.

[105] Kuhn, M., P. Schwanenflugel and E. Meisinger (2010), “Aligning Theory and Assessment of Reading Fluency: Automaticity, Prosody, and Definitions of Fluency”, Reading Research Quarterly, Vol. 45/2, pp. 230–251, http://dx.doi.org/10.1598/RRQ.45.2.4.

[58] Kuhn, M. and S. Stahl (2003), “Fluency: A review of developmental and remedial practices”, Journal of Educational Psychology, Vol. 95/1, pp. 3–21, http://dx.doi.org/10.1037/0022-0663.95.1.3.

[98] Lafontaine, D. and C. Monseur (2006), Impact of item choice on the measurement of trends in educational achievement, Paper presented at the 2006 Annual AERA meeting, San Francisco, CA, https://convention2.allacademic.com/one/aera/aera06/index.php?click_key=1&cmd=Multi+Search+Search+Load+Publication&publication_id=49435&PHPSESSID=h5edlqu9juanjmlaaa6ece3e74.

[99] Lafontaine, D. and C. Monseur (2006), Impact of test characteristics on gender equity indicators in the Assessment of Reading Comprehension, University of Liège, Liège.

[107] Landerl, K. and C. Reiter (2002), “Lesegeschwindigkeit als Indikator für basale Lesefertigkeiten”, in Wallner-Paschon, C. and G. Haider (eds.), PISA Plus 2000. Thematische Analysen nationaler Projekte, Studien Verlag, Innsbruck.

[120] Legault, L., I. Green-Demers and L. Pelletier (2006), “Why do high school students lack motivation in the classroom? Toward an understanding of academic amotivation and the role of social support”, Journal of Educational Psychology, Vol. 98/3, pp. 567-582, http://dx.doi.org/10.1037/0022-0663.98.3.567.

[12] Leu, D. et al. (2015), “The new literacies of online research and comprehension: Rethinking the reading achievement gap”, Reading Research Quarterly, Vol. 50/1, pp. 37–59, http://dx.doi.org/10.1002/rrq.85.

[11] Leu, D. et al. (2013), “New Literacies: A dual-level theory of the changing nature of literacy instruction and assessment”, in Alvermann, D., N. Unrau and R. Ruddell (eds.), Theoretical Models and Processes of Reading, International Reading Association, Newark.

[51] Lundberg, I. (1991), “Reading as an individual and social skill”, in Lundberg, I. and T. Höien (eds.), Literacy in a world of change: perspective on reading and reading disability; proceedings, Literacy Conference at Stavanger Forum, Center for Reading Research/UNESCO.

[72] Mason, L., A. Boldrin and N. Ariasi (2010), “Searching the Web to Learn about a Controversial Topic: Are Students Epistemically Active?”, Instructional Science: An International Journal of the Learning Sciences, Vol. 38/6, pp. 607-633.

[52] McCrudden, M. and G. Schraw (2007), “Relevance and goal-focusing in text processing”, Educational Psychology Review, Vol. 19/2, pp. 113–139, http://dx.doi.org/10.1007/s10648-006-9010-7.

[42] McNamara, D. and J. Magliano (2009), “Toward a Comprehensive Model of Comprehension”, The Psychology of Learning and Motivation, Vol. 51, pp. 297-384, http://dx.doi.org/10.1016/S0079-7421(09)51009-2.

[91] Meyer, B. and G. Rice (1984), “The Structure of Text”, in Pearson, P. et al. (eds.), Handbook of Reading Research, Longman, New York.

[111] Mol, S. and A. Bus (2011), “To read or not to read: a meta-analysis of print exposure from infancy to early adulthood”, Psychological Bulletin, Vol. 137/2, pp. 267-96, http://dx.doi.org/10.1037/a0021890.

[68] Moore, P. (1995), “Information Problem Solving: A Wider View of Library Skills”, Contemporary Educational Psychology, Vol. 20/1, pp. 1-31, http://dx.doi.org/10.1006/ceps.1995.1001.

[112] Morgan, P. and D. Fuchs (2007), “Is there a bidirectional relationship between children’s reading skills and reading motivation?”, Exceptional Children, Vol. 73/2, pp. 165-183, http://dx.doi.org/10.1177%2F001440290707300203.

[50] Morrisroe, J. (2014), Literacy Changes Lives 2014: A new perspective on health, employment and crime, National Literacy Trust, London, http://www.literacytrust.org.uk.

[36] Naumann, J. (2015), “A model of online reading engagement: Linking engagement, navigation, and performance in digital reading”, Computers in Human Behavior, Vol. 53, pp. 263–277, http://dx.doi.org/10.1016/j.chb.2015.06.051.

[43] Oakhill, J., K. Cain and P. Bryant (2003), “The dissociation of word reading and text comprehension: Evidence from component skills”, Language and Cognitive Processes, Vol. 18/4, pp. 443-468, http://dx.doi.org/10.1080/01690960344000008.

[35] OECD (2015), Students, Computers and Learning: Making the Connection, OECD, Paris, http://dx.doi.org/10.1787/19963777.

[130] OECD (2014), PISA 2012 Results: What Students Know and Can Do (Volume I, Revised edition, February 2014): Student Performance in Mathematics, Reading and Science, PISA, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264208780-en.

[3] OECD (2013), OECD Skills Outlook 2013: First Results from the Survey of Adult Skills, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264204256-en.

[13] OECD (2013), PISA 2015 Draft Frameworks, OECD, Paris, http://www.oecd.org/pisa/pisaproducts/pisa2015draftframeworks.htm.

[30] OECD (2012), OECD Internet Economy Outlook 2012, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264086463-en.

[103] OECD (2011), “Do Students Today Read for Pleasure?”, PISA in Focus, No. 8, OECD Publishing, Paris, http://dx.doi.org/10.1787/5k9h362lhw32-en.

[25] OECD (2011), PISA 2009 Results: Students On Line. Digital Technologies and Performance (Volume VI), OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264112995-en.

[117] OECD (2010), PISA 2009 Assessment Framework, OECD, Paris, http://dx.doi.org/10.1787/19963777.

[124] OECD (2010), PISA 2009 Results: Learning to Learn: Student Engagement, Strategies and Practices (Volume III), PISA, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264083943-en.

[129] OECD (2007), PISA 2006: Science Competencies for Tomorrow’s World: Volume 1: Analysis, PISA, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264040014-en.

[128] OECD (2004), Learning for Tomorrow’s World: First Results from PISA 2003, PISA, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264006416-en.

[116] OECD (2002), Reading for Change: Performance and Engagement across Countries: Results from PISA 2000, PISA, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264099289-en.

[24] OECD (2000), Measuring Student Knowledge and Skills. The PISA 2000 Assessment of Reading, Mathematical and Scientific Literacy, OECD, Paris.

[86] O’Reilly, T. and J. Sabatini (2013), Reading for Understanding: How Performance Moderators and Scenarios Impact Assessment Design, ETS Research Report RR-13-31, http://www.ets.org/Media/Research/pdf/RR-13-31.pdf.

[87] Ozuru, Y. et al. (2007), “Influence of question format and text availability on the assessment of expository text comprehension”, Cognition and Instruction, Vol. 25/4, pp. 399-438, http://dx.doi.org/10.1080/07370000701632371.

[17] Perfetti, C. (2007), “Reading ability: Lexical quality to comprehension”, Scientific Studies of Reading, Vol. 11/4, pp. 357-383, http://dx.doi.org/10.1080/10888430701530730.

[16] Perfetti, C. (1985), Reading Ability, Oxford University Press, New York.

[65] Perfetti, C., M. Marron and P. Foltz (1996), “Sources of Comprehension Failure: Theoretical Perspectives and Case Studies”, in Cornoldi, C. and J. Oakhill (eds.), Reading Comprehension Difficulties: Processes and Intervention, Lawrence Erlbaum Associates, Mahwah, NJ.

[56] Perfetti, C., J. Rouet and M. Britt (1999), “Toward Theory of Documents Representation”, in van Oostendorp, H. and S. Goldman (eds.), The Construction of Mental Representations During Reading, Lawrence Erlbaum Associates, Mahwah, NJ.

[113] Pfost, M., T. Dörfler and C. Artelt (2013), “Students’ extracurricular reading behavior and the development of vocabulary and reading comprehension”, Learning and Individual Differences, Vol. 26, pp. 89-102, http://dx.doi.org/10.1016/j.lindif.2013.04.008.

[125] Pressley, M. (2000), “What Should Comprehension Instruction be the Instruction Of?”, in Kamil, M. et al. (eds.), Handbook of Reading Research, Volume III, Lawrence Erlbaum Associates, Mahwah, NJ.

[100] Rasinski, T. et al. (2005), “Is Reading Fluency a Key for Successful High School Reading?”, Journal of Adolescent & Adult Literacy, Vol. 49/1, pp. 22-27, http://dx.doi.org/10.1598/JAAL.49.1.3.

[102] Rayner, K. et al. (2001), “How Psychological Science Informs the Teaching of Reading”, Psychological Science in the Public Interest, Vol. 2/2, pp. 31-74, http://dx.doi.org/10.1111/1529-1006.00004.

[19] Rayner, K. and E. Reichle (2010), “Models of the Reading Process”, Wiley Interdisciplinary Reviews. Cognitive Science, Vol. 1/6, pp. 787-799, http://dx.doi.org/10.1002/wcs.68.

[40] Reeve, J. (2012), “A Self-determination Theory Perspective on Student Engagement”, in Handbook of research on student engagement, Springer, Boston, MA, http://dx.doi.org/10.1007/978-1-4614-2018-7_7.

[45] Richter, T. and D. Rapp (2014), “Comprehension and Validation of Text Information: Introduction to the Special Issue”, Discourse Processes, Vol. 51/1-2, pp. 1-6, http://dx.doi.org/10.1080/0163853X.2013.855533.

[73] Rieh, S. (2002), “Judgment of Information Quality and Cognitive Authority in the Web”, Journal of the Association for Information Science and Technology, Vol. 53/2, pp. 145-161, http://dx.doi.org/10.1002/asi.10017.

[126] Rosenshine, B. and C. Meister (1997), “Cognitive strategy instruction in reading”, in Stahl, S. and D. Hayes (eds.), Instructional Models in Reading, Lawrence Erlbaum Associates, Mahwah, NJ.

[9] Rouet, J. (2006), The Skills of Document Use: From Text Comprehension to Web-based Learning, Lawrence Erlbaum Associates, Mahwah, NJ.

[78] Rouet, J. and M. Britt (2014), “Multimedia Learning from Multiple Documents”, in Mayer, R. (ed.), Cambridge handbook of multimedia learning, Cambridge University Press, Cambridge, MA.

[53] Rouet, J. and M. Britt (2011), “Relevance Processes in Multiple Document Comprehension”, in McCrudden, M., J. Magliano and G. Schraw (eds.), Text Relevance and Learning from Text, Information Age, Greenwich, CT.

[69] Rouet, J. and B. Coutelet (2008), “The Acquisition of Document Search Strategies in Grade School Students”, Applied Cognitive Psychology, Vol. 22, pp. 389–406, http://dx.doi.org/10.1002/acp.1415.

[29] Rouet, J. and J. Levonen (1996), “Studying and Learning with Hypertext: Empirical Studies and Their Implications”, in Rouet, J. et al. (eds.), Hypertext and Cognition, Lawrence Erlbaum Associates, Mahwah, NJ.

[82] Rouet, J., Z. Vörös and C. Pléh (2012), “Incidental Learning of Links during Navigation: The Role of Visuo-Spatial Capacity”, Behaviour & Information Technology, Vol. 31/1, pp. 71-81.

[95] Routitsky, A. and R. Turner (2003), Item Format Types and Their Influences on Cross-national Comparisons of Student Performance, Paper presented at the annual meeting of the American Educational Research Association (AERA), April 2003.

[83] Rupp, A., T. Ferne and H. Choi (2006), “How Assessing Reading Comprehension With Multiple-choice Questions Shapes the Construct: A Cognitive Processing Perspective”, Language Testing, Vol. 23/4, pp. 441–474, http://dx.doi.org/10.1191/0265532206lt337oa.

[106] Sabatini, J. and K. Bruce (2009), “PIAAC Reading Components: A Conceptual Framework”, OECD Education Working Papers, No. 33, OECD, Paris, http://www.oecd.org/edu/workingpapers.

[84] Sabatini, J. et al. (2014), “Broadening the Scope of Reading Comprehension Using Scenario-Based Assessments: Preliminary findings and challenges”, L’Année Psychologique, Vol. 114/4, pp. 693-723, http://dx.doi.org/10.4074/S0003503314004059.

[85] Sabatini, J. et al. (2015), “Improving Comprehension Assessment for Middle and High School Students: Challenges and Opportunities”, in Santi, K. and D. Reed (eds.), Improving Reading Comprehension of Middle and High School Students (Literacy Studies), Springer, New York.

[93] Santini, M. (2006), “Web pages, Text types, and Linguistic Features: Some Issues”, International Computer Archive of Modern and Medieval English (ICAME), Vol. 30, pp. 67-86.

[101] Scammacca, N. et al. (2007), Interventions for Adolescent Struggling Readers: A Meta-Analysis With Implications for Practice, Center on Instruction at RMC Research Corporation, Portsmouth, NH, http://www.centeroninstruction.org.

[114] Schaffner, E., M. Philipp and U. Schiefele (2016), “Reciprocal Effects Between Intrinsic Reading Motivation and Reading Competence? A Cross-lagged Panel Model for Academic Track and Nonacademic Track Students”, Journal of Research in Reading, Vol. 39/1, pp. 19–36, http://dx.doi.org/10.1111/1467-9817.12027.

[115] Schiefele, U. et al. (2012), “Dimensions of Reading Motivation and Their Relation to Reading Behavior and Competence”, Reading Research Quarterly, Vol. 47/4, pp. 427–463, http://dx.doi.org/10.1002/RRQ.030.

[88] Schroeder, S. (2011), “What readers have and do: Effects of students’ verbal ability and reading time components on comprehension with and without text availability”, Journal of Educational Psychology, Vol. 103/4, pp. 877-896, http://dx.doi.org/10.1037/a0023731.

[97] Schwabe, F., N. McElvany and M. Trendtel (2015), “The School Age Gender Gap in Reading Achievement: Examining the Influences of Item Format and Intrinsic Reading Motivation”, Reading Research Quarterly, Vol. 50/2, pp. 219–232, http://dx.doi.org/10.1002/rrq.92.

[4] Smith, M. et al. (2000), “What Will Be the Demands of Literacy in the Workplace in the Next Millennium?”, Reading Research Quarterly, Vol. 35/3, pp. 378-383, http://dx.doi.org/10.1598/RRQ.35.3.3.

[18] Snow, C. and the RAND Corporation (2002), Reading for Understanding: Toward an R&D Program in Reading Comprehension, RAND Reading Study Group, Santa Monica, CA, http://www.rand.org/.

[10] Spiro, R. et al. (eds.) (2015), Reading at a Crossroads? Disjunctures and Continuities in Current Conceptions and Practices, Routledge, New York.

[75] Stadtler, M. and R. Bromme (2014), “The Content–Source Integration Model: A Taxonomic Description of How Readers Comprehend Conflicting Scientific Information”, in Rapp, D. and J. Braasch (eds.), Processing Inaccurate Information: Theoretical and Applied Perspectives from Cognitive Science and the Educational Sciences, The MIT Press, Cambridge, MA.

[76] Stadtler, M. and R. Bromme (2013), “Multiple Document Comprehension: An Approach to Public Understanding of Science”, Cognition and Instruction, Vol. 31/2, pp. 122-129, http://dx.doi.org/10.1080/07370008.2013.771106.

[81] Strømsø, H. et al. (2013), “Spontaneous Sourcing Among Students Reading Multiple Documents”, Cognition and Instruction, Vol. 31/2, pp. 176-203, http://dx.doi.org/10.1080/07370008.2013.769994.

[34] UNESCO (2014), Reading in the Mobile Era: A Study of Mobile Reading in Developing Countries, UNESCO, Paris, http://www.unesco.org/open-access/terms-use-ccbysa-en.

[44] van den Broek, P., K. Risden and E. Husbye-Hartmann (1995), “The Role of Readers’ Standards of Coherence in the Generation of Inferences During Reading”, in Lorch, Jr., R. and E. O’Brien (eds.), Sources of Coherence in Text Comprehension, Lawrence Erlbaum Associates, Hillsdale, NJ.

[57] Vidal-Abarca, E., A. Mañá and L. Gil (2010), “Individual Differences for Self-regulating Task-oriented Reading Activities”, Journal of Educational Psychology, Vol. 102/4, pp. 817-826, http://dx.doi.org/10.1037/a0020062.

[60] Wagner, R. et al. (2010), Test of Silent Reading Efficiency and Comprehension, Pro-Ed, Austin, TX.

[127] Waters, H. and W. Schneider (eds.) (2010), Metacognition, Strategy Use, and Instruction, Guilford Press, New York.

[61] Wayman, M. et al. (2007), “Literature Synthesis on Curriculum-Based Measurement in Reading”, The Journal of Special Education, Vol. 41/2, pp. 85-120.

[92] Werlich, E. (1976), A Text Grammar of English, Quelle and Meyer, Heidelberg.

[54] White, S., J. Chen and B. Forsyth (2010), “Reading-Related Literacy Activities of American Adults: Time Spent, Task Types, and Cognitive Skills Used”, Journal of Literacy Research, Vol. 42, pp. 276–307, http://dx.doi.org/10.1080/1086296X.2010.503552.

[89] Wineburg, S. (1991), “On the Reading of Historical Texts: Notes on the Breach Between School and Academy”, American Educational Research Journal, Vol. 28/3, pp. 495-519.

[80] Winne, P. and A. Hadwin (1998), “Studying as Self-regulated Learning”, in Hacker, D., J. Dunlosky and A. Graesser (eds.), Metacognition in Educational Theory and Practice, Lawrence Erlbaum Associates, Mahwah, NJ.

[62] Woodcock, R., K. McGrew and N. Mather (2001), Woodcock-Johnson III Tests of Achievement, Riverside Publishing, Itasca, IL.

[21] Zwaan, R. and M. Singer (2003), “Text comprehension”, in Graesser, A., M. Gernsbacher and S. Goldman (eds.), Handbook of Discourse Processes, Lawrence Erlbaum Associates, Mahwah, NJ.

Annex 2.A. Main changes in the reading framework, 2000-2015
Annex Table 2.A.1. Main changes in the reading framework, 2000-2015

 

                2000                                     2009                                 2015

TEXT
  Format        Continuous, non-continuous, mixed        Same as 2000, plus multiple          Same as 2009
  Type          Argumentation, description, exposition,  Same as 2000, plus “transactional”   Same as 2009
                narration, instruction
  Environment   N/A                                      Authored, message-based              N/A
  Medium        N/A                                      Print, electronic                    N/A
  Space         N/A                                      N/A                                  Fixed, dynamic

SITUATIONS      Educational, personal, professional,     Same as 2000                         Same as 2000
                public

ASPECT          Accessing and retrieving, integrating    Same as 2000, plus “complex”         Same as 2000
                and interpreting, reflecting and
                evaluating

Annex 2.B. Sample tasks

Task 1: Sample reading ease and efficiency task.

The sentence-processing items are timed tasks that require the respondent to judge whether a sentence makes sense, either in terms of the properties of the real world or in terms of the internal logic of the sentence. The respondent reads the sentence and circles YES if it makes sense or NO if it does not. This task is adapted from the sentence-processing items of PISA 2012 and of PIAAC’s reading components assessment.

Annex Figure 2.B.1. Task 1. Sample reading ease and efficiency task

Tasks 2-4: Sample scenario with three embedded tasks

In this scenario, students are asked to read three sources: a blog post, the comments section that follows and an article that is referenced by one of the commenters. The articles and comments all discuss space exploration now and in the future. Students are asked to answer several questions that assess different reading processes.

Annex Figure 2.B.2. Task 2. Scanning and locating (single text)
Annex Figure 2.B.3. Task 3. Multiple text inference
Annex Figure 2.B.4. Task 4. Evaluating and reflecting

Notes

← 1. Some dynamic navigation features were incidentally included in the 2015 assessment. This was a result of the adaptation of trend materials, which were formerly presented in print, for screen presentation. Many of these so-called fixed texts were used in previous cycles. Although they were adapted to mimic the printed texts as closely as possible, they had to be reformatted to the smaller screen size typical of computer displays. Therefore, tabs and other very simple navigation tools were included to let the reader navigate from one page to another.

← 2. In the PISA 2000 reading framework, these text types were subcategories of the continuous text format. In the PISA 2009 cycle, it was acknowledged that non-continuous texts (and the elements of mixed and multiple texts) also have a descriptive, narrative, expository, argumentative or instructional purpose.
