ISSN: 1996-3777 (online); 1990-8539 (print)

A series of reports on the OECD Programme for International Student Assessment (PISA), a periodic testing programme of student performance. The reports generally compare the academic performance of 15-year-old students across countries, or discuss the methodology used to gather the data.

Also available in French, German
PISA 2006 Technical Report


Available at: http://www.keepeek.com/Digital-Asset-Management/oecd/education/pisa-2006-technical-report_9789264048096-en
19 Jan 2009
ISBN: 9789264048096 (PDF); 9789264048089 (print)


The PISA 2006 Technical Report describes the methodology underlying the PISA 2006 survey. It examines additional features related to the implementation of the project at a level of detail that allows researchers to understand and replicate its analyses. The reader will find a wealth of information on the test and sample design, methodologies used to analyse the data, technical features of the project and quality control mechanisms.

Table of Contents

  • Programme for International Student Assessment: An Overview
    The OECD Programme for International Student Assessment (PISA) is a collaborative effort among OECD member countries to measure how well 15-year-old students approaching the end of compulsory schooling are prepared to meet the challenges of today’s knowledge societies. The assessment is forward-looking: rather than focusing on the extent to which these students have mastered a specific school curriculum, it looks at their ability to use their knowledge and skills to meet real-life challenges. This orientation reflects a change in curricular goals and objectives, which are increasingly concerned with what students can do with what they learn at school. 
  • Test design and test development
    This chapter describes the test design for PISA 2006 and the processes by which the PISA consortium, led by ACER, developed the PISA 2006 paper-and-pencil test.
  • The development of the PISA context questionnaires
    In its Call for Tender for PISA 2006, the PISA Governing Board (PGB) established the main policy issues it sought to address in the third cycle of PISA. In particular, the PGB required PISA 2006 to collect a set of basic demographic data as a core component that replicated key questions from the previous cycles. In addition, PISA 2006 needed to address issues related to important aspects of students’ attitudes regarding science, information about students’ experience with science in and out of school, motivation for, interest in and concern about science, and engagement with science-related activities.
  • Sample design
    The desired base PISA target population in each country consisted of 15-year-old students attending educational institutions located within the country, in grades 7 and higher. This meant that countries were to include (i) 15-year-olds enrolled full-time in educational institutions, (ii) 15-year-olds enrolled in educational institutions who attended on only a part-time basis, (iii) students in vocational training types of programmes, or any other related type of educational programmes, and (iv) students attending foreign schools within the country (as well as students from other countries attending any of the programmes in the first three categories). It was recognised that no testing of persons schooled in the home, workplace or out of the country would occur and therefore these students were not included in the international target population.
  • Translation and cultural appropriateness of the test and survey material
    Literature on empirical comparative research refers to translation issues as one of the most frequent problems in cross-cultural surveys. Translation errors are much more frequent than other problems, such as clearly identified discrepancies due to cultural biases or curricular differences (Harkness, Van de Vijver and Mohler, 2003; Hambleton, Merenda and Spielberger, 2005).
  • Field operations
    PISA was implemented in each country by a National Project Manager (NPM), who carried out the procedures prepared by the consortium. Each NPM typically had several assistants, working from a base location referred to throughout this report as a national centre (NC). For school-level operations, the NPM co-ordinated activities with school-level staff, referred to as school co-ordinators (SCs). Trained test administrators (TAs) administered the PISA assessment in schools.
  • Quality Assurance
    It is essential that users of the PISA data have confidence that the data collected through the PISA survey are fit for use for the intended purposes. To ensure this, the various data collection activities have been undertaken in accordance with strict quality assurance procedures. The quality assurance that provides this confidence in the fitness for use of the PISA 2006 data consists of two components. The first is to carefully develop and document procedures that result in data of the desired quality; the second is to monitor and record the implementation of the documented procedures. Should it happen that the documented procedures are not fully implemented, it is necessary to understand to what extent they were not and the likely implications for the data.
  • Survey Weighting and the Calculation of Sampling Variance
    Survey weights were required to analyse PISA data, to calculate appropriate estimates of sampling error and to make valid estimates and inferences. The consortium calculated survey weights for all assessed, ineligible and excluded students, and provided variables in the data that permit users to make approximately unbiased estimates of standard errors, to conduct significance tests and to create confidence intervals appropriately, given the sample design for PISA in each individual country. 
  • Scaling PISA Cognitive Data
    The mixed coefficients multinomial logit model described by Adams, Wilson and Wang (1997) was used to scale the PISA data, and was implemented in the ConQuest® software (Wu, Adams and Wilson, 1997).
  • Data Management Procedures
    The PISA assessment establishes certain data collection requirements that are common to all PISA participants. Test instruments include the same test items in all participating countries, and data collection procedures are applied in a common and consistent way amongst all participants to help ensure data quality. Test development is described in Chapter 2, and the data collection procedures are described in this chapter.
  • Sampling Outcomes
    This chapter reports on PISA sampling outcomes. Details of the sample design are given in Chapter 4.
  • Scaling Outcomes
    When main study data were received from each participating country, they were first verified and cleaned using the procedures outlined in Chapter 10. Files containing the achievement data were prepared and national-level Rasch and traditional test analyses were undertaken. The results of these analyses were included in the reports that were returned to each participating country (see Chapter 9).
  • Coding and Marker Reliability Studies
    As explained in the first section of this report, on test design (see Chapter 2), a substantial proportion of the PISA 2006 items were open ended and required coding by trained personnel. It was therefore important that PISA implemented procedures that maximised the validity and consistency (both within and between countries) of this coding. Each country coded items on the basis of coding guides prepared by the consortium (see Chapter 2), using the design described in Chapter 6. Sessions to train national staff in the use of the coding guides were held prior to both the field trial and the main study.
  • Data Adjudication
    This chapter describes the process used to adjudicate the implementation of PISA 2006 in each of the participating countries and adjudicated regions. It gives the outcomes of the data adjudication, which are mainly based on the following aspects
  • Proficiency Scale Construction
    The PISA test design makes it possible to use techniques of modern item response modelling (see Chapter 9) to simultaneously estimate the ability of all students taking the PISA assessment, and the difficulty of all PISA items, locating these estimates of student ability and item difficulty on a single continuum. 
  • Scaling Procedures and Construct Validation of Context Questionnaire Data
    The PISA 2006 context questionnaires included numerous items on student characteristics, student family background, student perceptions, school characteristics and perceptions of school principals. In 16 countries, an optional parent questionnaire was administered to the parents of the tested students.
  • Validation of the Embedded Attitudinal Scales
    The development processes employed by PISA to ensure the cross-national validity of its scales consist of four steps. First, the construct should have well-established theoretical underpinnings. That is, the construct should be underpinned by a body of academic literature and supported by leading theorists and academics working in the area. Within PISA this is ensured through the articulation of the constructs in widely discussed and reviewed assessment frameworks. For the embedded interest and embedded support scales, the articulation can be found in Assessing Scientific, Reading and Mathematical Literacy: A Framework for PISA 2006 (OECD, 2006) (see also Chapter 2).
  • International Database
    The PISA international database consists of six data files: four with student responses, one with school responses and one with parent responses. All are provided in text (ASCII) format with the corresponding SAS® and SPSS® control files.
  • Appendices
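The scaling chapters above describe locating estimates of student ability and item difficulty on a single continuum. As a purely illustrative sketch (the report's actual model is the more general mixed coefficients multinomial logit model, fitted with ConQuest®, not this code), the simple dichotomous Rasch item response function can be written in a few lines of Python; the function name is our own:

```python
import math

def rasch_probability(theta, b):
    """Probability that a student of ability `theta` answers an item of
    difficulty `b` correctly under the dichotomous Rasch model:
    P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When `theta` equals `b` the probability is exactly 0.5, which is what makes it meaningful to place abilities and difficulties on the same scale: an item's difficulty is the ability level at which a student has even odds of answering it correctly.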
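The survey weighting chapter above notes that variables are provided so that users can make approximately unbiased estimates of standard errors given the sample design; in PISA this is done with replicate weights (Fay's variant of balanced repeated replication). The sketch below, with illustrative function names and toy data only, shows how a Fay-adjusted BRR standard error of a weighted mean could be computed once replicate weights are available:

```python
def weighted_mean(values, weights):
    """Weighted mean of `values` under survey weights `weights`."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def fay_brr_standard_error(values, full_weights, replicate_weights, fay_k=0.5):
    """Standard error via Fay's variant of balanced repeated replication:
    re-estimate the statistic under each of the G sets of replicate
    weights, then scale the summed squared deviations from the
    full-sample estimate by 1 / (G * (1 - k)**2)."""
    full = weighted_mean(values, full_weights)
    g = len(replicate_weights)
    variance = sum(
        (weighted_mean(values, rw) - full) ** 2 for rw in replicate_weights
    ) / (g * (1.0 - fay_k) ** 2)
    return variance ** 0.5
```

Because the replicate weights already encode the stratified, clustered sample design, the same two functions apply unchanged to any statistic that can be recomputed per replicate, not just the mean.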