4. Reporting the results of the Survey of Adult Skills (PIAAC)
This chapter examines the proficiency levels used to report the results of the Survey of Adult Skills (PIAAC). It provides information on the languages used and how results were reported in countries/economies that conducted the survey in more than one language.
This chapter describes how the results from the Survey of Adult Skills (PIAAC) are reported. It shows how the literacy, numeracy and problem-solving items used in the assessment are categorised according to their difficulty, the cognitive strategies required of adults to answer the questions, the real-life contexts in which such problems/questions may arise, and the medium used to deliver the item to the respondent. The chapter also shows how the proficiency levels for each of the three domains are related to the scores, and describes in detail what adults can do at each of the proficiency levels. The chapter concludes with information about the languages in which the test was conducted and the approach to reporting in countries/economies where the assessment was delivered in more than one language.
The proficiency scales
In each of the three domains assessed, proficiency is considered as a continuum of ability involving the mastery of information-processing tasks of increasing complexity. The results are represented on a 500-point scale. At each point on the scale, an individual with a proficiency score of that particular value has a 67% chance of successfully completing test items located at that point.1 This individual will also be able to complete more difficult items (those with higher values on the scale) with a lower probability of success and easier items (those with lower values on the scale) with a greater chance of success.
To illustrate this point, Table 4.1 shows the probability with which a person with a proficiency score of 300 on the literacy scale can successfully complete items of greater and lesser difficulty. As can be seen, a person with a proficiency score of 300 will successfully complete items of this level of difficulty 67% of the time, items with a difficulty value of 250, 95% of the time, and items with a difficulty value of 350, 28% of the time.
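The response-probability convention described above can be sketched as a simple logistic (Rasch-type) item response function. This is an illustrative model only: the ln(2) offset is chosen so that the probability is exactly 2/3 (about 67%) when proficiency equals item difficulty, matching the survey's convention, while the slope value is an assumption. Actual PIAAC items have individually estimated parameters, so the exact probabilities in Table 4.1 (e.g. 28% for an item of difficulty 350) are not reproduced by this sketch.

```python
import math

def p_success(proficiency, difficulty, slope=0.03):
    """Probability of a correct response under an illustrative
    Rasch-type model.

    The ln(2) offset fixes the probability at exactly 2/3 when
    proficiency equals item difficulty (the survey's RP67
    convention). The slope is an assumed value, not an estimated
    PIAAC item parameter.
    """
    return 1.0 / (1.0 + math.exp(-slope * (proficiency - difficulty)
                                 - math.log(2)))

# A person scoring 300 succeeds about two-thirds of the time on
# items of difficulty 300, more often on easier items, less often
# on harder ones.
print(round(p_success(300, 300), 2))  # → 0.67
```

Under this sketch, success probability rises smoothly for easier items and falls for harder ones, which is the qualitative pattern shown in Table 4.1.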
Proficiency levels
The proficiency scale in each of the domains assessed can be described in relation to the items that are located at the different points on the scale according to their difficulty. Tables 4.2, 4.3 and 4.4 present the location of the test items used in the Survey of Adult Skills on the difficulty scales in the three domains assessed. In addition to the difficulty score, unit name and ID, a description of the key features of each item is provided in relation to the relevant measurement framework.
To help interpret the results, the reporting scales have been divided into “proficiency levels” defined by particular score-point ranges. Six proficiency levels are defined for literacy and numeracy (Levels 1 through 5 plus below Level 1) and four for problem solving in technology-rich environments (Levels 1 through 3 plus below Level 1). These descriptors provide a summary of the characteristics of the types of tasks that can be successfully completed by adults with proficiency scores in a particular range. In other words, they offer a summary of what adults with particular proficiency scores in a particular skill domain can do.
With the exception of the lowest level (below Level 1), tasks located at a particular level can be successfully completed approximately 50% of the time by a person with a proficiency score at the bottom of the range defining the level. In other words, a person with a score at the bottom of Level 2 would score close to 50% in a test made up of items of Level 2 difficulty. A person at the top of the level will get items located at that level correct most of the time. The “average” individual with a proficiency score in the range defining a level will successfully complete items located at that level approximately two-thirds of the time.
Literacy and numeracy
Six proficiency levels are defined for the domains of literacy and numeracy. The score-point ranges defining each level and the descriptors of the characteristics of tasks located at each of the levels can be found in Table 4.5. In the case of literacy and numeracy, the score-point ranges associated with each proficiency level are the same as those that apply in the International Adult Literacy Survey (IALS) and the Adult Literacy and Life Skills Survey (ALL) for document and prose literacy and in ALL for numeracy. However, the descriptors that apply to the proficiency levels in the domains of literacy and numeracy differ between the Survey of Adult Skills (PIAAC) and IALS and ALL.
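The mapping from a score to a reporting level is a simple lookup against the level cut-offs. The cut-points below are those commonly published for PIAAC literacy and numeracy (compare Table 4.5); they are reproduced here for illustration rather than taken from this chapter's tables.

```python
from bisect import bisect_right

# Lower bounds of Levels 1 through 5 on the 500-point literacy and
# numeracy scales, as commonly published for PIAAC (compare Table 4.5).
CUTS = [176, 226, 276, 326, 376]
LABELS = ["Below Level 1", "Level 1", "Level 2",
          "Level 3", "Level 4", "Level 5"]

def literacy_numeracy_level(score):
    """Map a proficiency score on the 0-500 scale to its reporting level."""
    if not 0 <= score <= 500:
        raise ValueError("PIAAC scores lie on a 0-500 scale")
    # bisect_right counts how many lower bounds the score has reached.
    return LABELS[bisect_right(CUTS, score)]

print(literacy_numeracy_level(300))  # → Level 3
```

A score of 300, for example, falls in Level 3, consistent with its description later in the chapter as the mid-point of that level.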
This is because the domain of literacy in the Survey of Adult Skills replaces the previously separate domains of prose and document literacy used in IALS and ALL, and because the survey defines proficiency levels differently than the other surveys do. An explanation of these changes and their impact is provided in Annex A.
Tables 4.6 and 4.7 show the probability that adults with particular proficiency scores will complete items of different levels of difficulty in the domains of literacy and numeracy. For example, an adult with a proficiency score of 300 in literacy (i.e. the mid-point of Level 3) has a 68% chance of successfully completing items of Level 3 difficulty. He or she has a 29% chance of completing items of Level 4 difficulty and a 90% probability of successfully completing items of Level 2 difficulty.
Problem solving in technology-rich environments
The problem-solving proficiency scale was divided into four levels. The problem solving in technology-rich environments framework (PIAAC Expert Group in Problem Solving in Technology-Rich Environments, 2009) identifies three main dimensions along which problems vary in quality and complexity: (1) the technology dimension, (2) the task dimension and (3) the cognitive dimension. Variations along each of these dimensions contribute to the overall difficulty of a problem. For instance, a problem is likely to be more complex if it involves the combined use of more than one computer application (e.g. e-mail and a spreadsheet); similarly, a problem is more complex if the task is defined in vague terms rather than fully specified. Finally, a problem is likely to be more difficult if the respondent has to generate many deductions and inferences than if he or she merely has to assemble or match different pieces of explicit information. The relationship between these dimensions and the proficiency levels is presented in Table 4.8. The descriptors of the levels are presented in Table 4.9.
Table 4.10 shows the probability of adults with particular proficiency in problem solving in technology-rich environments completing problem solving items of different levels of difficulty.
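The same lookup approach applies to the four problem-solving levels, with different cut-offs. The values below are those commonly published for PIAAC problem solving in technology-rich environments (compare Tables 4.8 and 4.9) and should be treated as illustrative.

```python
from bisect import bisect_right

# Lower bounds of problem-solving Levels 1 through 3, as commonly
# published for PIAAC; values are illustrative, not taken from this
# chapter's tables.
PS_CUTS = [241, 291, 341]
PS_LABELS = ["Below Level 1", "Level 1", "Level 2", "Level 3"]

def problem_solving_level(score):
    """Map a problem-solving proficiency score to its reporting level."""
    return PS_LABELS[bisect_right(PS_CUTS, score)]

print(problem_solving_level(300))  # → Level 2
```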
A note about the reporting of problem solving in technology-rich environments
The populations for whom proficiency scores for problem solving in technology-rich environments are reported are not identical across countries/economies. Proficiency scores relate only to the proportion of the target population in each participating country that was able to undertake the computer-based version of the assessment and thus met the preconditions for displaying competency in this domain.
Four groups of respondents did not take the computer-based assessment,2 those who:

- indicated in completing the background questionnaire that they had never used a computer (group 1)
- had some experience with computers but who “failed” the ICT core assessment (see Chapter 3) designed to determine whether a respondent had the basic computer skills necessary to undertake the computer-based assessment (group 2)
- had some experience with computers but opted not to take the computer-based assessment (group 3)
- did not attempt the ICT core for literacy-related reasons (group 4).
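The routing described above can be sketched as a small classification function. This is a hypothetical sketch of the logic, not the survey's actual routing algorithm, and the flag names are invented for illustration rather than PIAAC variable names.

```python
def problem_solving_group(used_computer, passed_ict_core,
                          opted_out, literacy_related_nonstart):
    """Classify a respondent into one of the four groups described
    above, or 0 for those routed to the computer-based assessment.

    A sketch only: argument names are hypothetical, and the order of
    checks is an assumption about how the routing conditions nest.
    """
    if literacy_related_nonstart:
        return 4  # did not attempt the ICT core for literacy-related reasons
    if not used_computer:
        return 1  # reported never having used a computer
    if opted_out:
        return 3  # declined the computer-based assessment
    if not passed_ict_core:
        return 2  # "failed" the ICT core assessment
    return 0      # took the computer-based assessment
```

Groups 1, 2 and 4 receive no problem-solving score, while group 3 is reported as a separate category, as explained below.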
By definition, a minimum level of competency in the use of computer tools and applications and a minimum level of proficiency in literacy and numeracy is required in order to display proficiency in problem solving in technology-rich environments. Individuals in groups 1 and 2 are, thus, treated as not meeting the necessary preconditions for displaying proficiency and have no proficiency score in the domain of problem solving in technology-rich environments.
Respondents who did not attempt the ICT core for literacy-related reasons (group 4) have not been attributed a problem-solving score due to lack of sufficient information.
Respondents who opted not to take the computer-based assessment (group 3), however, represent a different category. They are individuals who, on their own initiative, decided to take the paper-and-pencil version of the assessment without going through the process designed to direct respondents to the computer-based or paper pathways of the assessment. As a result, it is not known whether or not they possessed the computer skills necessary to complete the computer-based assessment.
Three options for how to treat this group were considered: imputing their proficiency in problem solving on the basis of their proficiency in literacy and numeracy and their background characteristics; treating them as non-respondents; or reporting them as a separate category of the group that could not display competency. The last option was adopted. Imputation was rejected on the grounds that refusals appeared to have different characteristics from respondents who took the computer-based assessment pathway. In fact, they appeared to be more similar to the respondents who did not have computer skills than to those who took the computer-based assessment. The option of treating them as non-respondents was rejected for similar reasons.
In reporting the results concerning problem solving in technology-rich environments, the following approach was adopted:

- When reporting proficiency in problem solving in technology-rich environments on the continuous scale at the country level, the proportion of the population displaying proficiency is reported in conjunction with country-level statistics (e.g. means, standard deviations).
- When reporting distributions of the population by proficiency levels, information is presented for the entire adult population as a whole (i.e. those displaying proficiency plus those not displaying proficiency). The number or proportion of the population not displaying proficiency is always reported when results are presented by proficiency level.
Test languages and reporting
In each participating country/economy, the Survey of Adult Skills was administered in the official national language(s) of the country and, in some cases, in a widely used language in addition to the national language(s). A small number of countries/economies administered the cognitive assessments in the national language only but administered the background questionnaire in the national language and a widely spoken language. The objective there was to minimise the number of respondents who failed to provide information for language-related reasons. Table 4.11 shows the languages in which the survey was administered.
For those countries/economies that tested in more than one language, results are presented as a single proficiency score. In other words, the mean proficiency score for literacy in Estonia, for example, is the mean proficiency of Estonian adults in reading in either Estonian or Russian. In only one country, Canada, was the sample designed to allow for reliable proficiency estimates in each of the languages in which the test was administered (in this case, English and French). However, as is the case for all other countries in which the test was administered in more than one language, Canadian results are presented in the international report in the form of a single proficiency estimate rather than as separate estimates for English and French speakers.
The Survey of Adult Skills was designed to assess the proficiency of the adult population in reading, in working with numbers, and in solving problems in the language(s) that are most relevant to and/or commonly used in the economic and civic life (e.g. in interaction with public bodies and institutions, in educational institutions) of a participating country. Therefore, poor performance in the test language(s) among non-native speakers of those languages, such as immigrants and their children, is not necessarily indicative of poor skills as such: low proficiency in the test language(s) cannot be assumed to indicate low proficiency in a respondent's native language. A Turkish immigrant in Germany, for example, may display poor skills in the test language (German) but be a proficient reader and have good problem-solving skills when working in Turkish.
The sample for the Russian Federation does not include the population of the Moscow municipal area. The data published, therefore, do not represent the entire resident population aged 16-65 in the Russian Federation but rather the population of the Russian Federation excluding the population residing in the Moscow municipal area.
More detailed information regarding the data from the Russian Federation as well as that of other countries can be found in the Technical Report of the Survey of Adult Skills, Third Edition (OECD, 2019).
References
OECD (2012), Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264128859-en.
PIAAC Expert Group in Problem Solving in Technology-Rich Environments (2009), “PIAAC Problem Solving in Technology-Rich Environments: A Conceptual Framework”, OECD Education Working Papers, No. 36, OECD Publishing, Paris, http://dx.doi.org/10.1787/220262483674.
Notes
← 1. This differs from the approach used in IALS and ALL in which a value of 0.80 was used to locate items and test takers on the relevant scales. Further information on the change in approach and its impact is provided in Annex A.
← 2. Defined as taking, at a minimum, the core literacy and numeracy assessments on the computer.
https://doi.org/10.1787/f70238c7-en
© OECD 2019
The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.