4228.0 - Programme for the International Assessment of Adult Competencies, Australia, 2011-2012
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 15/02/2013   

APPENDIX SCORES AND SKILL LEVELS


CALCULATION OF SCORES

For each skill domain, proficiency scores are derived on a scale ranging from 0 to 500 points. Item Response Theory is used so that the score reflects the percentage of items in the skill domain that the respondent answered correctly, as well as the probability of the respondent (or persons with similar characteristics) successfully completing tasks with a similar level of difficulty. For PIAAC a response probability (RP) value of 0.67 was chosen, meaning that the respondent (or persons with similar characteristics) had a 67 per cent chance of successfully completing tasks with a similar level of difficulty.
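The response probability criterion can be illustrated with a simple item response function. The sketch below is not the actual PIAAC scaling model (which is considerably more elaborate and uses item-specific parameters); it assumes a basic logistic response function with a hypothetical scale factor, purely to show how a proficiency score and an RP value of 0.67 together identify the difficulty of tasks a person can be expected to complete.

```python
import math

def p_success(theta, difficulty, scale=0.05):
    """Illustrative logistic item response function: probability that a person
    with proficiency `theta` completes a task of the given `difficulty`
    (both expressed on the 0-500 scale; `scale` is a hypothetical constant)."""
    return 1.0 / (1.0 + math.exp(-scale * (theta - difficulty)))

def hardest_task_at_rp(theta, rp=0.67, scale=0.05):
    """Difficulty of the hardest task a person with proficiency `theta` would
    be expected to complete with probability `rp` (the response probability)."""
    return theta - math.log(rp / (1.0 - rp)) / scale

# A person scoring 300 is described as having a 67 per cent chance of
# successfully completing tasks up to roughly this difficulty:
print(round(hardest_task_at_rp(300, rp=0.67), 1))
```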

For each respondent in PIAAC, ten plausible values (scores) were generated for the domains measured. While simple population estimates for any domain can be produced by choosing one of the ten plausible values at random, this publication uses an average of the ten values. For example, to report an estimate of the total number of people at Level 1 for literacy, a weighted estimate of the number of respondents at Level 1 was calculated separately for each of the ten literacy plausible values. The ten weighted estimates were then summed and the result divided by ten to obtain the estimate of the total number of people at Level 1 for literacy. The process was repeated for each skill level.

This process must be performed for each skill level and each variable category (e.g. males) when producing estimates for other tabulations. For example, to report an estimate of the total number of males at Level 1 for literacy, a weighted estimate of the number of males at Level 1 was calculated separately for each of the ten literacy plausible values. The ten weighted estimates were then summed and the result divided by ten to obtain the estimate of the total number of males at Level 1 for literacy. The process was then repeated for each skill level.

All estimates presented in this publication are obtained by using all ten plausible values in combination, as described above.
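As an illustration of the averaging just described, the sketch below computes the estimated number of people at Level 1 for literacy from ten plausible values. The record structure and field names (a survey weight plus lit_pv1 to lit_pv10) are assumptions made for the purpose of the example and are not the actual PIAAC data item names.

```python
LEVEL_1_UPPER = 225  # literacy Level 1 covers scores from 0 to below 225

def estimate_level1_count(respondents):
    """Weighted estimate of the number of people at Level 1 for literacy,
    averaged across the ten plausible values as described above.
    `respondents` is an iterable of dicts with a survey 'weight' and
    plausible values 'lit_pv1' .. 'lit_pv10' (illustrative field names)."""
    pv_estimates = []
    for pv in range(1, 11):
        key = f"lit_pv{pv}"
        # Weighted count of respondents at Level 1 for this plausible value
        weighted_count = sum(r["weight"] for r in respondents if r[key] < LEVEL_1_UPPER)
        pv_estimates.append(weighted_count)
    # Sum the ten weighted estimates, then divide by ten
    return sum(pv_estimates) / len(pv_estimates)

# Estimates for other tabulations (e.g. males at Level 1) simply restrict the
# records first: estimate_level1_count([r for r in persons if r["sex"] == "male"])
```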

To minimise respondent burden, respondents did not complete exercises in all three skill domains. Respondents completed exercise tasks in only one or two of these domains, depending on the assessment path they followed. Refer to the appendix titled Pathways through the self-enumerated exercise for further information about the possible assessment paths. To address this, PIAAC used multiple imputation methodology to obtain proficiency scores for each respondent for the skill domains in which the respondent was not required to do an exercise. Problem solving in technology-rich environments scores were not imputed for respondents who were sequenced to the paper-based Core booklet (i.e. they had no computer experience, they did not agree to do the exercise on the computer, or they did not pass the computer-based Core Stage 1). The effect on estimation of the significant imputation variability, arising from the use of multiple possible assessment tasks and the complex scaling procedures, can be reliably estimated and is included in the calculated standard errors (SEs). See the Data quality (Technical Note) for further information about the reliability of the estimates.
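The way imputation variability enters the standard errors can be sketched with the usual combination rule for plausible values: the total variance of an estimate is the average sampling variance across the ten plausible values plus an inflated between-plausible-value (imputation) variance. The function below assumes a sampling variance is already available for each plausible value (in PIAAC these come from replicate weights, which the sketch does not reproduce); it illustrates the general rule rather than the exact PIAAC production procedure.

```python
def combined_variance(pv_estimates, pv_sampling_variances):
    """Combine sampling and imputation variability for an estimate based on
    M plausible values (M = 10 in PIAAC):
        total variance = mean(sampling variances) + (1 + 1/M) * between-PV variance
    `pv_estimates` holds the estimate computed from each plausible value and
    `pv_sampling_variances` the sampling variance of each of those estimates."""
    m = len(pv_estimates)
    mean_estimate = sum(pv_estimates) / m
    within = sum(pv_sampling_variances) / m  # average sampling variance
    between = sum((e - mean_estimate) ** 2 for e in pv_estimates) / (m - 1)  # imputation variance
    return within + (1 + 1 / m) * between

# The standard error is the square root of the combined variance:
# se = combined_variance(estimates, variances) ** 0.5
```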

To facilitate analysis, these continuous scores have been grouped into five skill levels for the literacy and numeracy skill domains, and three levels for the problem solving in technology-rich environments skill domain, with Level 1 being the lowest measured level. The levels indicate specific sets of abilities, and therefore, the thresholds for the levels are not equidistant. As a result, the ranges of scores in each level are not identical.

The relatively small proportion of respondents assessed at Level 5 often results in unreliable estimates of the number of people at this level. For this reason, whenever results are presented by skill level, Levels 4 and 5 are combined.
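For readers who wish to reproduce the grouping, the preliminary cut-off points listed later in this appendix for literacy and numeracy translate into a simple look-up; the sketch below also combines Levels 4 and 5 as described above. The cut-offs are preliminary and may change, and no equivalent mapping is yet available for problem solving in technology-rich environments.

```python
def literacy_numeracy_level(score, combine_top_levels=True):
    """Map a 0-500 literacy or numeracy score to a skill level using the
    preliminary cut-off points given in this appendix."""
    if not 0 <= score <= 500:
        raise ValueError("proficiency scores are defined on a 0 to 500 scale")
    if score < 225:
        return "Level 1"
    if score < 275:
        return "Level 2"
    if score < 325:
        return "Level 3"
    if score < 375:
        return "Level 4/5" if combine_top_levels else "Level 4"
    return "Level 4/5" if combine_top_levels else "Level 5"
```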


LEVELS OF DIFFICULTY

The skill level cut-off points shown for literacy and numeracy domains in this publication are preliminary and may change. At this point in time, skill level cut-off points have not been set for the problem solving in technology-rich environments domain.

Further information to assist with the interpretation of the skill levels will be available in the international report published by the OECD in October 2013.

Literacy

For PIAAC, literacy is defined as understanding, evaluating, using and engaging with written texts to participate in society, to achieve one's goals, and to develop one's knowledge and potential.

Level 1 (0 to below 225)

Most of the tasks at this level require the respondent to read relatively short digital or print continuous, non-continuous, or mixed texts to locate a single piece of information which is identical to or synonymous with the information given in the question or directive. Some tasks, such as those involving some non-continuous texts, may require the respondent to enter personal information onto a document. Little, if any, competing information is present. Some tasks may require simple cycling through more than one piece of information. Knowledge and skill in recognising basic vocabulary, evaluating the meaning of sentences, and reading paragraph text is expected.

Level 2 (From 225 to below 275)

At this level the complexity of text increases. The medium of texts may be digital or printed, and texts may comprise continuous, non-continuous, or mixed types. Tasks in this level require respondents to make matches between the text and information, and may require paraphrasing or low-level inferences. Some competing pieces of information may be present. Some tasks require the respondent to:

  • cycle through or integrate two or more pieces of information based on criteria
  • compare and contrast or reason about information requested in the question, or
  • navigate within digital texts to access and identify information from various parts of a document.

Level 3 (From 275 to below 325)

Texts at this level are often dense or lengthy, and include continuous, non-continuous, mixed, or multiple pages of text. Understanding text and rhetorical structures becomes more central to successfully completing tasks, especially navigating complex digital texts. Tasks require the respondent to identify, interpret, or evaluate one or more pieces of information, and often require varying levels of inferencing. Many tasks require the respondent to construct meaning across larger chunks of text or perform multi-step operations in order to identify and formulate responses. Often tasks also demand that the respondent disregard irrelevant or inappropriate text content to answer accurately. Competing information is often present, but it is not more prominent than the correct information.

Level 4 (From 325 to below 375)

Tasks at this level often require respondents to perform multiple-step operations to integrate, interpret, or synthesise information from complex or lengthy continuous, non-continuous, mixed, or multiple type texts. Complex inferences and application of background knowledge may be needed to perform successfully. Many tasks require identifying and understanding one or more specific, non-central ideas in the text in order to interpret or evaluate subtle evidence-claim or persuasive discourse relationships. Conditional information is frequently present in tasks at this level and must be taken into consideration by the respondent. Competing information is present and sometimes seemingly as prominent as correct information.

Level 5 (From 375 to 500)

At this level, tasks may require the respondent to search for and integrate information across multiple, dense texts; construct syntheses of similar and contrasting ideas or points of view; or evaluate evidence-based arguments. Application and evaluation of logical and conceptual models of ideas may be required to accomplish tasks. Evaluating the reliability of evidentiary sources and selecting key information is frequently a key requirement. Tasks often require respondents to be aware of subtle rhetorical cues and to make high-level inferences or use specialised background knowledge.

Numeracy

For PIAAC, numeracy is defined as the ability to access, use, interpret, and communicate mathematical information and ideas, in order to engage in and manage the mathematical demands of a range of situations in adult life. This definition should be paired with the definition of numerate behaviour, which is managing a situation or solving a problem in a real context by responding to mathematical content/information/ideas represented in multiple ways.

Level 1 (0 to below 225)

Tasks in this level require the respondent to carry out basic mathematical processes in common, concrete contexts where the mathematical content is explicit with little text and minimal distractors. Tasks usually require one-step or simple processes involving e.g. counting; sorting; performing basic arithmetic operations; understanding simple percents such as 50%; locating and identifying elements of simple or common graphical or spatial representations.

Level 2 (From 225 to below 275)

Tasks in this level require the respondent to identify and act upon mathematical information and ideas embedded in a range of common contexts where the mathematical content is fairly explicit or visual with relatively few distractors. Tasks tend to require the application of two or more steps or processes involving e.g. calculation with whole numbers and common decimals, percents and fractions; simple measurement and spatial representation; estimation; interpretation of relatively simple data and statistics in texts, tables and graphs.

Level 3 (From 275 to below 325)

Tasks in this level require the respondent to understand mathematical information which may be less explicit, embedded in contexts that are not always familiar and represented in more complex ways. Tasks require several steps and may involve the choice of problem-solving strategies and relevant processes. Tasks tend to require the application of e.g. number sense and spatial sense; recognising and working with mathematical relationships, patterns, and proportions expressed in verbal or numerical form; interpretation and basic analysis of data and statistics in texts, tables and graphs.

Level 4 (From 325 to below 375)

Tasks in this level require the respondent to understand a broad range of mathematical information that may be complex, abstract or embedded in unfamiliar contexts. These tasks involve undertaking multiple steps and choosing relevant problem-solving strategies and processes. Tasks tend to require analysis and more complex reasoning about e.g. quantities and data; statistics and chance; spatial relationships; change, proportions and formulas. Tasks in this level may also require comprehending arguments or communicating well-reasoned explanations for answers or choices.

Level 5 (From 375 to 500)

Tasks in this level require the respondent to understand complex representations and abstract and formal mathematical and statistical ideas, possibly embedded in complex texts. Respondents may have to integrate multiple types of mathematical information where considerable translation or interpretation is required; draw inferences; develop or work with mathematical arguments or models; justify, evaluate and critically reflect upon solutions or choices.

Problem solving in technology-rich environments (PSTRE)

For PIAAC, PSTRE is defined as using digital technology, communication tools and networks to acquire and evaluate information, communicate with others and perform practical tasks. PIAAC focuses on abilities to solve problems for personal, work and civic purposes by setting up appropriate goals and plans, accessing and making use of information through computers and computer networks. At the time of publication, the cut-off points for the PSTRE skill levels were not available.

Level 1 (yet to be determined)

At this level, tasks typically require the use of widely available and familiar technology applications, such as email software or a web browser. There is little or no navigation required to access the information or commands required to solve the problem. The problem may be solved regardless of the person's awareness and use of specific tools and functions (e.g. a sort function). The task involves few steps and a minimal number of operators. At a cognitive level, the person can readily infer the goal from the task statement; problem resolution requires one to apply explicit criteria; and there are few monitoring demands (e.g. the person does not have to check whether they have used the adequate procedure or made progress towards the solution). Identifying contents and operators can be done through simple match; only simple forms of reasoning (e.g. assigning items to categories) are required; and there is no need to contrast or integrate information.

Level 2 (yet to be determined)

At this level, tasks typically require the use of both generic and more specific technology applications. For instance, the person may have to make use of a novel online form. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) can facilitate the resolution of the problem. The task may involve multiple steps and operators. In terms of cognitive processing, the problem goal may have to be defined by the person, though the criteria to be met are explicit. There are higher monitoring demands. Some unexpected outcomes or impasses may appear. The task may require evaluating the relevance of a set of items to discard distractors. Some integration and inferential reasoning may be needed.

Level 3 (yet to be determined)

At this level, tasks typically require the use of both generic and more specific technology applications. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g., a sort function) is required to make progress toward the solution. The task may involve multiple steps and operators. In terms of cognitive processing, the problem goal may have to be defined by the person, and the criteria to be met may or may not be explicit. There are typically high monitoring demands. Unexpected outcomes and impasses are likely to occur. The task may require evaluating the relevance and the reliability of information in order to discard distractors. Integration and inferential reasoning may be needed to a large extent.


COMPARABILITY OF TIME SERIES

Data released in the previous Adult Literacy and Life Skills Survey (ALLS) and Survey of Aspects of Literacy (SAL) publications are not comparable with PIAAC data for the following reasons:

  • The literacy and numeracy scores previously published for ALLS and SAL have been remodelled to make them consistent with PIAAC. These scores were originally based on a model with a response probability (RP) value of 0.8, but are now based on a model with an RP value of 0.67. The latter value was used in PIAAC to achieve consistency with the OECD Programme for International Student Assessment (PISA) in describing what it means to be performing at a particular level of proficiency. The new RP value does not affect the score calculated for a respondent; however, it does affect the interpretation of that score (see the sketch at the end of this appendix). Therefore, users of these data should refer to the new skill level descriptions provided above in this PIAAC publication when making time-series comparisons.
  • The prose and document literacy scales from ALLS and SAL have been combined to produce a single literacy scale which is comparable to the PIAAC literacy scale.
  • The numeracy scores from ALLS have been recalculated using a model that incorporates the results of all countries that participated in ALLS. (The previous model was based only on countries that participated in the first round of ALLS.) This has resulted in some minor changes to the ALLS numeracy scores. SAL did not collect a numeracy domain comparable with ALLS and PIAAC.
These remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes to be appended later in 2013.
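As noted above, the change in RP value can be illustrated with the same simple logistic response function used earlier in this appendix: a respondent's score is unchanged, but the difficulty of the tasks the score is described as allowing them to complete at the chosen response probability shifts. The scale factor is hypothetical and purely illustrative.

```python
import math

def task_difficulty_at_rp(theta, rp, scale=0.05):
    """Difficulty of the hardest task a person with proficiency `theta` is
    described as able to complete at response probability `rp`
    (same illustrative logistic model and hypothetical scale as earlier)."""
    return theta - math.log(rp / (1.0 - rp)) / scale

score = 300  # the respondent's calculated score itself does not change
print(round(task_difficulty_at_rp(score, rp=0.80), 1))  # "can do" threshold under the original RP of 0.8
print(round(task_difficulty_at_rp(score, rp=0.67), 1))  # "can do" threshold under the PIAAC RP of 0.67
# The RP 0.67 threshold is higher: the same score is now described in terms of
# harder tasks, which is why the new skill level descriptions should be used.
```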