
Programme for the International Assessment of Adult Competencies, Australia methodology

Reference period: 2011-2012
Released: 9 October 2013

Explanatory notes

Introduction

1 This publication contains results from the Australian component of the Programme for the International Assessment of Adult Competencies (PIAAC) conducted in 24 countries around the world. The PIAAC survey was enumerated throughout Australia from October 2011 to March 2012 with funding provided by the then Australian Government Department of Education, Employment and Workplace Relations.

2 PIAAC is an international survey coordinated by the Organisation for Economic Co-operation and Development (OECD). PIAAC provides information on skills and competencies for people aged 15 to 74 years in the three domains of:

  • literacy;
  • numeracy; and
  • problem solving in technology-rich environments (PSTRE).
     

3 PIAAC is the third survey of international comparisons of adult proficiency skills in specific domains conducted in Australia. Its predecessors were the Adult Literacy and Life Skills Survey (ALLS) 2006 and Survey of Aspects of Literacy (SAL) 1996 (internationally known as the International Adult Literacy Survey (IALS)). PIAAC expands on these previous surveys by assessing skills in the domain of 'problem solving in technology-rich environments' and by asking questions specifically about skill use at work.

4 The literacy and numeracy scores previously released in the ALLS and SAL publications are not comparable with PIAAC data, for the reasons listed in the Comparability of Time Series section below. Data based on remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes to allow direct comparison. Caution, however, is advised when comparing results from PIAAC with the earlier 1996 SAL and the 2006 ALLS. While the data from previous surveys have been remodelled, which should facilitate comparability over time, analysis undertaken by the ABS and internationally has shown that in some cases the observed trend is difficult to reconcile with other known factors and is not fully explained by sampling variability. Further analysis is needed to better understand the cause of these variations before drawing conclusions about the trend. For more information, see: The Survey of Adult Skills: Reader's Companion (Organisation for Economic Co-operation and Development, 2013); Technical Report for the Survey of Adult Skills - PIAAC (Organisation for Economic Co-operation and Development, 2013); and Skills in Canada: First Results from the Programme for the International Assessment of Adult Competencies - PIAAC (Statistics Canada, 2013).

5 Data from PIAAC, ALLS and SAL are used to report on the literacy, numeracy and problem solving in technology-rich environments skills of Australian adults, and on the relationship between these skills and education, employment, income and demographic characteristics. The data are used for a range of purposes, including the Council of Australian Governments (COAG) National Agreement for Skills and Workforce Development.

6 To analyse the relationship between the assessed competencies and social and economic well-being, PIAAC collected information on topics including:

  • general demographic information including income;
  • participation in education and training activities;
  • participation in labour force activities;
  • self-perception of literacy, numeracy and information communication technology (ICT) skill use at work and in everyday life;
  • self-perception of generic skills used at work;
  • volunteering, trust and health;
  • language background; and
  • parental background.
     

7 Twenty-four countries participated in the PIAAC survey internationally. This publication contains Australian data only. The OECD published international results on 8 October 2013 in the OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. The report is available from the OECD website at www.oecd.org.

Scope

8 The scope of the survey is restricted to people aged 15 to 74 years who were usual residents of private dwellings and excludes:

  • diplomatic personnel of overseas governments;
  • members of non-Australian defence forces (and their dependants) stationed in Australia;
  • overseas residents who have not lived in Australia, or do not intend to do so, for a period of 12 months or more;
  • people living in very remote areas; and
  • people living in Census Collection Districts (CDs) which contained Discrete Indigenous Communities.
     

Coverage

9 Households where all of the residents were less than 18 years of age were excluded from the survey because the initial screening questions needed to be answered by a responsible adult (who was aged 18 years or over).

10 If a child aged 15 to 17 years was selected, they were interviewed with the consent of a parent or responsible adult.

Data collection

11 Data was collected by trained ABS interviewers who conducted computer-assisted personal interviews.

12 A series of screening questions were asked of a responsible adult in a selected household to determine whether any of the residents of the household were in scope for the survey.

13 One resident of the household, who was in scope, was randomly selected to be interviewed. This respondent was asked a background questionnaire to obtain general information on topics including education and training, employment, income and skill use in literacy, numeracy, and ICT.

14 If a child aged 15 to 17 years was selected, the parent or responsible adult was asked questions about the household's income.

15 Where a respondent had language difficulties, an interpreter could, if acceptable to the respondent, assist with the background questionnaire; however, the self-enumerated exercise was not completed in these cases.

Self-enumerated exercise

16 After the background questionnaire was completed, the respondent undertook a self-enumerated exercise. This contained tasks to assess their skills in literacy, numeracy or problem solving in technology-rich environments. The exercise tasks were based on activities that adults do in their daily lives, such as following instructions on labels, interpreting charts and graphs, measuring with a ruler, using email, searching the internet and navigating websites. Tasks were at varying levels of difficulty.

17 The exercise could be completed at a separate time to the background questionnaire.

18 Respondents completed the exercise either on a notebook computer (with a mouse attached) or in paper booklets. All respondents first took a core exercise to assess their capacity to undertake the main exercise. Those who passed the core stage proceeded to the main exercise. Those who failed the core stage were directed to the Reading Components booklet, which was designed to measure basic reading skills. Refer to the appendix titled Pathways through the self-enumerated exercise for further information about this process.

19 All respondents were provided with a pencil, ruler, notepad and calculator to use during the exercise. There were no time limits, and the respondent was not allowed to receive any assistance from others.

20 The role of the interviewer during the self-enumerated exercise was to discreetly monitor the respondent's progress, and to encourage them to complete as many of the tasks as possible.

Observation module

21 When the interview was complete and the interviewer had left the household, the interviewer used the computer to answer a series of questions which collected information about the interview setting such as any events that might have interrupted or distracted the respondent during the exercise.

Scoring the exercise tasks

22 At the completion of the interview, if the respondent had completed a paper exercise booklet, it was forwarded to the Australian Bureau of Statistics (ABS). The core and main exercise booklets were marked by trained scorers, and the responses from the Reading Components booklets were recorded. A sample of the booklets was independently re-scored by a different scorer to assess the accuracy of the scoring.

23 The responses from the computer-based and paper-based exercises were used to calculate scores for each of the skill domains completed by the respondent. The derivation of the scores was performed by the Educational Testing Service (ETS) of Princeton, USA (which also performed this task for the ALLS and SAL surveys). Refer to the appendix titled Scores and skill levels for further information about the calculation of the scores.

Score imputation

24 In order to minimise respondent burden, respondents did not complete exercises in all three of the skill domains. Respondents completed exercise tasks in only one or two of these domains, depending on the assessment path they followed. To address this, PIAAC used multiple imputation methodology to obtain proficiency scores for each respondent for the skill domains for which the respondent was not required to do an exercise. Problem solving in technology-rich environments scores were not imputed for respondents who were sequenced to the paper-based Core booklet (i.e. they had no computer experience, or they did not agree to do the exercise on the computer, or they did not pass the computer-based Core Stage 1).

Sample design

25 PIAAC was designed to provide reliable estimates at the national level for five-year age groups, and for each state and territory.

26 Dwellings included in the survey in each state and territory were selected at random using a multi-stage area sample. This sample included only private dwellings from the geographic areas covered by the survey.

27 The initial sample for PIAAC consisted of 14,442 private dwellings. Of the 11,532 households that remained in the survey after sample loss, 8,446 (73%) were fully responding or provided sufficient detail for scores to be determined.

Estimation method

Weighting

28 Weighting is the process of adjusting results from a sample survey to infer results for the total population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many people in the population are represented by the sample person.

29 The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of the unit being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).
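For example, a minimal sketch of this step in Python (the selection probabilities below are hypothetical, for illustration only, not PIAAC design values):

```python
# Initial design weights: the inverse of each person's selection probability.
# Probabilities are hypothetical, for illustration only.
selection_probabilities = [1 / 300, 1 / 150, 1 / 600]

initial_weights = [1 / p for p in selection_probabilities]
print(initial_weights)  # approximately [300.0, 150.0, 600.0] - each person represents that many people
```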

Non-response adjustment

30 Non-response adjustments were made to the initial person-level weights with the aim of representing those people in the population that did not respond to PIAAC. Two adjustment factors were applied:

  • a literacy-related non-response adjustment, which was aimed at ensuring survey estimates represented those people in the population that had a literacy or language related problem and could not respond to the survey (these people cannot be represented by survey respondents because their reason for not completing the survey is directly related to the survey outcome, however they are part of the PIAAC target population); and
  • a non-literacy-related non-response adjustment, which was aimed at ensuring survey estimates represented those people in the population that did not have a literacy or language related problem but did not respond to the survey for some other reason.
     

Population benchmarks

    31 After the non-response adjustment, the weights were adjusted to align with independent estimates of the population, referred to as 'benchmarks', in designated categories of sex by age by state by area of usual residence. This process is known as calibration. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population described by the benchmarks, rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories of people, which may occur due to either the random nature of sampling or non-response.
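A highly simplified sketch of calibration by post-stratification, assuming made-up weights and benchmark counts (the actual PIAAC calibration aligned weights to several benchmark dimensions simultaneously):

```python
# Post-stratification sketch: within each benchmark cell (e.g. state x sex x age),
# scale the weights so they sum to the independent population benchmark.
# All values are hypothetical, for illustration only.
from collections import defaultdict

respondents = [
    {"cell": ("NSW", "F", "15-19"), "weight": 310.0},
    {"cell": ("NSW", "F", "15-19"), "weight": 290.0},
    {"cell": ("NSW", "M", "15-19"), "weight": 300.0},
]
benchmarks = {("NSW", "F", "15-19"): 240_000, ("NSW", "M", "15-19"): 250_000}

# Sum of current weights per benchmark cell.
weight_sums = defaultdict(float)
for r in respondents:
    weight_sums[r["cell"]] += r["weight"]

# Scale each weight so the cell total matches the benchmark.
for r in respondents:
    r["weight"] *= benchmarks[r["cell"]] / weight_sums[r["cell"]]
```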

    32 The survey was calibrated to the in-scope estimated resident population (ERP).

    33 Further analysis was undertaken to ascertain whether benchmark variables, in addition to geography, age and sex, should be incorporated into the weighting strategy. Analysis showed that including only these variables in the weighting approach did not adequately compensate for undercoverage in the PIAAC sample for variables such as highest educational attainment and labour force status, when compared to other ABS surveys. As these variables were considered to have a possible association with adult literacy, additional benchmarks were incorporated into the weighting process.

    34 The benchmarks used in the calibration of final weights for PIAAC were:

    • state by highest educational attainment;
    • state by sex by age by labour force status; and
    • state by part of state by age by sex.
       

    35 The education and labour force benchmarks were obtained from other ABS survey data. These benchmarks are considered 'pseudo-benchmarks' as they are not demographic counts and they have a non-negligible level of sample error associated with them. The 2011 Survey of Education and Work (people aged 15 to 64 years) was used to provide a pseudo-benchmark for educational attainment. The monthly Labour Force Survey (aggregated data from November 2011 to March 2012) provided the pseudo-benchmark for labour force status. The sample error associated with these pseudo-benchmarks was incorporated into the standard error estimation.

    36 The process of weighting ensures that the survey estimates conform to person benchmarks by state, part of state, age and sex. These benchmarks are produced from estimates of the resident population derived independently of the survey. Therefore the PIAAC estimates do not (and are not intended to) match estimates for the total Australian resident population (which include people and households living in non-private dwellings, such as hotels and boarding houses, and in very remote parts of Australia) obtained from other sources.

    Estimation

    37 Survey estimates of counts of people are obtained by summing the weights of people with the characteristic of interest.

    38 Note that although the literacy-related non-respondent records (154 people) were given a weight, plausible values were not generated for this population. This population is included in the "missing" category. These people are likely to have low levels of literacy and numeracy in English.

    Reliability of estimates

    39 All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.

    40 Sampling error is the difference between the published estimates, derived from a sample of people, and the value that would have been produced if all people in scope of the survey had been included.

    41 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording answers by interviewers, and errors in coding and processing data. Every effort was made to reduce the non-sampling error by careful design and testing of the questionnaire, training and supervision of interviewers, follow-up of respondents, and extensive editing and quality control procedures at all stages of data processing.

    42 In contrast to most other ABS surveys, the PIAAC estimates also include significant imputation variability, due to the use of multiple possible assessment modules and the complex proficiency scaling procedures. The effect of the plausible scoring methodology on the estimation can be reliably estimated and is included in the calculated RSEs. This is covered in more detail in the Data quality (Technical Note).

    Seasonal factors

    43 The estimates are based on information collected from October 2011 to March 2012, and due to seasonal factors they may not be representative of other time periods in the year. For example, employment is subject to seasonal variation through the year. Therefore, the PIAAC results for employment could have differed if the survey had been conducted over the whole year or in a different part of the year.

    Data quality

    44 Information recorded in this survey is essentially 'as reported' by respondents and hence may differ from that which might be obtained from other sources or via other methodologies. This factor should be considered when interpreting the estimates in this publication.

    45 Information was collected on the respondents' perception of various topics such as their employment status, health status, skill use and aspects of their job. Perceptions are influenced by a number of factors and can change quickly. Care should therefore be taken when analysing or interpreting these data.

    46 For each competency, proficiency is measured on a scale ranging from 0 to 500 points. To facilitate analysis, these continuous scores have been grouped into six skill levels for the literacy and numeracy skill domains, and four skill levels for the problem solving in technology-rich environments skill domain, with Below Level 1 being the lowest measured level. The relatively small proportions of respondents assessed as being at Level 5 for the literacy and numeracy skill domains often result in unreliable estimates of the number of people at this level. For this reason, when results are presented by skill level, Levels 4 and 5 are usually combined. Similarly, for many tables it has been necessary to combine Levels 2 and 3 for problem solving in technology-rich environments.
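    As an illustration, a sketch of the mapping from a continuous score to a reported level, using the literacy and numeracy cut-scores published by the OECD (Below Level 1 under 176 points, then 50-point bands, with Level 5 from 376 points):

```python
# Sketch: mapping a continuous 0-500 score to the six reported literacy/numeracy
# skill levels, using the OECD-published cut-scores.
def skill_level(score: float) -> str:
    bands = [(176, "Below Level 1"), (226, "Level 1"), (276, "Level 2"),
             (326, "Level 3"), (376, "Level 4")]
    for upper, level in bands:
        if score < upper:
            return level
    return "Level 5"

print(skill_level(290))  # Level 3
```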

    'Don't know', 'Refused' and 'Not stated or inferred' categories

    47 For a number of PIAAC data items, some respondents were unwilling or unable to provide the required information. When this occurred, the missing response was recorded as either 'don't know', 'refused' or 'not stated or inferred'. These categories are not explicitly shown in the publication tables, but have been included in the totals with footnotes provided to note these inclusions. Proportions shown in the tables are based on totals which include these categories.

    48 Listed below are data items where responses coded to the 'don't know' or 'refused' category were higher than 1%:

    • 'Current work earnings from wage or salary - annual gross pay' data item, 2.5% of people (approximately 424,000) had responses of 'don't know' or 'refused';
    • 'Current work earnings from business - last financial year' data item, 1.4% of people (approximately 234,000) had responses of 'don't know' or 'refused';
    • 'Level of highest educational qualification of mother or female guardian (ISCED)', 8.1% of people (1.4 million) had responses of 'don't know' or 'refused'; and
    • 'Level of highest educational qualification of father or male guardian (ISCED)', 9.7% of people (1.6 million) had responses of 'don't know' or 'refused'.
       

    49 Aside from the items listed above, the proportions of responses of 'don't know' or 'refused' did not exceed 1% for any other data item, with the vast majority being less than 0.5%.

    'Missing' category

    50 Some respondents were unable to complete the background questionnaire because they could not speak or read the language of the assessment (in Australia's case, English), had difficulty reading or writing, or had a learning or mental disability. For the background questionnaire, there was no one present (either the interviewer or another person) to translate into the language of the respondent or to answer on their behalf. For these respondents, only age, sex and geographical details are known. Non-respondents represented 2% of the total population. While the proficiency of this group is likely to vary between countries, in most cases these people are likely to have low levels of proficiency in the language of the country concerned.

    Level of education

    Level of highest educational attainment (ASCED)

    51 Level of highest educational attainment was derived from information on highest year of school completed and level of highest non-school qualification. The derivation process determines which of the 'non-school' or 'school' attainments will be regarded as the highest. Usually the higher ranking attainment is self-evident, but in some cases some secondary education is regarded, for the purposes of obtaining a single measure, as higher than some certificate level attainments.

    52 The following decision table is used to determine which of the responses to questions on highest year of school completed (coded to ASCED Broad Level 6) and level of highest non-school qualification (coded to ASCED Broad Level 5) is regarded as the highest. It is emphasised that this table was designed for the purpose of obtaining a single value for level of highest educational attainment and is not intended to convey any other ordinality.

    Decision Table: Level of Highest Educational Attainment (ASCED level of education codes). Columns give the level of highest non-school qualification; the cell at each intersection is the resulting level of highest educational attainment.

    | Highest year of school completed | Certificate n.f.d. (500) | Certificate III or IV n.f.d. (510) | Certificate IV (511) | Certificate III (514) | Certificate I or II n.f.d. (520) | Certificate II (521) | Certificate I (524) |
    |---|---|---|---|---|---|---|---|
    | Secondary Education n.f.d. (600) | Secondary Education n.f.d. | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
    | Senior Secondary Education n.f.d. (610) | Secondary Education n.f.d. | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Secondary Education n.f.d. | Secondary Education n.f.d. | Secondary Education n.f.d. |
    | Year 12 (611) | Year 12 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Year 12 | Year 12 | Year 12 |
    | Year 11 (613) | Year 11 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Year 11 | Year 11 | Year 11 |
    | Junior Secondary Education n.f.d. (620) | Junior Secondary Education n.f.d. | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
    | Year 10 (621) | Year 10 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Year 10 | Year 10 | Year 10 |
    | Year 9 (622) | Year 9 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
    | Year 8 (623) | Year 8 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
    | Year 7 (624) | Year 7 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |

     

    53 The decision table is also used to rank the information provided in a survey about the qualifications and attainments of a single individual. It does not represent any basis for comparison between differing qualifications. For example, a respondent whose highest year of school completed was Year 12, and whose level of highest non-school qualification was a Certificate III, would have those responses crosschecked on the decision table and would as a result have their level of highest educational attainment output as Certificate III. However, if the same respondent answered 'certificate' to the highest non-school qualification question, without any further detail, it would be crosschecked against Year 12 on the decision table as Certificate not further defined. The output would then be Year 12. The decision table, therefore, does not necessarily imply that one qualification is 'higher' than the other. For more details, see Education Variables (cat. no. 1246.0).
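    A minimal sketch of how such a decision-table lookup could be implemented (only two of the table's cells are shown; the data structure and names are illustrative, not the ABS processing system):

```python
# Illustrative decision-table lookup. Keys: (highest year of school completed,
# level of highest non-school qualification); values: the resulting level of
# highest educational attainment. Only two cells of the full table are shown.
DECISION_TABLE = {
    ("Year 12 (611)", "Certificate III (514)"): "Certificate III",
    ("Year 12 (611)", "Certificate n.f.d. (500)"): "Year 12",
}

def highest_attainment(school: str, non_school: str) -> str:
    return DECISION_TABLE[(school, non_school)]

# Matches the worked example above: Year 12 plus an undefined 'certificate'
# response yields Year 12 as the highest educational attainment.
print(highest_attainment("Year 12 (611)", "Certificate n.f.d. (500)"))  # Year 12
```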

    54 Once the ASCED coding was complete, a concordance was applied to obtain the data item 'Level of highest qualification completed - ISCED' which is an international standard classification of education.

    Current study level (ASCED) and incomplete study level (ASCED)

    55 Level of education of current study was derived using the decision table displayed above, taking into account level of education of school study in current year and level of education of non-school study in current year for people who are undertaking concurrent qualifications.

    56 Once the ASCED coding was complete, a concordance was applied to obtain the data items 'Level of qualification currently studying for - ISCED' and 'Level of incomplete qualification - ISCED'.

    Labour force status

    57 The international PIAAC survey's concept of labour force status is defined in a slightly different way to that used in the ABS Labour Force Survey. The definition of the 'Employed' category in the international and the Australian data item are essentially the same. However, there is a subtle difference in the concept of 'Unemployed', which in turn impacts on the estimates for 'Out of labour force'. The labour force status data presented in the tables of this publication contain labour force data which is more closely aligned with the Australian definitions used in the ABS Labour Force Survey.

    Unemployed - international data item definition

    58 People aged 15 to 74 years who were not employed, and:

    • had actively looked for full-time or part-time work at any time in the four weeks up to the end of the reference week and were available for work within two weeks; or
    • will be starting a job within three months and could have started within two weeks had the job been available then.
       

    Unemployed - Australian data item definition

      59 People aged 15 to 74 years who were not employed, were available for work in the reference week, and at any time in the four weeks up to the end of the reference week:

      • had actively looked for full-time or part-time work; or
      • were waiting to start a new job.
         

      Data comparability

      Comparability of time series

      60 As noted above (paragraph 4), data previously released in the ALLS and SAL publications are not directly comparable with PIAAC data. The reasons for this are:

      • The literacy and numeracy scores previously published for ALLS and SAL have been remodelled to make them consistent with PIAAC. These scores were originally based on a model with a response probability (RP) value of 0.8 but are now based on a model with a RP value of 0.67. The latter value was used in PIAAC to achieve consistency with the OECD survey Programme for International Student Assessment (PISA), in the description of what it means to be performing at a particular level of proficiency. The new RP value does not affect the score that was calculated for a respondent. However, it does affect the interpretation of the score. Therefore, users of these data should refer to the new skill level descriptions provided in the appendix Scores and skill levels when performing time-series comparisons;
      • The prose and document literacy scales from ALLS and SAL have been combined to produce a single proficiency scale which is comparable to the PIAAC proficiency scale; and
      • The numeracy scores from ALLS have been recalculated using a model that incorporates the results of all countries that participated in ALLS. (The previous model was based only on countries that participated in the first round of ALLS.) This has resulted in some minor changes to the ALLS numeracy scores. SAL did not collect a numeracy domain which is comparable with ALLS and PIAAC.
         

      However, as noted in paragraph 4, caution is advised in the use of time series, even when based on the remodelled data.

      61 Data from ALLS and SAL based on these remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes.

      62 The problem solving in technology-rich environments competency is a new addition in PIAAC and is not comparable to the problem solving scale derived in ALLS.

      63 PIAAC was not designed to assess health literacy, preventing any comparison with ALLS on that skill domain.

      64 To ensure comparability with the previous surveys, 60% of the literacy and numeracy tasks used in the PIAAC exercise had previously been used in the ALLS and SAL surveys. However, in PIAAC most respondents completed the exercises on a computer (70%) rather than on paper (30%), whereas in ALLS and SAL all respondents completed paper-based exercises. This may affect the comparability of estimates.

      65 PIAAC includes new questions for respondents who were employed or had recent work experience about:

      • the frequency of use of a number of generic skills used in the workplace, including communication, presentation and team-working skills; and
      • skill practices at work, specifically reading, writing, mathematics and ICT skill activities, which are considered important drivers of skills acquisition; these questions are designed to complement what is being measured in the exercise.
         

      66 For each respondent in PIAAC, ten plausible values (scores) were generated for the domains measured (whereas for ALLS and SAL only five plausible values were generated). While simple population estimates for any domain can be produced by choosing at random only one of the ten plausible values, this publication uses an average of the ten values. For example, to report an estimate of the total number of people at Level 1 for literacy, the weighted estimate of the number of respondents at Level 1 was calculated for each of the ten plausible values individually. The ten weighted estimates were then summed, and the result was divided by ten to obtain the estimate of the total number of people at Level 1 for literacy. The process was repeated for each skill level, as sketched below. Refer to the appendix titled Scores and skill levels for further information about the calculation of estimates using all ten plausible values in combination.
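      As an illustration of this averaging, a minimal sketch on synthetic data (the weights and plausible-value levels below are randomly generated, not PIAAC data):

```python
import numpy as np

# Estimate the number of people at a target skill level by averaging the ten
# weighted counts, one per plausible value, as described above.
rng = np.random.default_rng(0)
n = 1000
weights = rng.uniform(100, 500, size=n)              # person weights (synthetic)
plausible_levels = rng.integers(0, 6, size=(n, 10))  # skill level per person x plausible value

target_level = 1
estimates = [(weights * (plausible_levels[:, m] == target_level)).sum() for m in range(10)]
final_estimate = sum(estimates) / 10  # average of the ten weighted estimates
print(round(final_estimate))
```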

      67 Changes to the scope and coverage of PIAAC from ALLS and SAL are:

      • overseas residents who have lived in Australia, or intend to do so, for a period of 12 months or more are included in the scope for PIAAC and ALLS, but were excluded for SAL;
      • people living in Collection Districts which contain Discrete Indigenous Communities were excluded from the scope of PIAAC, but were included for ALLS and SAL if they were not in a very remote area; and
      • households where all of the residents were less than 18 years of age were excluded from PIAAC coverage, but were included in ALLS and SAL.
         

      68 The full and part literacy-related non-response records (154 people) were weighted but not given plausible scores for PIAAC. The other part non-response records (3 people) were weighted and given plausible scores. However, similar records were treated as non-respondents in ALLS and SAL.

      Comparability with other ABS surveys

      69 PIAAC collected data across a range of topics, some of which have been included in previous ABS surveys. Where possible, question modules from existing surveys were used in the PIAAC questionnaire to facilitate comparison with other surveys. However, given PIAAC is part of an international survey, there was a requirement to use internationally developed question modules to ensure the results are comparable with data from other countries involved in the survey.

      70 Additionally, PIAAC is a sample survey and its results are subject to sampling error. As such, PIAAC results may differ from other sample surveys, which are also subject to sampling error. Users should take account of the RSEs on PIAAC estimates and those of other survey estimates where comparisons are made.

      71 Differences in PIAAC estimates, when compared with the estimates of other surveys, may also result from:

      • differences in scope and/or coverage;
      • different reference periods reflecting seasonal variations;
      • non-seasonal events that may have impacted on one period but not another; and
      • underlying trends in the phenomena being measured.
         

      72 Finally, differences can occur as a result of using different collection methodologies. This is often evident in comparisons of similar data items reported from different ABS collections where, after taking account of definition and scope differences and sampling error, residual differences remain. These differences often relate to the mode of the collections, such as whether data are collected by an interviewer or self-enumerated by the respondent, and whether the data are collected from the respondent themselves or from a proxy respondent. Differences may also result from the context in which questions are asked, that is, where in the interview the questions are asked and the nature of preceding questions. The impacts of different collection methodologies on data are difficult to quantify. As a result, every effort is made to minimise such differences.

      Classifications

      73 Country of birth data are classified according to the Standard Australian Classification of Countries (SACC), Second Edition, 2008 (cat. no. 1269.0).

      74 Geography data (State/territory) are classified according to the Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas, July 2016 (cat. no. 1270.0.55.001).

      75 Languages data are classified according to the Australian Standard Classification of Languages (ASCL), 2005-06 (cat. no. 1267.0).

      76 Education data are classified according to the Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0). Coding was based on the level and field of education as reported by respondents and recorded by interviewers. From the ASCED coding, the level of education was also classified according to the International Standard Classification of Education (ISCED), 1997. For an example of a broad level concordance between these two classifications, see Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0).

      77 Occupation data are classified according to the ANZSCO - Australian and New Zealand Standard Classification of Occupations (cat. no. 1220.0). From the ANZSCO coding, occupation was also classified according to the International Standard Classification of Occupations (ISCO), 2008.

      78 Industry data are classified according to the Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 (Revision 1.0) (cat. no. 1292.0). From the ANZSIC, industry was also classified according to the International Standard Industrial Classification of All Economic Activities (ISIC), Rev.4, 2008.

      Products and services

      Data cubes

      79 Data cubes (spreadsheets) containing tables produced for this publication are available from the Data downloads section of the publication. Estimates, proportions and the corresponding relative standard errors (RSEs) and margins of error (MOEs) are presented for each table.

      80 Additional data cubes containing state and territory data are to be appended to this product in 2014. Users can subscribe to receive Email Notifications to be advised when updates are available for this product. From the attached link, select 4. Social Statistics, sub-topic 42. Education, then select the product 4228.0 Programme for the International Assessment of Adult Competencies.

      Microdata

      81 For users who wish to undertake more detailed analysis of the survey data, a basic confidentialised unit record data file (CURF) is available on CD-ROM from Microdata: Programme for the International Assessment of Adult Competencies (PIAAC) (cat. no. 4228.0.30.001).

      82 Further information about microdata is available from the Microdata Entry Page on the ABS web site.

      Data available on request

      83 In addition to the statistics provided in this publication, the ABS may have other relevant data available on request. Subject to confidentiality and sampling variability constraints, tabulations can be produced from the survey on a fee-for-service basis. Inquiries should be made to the National Information and Referral Service on 1300 135 070. A spreadsheet containing a complete list of the data items available from PIAAC can be accessed from the Data downloads section.

      Acknowledgements

      84 ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated; without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

      Next survey

      85 The OECD proposes to conduct the PIAAC survey internationally every ten years. The next PIAAC survey is therefore proposed to be conducted in 2021.

      Related publications

        86 The OECD published international results on 8 October 2013 in OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. The report is available from the OECD website at www.oecd.org.

        87 The OECD publication titled 'Literacy, Numeracy and Problem Solving in Technology-Rich Environments - Framework for the OECD Survey of Adult Skills' provides further information about the PIAAC survey. This publication, as well as further background and conceptual information about the PIAAC survey, is available from the OECD website at www.oecd.org.

        88 The Education and Training Topics @ a Glance page contains a wealth of information and useful references. This site can be accessed through the ABS website.

        Appendix - pathways through the self-enumerated exercise


        Appendix - scores and skill levels


        Technical note - data quality

        Reliability of the estimates

        1 Two types of error are possible in an estimate based on a sample survey: sampling error and non-sampling error. Since the estimates in this publication are based on information obtained from a sample, they are subject to sampling variability. That is, due to randomness in the composition of the sample, the estimates may differ from those population values that would have been produced if all dwellings had been included in the survey. One measure of the likely difference is given by the standard error (SE). There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all dwellings had been included, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

        2 In contrast to most other Australian Bureau of Statistics (ABS) surveys, PIAAC estimates also include significant imputation variability, due to the use of multiple possible assessment tasks and the complex scaling procedures. The effect of this on the estimation can be reliably estimated and is included in the calculated SEs. An accepted procedure for estimating the imputation variance using plausible values is to measure the variance of the plausible values (with an appropriate scaling factor) as follows:

        \(\large{var_{imp}\left(\hat{\theta}_{mean}\right)=\left(1+\frac{1}{M}\right) \frac{\sum_{i=1}^{M}\left(\hat{\theta}_{i}-\hat{\theta}_{mean}\right)^{2}}{M-1}}\)

        where:

        \(\large{\hat{\theta}_{mean}=\text{the mean estimate of the plausible values}}\)

        \(\large{i=1, \ldots, 10 \text{ respectively, for the plausible values } \hat{\theta}_{1} \text{ to } \hat{\theta}_{10}}\)

        \(\large{M=\text{the total number of plausible values used } (M=10 \text{ for PIAAC})}\)
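        A direct transcription of this formula in Python (the ten plausible-value estimates below are hypothetical):

```python
import numpy as np

# Imputation variance from M plausible-value estimates theta_1..theta_M,
# following the formula above. Values are hypothetical, for illustration only.
theta = np.array([39.2, 38.7, 39.5, 39.0, 38.9, 39.3, 39.1, 38.8, 39.4, 39.0])
M = len(theta)  # M = 10 for PIAAC

var_imp = (1 + 1 / M) * np.sum((theta - theta.mean()) ** 2) / (M - 1)
print(var_imp)
```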

        3 Together, the sampling variance and imputation variance can be added to provide a suitable measure of the total variance, and total SE. This SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included, and/or because of the significant imputation used in the literacy scaling procedures.

        4 There are a number of more convenient ways of expressing the sampling variability than the SE. One is called the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate:

        \(\large{RSE\%=\left(\frac{SE}{Estimate}\right) \times 100}\)

        5 Another way of expressing the sampling variability is called the margin of error (MOE) which may be more useful for proportion estimates, in particular where the estimated proportion is large or small. MOEs are provided for all proportion estimates at the 95% confidence level. At this confidence level the MOE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MOE from the population value. The 95% margin of error is obtained by multiplying the SE by 1.96:

        \(\large MOE=SE \times 1.96\)

        6 The estimate combined with the MOE defines a range, known as a confidence interval, which is expected to include the true population value with a given level of confidence. The confidence interval can easily be constructed by taking the estimate plus or minus its MOE. This range should be considered when using the estimates to make assertions about the population or to inform decisions.

        7 Whilst the MOEs in this publication are calculated at the 95% confidence level, they can easily be converted to a 90% confidence level by multiplying the MOE by 1.645/1.96, or to a 99% confidence level by multiplying by a factor of 2.576/1.96.

        8 The 95% MOE can also be calculated from the RSE by:

        \(\large{MOE=\left(\frac{RSE\% \times Estimate}{100}\right) \times 1.96}\)
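        For example, combining these relationships (using the Level 3 literacy estimate and RSE from the worked example in the Calculation of Standard Error section below):

```python
# RSE -> SE -> MOE conversions from the formulas above, using the Level 3
# literacy example (estimate 6,379,600; RSE 1.8%).
estimate, rse_pct = 6_379_600, 1.8

se = (rse_pct / 100) * estimate      # approx. 114,800
moe_95 = se * 1.96                   # 95% margin of error
moe_90 = moe_95 * 1.645 / 1.96       # rescaled to the 90% confidence level
moe_99 = moe_95 * 2.576 / 1.96       # rescaled to the 99% confidence level
```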

        9 Sampling error for estimates from PIAAC 2011-2012 has been calculated using the Jackknife method of variance estimation. This involves the calculation of 60 'replicate' estimates based on 60 different subsamples of the obtained sample. The variability of estimates obtained from these subsamples is used to estimate the sampling variability surrounding the main estimate.
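        A sketch of the jackknife variance calculation in one standard delete-one-group form, assuming the 60 replicate estimates are already available (the replicate values below are simulated stand-ins; the exact replication factors used for PIAAC may differ):

```python
import numpy as np

# Delete-one-group jackknife (JK1-style) variance from 60 replicate estimates.
# The replicate estimates here are simulated stand-ins, for illustration only.
rng = np.random.default_rng(1)
full_estimate = 6_379_600.0
replicate_estimates = full_estimate + rng.normal(0, 100_000, size=60)

R = len(replicate_estimates)
var_sampling = ((R - 1) / R) * np.sum((replicate_estimates - full_estimate) ** 2)
se = float(np.sqrt(var_sampling))
```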

        10 A data cube (spreadsheet) containing tables produced for this publication and the calculated RSEs for each of the estimates, and MOEs for each proportion estimate, is available from the Data downloads section of the publication.

        11 Estimates with RSEs less than 25% are considered sufficiently reliable for most purposes. Estimates with RSEs between 25% and 50% have been included and are annotated to indicate they are subject to high sample variability relative to the size of the estimate and should be used with caution. In addition, estimates with RSEs greater than 50% have also been included and annotated to indicate they are usually considered unreliable for most purposes. All cells in the data cube with RSEs greater than 25% contain a comment indicating the size of the RSE. These cells can be identified by a red indicator in the corner of the cell. The comment appears when the mouse pointer hovers over the cell.

        Calculation of Standard Error

        12 SEs can be calculated using the estimates (counts or proportions) and the corresponding RSEs. For example, the estimated number of persons in Australia aged 15 to 74 years that have scores at Level 3 on the literacy scale is 6,379,600 and the RSE for this estimate is 1.8%. The SE is calculated by:

        \(\large{\begin{aligned} SE \text{ of estimate} &=\left(\frac{RSE}{100}\right) \times estimate \\ &=\left(\frac{1.8}{100}\right) \times 6,379,600 \\ &=0.018 \times 6,379,600 \\ &=114,800 \ \text{(rounded to nearest 100)} \end{aligned}}\)

        13 Therefore, there are about two chances in three that the actual number of persons that have scores at Level 3 on the literacy scale is in the range of 6,264,800 to 6,494,400 and about 19 chances in 20 that the value was in the range 6,150,000 to 6,609,200. This example is illustrated in the diagram below.

        (Diagram: Calculation of standard error example)

        Proportions and percentages

        14 Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. The formula is only valid when the numerator is a subset of the denominator:

        \(\large{RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^{2}-[RSE(y)]^{2}}}\)

        15 The proportion of Australians aged 15 to 74 years who have scores at Level 3 on the literacy scale is 39%, with an associated RSE of 2.4% and an associated MOE of +/- 1.8 percentage points. Hence there are about two chances in three that the true proportion is between 38.1% and 39.9%, and about 19 chances in 20 that the true proportion is within 1.8 percentage points of 39%, that is, between 37.2% and 40.8%.

        16 The RSEs of proportions within the data cubes have been provided. Calculations of RSEs for other proportions using the above formula should be seen as only indicative of the true RSE.
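        A sketch of this approximation for a proportion whose numerator is a subset of its denominator (the RSE values below are hypothetical):

```python
import math

# Approximate RSE of a proportion x/y from the formula above, where x is a
# subset of y. The RSE% inputs are hypothetical, for illustration only.
rse_x, rse_y = 2.9, 1.6

rse_proportion = math.sqrt(rse_x**2 - rse_y**2)
print(round(rse_proportion, 1))  # 2.4
```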

        Differences

        17 Published estimates may also be used to calculate the difference between two survey estimates (numbers or proportions). Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

        \(\large{SE(x-y) \approx \sqrt{[SE(x)]^{2}+[SE(y)]^{2}}}\)

        18 An approximate MOE of the difference between two estimates (x-y) may be calculated by the following formula:

        \(\large{MOE(x-y) \approx \sqrt{[MOE(x)]^{2}+[MOE(y)]^{2}}}\)

        19 These formulae will only be exact for differences between separate and uncorrelated characteristics or subpopulations, and only provide an indication of the sampling error for the differences likely to be of interest in this publication.

        Significance testing

        20 A statistical significance test for any comparisons between estimates can be performed to determine whether it is likely that there is a difference between two corresponding population characteristics. The approximate standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula in paragraph 17. The standard error is then used to create the following test statistic:

        \(\Large{\left(\frac{|x-y|}{S E(x-y)}\right)}\)

        21 If the value of this test statistic is greater than 1.96 then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic. Any calculations using the above formula should be seen as only indicative of a statistically significant difference.
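        A sketch of this test, using the approximate SE of a difference from paragraph 17 (the estimates and SEs below are hypothetical):

```python
import math

# Significance test for the difference between two estimates, as described
# above. Estimates and SEs are hypothetical, for illustration only.
x, se_x = 39.0, 0.9   # e.g. proportion at Level 3 in one population
y, se_y = 36.5, 1.0   # e.g. the comparison population

se_diff = math.sqrt(se_x**2 + se_y**2)   # paragraph 17 approximation
test_statistic = abs(x - y) / se_diff

significant = test_statistic > 1.96      # 95% level of confidence
print(round(test_statistic, 2), significant)
```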

        Glossary


        Quality declaration - summary

        Institutional environment

        Relevance

        Timeliness

        Accuracy

        Coherence

        Interpretability

        Accessibility

        Abbreviations

