4720.0 - National Aboriginal and Torres Strait Islander Social Survey: Users' Guide, 2008
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 26/02/2010
INTERPRETATION OF RESULTS
For a number of survey data items, some respondents were unwilling or unable to provide the required information. Where responses for a particular data item were missing for a person or household, they were recorded in a 'not known', 'not stated' or 'refusal' category for that data item. Where these categories apply, they are listed in the data item list. This chapter provides information on: the reliability of survey estimates (sampling and non-sampling error); undercoverage; seasonal effects; age standardisation; and comparability with the 2002 NATSISS.
RELIABILITY OF SURVEY ESTIMATES
All sample surveys are subject to error, which can be broadly categorised as either sampling error or non-sampling error. Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured as it is calculated based on the scientific methods used to design surveys. Non-sampling error may occur in any data collection, whether it is based on a sample or a full count (eg Census), and may occur at any stage throughout the survey process. Examples of non-sampling error, discussed further below, include field coverage errors, response errors, errors made in processing the data, and undercoverage due to non-response or non-identification.
Sampling and non-sampling errors should be considered when interpreting results of the survey. Sampling errors are considered to occur randomly, whereas non-sampling errors may occur randomly and/or systematically.

Achieved sample
The table below provides the number of fully responding persons in each state and territory for both the community and non-community samples. More information on sample design is provided in the Survey design chapter.
SAMPLING ERROR
Sampling error is the expected random difference that could occur between the published estimates, derived from using a sample of persons, and the value that would have been produced if all persons in scope of the survey had been enumerated. The size of the sampling error associated with an estimate depends on the following factors:
For more information on sample size and design see the Survey design chapter.

Measures of sampling error
A measure of the sampling error for a given estimate is provided by the Standard Error (SE), which is the extent to which an estimate might have varied by chance because only a sample of persons was obtained. There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all persons had been included in the survey, and about 19 chances in 20 that the difference will be less than two SEs.

Another measure is the Relative Standard Error (RSE), which is the SE expressed as a percentage of the estimate. The RSE is a useful measure as it provides an immediate indication of the percentage error likely to have occurred due to sampling, and therefore avoids the need to refer also to the size of the estimate. The smaller the estimate, the higher the RSE. Very small estimates are subject to such high SEs (relative to the size of the estimate) as to detract seriously from their value for most reasonable uses. Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes. RSEs for all estimates published in the National Aboriginal and Torres Strait Islander Social Survey, 2008 (cat. no. 4714.0) are available from the ABS website in spreadsheet format.

Imprecision due to sampling variability, which is measured by the SE, should not be confused with inaccuracies that may occur because of imperfections in reporting by respondents and interviewers, or random errors made in coding and processing of survey data. These types of inaccuracies contribute to the total non-sampling error and may occur in any enumeration. The potential for random non-sampling error adds to the uncertainty of the estimates caused by sampling variability. However, it is not usually possible to quantify either the random or systematic non-sampling errors.

Standard errors of proportions and percentages
Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. The RSEs of proportions and percentages for the publication, National Aboriginal and Torres Strait Islander Social Survey, 2008 (cat. no. 4714.0), were calculated using the full delete-a-group jackknife technique, which is described in 'Replicate weights and directly calculated standard errors'. RSEs for all estimates in the summary publication are available in spreadsheet format from the ABS website. For proportions where the denominator is an estimate of the number of persons in a group and the numerator is the number of persons in a sub-group of the denominator group, the RSE of the proportion x/y may be approximated by:

$RSE(x/y) \approx \sqrt{[RSE(x)]^2 - [RSE(y)]^2}$

From the above formula, the estimated RSE of the proportion or percentage will be lower than the RSE of the estimate of the numerator.
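A minimal sketch in Python of the measures described above. The estimates, standard errors and function names are illustrative only and are not NATSISS outputs.

```python
import math

def rse(estimate, se):
    """Relative Standard Error: the SE expressed as a percentage of the estimate."""
    return 100 * se / estimate

def rse_of_proportion(rse_x, rse_y):
    """Approximate RSE of the proportion x/y, where x is a sub-group of y.

    Uses the approximation RSE(x/y) = sqrt(RSE(x)^2 - RSE(y)^2) given above.
    """
    return math.sqrt(max(rse_x ** 2 - rse_y ** 2, 0.0))

# Hypothetical estimates (persons) and their standard errors.
x, se_x = 12_000, 1_500   # numerator: persons in the sub-group
y, se_y = 80_000, 4_000   # denominator: persons in the group

rse_x, rse_y = rse(x, se_x), rse(y, se_y)
print(f"RSE(x) = {rse_x:.1f}%, RSE(y) = {rse_y:.1f}%")
print(f"Approximate RSE(x/y) = {rse_of_proportion(rse_x, rse_y):.1f}%")

# An RSE above 25% would indicate the estimate is too unreliable for most purposes.
```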
Replicate weights and directly calculated standard errors
Standard errors (SEs) on estimates from this survey were obtained using the delete-a-group jackknife variance technique. In this technique, the full sample is repeatedly subsampled by successively dropping random groups of households, and the remaining records are then reweighted to the survey benchmark population. Through this technique, the effect of the complex survey design and estimation methodology on the accuracy of the survey estimates is stored in the replicate weights.

For the 2008 NATSISS, this process was repeated 250 times to produce 250 replicate weights for each sample unit. The distribution of the 250 replicate estimates around the full sample estimate is then used to directly calculate the standard error for each full sample estimate. The use of directly calculated SEs for each survey estimate, rather than SEs based on models, provides more information on the sampling variability inherent in a particular estimate. Therefore, directly calculated SEs for estimates of the same magnitude, but from different sample units, will generally differ. For more information see Appendix 2: Replicate weights technique.

Comparison of estimates
Published estimates may also be used to calculate the difference between two survey estimates. Such an estimate is subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

$SE(x-y) \approx \sqrt{[SE(x)]^2 + [SE(y)]^2}$

While the above formula will be exact only for differences between separate and uncorrelated (unrelated) characteristics of sub-populations, it is expected to provide a reasonable approximation for all differences likely to be of interest in this survey.

Significance testing
For comparing population characteristics between surveys, or between populations within a survey, it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates (x and y) and using it to calculate the test statistic:

$\frac{x - y}{SE(x - y)}$

The test statistic measures the probability that the observed difference has occurred due to sampling error alone. If the absolute value of the test statistic is greater than 1.96, then there is a 95% level of confidence that there is a statistically significant difference between the two populations with respect to the particular characteristic. Significance testing is concerned solely with sampling error, that is, variation in estimates resulting from taking a sample of households rather than a Census. Significance testing does not take into account non-sampling errors, which are often encountered when interpreting statistical output.
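The calculation of a directly estimated SE from replicate estimates, and the significance test described above, can be sketched as follows. This is an illustration only: the replicate estimates are simulated placeholders rather than genuine NATSISS replicate weights, and the (G-1)/G scale factor is the commonly quoted delete-a-group jackknife form; the exact formula used by the ABS is set out in Appendix 2: Replicate weights technique.

```python
import numpy as np

G = 250  # number of replicate groups used in the 2008 NATSISS

def jackknife_se(full_estimate, replicate_estimates, groups=G):
    """Delete-a-group jackknife SE from a set of replicate estimates.

    Assumes the common form Var = (G-1)/G * sum((theta_g - theta)^2); the precise
    factor applied by the ABS is described in Appendix 2 of this Users' Guide.
    """
    deviations = np.asarray(replicate_estimates) - full_estimate
    variance = (groups - 1) / groups * np.sum(deviations ** 2)
    return np.sqrt(variance)

def significance_test(x, se_x, y, se_y, threshold=1.96):
    """Test whether two uncorrelated estimates differ significantly at the 95% level."""
    se_diff = np.sqrt(se_x ** 2 + se_y ** 2)  # SE of the difference (x - y)
    statistic = (x - y) / se_diff             # test statistic described above
    return statistic, abs(statistic) > threshold

# Hypothetical person-level data: a 0/1 characteristic, a full-sample weight and
# 250 simulated replicate weights standing in for those supplied on the survey file.
rng = np.random.default_rng(0)
indicator = rng.integers(0, 2, size=1_000)
full_weight = np.full(1_000, 50.0)
replicate_weights = full_weight * rng.uniform(0.9, 1.1, size=(G, 1_000))

full_estimate = np.sum(full_weight * indicator)
replicate_estimates = replicate_weights @ indicator  # one estimate per replicate
se = jackknife_se(full_estimate, replicate_estimates)
print(f"Estimate: {full_estimate:,.0f}  SE: {se:,.0f}  RSE: {100 * se / full_estimate:.1f}%")

stat, significant = significance_test(40_000, 2_500, 35_000, 2_800)
print(f"Test statistic: {stat:.2f}  Significant at the 95% level: {significant}")
```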
NON-SAMPLING ERROR
Every effort was made to minimise non-sampling error by careful design and testing of questionnaires, intensive training of interviewers, and extensive editing and quality control procedures at all stages of data processing. However, errors can be made in giving and recording information during an interview, and these may occur regardless of whether the estimates are derived from a sample or from a full count (eg Census). Inaccuracies of this type are referred to as non-sampling errors. The major sources of non-sampling error, each discussed in more detail below, are field coverage errors, response errors and errors in processing. These sources of non-sampling error may be random and/or systematic.

Field coverage errors
Some dwellings may have been inadvertently included or excluded, for example, where it was unclear whether a dwelling was private or non-private. In order to prevent this type of error, dwelling listings are constantly updated. Additionally, some people may have been inadvertently included or excluded because of difficulties in applying the scope rules, for example, in identifying a household's usual residents or determining the treatment of some visitors. For more information on scope and coverage see the Survey design chapter.

Response errors
In this survey, response errors may have arisen from three main sources:
Errors may have been caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology, or by poor overall survey design (eg context effects, where responses to a question are directly influenced by the preceding question/s). In order to overcome these types of issues, individual questions and the overall questionnaire were tested before the survey was enumerated. Testing included:
More information on pre- and field testing is provided in the Survey design chapter. As a result of testing, modifications were made to:
In considering modifications it was sometimes necessary to balance better response to a particular item or topic against increased interview time, effects on other parts of the survey and the need to minimise changes to ensure international comparability. Therefore, in some instances it was necessary to adopt a workable and acceptable approach rather than an optimum approach. Although changes would have had the effect of minimising response errors due to questionnaire design and content issues, some errors will inevitably have occurred in the final survey enumeration.

Response errors may also have occurred due to the length of the survey interview, which can lead to interviewer and/or respondent fatigue (ie loss of concentration). While efforts were made to minimise errors arising from deliberate misreporting or non-reporting, some instances will inevitably have occurred.

Accuracy of recall may also have led to response error, particularly in relation to the lifetime questions. Information in this survey is essentially 'as reported', and therefore may differ from information available from other sources or collected using different methodologies. Responses may be affected by imperfect recall or individual interpretation of survey questions, particularly when a person was asked to reflect on experiences in the 12 months prior to interview. The questionnaire was designed to strike a balance between minimising recall errors and ensuring the data was meaningful, representative (from both respondent and data use perspectives) and would yield sufficient observations to support reliable estimates. It is possible that the reference periods did not suit every person for every topic, and that difficulty with recall may have led to inaccurate reporting in some instances.

A further source of response error is lack of uniformity in interviewing standards. To ensure uniform interviewing practices and a high level of response accuracy, extensive interviewer training was provided. An advantage of using Computer Assisted Interviewing (CAI) technology to conduct survey interviews is that it potentially reduces non-sampling error. More information on interviews, interviewer training, the survey questionnaire and CAI is provided in the Survey design chapter.

Response errors may also have occurred due to language or reading difficulties. In some instances, a proxy interview was conducted on behalf of a person who was unable to complete the questionnaire themselves due to language problems and where an interpreter was unable to be organised. A proxy interview was only conducted where another person in the household (aged 15 years or over) was considered suitable. The proxy may also have been a family member who did not live in the selected household, but lived nearby. A proxy arrangement was only undertaken with agreement from the selected person, who was first made aware of the topics covered in the questionnaire.

Aside from difficulties in understanding spoken English, there may have been difficulties in understanding written English. The 2008 NATSISS incorporated the extensive use of prompt cards, as pre-testing indicated that these could aid interpretation by selected persons. People were asked if they would prefer to read the cards themselves or have them read out by the interviewer. It is possible that some of the terms or concepts used on the prompt cards were unfamiliar and may have been misinterpreted, or that a response was selected due to its position on the prompt card.
Some respondents may have provided responses that they felt were expected, rather than those that accurately reflected their own situation. Every effort has been made to minimise such issues through the development and use of culturally appropriate survey methodology. Non-uniformity of interviewers themselves is also a potential source of error, in that the impression made upon respondents by the personal characteristics of individual interviewers, such as age, sex, appearance and manner, may influence the answers obtained.

Errors in processing
Errors may occur during data processing, between the initial collection of the data and the final compilation of statistics. These may be due to a failure of computer editing programs to detect errors in the data, or may arise during the manipulation of raw data to produce the final survey data files, for example, when creating new data items from raw survey data (eg coding of occupation data items to the standard classification), during the estimation procedures, or when weighting the data file. To minimise the likelihood of these errors occurring, a number of processes were used, including:
UNDERCOVERAGE
Undercoverage is one potential source of non-sampling error and is the shortfall between the population represented by the achieved sample and the in-scope population. It can introduce bias into the survey estimates. However, the extent of any bias depends upon the magnitude of the undercoverage and the extent of the difference between the characteristics of those people in the coverage population and those of the in-scope population. Briefly, the measures taken to address potential bias due to undercoverage were:
More detailed information on undercoverage is provided in the following segments.

Rates of undercoverage
Undercoverage rates can be estimated by calculating the difference between the sum of the initial weights of the sample and the population count. If a survey had no undercoverage, the sum of the initial weights of the sample would equal the population count (ignoring small variations due to sampling error). For more information on weighting refer to the Survey design chapter.

The 2008 NATSISS has a relatively large level of undercoverage when compared to other ABS surveys, and an increase in undercoverage compared to previous ABS Indigenous surveys. For example, the estimated undercoverage in the 2004-05 National Aboriginal and Torres Strait Islander Health Survey was 42%. By comparison, the estimated undercoverage rate for the Monthly Population Survey for private dwellings is on average 12%, with a non-response rate of 3.5%. The overall undercoverage rate for the 2008 NATSISS is approximately 53% of the in-scope population at the national level. This rate varies across the states and territories, as shown in the table below.
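A minimal sketch of the undercoverage rate calculation described above. The weights and population count are hypothetical, chosen only to produce a rate of roughly the same order as the 53% quoted.

```python
def undercoverage_rate(initial_weights, population_count):
    """Undercoverage rate: shortfall of the weighted sample relative to the population.

    If the survey had no undercoverage, sum(initial_weights) would roughly equal
    the population count, giving a rate near zero.
    """
    covered = sum(initial_weights)
    return 100 * (population_count - covered) / population_count

# Hypothetical example: initial weights summing to 240,000 persons against an
# in-scope population count of 510,000 persons.
weights = [60.0] * 4_000  # 4,000 sample persons, each with an initial weight of 60
print(f"Estimated undercoverage: {undercoverage_rate(weights, 510_000):.0f}%")
```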
Of the national undercoverage rate, 6% is due to planned frame exclusions, where analysis has shown that the impact of any bias is minimal. More information on these exclusions is provided below.

Potential sources of undercoverage
Undercoverage may occur due to a number of factors, including: frame exclusions; non-response; non-identification as Indigenous; and other issues arising in the field.
Each of these factors is outlined in more detail in the following paragraphs. To assist interpretation, a diagrammatic representation of the potential sources of undercoverage is shown below.

Frame exclusions
Frame exclusions were incorporated into the 2008 NATSISS to manage the cost of enumerating areas with a small number of Indigenous persons. There were also unplanned exclusions on the non-community frame, due to an error in identifying private dwellings during the creation of the frame. This error resulted in the undercoverage of some discrete Indigenous communities, which were supposed to be represented in the survey's non-community sample. An adjustment was applied to the weights to account for this error. More information on this adjustment is provided in the weighting segment of the Survey design chapter.

At the national level it is estimated that 8.5% of the in-scope population was excluded from the frame, that is, they did not have a chance of selection. Part of this exclusion represents an estimate of the people who had moved to areas out of coverage since the 2006 Census. The number of people who moved may be higher than estimated and could account for a portion of the higher than expected non-identification estimate, which is discussed below. Further information on scope and coverage is provided in the Survey design chapter.

Non-response
Non-response may occur when people cannot or will not cooperate, or cannot be contacted during the enumeration period. Unit and item non-response by persons/households selected in the survey will affect both sampling and non-sampling error. The loss of information on persons and/or households (unit non-response) and on particular questions (item non-response) reduces the effective sample and increases both sampling error and the likelihood of incurring response bias. The size of any non-response bias depends on the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not, as well as the extent to which non-response adjustments can be made during estimation through the use of benchmarks. To maximise response rates and reduce the impact of non-response, the following methods were adopted in this survey:
In the 2008 NATSISS, non-response accounts for a portion of overall undercoverage. The two components of non-response were:
Of the households screened in non-community areas, approximately 89% of Indigenous households responded. This assumes that response to the screening question is not related to the Indigenous status of the household. Of the households who responded to the screening question, approximately 2.5% were identified as having an Indigenous usual resident. Of these identified households, 83% then responded to the survey. In discrete Indigenous communities, 78% of selected in-scope households were fully responding.

In developing the survey weights, information available for responding and non-responding households (who provided partial information) was used by the ABS to conduct quantitative investigations into non-response adjustments. No non-response adjustment, apart from benchmarking, was made to the weights, as indications were that non-response had a negligible impact on the estimates.

Response rates
Response rates reflect the number of people who responded to the survey divided by the number of people in the sample, expressed here as a percentage. The response rate for the 2008 NATSISS was 82% nationally. Response rates are only one measure of the quality of this survey, therefore other components of undercoverage should also be taken into account when analysing survey results. The tables below provide the achieved sample and response rates for each state and territory, as well as the response rates for the community and non-community samples.
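The response-rate arithmetic described above can be sketched as follows. All counts here are hypothetical placeholders, not the actual NATSISS sample counts; only the quoted rates are taken from the text.

```python
# Hypothetical non-community screening and response chain (counts are illustrative only).
screened_households = 100_000           # households approached with the screening question
screening_response_rate = 0.89          # ~89% of households responded to screening
indigenous_identification_rate = 0.025  # ~2.5% of responding households had an Indigenous usual resident
survey_response_rate = 0.83             # ~83% of identified households responded to the survey

responded_to_screening = screened_households * screening_response_rate
identified_indigenous = responded_to_screening * indigenous_identification_rate
responding_households = identified_indigenous * survey_response_rate
print(f"Identified Indigenous households: {identified_indigenous:,.0f}")
print(f"Fully responding households:      {responding_households:,.0f}")

# Person-level response rate: responding persons divided by persons in the sample.
persons_in_sample, responding_persons = 10_000, 8_200
print(f"Response rate: {100 * responding_persons / persons_in_sample:.0f}%")
```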
Non-identification as Indigenous
Non-identification of Indigenous households during the screening process may have occurred due to:
The under-identification of Indigenous persons in non-community areas is estimated to be up to 31% of those screened. This estimate is the remaining level of undercoverage once all other known sources of undercoverage have been removed; part of this percentage is likely to be due to other factors which are unknown. It is not possible to measure the potential bias induced by non-identification, as there is no information available for people who were not identified as Indigenous. However, the adjustment applied in the weighting process and the calibration to the benchmarks should reduce potential bias.

Issues arising in the field
Known undercoverage, due to other issues arising in the field, included sample being excluded due to:
The estimated undercoverage due to these issues was 3.7% at the national level. The undercoverage induced by the Monthly Population Survey should have minimal impact on the estimates, as the process of avoiding overlap is random.

Comparisons to other data sources
Given the high undercoverage rate, the analysis undertaken to ensure that results from the 2008 NATSISS were consistent with other data sources was more extensive than usual. The characteristics of the 2008 NATSISS respondents were compared to a number of ABS collections, including:
From this analysis, it was determined that some of the respondent characteristics from the initial weighted data did not align well with other ABS estimates in some states and territories. In particular, some of the social outcomes in the NT differed from those anticipated. The estimates were also higher than expected for:
Further analysis indicated that the community sample was having a greater influence on the estimates than would reasonably be expected. As a result, additional benchmarks (community/non-community) were incorporated into the weighting strategy to ensure that each part of the population was appropriately represented. This improved the consistency between the NATSISS estimates and other ABS collections. Each step in the weighting process was then thoroughly assessed to ensure that it was not biasing the results. More information on data confrontation and on weighting, benchmarking and estimation is provided in the Survey design chapter.

SEASONAL EFFECTS
The estimates from the survey are based on information collected from August 2008 to April 2009 and, due to seasonal effects, they may not be fully representative of other time periods in the year. For example, the 2008 NATSISS asked people if they had participated in any physical, sporting, community or social activities in the three months prior to interview. Involvement in particular activities may be subject to seasonal variation through the year. Therefore, the results could have differed if the survey had been conducted over the whole year or in a different part of the year.

AGE STANDARDISATION
Age standardisation techniques were applied to some data in the summary publication, National Aboriginal and Torres Strait Islander Social Survey, 2008 (cat. no. 4714.0), to remove the effect of the differing age structures in comparisons between Indigenous and non-Indigenous populations. The age structure of the Indigenous population is considerably younger than that of the non-Indigenous population. As age is strongly related to many health measures, as well as labour force status, estimates of prevalence which do not take account of age may be misleading. The age standardised estimates of prevalence are those rates that would have occurred had the Indigenous and non-Indigenous populations both had the standard age composition.

The summary publication used the direct age standardisation method. Estimates of age standardised rates were calculated using the following formula:

$C_{direct} = \sum_{a} C_a P_{sa}$

where:
Cdirect = the age standardised rate for the population of interest;
a = the age categories that have been used in the age standardisation;
Ca = the estimated rate for the population being standardised in age category a; and
Psa = the proportion of the standard population in age category a.

An alternative technique for analysing characteristics in populations that have different age structures is to compare the distribution of the variable of interest by age group. For this approach, unadjusted (ie not age standardised) data could be output in 10-year age ranges.

Age standardisation may not be appropriate for particular variables, even where the populations to be compared have different age distributions and the variables in question are related to age. It is also necessary to check that the relationship between the variable of interest and age is broadly consistent across the populations. If the rates vary differently with age in the two populations, there is evidence of an interaction between age and population, and as a consequence age standardised comparisons are not valid.
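A minimal sketch of the direct age standardisation formula above. The age groups, age-specific rates and standard population proportions are hypothetical and serve only to show the calculation.

```python
# Direct age standardisation: weight each age-specific rate by the standard
# population's proportion in that age group, then sum across age groups.
def age_standardised_rate(rates_by_age, standard_proportions):
    """C_direct = sum over age groups a of C_a * P_sa."""
    assert abs(sum(standard_proportions.values()) - 1.0) < 1e-9
    return sum(rates_by_age[a] * standard_proportions[a] for a in standard_proportions)

# Hypothetical age-specific rates (eg prevalence of a characteristic) for two populations.
population_1 = {"15-24": 0.10, "25-44": 0.18, "45-64": 0.30, "65+": 0.45}
population_2 = {"15-24": 0.08, "25-44": 0.15, "45-64": 0.28, "65+": 0.42}

# Hypothetical standard population age composition (proportions sum to 1).
standard = {"15-24": 0.17, "25-44": 0.35, "45-64": 0.31, "65+": 0.17}

for name, rates in [("Population 1", population_1), ("Population 2", population_2)]:
    print(f"{name}: age standardised rate = {age_standardised_rate(rates, standard):.3f}")
```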
COMPARISON TO THE 2002 NATSISS
Overview
The ABS previously conducted the National Aboriginal and Torres Strait Islander Social Survey (NATSISS) in 2002. A National Aboriginal and Torres Strait Islander Survey (NATSIS) was also conducted in 1994. Extensive information on the differences between the 2002 and 1994 surveys is provided in the National Aboriginal and Torres Strait Islander Social Survey: Expanded Confidentialised Unit Record File, Technical Paper, 2002 (cat. no. 4720.0).

Understanding the extent to which data from the 2008 and 2002 NATSISS can be compared is essential in interpreting apparent changes over time. While many key data items in the 2008 survey are the same as, or similar to, those in the 2002 survey, there are differences in the sample design and coverage, survey methodology and content, definitions, and classifications, all of which may affect comparability.

Survey methodology
Both surveys collected information from Indigenous people living in private dwellings throughout Australia. The 2008 NATSISS collected information from Indigenous people of all ages, while the 2002 survey collected information on Indigenous people aged 15 years and over. In 2008, visitors who had been staying at a selected household for six months or longer were considered in scope, whereas in 2002 visitors were excluded.

The scope of the NATSISS changed between 2002 and 2008 to enable the inclusion of Indigenous children aged 0-14 years. While this change does not specifically affect the comparability of data for Indigenous adults aged 15 years and over, some survey modules and questions were redeveloped and/or expanded to include Indigenous children. For example, the 2008 survey includes information on the selected child's main carer, as well as on assumed parents or guardians of Indigenous children, which is not available in the 2002 survey. Refer to the data item list or to the topic based chapters for more information.

The sample sizes differed between the two surveys. The 2008 survey had a sample of approximately 13,300 Indigenous people, compared to approximately 9,400 Indigenous people in 2002; of the 2008 sample, approximately 7,800 were Indigenous people aged 15 years and over, with the remainder being children aged 0-14 years. The 2008 survey also had a larger sample of Indigenous households: approximately 6,900 households compared to approximately 5,900 households in 2002. Broad differences in the design of the two surveys are outlined in the Survey design chapter.

Survey timing
Each of the surveys was conducted over a similar enumeration period. The 2008 NATSISS was undertaken from August 2008 to April 2009 and the 2002 NATSISS from August 2002 to April 2003.

Population characteristics
Classifications
The classifications of several demographic and socio-economic characteristics used in the 2008 NATSISS differ from those used in 2002, as outlined in the table below.
Geographic characteristics
The standard geographical classification for the two surveys differs, with the 2008 survey based on data from the 2006 Census of Population and Housing and the 2002 survey based on the 2001 Census. Mesh Blocks, a small area unit of information, were defined for the first time in the 2006 Census and were used to assist in targeting Indigenous people for the 2008 NATSISS. More information on this process is provided in the Survey design chapter. Only one of the Socio-Economic Indexes for Areas (SEIFA) items, the Index of Relative Disadvantage, is available from the 2002 NATSISS. Section of state is not available for 2002 and therefore there are no data for comparisons.

Survey topics
Within each of the following topic based chapters, information on the 2008 data items includes comparisons to items collected in 2002:
Other considerations