4720.0 - National Aboriginal and Torres Strait Islander Social Survey: Users' Guide, 2008
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 26/02/2010
SURVEY DESIGN
SURVEY DEVELOPMENT

Questionnaire testing

Pre-testing

Pre-testing covers a range of testing techniques, the common feature being that the testing is conducted before taking a survey into the field (ie 'field test'). This phase of testing is critical for identifying problems for both respondents and interviewers, particularly regarding question content. The techniques used are designed to identify problems with the part of the questionnaire being tested and to give some indication of the source of the error. The following techniques were available during the pre-testing phase of questionnaire development:
A major advantage of pre-testing is that small samples are used, and limited survey-specific documentation and training is required, as the testing is performed by people working on the survey. Consequently, the process allows several iterative tests of a set of questions in the time it would take to conduct a single pilot test. The broad objectives of the 2008 NATSISS pre-testing were to assess:
Field testing

The next phase of survey development involved field testing the survey questionnaire and procedures.

Pilot Test

The Pilot Test is the first field test of the entire question set. Testing is designed to:
Pilot testing was conducted in November 2007 in both remote and non-remote areas in NSW, SA, WA and the NT. A maximum of four people per household (two adults and two children) were randomly selected to be interviewed.

Dress Rehearsal

The Dress Rehearsal is the final test in the development cycle and focuses mainly on the procedural and timing aspects of the survey. It is primarily an operational test, although questionnaire design errors (eg sequencing errors) can also be identified, investigated and corrected. The objectives of the Dress Rehearsal were to:
The Dress Rehearsal was conducted in April/May 2008 in both remote and non-remote locations in Victoria, SA, WA and the NT.

DATA COLLECTION

The survey was conducted under the authority of the Census and Statistics Act 1905. The ABS sought the willing cooperation of households in this survey. For survey questions of a particularly sensitive nature (eg substance use), selected persons (or their proxies) may not have provided a response. More detailed information on allowed survey responses (including refusals) is provided in each of the topic-based chapters.

The confidentiality of all information provided by respondents is guaranteed: under the Census and Statistics Act 1905, the ABS cannot release identifiable information about households or individuals. All aspects of the survey's implementation were designed to conform to the Information Privacy Principles set out in the Privacy Act 1988, and the Privacy Commissioner was informed of the details of the proposed survey.

Trained ABS interviewers conducted personal interviews at selected private dwellings from August 2008 to April 2009. Interviews were predominantly conducted using a Computer-Assisted Interviewing (CAI) questionnaire. CAI involves the use of a notebook computer to record, store, manipulate and transmit the data collected during interviews. In remote areas, a paper back-up of the questionnaire was available if needed, but was generally not used. In non-remote areas, a self-enumerated paper form was used to collect information on substance use.

Prior to enumeration, ABS interviewers participated in cultural awareness training, which provided information specifically developed for surveys involving Aboriginal and Torres Strait Islander people. The training outlined the ABS protocol for conducting surveys in community areas and described cultural considerations for interviewers.

Interviews

Interviewers conducted a screening process to identify Indigenous households, that is, households where one or more household members were identified as being Aboriginal or Torres Strait Islander. Interviewers went to dwellings in selected areas and asked one usually resident household member (aged 18 years or over) whether anyone in the household was of Aboriginal or Torres Strait Islander origin. If the household spokesperson stated that one or more usual residents were Aboriginal or Torres Strait Islander, the household form was commenced.

The household form collected general characteristics of the household from one usually resident household member aged 18 years or over. This information included:
Based on this demographic information, individuals were randomly selected for personal interview. For selected households in discrete remote Indigenous communities and outstations, up to one Indigenous person aged 15 years and over and up to one Indigenous child aged 0-14 years were randomly selected. For selected households in non-remote and remote non-community areas, up to two Indigenous persons aged 15 years and over and up to two Indigenous children aged 0-14 years were randomly selected. An elected household spokesperson also answered some financial and housing items on behalf of other household members, including:
If a usually resident household member aged 18 years or over was not available, interviewers made appointments to call back to the household as necessary. Information on demographic and household characteristics is provided in the Population characteristics chapter. A full list of the topics included in both the personal interviews and the household form is provided in the Introduction.

In order to conduct a personal interview with the selected person (ie the respondent), interviewers made appointments to call back to the household as necessary. In some cases appointments for call-backs were made by telephone; however, all interviews were conducted face-to-face. Due to the sensitive nature of the survey questions, it was suggested that interviews be conducted in private. However, interviews may have been conducted either in private or in the presence of other household members, according to the wishes of the respondent. Interviews, including the household assessment, took on average 109 to 119 minutes per fully-responding household in remote and non-remote areas.

Personal interviews were conducted with selected Indigenous persons aged 15 years and over. Exceptions occurred where the selected person:
In the above instances, a proxy interview may have been organised. Where the selected person was mourning the death of a family member (Sorry Business), a personal interview was not pursued.

Proxy interviews were used to collect information on selected Indigenous children aged 0-14 years. Wherever possible, the proxy was a parent or guardian. If no parent or guardian was available, a close relative or other household member who had responsibility for the child provided responses. Where consent for interview was not given by a parent or guardian of an Indigenous person aged 15-17 years, a personal interview was not conducted.

Questionnaire

The questionnaire was administered by experienced ABS interviewers, who had received specific training for the survey. The questionnaire was further supported by detailed interviewer instructions, covering general procedural issues as well as specific instructions relating to individual questions.

The questionnaire is not fully indicative of the range of information available from the survey, as additional items were created in processing the data. For example, ABS classifications were applied to raw data inputs to create labour force status. Additionally, some questions were asked solely for the purpose of enabling or clarifying other questions, and are not available in survey results.

Initial household information was collected from one usually resident household member aged 18 years or over using a Household Form. This was similar in design to the household form used by the ABS Monthly Population Survey (MPS). From this information, one Indigenous adult and one Indigenous child in the household (remote areas), or up to two Indigenous adults and up to two Indigenous children in the household (non-remote areas), were randomly selected to complete a personal interview.

The personal interview consisted of a number of separate modules, collecting information on demographics; language and culture; social capital; life experiences; health; education; work; income and finances; housing and mobility; transport; information technology; and safety, crime and justice.

Computer assisted interviewing (CAI)

Interviews were conducted using a Computer Assisted Interviewing (CAI) questionnaire. CAI involves the use of a notebook computer to record, store, manipulate and transmit the data collected during interviews. This type of questionnaire offers important advantages over paper versions, including:
The questionnaire employed a number of different approaches to recording information at the interview:
To ensure consistency of approach, interviewers were instructed to ask the interview questions as shown in the questionnaire. In certain areas of the questionnaire, interviewers were asked to use indirect and neutral prompts, at their discretion, where the response given was, for example, inappropriate to the question asked or lacked the detail necessary for classification and coding.

Copies of the survey questionnaire

With the release of this publication, a paper copy of the 2008 NATSISS questionnaire has also been made available on the ABS website. The survey questionnaire is provided as a reference to the 2008 NATSISS and should not be used for administering survey interviews.

DATA PROCESSING

Data capture

Computer-based systems were used to collect and process data from the survey. The survey used computer-assisted interviewing (CAI) for data collection and the ABS Household Survey Facilities (HSF) system to process the survey. The use of CAI ensured that respondents were correctly sequenced through the questionnaire, and inbuilt edits meant that some issues could be clarified with the respondent at the time of interview.

Interviewer workloads were electronically loaded on receipt in the ABS office in each state or territory. Checks were made to ensure the workloads were fully accounted for and that questionnaires for each household and respondent were completed. Problems with the questionnaire identified by interviewers were resolved, where possible, by using other information contained in the questionnaire or by referring to the comments provided by interviewers.

Coding

Computer-assisted coding was performed on responses to questions on: family relationships; geography; language; education; occupation; and industry.

Classifications

Family relationships
Coding is based on household information collected for all persons in each dwelling. All usual residents were grouped into family units and classified according to their relationship within the family.

Geography
Geography data were classified according to the Australian Standard Geographical Classification (ASGC), Jul 2008 (cat. no. 1216.0).

Language
Languages spoken were coded using the Australian Standard Classification of Languages (ASCL), 2005-06 (cat. no. 1267.0).

Education
Coding is based on the level and field of education as reported by respondents and recorded by interviewers. Educational attainment data were classified according to the Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0).

Occupation
Data were classified according to the Australian and New Zealand Standard Classification of Occupations (ANZSCO), First Edition, 2006, and the Australian Standard Classification of Occupations (ASCO), Second Edition, 1997 (cat. no. 1220.0).

Industry
Data were classified according to the Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 and 1993 editions (cat. no. 1292.0).

ABS office coding

A number of open-ended questions required coding by ABS office staff. An example of office coding occurred when people were asked about the types of contact they had made with family or friends who did not live with them (eg landline phone, mobile phone, internet, mail). People were asked to nominate one or more response categories from a provided list, or they could report an 'other' type of contact, which was then specified in a text field. Where possible, these text responses were allocated a code based on the original list of categories.
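To illustrate the office-coding step just described, the following is a minimal sketch in Python of allocating codes to 'other' text responses. The category codes and keyword lists are hypothetical, not the actual 2008 NATSISS code frame; unmatched responses are retained under a residual 'other' code.

# A minimal sketch of the office-coding step described above: free-text
# 'other' responses are allocated a code from the original category list
# where possible. The codes and keyword lists here are hypothetical, not
# the actual 2008 NATSISS code frame.

CONTACT_CATEGORIES = {
    1: ("landline phone", ["landline", "home phone"]),
    2: ("mobile phone",   ["mobile", "cell"]),
    3: ("internet",       ["internet", "email", "e-mail", "online"]),
    4: ("mail",           ["mail", "letter", "post"]),
}
OTHER_CODE = 99  # residual code for responses that cannot be matched

def code_text_response(text: str) -> int:
    """Allocate a category code to a free-text 'other' response."""
    lowered = text.lower()
    for code, (_label, keywords) in CONTACT_CATEGORIES.items():
        if any(keyword in lowered for keyword in keywords):
            return code
    return OTHER_CODE  # retained as a genuine 'other' response

# Example: a respondent wrote "sent a letter" in the 'other' text field.
print(code_text_response("sent a letter"))  # 4 (coded to 'mail')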
Output processing

Information from the questionnaire, other than names and addresses, was stored on a computer output file in the form of data items. In some cases, items were formed from answers to individual questions, while in other cases data items were derived from answers to several questions.

During processing of the data, checks were performed on records to ensure that specific values lay within valid ranges and that relationships between items were within limits deemed acceptable for the purposes of this survey. These checks were also designed to detect errors which may have occurred during processing, and to identify instances which, although not necessarily errors, were sufficiently unusual or close to agreed limits to warrant further examination.

Throughout processing, frequency counts and tables containing cross-classifications of selected data items were produced for checking purposes. The purpose of this analysis was to identify any problems in the input data which had not previously been identified, as well as errors in derivations or other inconsistencies between related items.

Output file

A multi-level hierarchical data file was produced. The contents of the person and household levels are briefly described below:
More information on the output file is provided in the Using the CURFs chapter.

Validation checks

The output data file was extensively validated, including through item-by-item examination of input and output frequencies, checking of populations through derivations, checks of the internal consistency of items and across levels of the file, and data confrontation. Despite these checks, it is possible that some small errors remain undetected on the file.

As a result of the validation processes, some adjustments were made to the data on a record-by-record basis. Changes were made with reference to other information provided by respondents, and only to correct clear errors that were not identified during the survey interview, for example, where the reported amount of time lived in a previous and/or current dwelling was inconsistent with a person's age. Adjustments may also have occurred as the result of an edit not being applied or being by-passed, for example, where the response to a question was recorded as 'don't know' and the question was subsequently answered. In cases where the interviewer did not or was unable to return to the original question, the details may have been recorded in a text file.

In general, unless data were 'impossible' they have not been corrected, and results are essentially 'as reported'. To treat 'improbable' responses (eg extremely high alcohol consumption or income), some outliers have been reclassified, primarily to 'not stated' values. Some of these adjustments were made record-by-record; for others, a global change was applied to all records where reported values lay outside acceptable limits. Decisions to apply treatments or adjustments to the data were made, as appropriate, by the ABS.

Data confrontation

In the final stages of processing, extensive analyses, including data confrontation, were undertaken to ensure the survey estimates conformed to known or expected patterns, and were broadly consistent with data from the 2002 NATSISS or from other ABS data sources, allowing for methodological and other factors which might affect comparability. Comparisons of numerous demographic and socio-economic characteristics indicated that some of the 2008 NATSISS estimates did not align well with other ABS estimates, due to potential undercoverage and other factors outlined elsewhere. As a result, additional benchmarks were incorporated into the survey's weighting strategy. See 'Weighting, benchmarking and estimation' for more information.

Detailed analyses were undertaken for each topic to check for consistency with data from the following sources:
Checks undertaken included:
For new survey topics, external sources were used for comparison. For example, administrative data from the Australian Institute of Health and Welfare (AIHW) were used to check the maternal health and child health topics.

Data available from the survey are essentially 'as reported' by respondents. The procedures and checks outlined above were designed primarily to minimise errors occurring during processing. In some cases it was possible to correct errors or inconsistencies in the data originally recorded in the interview through reference to other data in the record; in other cases this was not possible, and some errors and inconsistencies may remain on the data file.

SURVEY METHODOLOGY

Scope and coverage

Scope

The scope of the survey is all Indigenous people who were usual residents of private dwellings in Australia. Private dwellings are:
Usual residents are people who usually live in a particular dwelling and regard it as their own or main home. People usually resident in non-private dwellings, such as hotels, motels, hostels, hospitals, nursing homes or short-stay caravan parks, were not in scope. Further scope exclusions for this survey were:
Coverage

The 2008 NATSISS was conducted in remote and non-remote areas in all states and territories of Australia, including discrete Indigenous communities. Coverage refers to the extent to which the defined scope is represented by the achieved sample; undercoverage is the shortfall between the population represented by the achieved sample and the in-scope population. Undercoverage can be planned or unplanned. In this survey, coverage exclusions were explicitly applied to some people who were part of the in-scope population, to manage enumeration costs. These people were not included in the sampling frame (based on where Indigenous households were identified in the 2006 Census of Population and Housing). These exclusions were:
These coverage exclusions result in an estimated undercoverage of approximately 6% of in-scope Indigenous persons in Australia. Although these areas were not enumerated, the final sample was weighted to population benchmarks to account for these exclusions. In addition, there were further unplanned exclusions which resulted in increased undercoverage. Further information on undercoverage is provided in the Interpretation of results chapter.

The projected resident Indigenous population at 31 December 2008, excluding those living in non-private dwellings, was 520,350. At the same date, there were an estimated 24,400 Indigenous people living in non-private dwellings throughout Australia and approximately 200 Indigenous people considered to be migratory or living offshore.

For this survey, the population benchmarks were projections of the most recently released Estimated Resident Indigenous Population (ERP) data, in this case at 30 June 2006. For information on the methodology used to produce the projected resident Indigenous population, see Experimental Estimates and Projections, Aboriginal and Torres Strait Islander Australians, 1991 to 2021 (cat. no. 3238.0). To create the population benchmarks for the 2008 NATSISS reference period, the Indigenous ERP from 30 June 2006 was projected forward to 31 December 2008 using average annual growth rates observed between the 2001 and 2006 Censuses.

The coverage of this survey included persons who self-identified (in a face-to-face question) as being Indigenous, whereas the benchmarks (scope) are based on persons who were identified as being Indigenous in the Census and the Post Enumeration Survey (PES), which use different identification mechanisms. It is worth noting that whether a person identifies as Indigenous or not can depend on the data collection methodology. For example, in the 2006 PES, which collected data via face-to-face interview, around 10% of the respondents who had identified themselves as Indigenous in the 2006 Census, via a self-completion form, stated they were non-Indigenous.

Sample design

The 2008 NATSISS was designed to produce reliable estimates at the national level and for each state and territory. For selected states and territories (NSW, Qld, WA and the NT), the sample for children aged 0-14 years and persons aged 15 years and over was allocated to produce estimates with a relative standard error (RSE) of no greater than 25% for characteristics that at least 5% of these populations would possess. The survey was also designed to provide reliable estimates at the national level for children aged 0-3 years, and at the Victorian Inner City and Regional levels for children aged 0-14 years, with the same RSE requirements. As with previous ABS Indigenous surveys, additional sample was collected in the Torres Strait Area to ensure data of sufficient quality would be available for the Torres Strait Area and the remainder of Queensland.

Funding was received from the Council of Australian Governments (COAG), as agreed through the Working Group on Indigenous Reform, to enable the collection of data on Indigenous children aged 0-14 years. Funding was also received from the Victorian Government Department of Education and Early Childhood Development (DEECD) to enable additional sample to be included for Victoria.

Community and non-community samples

The sample design incorporated a random selection of:
The samples for community and non-community areas were designed separately, with each involving a multistage sampling process. The two designs were combined to ensure that all areas of the in-scope population had a chance of being selected in the survey, and it was not possible for a person to be selected in both the community and non-community samples. The sample design differed between community and non-community areas in:
In the following states/territory, the sample design was the same throughout, corresponding with the method used in non-community areas of Qld, SA, WA and the NT:
Community sample

The community sample in remote areas was obtained from a random selection of discrete Indigenous communities and out-stations using a specially developed Indigenous Community Frame. This frame was constructed for operational purposes using counts from the 2006 Census of Population and Housing and information collected in the 2006 Community Housing and Infrastructure Needs Survey (CHINS). All communities on this frame were in remote (or very remote) areas of Qld, SA, WA and the NT. From this frame, 71 Community Sets (each containing one main community and zero or more out-stations) were selected for enumeration. The number of communities selected in each applicable state/territory was:
A random selection of dwellings was made within the selected communities and out-stations, with different selection procedures applied to main communities and out-stations. Depending on the size of the main community, up to 37 in-scope dwellings were selected for enumeration. All in-scope dwellings in selected out-stations were enumerated. Within each selected dwelling, up to one Indigenous person (aged 15 years or over) and up to one Indigenous child (aged 0-14 years) were randomly selected to participate in the survey.

Non-community sample

In non-community areas, dwellings were selected using a stratified multistage area sample. For the first time, Mesh Block level information within Census Collection Districts (CDs) was used to assist in targeting Indigenous people. A Mesh Block is a geographic building block, smaller than a CD, consisting of approximately 50 households. A sample of CDs was randomly selected within each state and territory, with the likelihood of selection based on the number of Indigenous dwellings recorded in the area in the 2006 Census. All Mesh Blocks containing at least one Indigenous household within each selected CD were screened; Mesh Blocks containing no Indigenous households were either excluded on coverage grounds or randomly sampled for screening. This approach significantly reduced screening effort in areas of low Indigenous density, such as major capital cities. In remote and very remote areas, all households in selected CDs were screened.

For each randomly selected dwelling, one usual resident aged 18 years or over was asked whether anyone in the household was of Aboriginal or Torres Strait Islander origin. This screening question was used to identify Indigenous households, from which the sampling of persons was undertaken. If a dwelling contained one or more Indigenous usual residents, random selection for participation in the survey occurred as follows:
The original sample allocation for the non-community component of each state/territory appears in the following table:
In non-community areas, a significantly lower than expected number of households containing Indigenous usual residents was found after the initial screening process (described above). Therefore, additional CDs and Mesh Blocks were selected in Vic, Qld, SA, WA, Tas and the NT, and selected dwellings in these Mesh Blocks were enumerated in early 2009. The lower than expected number of households identified as containing Indigenous usual residents may have been due to the following reasons:
Survey response

After screening, approximately 2.5% of households in non-community areas were identified as having an Indigenous usual resident. Of these households, 83% responded to the survey. This response rate does not take into account the approximately 11% of households which could not be contacted, and for which the Indigenous status of usual residents could therefore not be established. In communities, 78% of in-scope households were fully responding.

Some survey respondents provided most of the required information but were unable or unwilling to provide a response to certain data items. The records for these persons were retained in the sample, and the missing values were recorded as 'don't know' or 'not stated'. No attempt was made to deduce or impute these missing values.

Weighting, benchmarking and estimation

Weighting

Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each sample unit corresponding to the level at which population statistics are produced, eg household or person level. The weight can be considered an indication of how many population units are represented by the sample unit. For the 2008 NATSISS, separate person and household weights were developed.

Selection weights

The first step in calculating weights for each person or household is to assign an initial weight, which is equal to the inverse of the probability of being selected in the survey. For example, if the probability of being selected in the survey was 1 in 45, then the person would have an initial weight of 45 (that is, they would represent 45 people).

After calculating the initial person weights, an adjustment was incorporated into the weighting to account for Indigenous persons not covered by the sample. The initial household weights were similarly adjusted. Analysis indicated that there was a higher level of undercoverage in areas with a smaller Indigenous population. As a result, the initial person weights were adjusted to meet population benchmarks based on the expected Indigenous population size of the area. That is, weights in selected CDs with a low Indigenous population were adjusted so that, within demographic characteristics, they aligned with all CDs of a similar population size. This ensured that people in low population areas were represented by other people in low population areas, and likewise for high population areas. This adjustment was considered important as Indigenous persons in low population areas have different characteristics to Indigenous persons in high population areas. For more information on undercoverage, see the Interpretation of results chapter.

Benchmarking

The person and household weights were separately calibrated to independent estimates of the population of interest, referred to as 'benchmarks'. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population, rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories which may occur due to the random nature of sampling, non-response, non-identification or various other undercoverage factors. This process can reduce the sampling error of estimates and may reduce the level of undercoverage bias. A simplified sketch of these two steps, selection weighting followed by calibration, is given below.
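The following is a minimal sketch in Python of the two weighting steps described above: an initial selection weight equal to the inverse of the probability of selection, followed by calibration of the weights to person-level population benchmarks. Calibration is shown as simple raking over two hypothetical margins (age group and sex); all figures are illustrative, and the ABS production method and the actual NATSISS benchmark set are more elaborate.

# A minimal sketch, with illustrative figures only, of selection weighting
# followed by calibration to population benchmarks. Raking over two margins
# stands in for the more elaborate ABS calibration procedure.

# Each sample record: (age group, sex, probability of selection).
sample = [
    ("15-24", "M", 1 / 45), ("15-24", "F", 1 / 50),
    ("25-44", "M", 1 / 40), ("25-44", "F", 1 / 40),
    ("25-44", "F", 1 / 60), ("45+",   "M", 1 / 35),
]

# Initial weight: inverse of the selection probability (a 1-in-45 chance of
# selection gives a weight of 45, ie the person represents 45 people).
weights = [1 / p for (_, _, p) in sample]

# Hypothetical population benchmarks for each marginal category.
benchmarks = {
    ("age", "15-24"): 110, ("age", "25-44"): 150, ("age", "45+"): 40,
    ("sex", "M"): 160, ("sex", "F"): 140,
}

def weighted_total(dim, cat):
    """Weighted sample count for one benchmark category."""
    idx = 0 if dim == "age" else 1
    return sum(w for rec, w in zip(sample, weights) if rec[idx] == cat)

# Raking: repeatedly scale the weights within each category so the weighted
# totals match the benchmarks, cycling until the margins converge.
for _ in range(50):
    for (dim, cat), bench in benchmarks.items():
        idx = 0 if dim == "age" else 1
        factor = bench / weighted_total(dim, cat)
        weights = [w * factor if rec[idx] == cat else w
                   for rec, w in zip(sample, weights)]

# After calibration, the weighted sample reproduces the benchmark totals.
print(round(weighted_total("age", "15-24")))  # 110
print(round(weighted_total("sex", "M")))      # 160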
A standard approach in ABS household surveys is to calibrate to population benchmarks by state, part of state, age and sex. In terms of the effectiveness of 'correcting' for potential undercoverage bias, it is assumed that, within the weighting classes determined by the benchmarking strategy, the characteristics of the covered population (ie survey respondents) are similar to those of the uncovered population (ie people who were not surveyed). Where this assumption does not hold, biased estimates may result.

Person weights

For this survey, person weights were simultaneously calibrated to the following population benchmarks:
The 'state' population benchmarks consist of the six states and two territories:
'Remoteness area' was categorised into five groups, defined in the table below.
For the main population, the 'age' of respondents was categorised into 14 groups, defined in the table below.
For the 'Victoria by remoteness area by specific age groups by sex' population benchmarks, the 'age' of respondents was grouped as per the main population age groups (see table above), except for people living in outer regional Victoria, for whom 'age' was grouped as shown in the table below.
'Sex' consists of two categories:
'Torres Strait Islander Status' consists of two categories:
'Torres Strait Island Region' consists of three categories:
'Community/non-community' consists of two categories which were defined by the survey design:
Initially, only the first four sets of benchmarks were included in the weighting process. Given there was a higher level of undercoverage than expected, extensive analysis was undertaken to ascertain whether further benchmark variables should be incorporated into the weighting strategy. Analysis indicated that there was greater undercoverage in non-community areas, and that the inclusion of a community/non-community benchmark would improve the quality of the estimates, particularly for the NT. Further details on undercoverage are provided in the Interpretation of results chapter.

The survey was benchmarked to the estimated Indigenous resident population living in private dwellings at 31 December 2008. This estimated population is based on projections of the experimental estimates of the resident Indigenous population at 30 June 2006. More information on the calculation of projections is provided in Experimental Estimates and Projections, Aboriginal and Torres Strait Islander Australians, 1991 to 2021 (cat. no. 3238.0). As people in non-private dwellings (eg hotels) are excluded from the scope of the survey, they have also been excluded from the survey benchmarks. Therefore, the 2008 NATSISS estimates do not (and are not intended to) match estimates for the total resident Indigenous population obtained from other sources.

Household weights

The ABS does not produce Indigenous dwelling counts, so household level benchmarks were not available for the 2008 NATSISS. Instead, the household weights for this survey were derived from the person level benchmarks. This was done by assigning the initial household weight (after the undercoverage adjustment) to all Indigenous persons in the household. These weights were then calibrated to the person level benchmarks, with the constraint that each person in the household must have the same final calibrated weight. The resulting weight was assigned as the final household weight. That is, the weights were calibrated in such a way that the household weights reproduce the known person level benchmarks when all people in a household are included. The sum of the household weights therefore provides only an estimate of the number of Indigenous households. This method was analysed to ensure that person and household level estimates are as consistent as possible.

Estimation

Estimation is a technique used to produce information about a population of interest, based on a sample of units (ie persons or households) from that population. Each record in the 2008 NATSISS contains two weights:
The weights indicate how many population units (ie persons or households) are represented by the sample unit. Replicate weights have also been included: 250 person replicate weights and 250 household replicate weights. The purpose of these replicate weights is to enable calculation of the relative standard error (RSE) of each estimate produced from the survey.

Survey estimates of counts of persons are obtained by summing the weights of persons or households with the characteristic of interest. Estimates of means, such as the mean age of persons, are obtained by summing the weights of persons in each category (eg individual ages), multiplying by the value for each category, aggregating the results across categories, and then dividing by the sum of the weights for all persons. A worked sketch of these calculations is provided at the end of this chapter.

The majority of estimates contained in this publication are based on benchmarked person weights. However, the survey also contains some household estimates based on benchmarked household level weights.

COMPARISON TO THE 2002 NATSISS

The following table provides broad comparisons of the survey design for the 2008 and 2002 surveys.
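As flagged in the Estimation section above, the following is a minimal sketch in Python of forming weighted estimates and a replicate-weight RSE. The records, weights and number of replicates are illustrative (the survey itself carries 250 replicate weights per record), and the delete-a-group jackknife variance formula shown is an assumption for illustration, not necessarily the exact formula the ABS applied.

# A minimal sketch, with illustrative figures only, of estimation from the
# weighted file: a weighted count, a weighted mean, and a replicate-weight
# standard error. The jackknife variance form below is an assumption.

# Each record: (age, has characteristic of interest, main weight, replicate weights).
records = [
    (19, True,  52.0, [50.0, 54.0, 51.5]),
    (34, False, 47.0, [48.5, 45.0, 47.5]),
    (58, True,  61.0, [60.0, 63.0, 59.5]),
]
G = 3  # number of replicate weights per record (250 in the actual survey)

# Estimated count of persons with the characteristic: sum of their weights.
count = sum(w for _, flag, w, _ in records if flag)

# Estimated mean age: weighted sum of ages divided by the sum of all weights.
mean_age = (sum(age * w for age, _, w, _ in records)
            / sum(w for _, _, w, _ in records))

# One replicate estimate of the count per replicate weight.
replicates = [sum(rw[g] for _, flag, _, rw in records if flag)
              for g in range(G)]

# Assumed delete-a-group jackknife variance, then RSE as SE over the estimate.
variance = (G - 1) / G * sum((r - count) ** 2 for r in replicates)
rse = 100 * variance ** 0.5 / count

print(f"count = {count:.0f}, mean age = {mean_age:.1f}, RSE = {rse:.1f}%")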