4430.0.30.002 - Microdata: Disability, Ageing and Carers, Australia, 2003 (Reissue) Quality Declaration
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 22/07/2005 Reissue
SURVEY METHODOLOGY
SCOPE AND COVERAGE

Scope of the survey

The survey included people in both urban and rural areas in all States and Territories, except those living in remote and sparsely settled parts of Australia. For most individual States and Territories the exclusion of these people has only a minor impact on aggregate estimates, because they constitute only a small proportion of the population. However, this is not the case for the Northern Territory, where such persons account for over 20% of the population. The survey included people in both private and non-private dwellings, including people in cared-accommodation establishments but excluding those in gaols and correctional institutions. The scope of the survey was all persons except:
Coverage

Coverage rules were applied to ensure that each person eligible for inclusion in the scope was associated with only one dwelling, and so had only one chance of selection. The household component and the cared-accommodation component of the survey each had their own coverage rules, as follows.
SAMPLE DESIGN AND SELECTION PROCEDURES

Multistage sampling techniques were used to select the sample for the survey. The actual sample included:
Private dwelling selection

The area-based selection of the private dwelling sample ensured that all sections of the population living within the geographic scope of the survey were represented. Each State and Territory was divided into geographically contiguous areas called strata. Strata were formed by first dividing Australia into regions, within State or Territory boundaries, broadly corresponding to the Statistical Division or Subdivision levels of the Australian Standard Geographical Classification. These regions were then divided into Local Government Areas (LGAs) in State capital city Statistical Divisions (metropolitan regions), and into major urban centres and minor urban and rural parts in non-metropolitan regions. Each stratum contains a number of Population Census Collection Districts (CDs), each containing on average about 250 dwellings. The sample was selected so that each dwelling within the same stratum had the same probability of selection. In capital cities and other major urban or high population density areas the sample was selected in three stages:
Cared accommodation and other non-private dwelling selection

The sample of non-private dwellings was selected separately from the sample of private dwellings to ensure they were adequately represented. Non-private dwellings (including cared-accommodation establishments) in each State and Territory were listed, and the sample was drawn directly from these lists. Each non-private dwelling was given a chance of selection proportional to the average number of persons it accommodated. To identify the occupants to be included in the survey, all the occupants of each selected non-private dwelling were listed and a random selection technique was then applied.
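To illustrate the selection method described above, the following sketch selects establishments with probability proportional to size and then takes a simple random sample of occupants within each. It is an illustration only, not the ABS selection system: the establishment names, sizes and sample sizes are invented, and systematic selection on a cumulative size scale is just one common way of implementing probability-proportional-to-size sampling.

```python
import random

# Hypothetical establishments: (name, average number of persons accommodated).
# All names and sizes are invented for illustration.
establishments = [
    ("Hostel A", 40), ("Nursing home B", 120), ("Hospital C", 300),
    ("Hostel D", 25), ("Nursing home E", 80), ("Hostel F", 60),
]

def pps_systematic_sample(units, n):
    """Select n units with probability proportional to size, using the
    common systematic (cumulative size) method. Units larger than the
    sampling interval can be selected more than once; in practice such
    units are usually included with certainty instead."""
    total = sum(size for _, size in units)
    interval = total / n                   # sampling interval on the size scale
    start = random.uniform(0, interval)    # random start within the first interval
    targets = [start + i * interval for i in range(n)]
    selections, cumulative, t = [], 0.0, iter(targets)
    next_target = next(t)
    for name, size in units:
        cumulative += size
        while next_target is not None and next_target <= cumulative:
            selections.append(name)
            next_target = next(t, None)
    return selections

selected = pps_systematic_sample(establishments, n=3)

# Within each selected establishment, list the occupants and take a simple
# random sample of them (five residents here, purely for illustration).
for name in selected:
    roster = [f"{name} resident {i}" for i in range(1, 31)]   # hypothetical roster
    print(name, random.sample(roster, k=5))
```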
DATA COLLECTION FOR THE HOUSEHOLD COMPONENT

Data for the household component of the survey were collected by trained interviewers, mainly using personal computer-assisted interviewing (CAI). Collection proceeded in a number of stages. First, an interviewer conducted a computer-assisted interview with any responsible adult (ARA) in the household, to:

Personal computer-assisted interviews were conducted with people aged 15 years and over in the identified groups. Proxy interviews were conducted with parents of children with disabilities, and for people aged 15-17 where parental consent for a personal interview was not given. People who were prevented by their disability from responding personally were also interviewed by proxy (i.e. another person in the household answered on their behalf). Where there were language differences (including the need to use sign language), another member of the household was asked to interpret on behalf of, and with the permission of, the respondent. In some cases, arrangements were made to supply an interviewer conversant in the respondent's preferred language.

People who were confirmed as primary carers in their personal interview were also asked to complete a short self-enumerated paper questionnaire during the interview. This method allowed them to provide information on more sensitive issues, as the care recipient would often be present at the interview.

Interviewers for the household component of the survey were recruited from trained interviewers with previous experience in Australian Bureau of Statistics (ABS) household surveys. They were required to participate in CAI training, then in specific training for the survey, using laptop computers. Training emphasised understanding of the survey concepts and definitions, and the procedures necessary to ensure a consistent approach to data collection.

Prior to enumeration, a letter and brochure were sent to each household selected for the survey. These documents explained the purpose of the survey and how it would be conducted. Both contained the ABS guarantee of confidentiality, and the brochure also answered some of the more commonly asked questions.

DATA COLLECTION FOR THE CARED-ACCOMMODATION COMPONENT

Overview

The cared-accommodation component completes the picture of the prevalence of health conditions, disability and levels of specific limitation or restriction in Australia. It also provides an indication of the balance between cared accommodation and community care for people with a disability, by age.

In the 1981 and 1988 surveys, interviews were held with residents of cared accommodation. Many of these residents were not able to respond for themselves, and it was necessary to arrange for family members, who may not have been living nearby, to come and provide proxy interviews. Often it was not possible to find anyone who knew enough to provide the required information. For the 1993 survey the approach changed: a mail-back paper form was used, with a staff contact person as the respondent, and the data collected were limited to the information a staff member could be expected to know from records. This method was also used for the 1998 and 2003 surveys.

Questionnaires

The administrators of selected cared-accommodation establishments were sent a letter informing them of the selection of their establishment in the survey. This letter also provided information on:
Contact Information Form

The Contact Information Form (CIF) was sent, with the initial letter, to the administrators of selected cared-accommodation establishments. The purpose of the CIF was to confirm:
Selection Form

After receipt of the CIF, the ABS dispatched the Selection Form and personal questionnaires to the nominated contact officers. The Selection Form provided instructions on how to list and select a random sample of residents from the establishment.

Personal Questionnaire

Personal questionnaires were completed by staff of the health establishments. The information provided was based on staff members' knowledge of the selected residents and on medical, nursing and administrative records. Details of the data collected and the relevant populations are in the Data item list in the Downloads tab. The personal questionnaire was field tested to ensure:
ESTIMATION PROCEDURES – PERSONS

The estimation procedures developed for this survey ensure that survey estimates conform to independent benchmarks of the Australian population as at June 2003, at the level of state by part of state (or territory) by age group by sex. For the calculation of person estimates, one benchmark was used to weight both the household and cared-accommodation components of the survey. For the questions common to both components, the two components were combined to represent the whole population, whereas for the questions specific to one component, that component represented only its own population.

Benchmarks

The benchmark used was all persons in Australia, excluding persons living in sparsely settled areas of the Northern Territory. Conceptually, as persons in sparsely settled areas did not have a chance of selection in SDAC, they should be removed from the population benchmarks. However, this is difficult to do accurately, so the benchmarks used include persons resident in sparsely settled areas except in the Northern Territory. The effect of this on survey estimates is considered negligible, as the proportion of each State's population resident in sparsely settled areas is very small.

Weighting methodology

Expansion factors, or 'weights', were added to each respondent's record so that the data provided by each person could be expanded to produce estimates relating to the whole population within the scope of the survey. The first step of the weighting procedure was to assign an initial person weight to each fully responding person. The initial person weight was calculated as the inverse of the person's probability of selection in the sample, and takes into account the component of the survey in which the respondent was selected, i.e. the household component or the cared-accommodation component. The next step was to calibrate the initial person weights to a set of person-level population benchmarks. Calibration to benchmarks ensures that the sample survey estimates agree with independent measures of the population at specific levels of disaggregation. It also reduces the impact of differential non-response bias at those levels of disaggregation and reduces sampling error (a simplified sketch of these two steps is given below, after the household estimation procedures).

ESTIMATION PROCEDURES – HOUSEHOLDS

This survey was also designed to produce estimates of numbers of households. Only respondents living in private dwellings were given household weights. The estimation procedures developed for the household estimates ensure that survey estimates conform to independent benchmarks of the Australian population of households as at June 2003, at the level of state by part of state (or territory) by household composition (where household composition is determined by the number of adults and children in a household).

Benchmarks

The benchmark used was all private dwelling households in Australia, excluding households in sparsely settled areas of the Northern Territory.
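The two weighting steps described under 'Weighting methodology' above, namely an initial weight equal to the inverse of the selection probability followed by calibration to population benchmarks, can be sketched as follows. The ABS applies a more general calibration estimator across several benchmark levels at once; the sketch below shows only the simplest cell-by-cell (post-stratification) case, and every record, selection probability and benchmark figure in it is invented.

```python
from collections import defaultdict

# Hypothetical respondent records: selection probability plus a benchmark cell
# (e.g. age group by sex). All values are invented for illustration.
respondents = [
    {"id": 1, "p_selection": 1 / 500, "cell": ("65+", "F")},
    {"id": 2, "p_selection": 1 / 500, "cell": ("65+", "F")},
    {"id": 3, "p_selection": 1 / 800, "cell": ("65+", "M")},
    {"id": 4, "p_selection": 1 / 300, "cell": ("15-64", "F")},
    {"id": 5, "p_selection": 1 / 300, "cell": ("15-64", "M")},
]

# Hypothetical population benchmarks for the same cells.
benchmarks = {
    ("65+", "F"): 1500,
    ("65+", "M"): 900,
    ("15-64", "F"): 400,
    ("15-64", "M"): 350,
}

# Step 1: initial weight = inverse of the probability of selection.
for r in respondents:
    r["initial_weight"] = 1.0 / r["p_selection"]

# Step 2: simple calibration (post-stratification) - scale the initial weights
# within each benchmark cell so the weighted totals match the benchmarks.
cell_totals = defaultdict(float)
for r in respondents:
    cell_totals[r["cell"]] += r["initial_weight"]

for r in respondents:
    adjustment = benchmarks[r["cell"]] / cell_totals[r["cell"]]
    r["final_weight"] = r["initial_weight"] * adjustment

for r in respondents:
    print(r["id"], round(r["final_weight"], 1))
```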
DATA QUALITY

All reasonable attempts have been made to ensure the accuracy of the results of the survey. Nevertheless, two potential sources of error, sampling error and non-sampling error, should be kept in mind when interpreting the results.

SAMPLING ERROR

Since the estimates are based on information obtained from a sample of the population, they are subject to sampling error (or sampling variability). Sampling error refers to the difference between the results obtained from the sample and the results that would be obtained if the entire population were enumerated. Factors which affect the magnitude of the sampling error include:
Standard error

One measure of sampling variability is the standard error (SE). The SE is based on the normal distribution and allows predictions to be made about the accuracy of the data. For example, there are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if the population had been fully enumerated. The relative standard error (RSE) is the SE expressed as a percentage of the estimate to which it relates. Very small estimates may be subject to such high RSEs as to seriously detract from their value for most reasonable purposes. Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes. Estimates with RSEs between 25% and 50% are included in Australian Bureau of Statistics (ABS) publications, but are preceded by the symbol * as a caution that they are subject to high RSEs. Estimates with RSEs greater than 50% are considered highly unreliable and are preceded by the symbol **.
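The RSE rule above translates directly into a small helper function. The estimates and standard errors below are invented, and the treatment of estimates falling exactly on the 25% or 50% boundaries is an assumption rather than a documented ABS convention.

```python
def rse(estimate, standard_error):
    """Relative standard error: the SE expressed as a percentage of the estimate."""
    return 100.0 * standard_error / estimate

def flag(estimate, standard_error):
    """Apply the publication convention described above:
    RSE < 25% -> no flag, 25-50% -> '*', greater than 50% -> '**'."""
    r = rse(estimate, standard_error)
    if r > 50:
        return "**"
    if r >= 25:
        return "*"
    return ""

# Invented figures, purely to show how the flags fall out.
for est, se in [(120_000, 6_000), (4_000, 1_300), (900, 600)]:
    print(est, f"RSE={rse(est, se):.0f}%", flag(est, se) or "(reliable)")
```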
NON-SAMPLING ERROR

Additional sources of error which are not related to sampling variability are referred to as non-sampling errors. This type of error is not specific to sample surveys and can also occur in a census. The main sources of non-sampling error are:

Errors related to scope and coverage

Some dwellings may have been incorrectly included in, or excluded from, the survey. An example of this form of error is an unclear distinction between the private and non-private status of a dwelling. Every effort was made to overcome such situations by constantly updating dwelling lists both before and during the survey. There are also difficulties in applying the coverage and scope rules. Particular attention was paid to questionnaire design and interviewer training to keep such cases to a minimum.

Response errors

In this survey, response errors may have arisen from three main sources: deficiencies in questionnaire design and methodology; deficiencies in interviewing technique; and inaccurate reporting by respondents. For example, errors may be caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology, or by poor questionnaire sequence guides causing some questions to be missed. To overcome problems of this kind, individual questions and the overall questionnaire were thoroughly tested before being finalised for use in the survey. Lack of uniformity in interviewing standards would also result in non-sampling errors. Thorough training programs, and regular supervision and checking of interviewers' work, were used to achieve and maintain uniform interviewing practices and a high level of accuracy in recording answers on the electronic collection instrument.

Processing errors

Processing errors may occur at any stage between the initial collection of the data and the final compilation of statistics. In this survey, processing errors may have occurred at the following stages in the processing system:
Edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present, and that specific values lay within expected ranges (a generic sketch of such edits is given below). In addition, at various stages during the processing cycle, tabulations were produced from the data file showing the distribution of persons for different characteristics. These were used as checks on the contents of the data file, to identify unusual values which may have significantly affected estimates, and to identify illogical relationships not previously picked up by the edits.

Non-response bias

Non-response occurs when people cannot or will not provide information, or cannot be contacted. It can be total (none of the questions answered) or partial (some questions unanswered, for example because of an inability to answer or to recall the information). Non-response can introduce bias because non-respondents may have different characteristics from those who responded to the survey. The size of the bias depends on these differences and on the level of non-response. It is not possible to accurately quantify the nature and extent of the differences between respondents and non-respondents in this survey; however, every effort was made to reduce non-response bias through careful survey design and estimation procedures.
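The range and logical-sequence edits referred to above amount to running a set of rules over each record and flagging failures for investigation. The sketch below is a generic illustration: the item names, rules and thresholds are invented and are not the survey's actual edit specification.

```python
# Hypothetical record layout and edit rules, for illustration only; the actual
# SDAC edits and item names are not reproduced here.
RECORD_RULES = {
    "age_in_range": lambda r: 0 <= r["age"] <= 120,                        # range edit
    "carer_age": lambda r: not r["is_primary_carer"] or r["age"] >= 15,    # logical sequence edit
    "hours_of_care": lambda r: r["hours_of_care"] is None
                               or 0 <= r["hours_of_care"] <= 168,          # range edit
}

def run_edits(record):
    """Return the names of any edit rules the record fails."""
    return [name for name, rule in RECORD_RULES.items() if not rule(record)]

sample_record = {"age": 14, "is_primary_carer": True, "hours_of_care": 200}
print(run_edits(sample_record))   # -> ['carer_age', 'hours_of_care']
```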
RESPONSE RATES

Response rates for both the household and cared-accommodation components were high. Of the 16,039 private dwellings and special dwelling units in the effective sample, 89% were either fully responding or adequately complete. Of the 592 health establishments in the cared-accommodation component, 542 (92%) were fully responding.

TABLE 4.1 HOUSEHOLD COMPONENT, Response rates

TABLE 4.2 CARED-ACCOMMODATION COMPONENT, Response rates