Sampling variability
Since the estimates are based on information obtained from a sample of the population, they are subject to sampling variability (or sampling error); that is, they may differ from the figures that would have been obtained from an enumeration of the entire population using the same questionnaires and procedures. The magnitude of the sampling error associated with a sample estimate depends on the following factors:
Measure of sampling variability
One measure of sampling variability is the standard error. There are about two chances in three that a sample estimate will differ by less than one standard error from the figure that would have been obtained if all dwellings had been included in the survey, and about nineteen chances in twenty that the difference will be less than two standard errors. The relative standard error (RSE) is the standard error expressed as a percentage of the estimate to which it relates. Very small estimates may be subject to such high relative standard errors as to detract seriously from their value for most reasonable purposes. Only estimates with relative standard errors of less than 25% are considered sufficiently reliable for most purposes. However, estimates with relative standard errors of 25% or more are included in ABS publications of results from this survey: estimates with an RSE of 25% to 50% are preceded by the symbol * as a caution to indicate that they are subject to high relative standard errors, while estimates with an RSE greater than 50% are preceded by the symbol ** to indicate that the estimate is too unreliable for general use. Standard errors of estimates from this survey are available, compiled using two different methodologies:
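The RSE reliability rule described above can be sketched in a few lines. This is an illustrative sketch only; the function names are hypothetical and not ABS code.

```python
# Illustrative sketch of the RSE rule described above (hypothetical names).

def rse(estimate: float, standard_error: float) -> float:
    """Relative standard error as a percentage of the estimate."""
    return 100.0 * standard_error / estimate

def reliability_flag(rse_pct: float) -> str:
    """Annotation used in ABS publications of results from this survey."""
    if rse_pct < 25:
        return ""    # sufficiently reliable for most purposes
    elif rse_pct <= 50:
        return "*"   # high relative standard error: use with caution
    else:
        return "**"  # too unreliable for general use

# An estimate of 1,200 with a standard error of 180 has an RSE of 15%,
# so it carries no caution symbol.
flag = reliability_flag(rse(1200, 180))
```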
Standard errors based on the split-halves methodology are shown in the publication National Health Survey: Summary of Results, Australia 2001 (cat. no. 4364.0) and also in National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0). Standard errors compiled using the replicate weight methodology, at the level of individual cells in a table, are available for National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) via the web site, and also on request through special data services.
Significance testing for Indigenous results
The relatively small number of Indigenous persons sampled means that results for health characteristics with low population prevalences are subject to relatively large sampling error. Comparisons of results between sub-populations need to take account of the confidence that can be placed on the sample results. While significance tests are always encouraged, it is particularly important that any comparisons involving Indigenous data are tested before inferring that a real difference exists. Some differences may appear quite marked, but because of the relatively large sampling error associated with the Indigenous sample a significant difference may not exist. Table 1 in the publication National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) presented 20 age standardised summary results for both the Indigenous and non-Indigenous sub-populations. Significance tests were performed on key comparisons within this table; for 9 of these 20 key data items, sampling error prevents a conclusion (with 95% confidence) that the sample results for the two sub-populations are statistically different. Table 7.1 below presents the age standardised results for the 11 summary characteristics for which the differences were significant (at the 95% level).
When the Indigenous results are compared between remote and non-remote areas, the Indigenous sample is further divided, and significant differences (with 95% confidence) were limited to only 4 of the key summary health measures, as shown in Table 7.2 below.
TABLE 7.1 Significant comparisons between Indigenous and Non-Indigenous persons - age standardised
TABLE 7.2 Significant comparisons between Indigenous persons in remote and non-remote areas - age standardised
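The significance testing described above can be approximated with a standard two-sample comparison: two estimates differ at the 95% level when the gap between them exceeds 1.96 times the standard error of the difference. A minimal sketch, assuming independent samples (the function name is illustrative, not an ABS procedure):

```python
import math

def difference_is_significant(est1: float, se1: float,
                              est2: float, se2: float,
                              z_crit: float = 1.96) -> bool:
    """Approximate 95% significance test for the difference between two
    independent sample estimates. The standard error of the difference
    is sqrt(se1**2 + se2**2)."""
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(est1 - est2) > z_crit * se_diff

# Hypothetical prevalence estimates of 40% (SE 2) and 30% (SE 3): the gap
# of 10 percentage points exceeds 1.96 * sqrt(4 + 9) ~= 7.1, so the
# difference would be significant at the 95% level under these assumptions.
result = difference_is_significant(40.0, 2.0, 30.0, 3.0)
```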
Sampling error also affects the extent to which meaningful time series comparisons can be made. As 1995 NHS data are not available for remote areas, comparisons between Indigenous estimates for 1995 and 2001 are restricted to non-remote areas. The small size of the 2001 Indigenous sample from non-remote areas, and the even smaller sample for the 1995 survey, mean that differences can be identified (with 95% confidence) for only 3 of the summary data items shown in Table 7.3 below.
TABLE 7.3 2001 NHS(I) Indigenous Results - age standardised
The very large number of comparisons possible in each table (between areas, sexes, ages, Indigenous status, condition prevalences etc., and combinations of these) meant that not all potential differences could be tested for statistical significance. To highlight the issue of sampling error and whether differences were significant, and to assist users in interpreting results, estimates in tables 1 and 2 of the publication National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) were footnoted to indicate where apparent differences were not statistically significant.
Non-sampling errors
The imprecision due to sampling variability should not be confused with inaccuracies that may occur for other reasons, such as errors in response and reporting. Inaccuracies of this kind are referred to as non-sampling errors, and may occur in any enumeration, whether it be a full count or a sample. The major sources of non-sampling error are:
Each of these sources of error is discussed in the following paragraphs.
Errors related to survey scope
Some dwellings may have been inadvertently included or excluded because, for example, it was unclear whether they were private or non-private dwellings. All efforts were made to overcome such situations by constant updating of lists both before and during the survey. Also, some persons may have been inadvertently included or excluded because of difficulties in applying the scope rules concerning who should be identified as a usual resident, and concerning the treatment of some overseas visitors. Other errors which can arise from the application of the scope and coverage rules are outlined in the section Scope and Coverage.
Response errors
In this survey, response errors may have arisen from three main sources: deficiencies in questionnaire design and methodology; deficiencies in interviewing technique; and inaccurate reporting by the respondent. Errors may be caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology, or by poor overall layout of the questionnaire causing questions to be missed. To overcome problems of this kind, individual questions, and each questionnaire as a whole, were thoroughly tested before being finalised for use in the survey. Testing took two forms:
As a result of both forms of testing, modifications were made to question design, wording, ordering and associated prompt cards, and some changes were made to survey procedures. In considering modifications it was sometimes necessary to balance better response to a particular item or topic against increased interview time or effects on other parts of the survey, with the result that acceptable though not necessarily optimal approaches were adopted in some instances; for example, in the collection of data on the usual intake of fruit and vegetables. Although such changes would have had the effect of minimising response errors due to questionnaire design and content issues, some errors will inevitably have occurred in the final survey enumeration. Because the survey is quite large, reporting errors may also have resulted from interviewer and/or respondent fatigue (i.e. loss of concentration), particularly for those respondents reporting for themselves and several children. While efforts were made to minimise errors arising from deliberate misreporting or non-reporting by respondents (e.g. through emphasising the importance of the data, and through checks on consistency throughout the questionnaires), some instances will inevitably have occurred. Reference periods used in relation to each topic were selected to suit the nature of the information being sought. However, it is possible that the reference periods did not suit every person for every topic, and that difficulty with recall may have led to inaccurate reporting in some instances. Lack of uniformity in interviewing standards will also result in non-sampling errors. Thorough training and retraining programs, and regular supervision and checking of interviewers' work, were the methods employed to achieve and maintain uniform interviewing practices and a high level of accuracy in recording answers on the survey questionnaire (see Data Collection: Interviews).
Non-uniformity of the interviewers themselves is also a potential source of error, in that the impression made upon respondents by personal characteristics of individual interviewers, such as age, sex, appearance and manner, may influence the answers obtained.
Non-response bias
Non-response may occur when people cannot or will not cooperate in the survey, or cannot be contacted by interviewers. Non-response can introduce bias into the results obtained, in that non-respondents may have different characteristics and behaviour patterns in relation to their health than those persons who responded to the survey. The magnitude of the bias depends on the extent of the differences and the level of non-response. The 2001 NHS(G) achieved an overall response rate of 92% (after sample loss). The response rate for the 2001 NHS(I) was 91% in non-sparsely settled areas and 87% in sparsely settled areas. Data to accurately quantify the nature and extent of the differences in health characteristics between respondents and non-respondents are not available. Under- or over-representation of particular demographic groups in the sample is compensated for at the State, section of State, sex and age group levels in the weighting process. Other disparities are not adjusted for. Individuals for whom a partial response was obtained were treated as fully responding for estimation purposes if sufficient information was recorded; e.g. if the only questions not answered related to income or age (provided the interviewer had recorded an estimate), the non-response items were coded to 'not stated'. With the exception of responses to the Supplementary Women's Health Questionnaire, if any other questions were not answered, respondents were treated as non-responding (i.e. as if no questionnaire had been obtained).
Missing answers to questions in the Supplementary Women's Health Questionnaire were recorded as "not stated"; generally, information from these questionnaires was only completely omitted from the survey data file (i.e. treated as if the questionnaire was not received at all) when the information provided was considered so scant as to be of no use. In the 2001 NHS(I), Indigenous facilitators were used to assist with interviews in an attempt to further reduce the impact and level of non-response. The sample achieved was weighted to population benchmarks to reduce the effect of any non-response bias.
Processing errors
Processing errors may occur at any stage between initial collection of the data and final compilation of statistics. Specifically, in this survey, processing errors may have occurred at the following stages in the processing system:
A number of steps were taken to minimise errors at various stages of processing:
Other factors affecting estimates In addition to data quality issues, there are a number of other factors, both general and specific to individual topics, which should be considered in interpreting the results of this survey. The general factors affect all estimates obtained, but may affect topics to a greater or lesser degree depending on the nature of the topic and the uses to which the estimates are put. This section outlines these general factors. Additional issues relating to the interpretation of individual topics are discussed in the topic descriptions provided in other sections.
This should be taken into consideration when comparing results from this survey with data from other sources where the data relate to different reference periods.
Specific data quality issues for the sparse NHS(I)
Based on experience with previous ABS surveys of Aboriginal and Torres Strait Islander peoples in sparsely settled areas, it was known that standard survey concepts and questions are not always appropriate. Specific testing was conducted in sparsely settled areas with the aim of testing as much of the content as possible, while recognising that some items could not be fully tested. However, based on its best judgement, the ABS proceeded to enumeration with the view that such data would be assessed for quality based on interviewer feedback and post-field quality checks, and would not be released if data quality was considered unacceptable.
Data quality investigations undertaken on sparse NHS(I) data before final enumeration
A validation process was undertaken on the 2001 NHS(I) pilot test data collected in sparsely settled areas to assess its quality. The following methods were employed to validate each data item:
Approximately 50% of completed questionnaires from the pilot test were randomly selected and validated against the respondents' health clinic records where possible, after first obtaining each respondent's permission.
Each data item was evaluated based on interviewer feedback on the performance of survey questions in the field. The outcomes from field testing and the validation process indicated that approximately 40% of the 2001 NHS(G) content would be of acceptable data quality if collected for the 2001 NHS(I) sample in sparsely settled areas using the personal interview collection methodology. This assessment was based on the negligible level of mismatch encountered when validating topics against clinic records (e.g. topics such as visits to hospitals, hearing problems, sight problems and injuries), or the favourable assessment given to the performance of questions by interviewers (e.g. general demographics data, smoking, dental visits). Field testing also indicated that the number of data items collected in sparsely settled areas could be increased to approximately 60% of the 2001 NHS(G) content if selected health information were provided by health clinic staff (based on clinic records). The topics judged to be best suited for collection from clinics were type and number of medications used, health service usage, immunisation status, type of diabetes, and some women's health items. Based on these findings, the ABS sought consent and support from State and Territory health departments and representative Aboriginal and Torres Strait Islander community controlled health organisations in October 2000 for health clinic staff to provide selected information on respondents' behalf from their records, with each respondent's written permission. There was general support for ABS efforts in collecting reliable health information on Aboriginal and Torres Strait Islander peoples. However, the ABS was advised that relevant ethics committees in each State or Territory would have to fully consider all the implications of using health clinic records, and that this process could be very lengthy.
As there was limited time to follow this through before enumeration was due to commence, the collection of selected health data from community health clinics was not adopted for the 2001 NHS(I). Instead, it was decided to collect items via personal interview, including items where some data quality concerns remained, with a view to examining the data collected. If no quality issues serious enough to undermine the usefulness of the data were encountered, the data would be released. The data items listed below were those where further investigation was considered necessary. Interviewers were instructed to record details about these items so an assessment could be made prior to releasing the results. Items from the 2001 NHS(I) where further investigation was required to assess quality:
Data quality investigations undertaken on sparse NHS(I) data after final enumeration Interviewer assessments and feedback As part of the sparse NHS(I) questionnaire, all interviewers completed an 'Interviewer's Assessment'. The aim of this assessment was to provide qualitative information that would assist in data validation. This assessment covered the items where further investigation was required and also other items where there can be some level of inaccuracy in reporting. Rather than having the interviewer subjectively assess the accuracy of the respondent's answers, interviewers were instructed to record 'observations' on particular topics and questions. These observations were then used to assess data quality. Interviewers were asked to assess responses given using the following 4-point scale:
The topics included for interviewer assessment were:
Education
Self-assessed health
Adult immunisation
Diabetes
Eyesight
Cancer
Cardiovascular conditions
Long-term health conditions
Hospital visits
Dentist visits
Doctor visits
Women's health
Income
Height and weight.
Interviewer assessments and feedback indicated that height and weight were the questions least accurately reported. However, when sparse NHS(I) respondents were unsure of their height or weight they were asked if they would agree to be measured, and interviewers reported that nearly all respondents who were unsure agreed to be measured. It should be noted that height and weight are not well self-reported in the general population, and some level of inaccuracy is always expected. Most questions that asked for a reference period (e.g. how long since the respondent had seen an optometrist/dentist/doctor) were considered to contain some inaccuracies. When required, the interviewer prompted the respondent with a specific reference event (e.g. Christmas) in order to gain a more accurate response. While some problems were expected with the 'Income' section of the questionnaire, results from the Interviewer Assessments showed that, with the exception of Q.374 ('Before income tax and other expenses are taken out, how much does your spouse/partner usually receive?'), responses to these questions were considered by interviewers to be reasonably accurate. General interviewer feedback received from all states was relatively consistent, with the same types of questions being reported as problematic in all reports. Contributing factors reported included language barriers, unfamiliar western concepts, varying skills of Indigenous facilitators, a tendency for respondents to say they are perfectly healthy, and the application of individual-based survey methods within a group-oriented culture.
Validation against external data sources
In validating the survey data, results were compared against external data sources where possible.
As there is only limited health information available on the Indigenous population, and the data that are available are collected using different methodologies, direct comparisons between 2001 NHS(I) data and other external sources were not always possible. When direct comparisons were not possible, similar data were compared, where available, to check that general trends in the data were similar. For some items, because the data are not readily available from another source, no validation against an external data source was possible. The 2001 NHS(I) data were also compared with comparable data collected in the 2001 NHS(G) and the 1995 NHS to ensure the data followed expected patterns. Although it was not always possible to verify results against an external source, extensive internal validation was performed, as outlined previously, for all data items.
Outcomes of data quality investigations on sparse NHS(I) data
The data quality investigations undertaken did not provide strong evidence that particular data items collected had quality concerns serious enough to undermine the usefulness of the data. However, it should be noted that this assessment was made based on information available to the ABS at the time, and only limited external validation was possible for some items. The items listed above as those where further investigation was required should be used with caution, and if they appear to contradict another reliable source of data the ABS should be contacted. Some apparent discrepancies can be due to differences in the method of collection or the scope of the data collection, or various other factors, without either source of data necessarily being incorrect. The ABS will investigate any apparent discrepancies, and a decision will be made regarding the reliability of the data item in question.
INTERPRETATION OF RESULTS
As noted above, there is a variety of factors which have impacted on the quality of the data collected.
Through various means in the development and conduct of this survey the ABS has sought to minimise the effects of these factors; however, only sampling error can be quantified, enabling users to adjust for possible errors when using and interpreting the data. For the other issues affecting the data, information is not available from the survey to enable their effects to be quantified. The relative importance of these factors will differ between topics, between items within topics, and by characteristics of respondents. Comments have been included in individual topic descriptions in this publication to alert users of the data to the more significant issues likely to affect results for that topic or items within it. In part these notes reflect ABS experience of past health and other surveys and feedback from users of data from those surveys; ABS and other research on survey methods and response patterns; testing for this survey; comparisons between survey data and other data sources; and, in part, 'common sense'. However, these comments are indicative only, and are not necessarily comprehensive of all factors impacting results, nor of the relative importance of those factors. Against this background, the following general comments are provided about interpreting data from the survey:
For the 2001 NHS(I), the content collected in sparsely settled areas was a subset of that collected in non-sparsely settled areas; therefore, not all data items are available for the total Indigenous population. Also, no 1995 NHS data are available for sparsely settled areas, restricting comparisons between Indigenous estimates for 1995 and 2001 to non-sparsely settled areas. In both 1995 and 2001, all children of Aboriginal and/or Torres Strait Islander origin living in households in non-sparsely settled areas had a random chance of selection in the 2001 NHS(G); similarly, all such Indigenous children had a chance of selection in the Indigenous supplement to the 1995 NHS. Selected households in non-sparsely settled areas identified as having at least one usual resident of Aboriginal and/or Torres Strait Islander origin were enumerated. However, in the 2001 NHS(I), selected households were screened to identify only those households where at least one adult (18 years or over) of Aboriginal and/or Torres Strait Islander origin was a usual resident. Therefore, Indigenous children living in non-sparsely settled areas where there was no Indigenous adult usually resident in the household (up to one quarter of all Indigenous children in non-sparsely settled areas reside in such households) did not have a chance of selection in the supplement. Indigenous respondents from the 2001 NHS(G) and 2001 NHS(I) samples were weighted and then benchmarked to Indigenous population estimates (for age, sex, and area of usual residence) so that final survey estimates would be representative of the age and sex characteristics of the Indigenous population in different areas.
However, it is possible that the health characteristics of Indigenous children living in households with no Indigenous adults may differ from those of Indigenous children of the same age and sex living in the same non-sparsely settled areas, but in households where Indigenous adults are resident. If such differences exist, then survey results for Indigenous children may under-represent them. Although the methodology employed may under-represent these children in the final estimates, which could affect the interpretation of some results, the under-representation is generally not significant in the context of the sampling error associated with the survey. In the 2004-05 Indigenous Health Survey, field procedures will be changed to provide for adequate representation of Indigenous children in households with no resident Indigenous adult.
AGE STANDARDISATION
Australia's Indigenous population is considerably younger (on average) than the non-Indigenous population, and there is a close relationship between age and health-related issues. It is often misleading to compare Indigenous and non-Indigenous health outcomes unless the data have been age standardised to take account of this difference (i.e. adjusting the results to reflect the age composition of the total Australian population at the 2001 Census). Therefore, in National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) any results presenting total population prevalence rates by Indigenous status were age standardised (about half the tables in the publication). However, for more detailed presentations by age group, the data were not age standardised. Analysis showed that, within narrow age groups, age standardisation did not significantly affect the results.
Therefore, presenting the data without age standardisation provided measures that allowed comparisons between sub-populations, as well as measures of prevalence within the reported sub-populations. For National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001, the direct age standardisation method was used. The formula for direct standardisation is as follows:

Cdirect = Σa (Ca x Psa)

where:
Cdirect = the age standardised estimate of prevalence for the population of interest,
a = the age categories that have been used in the age standardisation,
Ca = the estimate of prevalence for the population being standardised in age category a, and
Psa = the proportion of the standard population in age category a.

Data which have been tabulated according to broad age groupings have not been age standardised, and hence the rates apply to the Indigenous and non-Indigenous populations without adjustment to account for the differing age structures. These rates, together with the total estimates presented in Table 7.4 below, can be used to calculate the actual population estimate for an item of interest. The ABS considers that comparisons of unadjusted rates within the broad age groups presented in National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) would be little different if standardised within the age ranges.
TABLE 7.4 Population estimates: Aboriginal and Torres Strait Islander Persons: Summary health characteristics, Australia, 1995 and 2001(a)
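The direct standardisation formula can be expressed in a few lines of code. This is an illustrative sketch only; the function name and example figures are hypothetical, not survey results.

```python
def direct_age_standardise(prevalence_by_age, standard_population_share):
    """Cdirect = sum over age categories a of Ca * Psa.

    prevalence_by_age:         Ca, prevalence in each age category.
    standard_population_share: Psa, proportion of the standard population
                               in each age category (shares sum to 1).
    """
    return sum(c * p for c, p in zip(prevalence_by_age,
                                     standard_population_share))

# Hypothetical figures: prevalences of 10%, 20% and 40% in three broad age
# groups, weighted by standard population shares of 0.5, 0.3 and 0.2, give
# an age standardised prevalence of 0.19 (19%).
rate = direct_age_standardise([0.10, 0.20, 0.40], [0.5, 0.3, 0.2])
```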
Understanding the comparability of data from the 2001 NHS with data from the previous NHS in 1995 (and with the 1989-90 NHS) is important to the use of those data and the interpretation of apparent changes in health characteristics over time. While the 2001 NHS is deliberately the same as or similar to the 1995 NHS in many ways (and in part to the 1989-90 NHS), there are important differences across most aspects of the surveys: sample design and coverage, survey methodology and content, definitions, classifications, etc. These differences will affect the degree to which data are directly comparable between the surveys, and hence the interpretation of apparent changes in health characteristics over the 1995 to 2001 period. Throughout the topic descriptions and in other parts of this publication, comments have been made about the changes between surveys and their expected impact on the comparability of data. These are general comments based on results of testing, ABS experience in survey development, and a preliminary examination of results from the 2001 survey. As a result they should not be regarded as definitive statements on comparability, and may omit the types of findings which might result from a detailed analysis of the effects of all changes made. The following tables summarise the key differences between the 1995 and 2001 surveys, and hence the degree of comparability between them:
TABLE 7.5: General Survey Characteristics
Sample design/size
While the overall sample of households was about 18% lower in 2001 than in 1995, the enumeration of selected persons only within households has meant the 2001 sample of persons is about half that of the 1995 survey. The 2001 approach has had the effect of spreading the sample more widely and reducing the effects on the final estimates of clustering of characteristics within households. However, the smaller sample in 2001 has the effect of more than doubling the standard errors on estimates, as shown below:
TABLE 7.6 Relative standard errors (%)
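As background to the figures above: under simple random sampling the standard error of an estimate scales with 1/sqrt(n), so halving the person sample would, on its own, inflate standard errors by a factor of about 1.41; larger increases also reflect design effects from clustering and weighting. An illustrative sketch of this baseline relationship (names are hypothetical):

```python
import math

def se_inflation(n_old: int, n_new: int) -> float:
    """Factor by which the standard error grows when the effective sample
    size falls from n_old to n_new, assuming SE proportional to 1/sqrt(n).
    Design effects (clustering, weighting) are not modelled here."""
    return math.sqrt(n_old / n_new)

# Halving the sample inflates standard errors by about 1.41 under simple
# random sampling; design effects can push the observed increase higher.
factor = se_inflation(2, 1)
```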
The reduced reliability of estimates of similar size from the 2001 survey compared with the 1995 survey should be considered in interpreting apparent changes between the surveys. It is recommended that apparent changes be subjected to significance testing to ensure that changes are not simply the product of different sample size and design. Through the weighting process, survey estimates at the State x part of State x sex x broad age group level will be the same as, or very similar to, the benchmark populations. Because the characteristics of the sample are not identical to those of the benchmark population (see table below), some records will receive higher or lower weights than others. As a result, the RSE on estimates for those particular groups may also be slightly higher or lower than the average RSE shown in the table above. As this will vary between surveys, it is a factor to consider in comparing 1995 with 2001 data, but its impact on comparability is expected to be small.
TABLE 7.7: Survey weighting
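The weighting step described above can be illustrated with a minimal post-stratification sketch: within each benchmark cell (State x part of State x sex x broad age group), initial weights are scaled so the weighted sample total matches the benchmark population. This is a simplified illustration under that assumption, not the ABS production method.

```python
def benchmark_weights(initial_weights, benchmark_total):
    """Scale the initial weights of all responding persons in one benchmark
    cell so that their weighted total equals the benchmark population count."""
    factor = benchmark_total / sum(initial_weights)
    return [w * factor for w in initial_weights]

# Three respondents with initial weights summing to 6, in a cell whose
# benchmark population is 12: each weight is doubled.
weights = benchmark_weights([1.0, 2.0, 3.0], 12.0)
```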
Partial enumeration of households In the 1995 NHS all persons in sampled dwellings were included in the survey, and only records from fully responding households were retained on the data file. This meant that results could be compiled at household, family and income unit level in addition to person level. Because the 2001 survey sub-sampled persons in households (one adult, one child 7-17 years, all children 0-6 years) complete enumeration only occurred in a minority of households, and by definition, only in single adult households. The table below shows the degree of enumeration within households, by household composition. TABLE 7.8: Number of Households by Composition and Coverage: 2001 NHS(G)
(b) Includes households where a spouse was less than 18 years old. Enumeration period The 2001 NHS(G) was effectively enumerated over about a ten month period, compared with a twelve month period for the 1995 survey; the 2001 survey was not enumerated in December or January, nor during a six week period in mid-winter (coinciding with the conduct of the 2001 Census of Population and Housing and the Post Enumeration Survey). The effects of the shorter enumeration period have been assessed. For most variables collected in the 2001 NHS(G), the seasonal pattern is such that the shorter enumeration period should not produce bias at a level that would be problematic for most users. Statistically significant differences were found for some items in the alcohol consumption, visits to other health professionals and exercise topics. In interpreting Indigenous results, particularly for these topics, it should be taken into account that the NHS(I) was conducted over only a six month period, so any seasonal effects may be exaggerated for the NHS(I) sample. Because some data in the 2001 survey were not collected in the 1995 survey, or were collected in a substantially different way, it has not been possible to examine the possible effects of the shorter enumeration period on all estimates from the 2001 survey, and users are advised to consider this when interpreting the data. TABLE 7.9: Survey content Note: This table refers only to content for the 2001 NHS(G) and non-sparse NHS(I). For details about whether particular items are available for the sparse NHS(I), please refer to the Indigenous Output Data Item List on the web site.
Comparability of data about long-term conditions: As noted previously, a new classification and coding system for medical conditions was introduced in the 2001 NHS. These changes will have had some effect on comparability between the 2001 and previous NHSs. In general, the coding system introduced for the 2001 survey is considered to have enabled more accurate and consistent coding of reported conditions than in previous surveys. Potentially greater effects on comparability may arise from the use of different methodologies in the surveys for collecting and recording the data. The table below presents a selection of results from the 1995 and 2001 surveys, with comments on methodological similarities or differences which may have contributed to movements between the surveys. In addition to the points below, the adoption of the new approach to NHPA conditions (described in Chapter 3: Health Status Indicators) should also be borne in mind. TABLE 7.10: Selected Long-Term Conditions: Comparison of Survey Methodology: 1995 NHS and 2001 NHS(G)
As well as differences which may arise through the use or non-use of direct questions or prompt cards, differences may arise from the context in which the questions were asked, i.e. the effects of accompanying or associated questions. For example, in the 1995 survey, after the questions about long-term conditions, respondents were asked about recent actions they had taken for their health (e.g. use of medication, consulting a doctor, having days away from work) and the medical conditions involved. This provided an opportunity for respondents to be reminded of a condition which they had but had previously forgotten to mention (e.g. because it was controlled through use of medication) and to identify it as a long-term condition (in which case earlier responses would have been amended accordingly). In contrast, in 2001 respondents were asked about recent actions but, except for the NHPA conditions, were not asked to associate those actions with a particular condition. Under the 1995 approach, for example, a respondent might have been reminded about their dermatitis by questions about their use of skin ointments or creams, while in 2001 this trigger was not available. While the overall result of this context effect may have been to boost 1995 levels relative to 2001, other changes introduced in 2001 (e.g. direct questions or mention on prompt cards) may have more than compensated for this effect for some conditions. A further factor which may affect comparability is that the reported prevalence of illness is complex and dynamic, and directly a function of respondent knowledge and attitudes, which in turn may be affected by the availability of health services and health information, public education and awareness, accessibility of self-help, etc. For example, a public education program aimed at raising public awareness and acceptance of mental health disorders has been running in Australia over a number of years.
One consequence may be that respondents are more willing to talk about, and to report, feelings of anxiousness or depression now than they were previously. While the nature and general direction of the various influences on survey results can be gauged with reasonable surety, the level of their effects is much more difficult to determine: i.e. how much of the observed change between estimates from the 2001 NHS and those from the 1995 or 1989-90 NHSs is attributable to real changes in health characteristics or in the relationships between characteristics, and how much to methodological or other differences between surveys, or to changes in respondent awareness of and attitudes to those characteristics. Unfortunately, data to support this type of quantitative analysis are not available. While the points noted above, and within individual topic sections of this publication, about comparability between NHSs are useful guides to interpreting apparent changes between surveys, data users should also consider information external to the NHS to assist them in interpreting the data. For many topics covered in the NHS, some data are available from other sources; although these other sources will seldom be directly comparable with the NHS, they can provide a basis for data comparison and assessment. During validation of the 2001 NHS, selected results from the survey were compared both with results from previous NHSs and with data from other sources; differences were reconciled, and notes relating to differences or changes have been included where appropriate within individual topic descriptions in this publication. However, as only selected data sources were examined, other differences may exist, and users of NHS data should contact the ABS if they have any queries regarding comparability issues.
ADDITIONAL COMPARABILITY ISSUES BETWEEN 1995 NHS AND 2001 NHS(I) National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) contains selected results from the Indigenous component of the 1995 NHS. These results are limited to topics where a reasonable level of comparability between the 1995 and 2001 data is expected. While the 2001 NHS(I) is similar to the 1995 survey in many ways, there are important differences in sample design and coverage, survey methodology and content, definitions, classifications, etc. which affect the degree to which data are directly comparable between the surveys. Due to the small size of the supplementary Indigenous samples in the 1995 and 2001 NHS(I), the Indigenous results from these surveys are not available at state level and have a larger associated sampling error than results from many other ABS surveys. For this reason, differences in reported rates for 1995 and 2001 may or may not be statistically significant. Significance testing has been undertaken on selected Indigenous and non-Indigenous comparisons (table 1) and time series data (table 2) presented in National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) to assist readers in assessing the significance that should be attributed to apparent differences in rates. Significance testing is discussed in more detail earlier in this chapter. Time series information for 1995 and 2001 is based on data collected in non-sparsely settled areas only, due to concerns about the quality of data collected from sparsely settled areas in the 1995 survey. After an extensive investigation into Indigenous results from the 1995 collection, responses from people living in sparsely settled areas were excluded. Table 7.9 above compares the content of the 1995 and 2001 surveys at the topic level.