National Aboriginal and Torres Strait Islander Health Survey methodology

Reference period: 2018–19 financial year
Released: 11/12/2019

Explanatory notes

Introduction

The 2018–19 National Aboriginal and Torres Strait Islander Health Survey (NATSIHS) was conducted between July 2018 and April 2019.

  • It collected information from Aboriginal and Torres Strait Islander people of all ages in non-remote and remote areas of Australia, including discrete Indigenous communities.
  • Funding for the survey was provided by the Australian Government Departments of Health and Prime Minister and Cabinet.
  • Previous surveys have been conducted in 2012–13 and 2004–05. Some health data was also collected in the 2014–15, 2008 and 2002 National Aboriginal and Torres Strait Islander Social Surveys (NATSISS).
  • It collected information on a number of topics for the first time, including mental health conditions, medications, consumption of sugar-sweetened and diet drinks, experiences of harm and a hearing test. For further information see Appendix - survey topics.
     

The survey was dependent on the high level of cooperation received from Aboriginal and Torres Strait Islander people and their communities. Without their continued cooperation, the wide range of Aboriginal and Torres Strait Islander statistics published by the Australian Bureau of Statistics (ABS) would not be available.

The ABS is committed to using data to focus on the stories of Aboriginal and Torres Strait Islander people. For this reason, the ABS has not included comparisons with the non-Indigenous population in the narrated summaries and data highlights.

Users who require non-Indigenous comparisons can access information in some data cubes that have been published with age standardised proportions for both the non-Indigenous and Aboriginal and Torres Strait Islander populations. These are available in the Data downloads section. For further information, see Appendix - non-Indigenous comparisons.

    How the data is collected

    Consultation

    The survey was developed in conjunction with numerous stakeholders, including representatives from Australian Government agencies, state/territory government agencies, non-government organisations, and academic and research institutions.

    • An advisory group was established to assist the ABS in determining the content of the survey and to advise on data output requirements.
    • Expert advisory panels provided advice to the ABS on two new topics – mental health and physical harm. These panels comprised members from both government and non-government agencies.
    • New questions proposed for inclusion underwent cognitive testing to ensure concepts would be understood by respondents, and to enable the questions and associated procedures to be refined.
       

    A dress rehearsal was conducted in non-remote and remote areas of Western Australia and the Northern Territory in October 2017 to test survey content and procedures.

    The range of topics identified for possible inclusion exceeded the capacity of the survey. With the assistance of the advisory groups, these were assessed and priorities were established. Topics ultimately selected for inclusion were those identified as highest priority and which could be appropriately addressed in a survey of this type.

    Scope and coverage

    The scope of the survey was all Aboriginal and Torres Strait Islander people living in private dwellings.

    The following people were not included in the survey:

    • non-Indigenous persons
    • visitors to private dwellings staying for less than six months
    • people in households where all residents are less than 18 years of age
    • people who usually live in non-private dwellings, such as hotels, motels, hostels, hospitals, nursing homes and short-stay caravan parks
    • students at boarding school
    • non-Australian diplomats, diplomatic staff and members of their household
    • members of non-Australian defence forces stationed in Australia and their dependants
    • overseas visitors.
       

    Coverage exclusions apply to those people who were in scope for the survey, but who were not included in the sampling frame. The following coverage exclusions were applied:

    • mesh blocks with no or very few Aboriginal and Torres Strait Islander households
    • some discrete Indigenous communities and outstations with a small number of Aboriginal and Torres Strait Islander households, and
    • Aboriginal and Torres Strait Islander people living in outstations which are not linked to a main community.
       

    The overall coverage of the 2018–19 NATSIHS was approximately 33% of Aboriginal and Torres Strait Islander persons in Australia. The final sample has been weighted to population benchmarks which align with the scope of the survey to account for undercoverage. Refer to the Non-sampling error and undercoverage section below for more information.

    The benchmarks are based on the most recently released Aboriginal and Torres Strait Islander estimated resident population (ERP), with an adjustment to account for the time period of the survey. For more information on ERP, see Estimates of Aboriginal and Torres Strait Islander Australians (cat. no. 3238.0.55.001).

    As at 30 June 2016, the estimated resident Aboriginal and Torres Strait Islander population living in private and non-private dwellings was 798,365. Excluding persons in non-private dwellings, the projected Aboriginal and Torres Strait Islander population at 31 December 2018 was 814,013 and this is the population benchmark that the survey results were weighted to meet.

    Sample design

    The survey was designed to produce reliable estimates for the whole of Australia, for each state and territory and for remoteness areas. The sample was designed to achieve a Relative Standard Error of less than 25% for key variables.

    The survey sample had two parts:

    • a community sample, made up of discrete Indigenous communities, including any outstations associated with them, and
    • a non-community sample, made up of persons in private dwellings within areas outside of Indigenous communities.
       

    Each part used a multi-stage sampling process to ensure the representativeness of the final sample.

    As with previous ABS Aboriginal and Torres Strait Islander surveys, additional sample was collected in the Torres Strait Area, to ensure data of sufficient quality would be available for the Torres Strait Area and the remainder of Queensland.

    The community sample was a random selection of discrete Indigenous communities and outstations in non-remote and remote areas.

    • Selections were made using the Dwelling Register for Aboriginal and Torres Strait Islander Communities (DRATSIC). The DRATSIC was constructed from counts from the 2016 Census of Population and Housing.
    • Communities in Tasmania were excluded from coverage entirely, consistent with the 2012–13 NATSIHS and the 2014–15 NATSISS.
    • Non-remote communities in Western Australia and the Northern Territory were also excluded from coverage, as only a small number of fully responding households were required from these areas under the sample design. These communities were therefore removed from the sample, consistent with the approach taken in the 2014–15 NATSISS.
       

    Dwellings in the non-community sample were selected using a multi-stage area sample.

    • Mesh blocks with no Aboriginal and Torres Strait Islander households, as identified in the 2016 Census, were excluded from coverage.
    • Areas in remote Victoria and very remote Tasmania were excluded, consistent with the 2012–13 NATSIHS and the 2014–15 NATSISS.
    • For each randomly selected dwelling within the selected mesh block, one usual resident aged 18 years or over was asked whether anyone in the household was of Aboriginal and/or Torres Strait Islander origin. This screening question was used to identify Aboriginal and Torres Strait Islander households, from which the sampling process was undertaken for participants in the survey.
       

    Number of people selected per household

    Within each identified Aboriginal and Torres Strait Islander household in both the community and non-community sample:

    • up to two adults (aged 18 years and over) and two children (aged 0–17 years) were randomly selected in non-remote areas, and
    • up to one adult (aged 18 years and over) and one child (aged 0–17 years) were randomly selected in remote areas.
       

    Fully or adequately responding households

    A fully responding household is one in which all parts of the interview were completed for all persons in scope.

    An adequately responding household is the same as a fully responding household, but with legitimate ‘don't know’ answers or refusals where permitted in the survey. For example, adequately responding households include people who declined to answer the substance use questions or have their height, weight, waist circumference or blood pressure measured by an interviewer.

    For information on sample counts and response rates, see Appendix - response rates, sample counts and estimates.

    Data collection

    Interviewer training

    Information was collected by trained ABS interviewers using a computer based questionnaire. Prior to enumeration, interviewers:

    • participated in cultural awareness training which outlined how to conduct surveys in Indigenous community areas and described cultural considerations
    • completed classroom training and exercises to gain an understanding of the survey content and procedures.
       

    Face-to-face interviews

    Interviewers conducted face-to-face interviews in all selected households.

    A person aged 18 years or over was asked to provide basic information for all usual residents of the household, including Indigenous status, age, sex and relationships. This person, known as the household spokesperson, answered financial and housing questions, such as income, tenure arrangements and household facilities.

    Personal interviews were then conducted with selected Aboriginal and Torres Strait Islander persons aged 15 years and over. Some people were unable to be interviewed because of:

    • injury or illness (a proxy interview may have been arranged)
    • cultural considerations, such as mourning the death of a family member (sorry business) or
    • insufficient English skills, where an interpreter could not be arranged.
       

    For selected persons aged 15–17 years:

    • a personal interview was conducted if a parent or guardian provided consent, or
    • their interview was completed by a proxy (that is, by a parent or guardian). Around two-thirds (66%) of interviews were conducted by proxy for this age group.
       

    An adult was asked to respond on behalf of children aged less than 15 years.

    Use of local Aboriginal and Torres Strait Islander advisors

    In communities, interviewers were accompanied, where possible, by local Aboriginal and Torres Strait Islander advisors who assisted in conducting interviews. The advisors:

    • explained the purpose of the survey
    • introduced the interviewers, and
    • assisted in identifying usual residents of a household and in locating residents who were not at home.
       

    Variations in data collection and survey questions

    To take account of language and cultural differences, the collection method and survey questions sometimes varied in remote areas.

    • Some questions were reworded to enhance a person’s ability to understand concepts.
    • Some topics were excluded if they were considered problematic to collect or not applicable.
       

    This means some data items are not available for the total Aboriginal and Torres Strait Islander population. Further information on the availability of data items can be found in the Data Item List, available in the Data downloads section.

    In non-community areas, where a person selected in the survey initially refused to participate, a follow up letter was sent and a second visit was made (where possible) to encourage participation. There was no follow up of refusals in communities.

    How the data is processed

    Estimation methods

    As only a sample of people were surveyed, their results needed to be converted into estimates for the whole population. This was done with a process called weighting.

    • Each person or household was given a number (known as a weight) to reflect how many people or households they represented in the whole population.
    • A person or household’s initial weight was based on their probability of being selected in the sample. For example, if the probability of being selected in the survey was one in 45, then the person would have an initial weight of 45 (that is, they would represent 45 people).
       

    For the person level weights, 250 replicate weights were produced to ensure accurate estimates of the standard errors.

    After calculating the initial person weights, an adjustment was incorporated into the weighting for persons and households to account for Aboriginal and Torres Strait Islander persons not covered by the sample.

    The person and household level weights were separately calibrated to independent estimates of the in scope population, referred to as ‘benchmarks’. The benchmarks used additional information about the population to ensure that:

    • people or households in the sample represented people or households that were similar to them
    • the survey estimates reflected the distribution of the whole population, not the sample.
       

    For this survey, person weights were simultaneously calibrated to the following population benchmarks:

    • state/territory by remoteness area
    • state/territory by age by sex
    • Torres Strait Islander status by Torres Strait Islander region by adult/child status
    • remoteness area by age by sex
    • state/territory by discrete Indigenous community.
       

    The survey was benchmarked to the estimated Aboriginal and Torres Strait Islander resident population living in private dwellings at 31 December 2018. As people in non-private dwellings (e.g. hotels) are excluded from the scope of the survey, they have also been excluded from the survey benchmarks. Therefore, the 2018–19 estimates do not, and are not intended to, match estimates for the total resident Aboriginal and Torres Strait Islander population obtained from other sources. This approach is consistent with the 2012–13 NATSIHS and 2014–15 NATSISS.
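    The weighting steps described above can be illustrated with a minimal sketch. The example below uses made-up records, selection probabilities and benchmark totals, and a simple iterative proportional fitting (raking) routine in Python as a stand-in for the ABS calibration process; the variable names and the two benchmark margins are illustrative only and do not reflect the actual NATSIHS weighting system.

    # Minimal weighting sketch (illustrative only): initial design weights,
    # then raking of those weights to two hypothetical benchmark margins.

    sample = [
        {"state": "NSW", "age": "0-17", "prob": 1 / 45},
        {"state": "NSW", "age": "18+",  "prob": 1 / 60},
        {"state": "Qld", "age": "0-17", "prob": 1 / 30},
        {"state": "Qld", "age": "18+",  "prob": 1 / 50},
    ]

    # Initial weight: the inverse of the probability of selection.
    for person in sample:
        person["weight"] = 1.0 / person["prob"]

    # Hypothetical in-scope population totals for each benchmark margin.
    benchmarks = {
        "state": {"NSW": 95_000, "Qld": 80_000},
        "age":   {"0-17": 70_000, "18+": 105_000},
    }

    def rake(records, margins, iterations=50):
        """Repeatedly scale weights so weighted totals match each margin."""
        for _ in range(iterations):
            for variable, targets in margins.items():
                for category, target in targets.items():
                    current = sum(r["weight"] for r in records if r[variable] == category)
                    factor = target / current
                    for r in records:
                        if r[variable] == category:
                            r["weight"] *= factor
        return records

    rake(sample, benchmarks)
    for person in sample:
        print(person["state"], person["age"], round(person["weight"]))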

    For household weights, the ABS does not produce counts of Aboriginal and Torres Strait Islander dwellings and, as a result, household level benchmarks are not available. Instead, household level weights for this survey were derived from person level benchmarks, consistent with the approach for the 2012–13 NATSIHS and 2014–15 NATSISS. This was done by:

    • assigning the initial household weight (after the adjustment) to all Aboriginal and Torres Strait Islander persons in the household
    • adjusting the weights to the person level benchmarks with the restriction that each person in the household must have the same final weight.
       

    The resulting weight was the final household weight.

    The sum of the household weights will only provide an estimate of the number of Aboriginal and Torres Strait Islander households. This method was analysed to ensure that person and household level estimates were as consistent as possible.

    Age standardisation

    Age standardisation is a technique used to enhance the comparability of rates between populations with different age structures. This technique is often used to make comparisons between the non-Indigenous and Aboriginal and Torres Strait Islander populations. For further information, see Appendix - non-Indigenous comparisons.
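    As a minimal illustration of direct age standardisation (the age groups, rates and standard population below are made up and are not NATSIHS figures), the standardised rate is the sum of age-specific rates weighted by the standard population's age distribution:

    # Direct age standardisation (illustrative figures only).
    standard_population = {"0-24": 400_000, "25-54": 450_000, "55+": 150_000}
    age_specific_rates  = {"0-24": 0.05,    "25-54": 0.12,    "55+": 0.30}

    total = sum(standard_population.values())
    age_standardised_rate = sum(
        rate * standard_population[age] / total
        for age, rate in age_specific_rates.items()
    )
    print(f"Age-standardised rate: {age_standardised_rate:.1%}")   # 11.9%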

    Accuracy

    Two types of error affect the accuracy of sample surveys: sampling and non-sampling error.

    Sampling error

    Sampling error is the difference between:

    • estimates for a population made by surveying only a sample of people, and
    • results from surveying everyone in the population.
       

    The size of the sampling error can be measured. It is reported as the Relative Standard Error (RSE) and 95% Margin of Error (MOE). For more information see the Technical note.

    In this publication, estimates with an RSE of 25% to 50% were flagged to indicate that the estimate has a high level of sampling error and should be used with caution. Estimates with an RSE over 50% were also flagged and are generally considered too unreliable for most purposes.

    Margins of Error are provided for proportions to help people using the data to assess how reliable it is. The proportion combined with the MOE shows the range likely to include the true population value with a given level of confidence. This is known as the confidence interval. People using the data need to consider this range if they are making decisions based on the proportion.

    Analysis was done to compare the characteristics of respondents to the 2018–19 NATSIHS with a number of data sources to determine data consistency. Some sources for comparison included the:

    • 2016 Census of Population and Housing
    • 2012–13 NATSIHS
    • 2014–15 NATSISS.
       

    Estimates presented in this publication have been rounded. As a result, sums of components may not add exactly to totals.

    Non-sampling error and undercoverage

    Undercoverage is one potential source of non-sampling error. It is the shortfall between the population represented by the achieved sample and the in-scope population. It can introduce bias into the survey estimates; however, the extent of any bias depends on the size of the undercoverage as well as the difference in the characteristics of those people in the coverage population and those of the in-scope population.

    Undercoverage rates can be estimated by calculating the difference between the sum of the initial weights of the sample and the population count. If a survey has no undercoverage, then the sum of the initial weights of the sample would equal the population count (ignoring small variations due to sampling error).
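    As a hypothetical worked example of this calculation (the figures below are illustrative, not survey values):

    # Undercoverage rate: shortfall of the sum of initial weights relative to
    # the in-scope population count (illustrative figures only).
    population_count = 800_000        # in-scope population benchmark
    sum_initial_weights = 264_000     # sum of initial weights of the achieved sample

    undercoverage_rate = 1 - sum_initial_weights / population_count
    print(f"Undercoverage rate: {undercoverage_rate:.0%}")   # 67%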

    In the 2018–19 NATSIHS, there was an increase in the undercoverage rate compared with previous ABS Aboriginal and Torres Strait Islander surveys. The overall 2018–19 NATSIHS undercoverage rate was approximately 67% of the in-scope population at the national level. The estimated undercoverage in both the 2012–13 NATSIHS and the 2014–15 NATSISS was 62%.

    The overall undercoverage rate for non-community areas was 68% and for community areas was 57%. The undercoverage rate varied across the states and territories.

     Undercoverage rate in National Aboriginal and Torres Strait Islander Health Surveys, by state/territory (%)

                   NSW    Vic.   Qld    SA     WA     Tas.   NT     ACT    Aust.
     2018–19       72.0   63.4   64.9   63.0   60.5   56.0   68.1   68.0   66.5
     2012–13       64.7   71.8   59.2   65.7   64.0   41.0   59.6   65.4   62.2

    Given the high undercoverage rate in the 2018–19 NATSIHS, there was extensive analysis undertaken to ensure the results were consistent with other data sources. Potential bias due to undercoverage was addressed by the application of adjustments to the initial weights and through the use of population benchmarks.

    Undercoverage may occur due to a number of factors, including:

    • frame exclusions (areas being removed from the sampling frame)
    • non-response
    • people not identifying as being of Aboriginal and/or Torres Strait Islander origin
    • issues arising in the field.
       

    Each of the factors is explained in more detail below.

      Frame exclusions

      Similar to the 2012–13 NATSIHS and 2014–15 NATSISS, frame exclusions were incorporated to make the sample design more efficient. Part of the frame-related undercoverage may come from people who have moved since the 2016 Census of Population and Housing, on which the sampling frame was based.

      Non-response

      Non-response may occur when people cannot or will not cooperate, or they cannot be contacted. The loss of information on persons and households reduces the sample and increases both sampling error and the likelihood of incurring response bias.

      Non-response in this survey includes non-response to:

      • the screening question
      • the survey after identification of an Aboriginal and Torres Strait Islander household.
         

      To reduce non-response, the following methods were used:

      • face-to-face interviews with respondents
      • employment of local Aboriginal and Torres Strait Islander advisors to assist with interviewing in communities
      • follow up of respondents if there was initially no response
      • aligning the estimates with population benchmarks to ensure the weighted file was representative of the population.
         

      Of the households screened in non-community areas, 77% responded. Of those households who responded to the screening question, 3.9% were identified as having an Aboriginal or Torres Strait Islander usual resident. Of these identified households, 73% then responded to the survey.

      In discrete Indigenous communities 74.6% of selected in-scope households responded to the survey.

      People not identifying as being of Aboriginal and/or Torres Strait Islander origin

      Some Aboriginal and Torres Strait Islander households may not have been identified during the screening process due to:

      • Aboriginal and Torres Strait Islander people not identifying themselves as being of Aboriginal and/or Torres Strait Islander origin
      • the household spokesperson being unable to correctly provide the Indigenous status of other residents.
         

      There was a large increase in the number of people identifying as being of Aboriginal and/or Torres Strait Islander origin between the 2011 and 2016 Censuses of Population and Housing, and these people may not have chosen to identify in the same way for the 2018–19 NATSIHS for various reasons. Data from the 2016 Census is part of the population benchmarks used in this survey. For more information about the change across the Censuses, please refer to Census of Population and Housing: Understanding the Increase in Aboriginal and Torres Strait Islander Counts, 2016 (cat. no. 2077.0).

      Issues arising in the field

      Known undercoverage due to other issues arising in the field included sample being excluded due to:

      • overlap with the Monthly Population Survey and other special social surveys
      • occupational health and safety issues
      • time constraints, particularly in Aboriginal and Torres Strait Islander communities and their associated outstations.
         

      Seasonal effects

      The data was collected from July 2018 to April 2019 which may not be fully representative of other time periods in the year. For example, people were asked about the number of times they had consulted with a health professional in the two weeks prior to interview. Travel to access health care facilities, particularly in remote areas, may be subject to seasonal variation throughout the year. Results could have differed if the survey had been conducted over the whole year or in a different part of the year.

      How the data is released

      The published results of the 2018–19 NATSIHS include a summary of findings and data cubes presented in spreadsheet format.

      Copies of the survey questionnaire, prompt cards and the physical measurements card provided to respondents are available in the Data downloads section.

      Additional data from the survey will be available via a range of other products and services:

      • detailed microdata to be released in the DataLab at the same time as this publication
      • a TableBuilder product (subject to the approval of the Australian Statistician) to be accessible via the ABS website using a secure log-on portal in early 2020
      • a number of supplementary themed publications, released progressively after the main publication
      • tables produced on request to meet specific information requirements from the survey (subject to confidentiality and sampling variability constraints).
         

      To support the return of results to Aboriginal and Torres Strait Islander peoples, a series of thematic releases are planned for distribution to Aboriginal and Torres Strait Islander communities.

      For more information about this survey and associated products contact the National Information and Referral Service on 1300 135 070.

      For further information on the comparability of data items see the Data downloads section.

      Confidentiality

      The Census and Statistics Act 1905 authorises the ABS to collect statistical information, and requires that information is not published in a way that could identify a particular person or organisation. The ABS must make sure that information about individual respondents cannot be derived from published data.

      The ABS takes care in the specification of tables to reduce the risk of identifying individuals. Random adjustment of the data is considered the best way to do this. A technique called perturbation randomly adjusts all cell values to prevent identifiable data being exposed. These adjustments result in small introduced random errors, which often result in tables not being 'internally consistent' (that is, interior cells not adding up to the totals). However, the information value of the table as a whole is not impacted. This technique allows the production of very large/detailed tables valued by clients even when they contain cells of very small numbers.

      In this publication, perturbation was applied to published data from 2018–19 onwards. Data from surveys before 2018–19 have not been perturbed, but have been confidentialised by suppressing cells if required.

      Interpretation of results

      Care has been taken to ensure that the results of this survey are as accurate as possible. The following factors should be considered when interpreting these estimates.

      • Information recorded in the survey is ‘as reported’ by respondents and may differ from information available from other sources or collected using different methodologies.
      • Results of previous surveys have shown a tendency for people to under-report when asked about certain topics, such as alcohol consumption, smoking and substance use.
      • Different data items were collected for different time periods. The reliability and accuracy of data are therefore dependent on the respondent's recall.
      • Some people may have provided responses they felt were expected, rather than those that accurately reflect their own situation.
         

      For a number of survey data items, some people were unwilling or unable to provide the required information.

      • Where responses for a particular data item were missing for a person or household they were recorded in a ‘not known’, ‘not stated’ or ‘refusal’ category.
      • 'Not stated’ categories have either been included in the data cubes as part of the total, or they appear as a separate category. This allows users to determine the suitability of the data for their purposes.
         

      Breastfeeding

      Information was collected for children aged 0–3 years (that is, children up to 3 years and 11 months of age, also referred to as 0–47 months).

      • The accuracy of the data may be affected in cases where an adult other than the child’s parent responded about the child.
      • An age group of 0–5 months is used to describe children aged up to 5 months and 30 days.
         

      Chronic conditions

      A chronic condition is a long-term health condition selected for reporting in this survey because it is common, poses significant health problems, has been the focus of population health surveillance efforts, and action can be taken to prevent its occurrence.

      • People reporting diabetes mellitus and/or particular types of heart, stroke and vascular disease (angina, a heart attack, other ischaemic heart diseases, stroke or other cerebrovascular diseases) were included regardless of whether the condition was current and/or long-term.
      • When counting the number of chronic conditions a person has, multiple conditions belonging to the same condition type are treated as one condition. For example, a person reporting anxiety and depression is counted as having one chronic condition, as both belong to the same condition type (mental and behavioural conditions); a sketch of this counting rule follows this list.
         
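      The counting rule can be sketched as follows; the mapping of reported conditions to condition types below is hypothetical and only illustrates the grouping logic.

      # Count chronic conditions, treating conditions of the same type as one
      # (hypothetical condition-to-type mapping for illustration).
      condition_type = {
          "anxiety": "mental and behavioural conditions",
          "depression": "mental and behavioural conditions",
          "asthma": "respiratory conditions",
      }

      reported = ["anxiety", "depression", "asthma"]
      number_of_chronic_conditions = len({condition_type[c] for c in reported})
      print(number_of_chronic_conditions)   # 2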

      Disability

      A person has a disability if they have an impairment which restricts their everyday activities and has lasted, or is expected to last, for at least six months. A person with a disability is classified by whether they have:

      • a specific limitation with any core activities (mobility, communication and self-care)
      • a specific restriction when participating in schooling or employment activities, or
      • no specific limitation with core activities or restriction with schooling or employment activities.
         

      A person has a specific limitation with a core activity if they need help from another person, have difficulty or use an aid or other equipment to perform at least one selected task. The level of limitation for each core activity is based on the amount of help a person needs with a selected task:

      • profound — unable to do or always needs help with a core activity task
      • severe — sometimes needs help or has difficulty with a core activity task
      • moderate — does not need help but has difficulty with a core activity task
      • mild — does not need help and has no difficulty, but uses aids or equipment or has other limitations with a core activity task
         

      A person's overall level of core activity limitation is determined by their highest level of limitation in any of these activities. For example, if a person has a profound limitation with a communication task and a moderate limitation with a self-care task, the person is categorised as having a profound disability.
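      A minimal sketch of this 'highest level of limitation' rule, using an illustrative data structure rather than actual survey output:

      # Overall core activity limitation: the most severe level reported across
      # the core activities (illustrative data only).
      SEVERITY_ORDER = ["mild", "moderate", "severe", "profound"]   # least to most severe

      limitations = {"communication": "profound", "self-care": "moderate"}

      overall = max(limitations.values(), key=SEVERITY_ORDER.index)
      print(overall)   # profound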

      A person has a schooling restriction if, because of their disability, they:

      • are not attending school/undertaking further study
      • need time off school or study
      • attend special classes or a special school
      • have other related difficulties.
         

      A person has an employment restriction if, because of their disability, they:

      • are restricted in the type of job they could do
      • are restricted in the number of hours they can work
      • have difficulty finding suitable work
      • need time off work
      • are permanently unable to work.
         

      A person with a ‘schooling/employment restriction only' is someone who reported no limitations with any of the core activities but reported having difficulty with schooling and/or employment activities.

      Employment

      Information was collected using the short-form version of the questions used in the ABS’ monthly Labour Force Survey.

      Two changes were made to the employment questions for the 2018–19 NATSIHS.

      The first change was that all references to the Community Development Employment Projects (CDEP) were replaced with the Community Development Program (CDP). The labour force status of a person on the CDP depends on whether they had another job and other factors.

      • If they had another job, they were considered to be employed.
      • If they did not have another job and they reported they were paid by an employer or another source, they were considered to be employed.
      • If they did not have another job and they reported they were paid by Centrelink or were unpaid, they were considered to be either unemployed or not in the labour force. They were asked further questions to determine their final labour force status (a sketch of these rules follows this list).
         
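      A simplified sketch of these rules (not the actual survey derivation; the function and category labels are illustrative):

      # Labour force status for a person on the CDP, following the rules above.
      def cdp_labour_force_status(has_other_job: bool, payment_source: str) -> str:
          if has_other_job:
              return "employed"
          if payment_source == "employer or other source":
              return "employed"
          # Paid by Centrelink or unpaid: further questions determine whether
          # the person is unemployed or not in the labour force.
          return "unemployed or not in the labour force (determined by further questions)"

      print(cdp_labour_force_status(has_other_job=False, payment_source="Centrelink"))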

      The other change was that ‘unpaid trainee work’ was added to the list of categories for payment arrangements to align with changes made to the ABS’ standard employment module. Unpaid trainees were previously categorised as ‘Other’. It is expected that the population in this category is small.

      The populations affected by these two changes are likely to be insignificant so there is no break in time series for the 2018–19 NATSIHS.

      Equivalised income

      Differences in household types and compositions, and their requirements relative to income, can be taken into account by the application of equivalence scales. These scales are a set of ratios which, when applied to the income of different household or income unit types, produce standardised estimates of income which reflect the households' relative well-being. The modified Organisation for Economic Co-operation and Development (OECD) equivalence scale (1994) was used.

      Equivalised income is derived by calculating an equivalence factor and then dividing income by that factor. The equivalence factor is built up by allocating points to each person in the household unit and summing those points. One point is allocated to the first adult in the unit, 0.5 points for each other person aged 15 years and over, and 0.3 points for each person aged less than 15 years. For example:

      • a single person household has a factor of one, so equivalised income is the same as reported income
      • a household comprising two adults and a child aged less than 15 years would have a factor of 1.8, so equivalised income for this household is the household income divided by 1.8 (a sketch of this calculation follows this list).
         
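      A minimal sketch of this calculation (the income figure is hypothetical):

      # Equivalised income using the modified OECD scale: 1 point for the first
      # adult, 0.5 for each other person aged 15 years and over, and 0.3 for
      # each person aged under 15.
      def equivalence_factor(persons_15_and_over: int, children_under_15: int) -> float:
          return 1.0 + 0.5 * (persons_15_and_over - 1) + 0.3 * children_under_15

      household_income = 1800   # gross weekly household income, dollars (hypothetical)
      factor = equivalence_factor(persons_15_and_over=2, children_under_15=1)   # 1.8
      print(round(household_income / factor))   # 1000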

      Equivalised income is available in dollar amounts and deciles.

      Family composition

      ‘Family composition’ is created through the relationships that exist between a single ‘responsible adult’ and each other member of that family living in the household. Family composition is then allocated on the basis of whether the types of relationships given below are present or not in the family in the following order of precedence.

      • A couple relationship is defined as a registered or defacto marriage, including same-sex relationships.
      • A parent-child relationship is defined as a relationship between two persons usually resident in the same household. The child is attached to the parent via a natural, adoptive, step, foster or child dependency relationship.
      • A child dependency relationship is defined as including all children under the age of 15 years (whether related or unrelated to the family reference person) and those natural, step, adopted or foster children who are full-time students aged 15–24 years.
      • Other relationship is defined as including all those persons related by blood or by marriage who are not covered by the above relationships.
         

      The definition of family used for the 2018–19 NATSIHS is a more restrictive definition than the ordinary notion of the term ‘family’ which generally includes relatives whether they live together or not. This is a reflection of the fact that, for survey-based research, it is necessary to place some physical bound on the extent of family for the purposes of being able to collect family data.

      Health actions

      A health action is an action related to a person's health taken in the two weeks prior to interview or, for admission to hospital, in the 12 months prior to interview. When interpreting health action data, the following points should be noted.

      • People were asked if there was a place they usually go when they are sick or need advice about their own health. In 2018–19, a change was made to combine ‘Aboriginal Medical Service’ and ‘Community Clinic’ into a single category. This was done to alleviate the problem for those who were unsure whether they attended an Aboriginal Medical Service or a Community Clinic.
      • In 2018–19, the term ‘doctor’ was changed to ‘GP/General Practitioner’ or ‘specialist’ to remove ambiguity in some of the questions.
      • People who reported a health condition in the other long-term conditions module, rather than in the specific condition module, may not have been asked particular questions about that condition. As a result, there may be under-reporting of some condition-specific health actions.
         

      Health conditions

      Information was collected on a broad range of health conditions, with the primary focus being on those that were current and long-term. A current long-term health condition is an illness, injury or disability which was current at the time of the interview and which had lasted at least six months, or which the person expected to last for six months or more.

      Information on specific health conditions was collected in individual modules, as well as a general long-term health conditions module. Questions varied to take into account differences between non-remote and remote populations and demographic characteristics. For example, males and females were asked about different types of cancer testing.

      Interviewers coded reported conditions using an extensive pick list of conditions built into the computer based questionnaire. For output purposes, conditions are grouped together based on the International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10).

      The following should be taken into consideration when using health conditions data.

      • Some questions are subjective and responses may be influenced by factors unrelated to health.
      • The potentially sensitive and personal nature of some questions may have impacted on people’s willingness to respond.
      • Conditions which are reported in the long-term health conditions module, rather than the specific condition module, have not necessarily been diagnosed by a doctor or nurse.
      • People who are usual residents of hospitals or nursing homes are outside the scope of the survey. As a result, the estimates in this survey may under-estimate the prevalence of certain conditions, especially those associated with age.
      • As conditions are self-reported, people may report symptoms of an underlying condition, rather than the condition itself. For example, they may report having oedema which may be a symptom of a heart valve disorder.
      • It is expected that conditions that were specifically mentioned in questions or shown on prompt cards would have been better reported than conditions for which responses relied entirely on a respondent’s judgement and willingness to report them.
         

      Income

      The 2018–19 NATSIHS uses the short personal income module instead of the basic module used in the 2012–13 NATSIHS.

      • In the basic module, people were prompted to think of all of their sources of income, and were then asked to provide a separate amount for each of their source(s).
      • In the short module, people were asked to list all of their sources of income, but they were only required to provide one total amount for all of their income.
      • One benefit of using the short income module is a lower proportion of refusals when people are asked for their total income. In the basic module, if a person refuses to state how much they earn from one source, their total income is classified as a refusal. In the short module, a person is only asked to provide a single estimate of their total income.
         

      In 2012–13, the Community Development Employment Projects (CDEP) was included as a category when asking about sources of personal income. As CDEP was discontinued in 2013, these questions were not included in the 2018–19 NATSIHS. Instead, questions were asked about the Community Development Program (CDP).

      In the 2018–19 NATSIHS, the deciles for both household and personal income are not taken from the deciles within the survey. Instead, a national figure is used (which includes both non-Indigenous and Aboriginal and Torres Strait Islander households), meaning there will not necessarily be 10% of the in scope population within each decile. The decile boundaries from the 2017–18 NHS have been used, and adjusted for inflation using the Consumer Price Index to account for enumeration period differences between the 2017–18 NHS and the 2018–19 NATSIHS. This is a similar approach to that used in the 2012–13 NATSIHS and 2014–15 NATSISS.

      The dollar amount cut-offs for equivalised household income deciles, gross weekly income of household deciles and gross weekly personal income deciles are available in the data item list via the Data downloads section.

      Medications

      Information about medication use was only collected in non-remote areas. Interviewers recorded the Australian Register of Therapeutic Goods (ARTG) identification number of each medication taken by the respondent. These were either:

      • AUST R medicines – all prescription medications and many over-the-counter products such as those used for pain relief, coughs and colds and antiseptic creams, or
      • AUST L medicines – generally lower risk self-medication products, which include vitamins, minerals, and herbal and homoeopathic products.
         

      For medications without an AUST R or AUST L code (for example, medications obtained overseas), interviewers were able to record details of the medication which were later coded by office staff. Up to 50 different medications were able to be recorded.

      Therapeutic substances reported were coded as either medications or dietary supplements.

      • Medications were coded to the World Health Organisation’s Anatomical Therapeutic Chemical (ATC) classification system based on their active ingredient(s) and their therapeutic application.
      • Dietary supplements were coded to a classification adapted from the Australian Food, Supplement and Nutrient Database (AUSNUT) food classification by Food Standards Australia New Zealand (FSANZ).
         

      The categorisation of substances as either medications or dietary supplements has been adopted for the purposes of describing data collected in the survey and should not be assumed to be an exact description of the contents of either category. For example, while the ATC includes codes for vitamins and minerals and other dietary supplements, such supplements were coded to the FSANZ supplements classification.

      For the purpose of this survey, dietary supplements included:

      • vitamins
      • minerals
      • herbal extracts (including Chinese herbs)
      • amino acids
      • omega 3 fatty acids
      • other fatty acids
      • glucosamine/chondroitin formulations.
         

      Substance use

      The collection method for this topic varied between non-remote and remote areas.

      • In non-remote areas, people had the option to answer questions using a self-completed computer based questionnaire.
      • In remote areas, people were asked questions by an interviewer.
         

      Substance use is likely to be under-reported.

      • Responses to these questions were voluntary, with respondents able to not answer some or all of the questions.
      • The potentially sensitive and personal nature of the questions may have impacted on people’s willingness to respond and what responses they provided.
      • Under-reporting of substance use may be more common in remote areas as people provided their responses directly to the interviewer and may have had other household members present at the interview.
      • The extent to which under-reporting has occurred is not able to be quantified.
         

      Comparability with previous surveys

      The ABS previously conducted the NATSIHS in 2012–13 and 2004–05. In addition, the National Health Survey (NHS) conducted in 2001 included an Aboriginal and Torres Strait Islander sample known as the NHS (I).

      The 2018–19 and 2012–13 surveys largely employed the same methodology and survey content to allow for comparability over time. Information about differences between the various iterations of the survey can be found below.

      • The data item list for the 2018–19 NATSIHS contains information about items that can be compared between 2018–19 and 2012–13. It also includes information about whether the item is comparable to the 2017–18 NHS.

      Between 2011 and 2016, the Aboriginal and Torres Strait Islander population increased by 18.4% or 100,800 people. When comparing estimates from the 2018–19 NATSIHS with previous surveys of the Aboriginal and Torres Strait Islander population, users should be aware of the large increase in the Aboriginal and Torres Strait Islander population and consider the impact this may have when interpreting change over time.

      Appendix - response rates, sample counts and estimates


      Appendix - physical measurements


      Appendix - mental health and wellbeing data


      Appendix - assessing health risk factors


      Appendix - physical and threatened physical harm data


      Appendix - survey topics


      Appendix - non-Indigenous comparisons


      Appendix - hearing data


      Appendix - modelled estimates for small areas


      Technical note - reliability of estimates

      Two types of error are possible in estimates based on a sample survey:

      • non-sampling error
      • sampling error
         

      Non-sampling error

      Non-sampling error is caused by factors other than those related to sample selection. It is any factor that results in the data values not accurately reflecting the true value of the population.

      It can occur at any stage throughout the survey process. Examples include:

      • selected persons that do not respond (e.g. refusals, non-contact)
      • questions being misunderstood
      • responses being incorrectly recorded
      • errors in coding or processing the survey data.
         

      Sampling error

      Sampling error is the expected difference that can occur between the published estimates and the value that would have been produced if the whole population had been surveyed.

      Standard error

      One measure of sampling error is the standard error (SE). There are about two chances in three that an estimate will differ by less than one SE from the figure that would have been obtained if the whole population had been included. There are about 19 chances in 20 that an estimate will differ by less than two SEs.

      Relative standard error

      The relative standard error (RSE) is a useful measure of sampling error. It is the SE expressed as a percentage of the estimate:

      \(\large RSE\% = \left(\frac{SE}{\text{estimate}}\right) \times 100\)


      RSEs for published estimates are supplied in Excel data tables, available via the Data downloads section.

      Only estimates with RSEs of less than 25% are considered reliable for most purposes. Estimates with RSEs of 25% to less than 50% have been included in the publication, but are flagged to indicate they are subject to high SEs and should be used with caution. Estimates with RSEs of 50% or more have also been flagged and are considered unreliable for most purposes.
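      A minimal sketch of this calculation and the flagging thresholds (the estimate and SE below are hypothetical):

      # Relative standard error and reliability flagging (illustrative figures).
      def rse_percent(estimate: float, se: float) -> float:
          return se / estimate * 100

      def reliability_flag(rse: float) -> str:
          if rse < 25:
              return "no flag"
          if rse < 50:
              return "flagged: use with caution"
          return "flagged: considered unreliable for most purposes"

      estimate, se = 12_000, 3_600
      rse = rse_percent(estimate, se)   # 30.0
      print(f"RSE = {rse:.1f}% -> {reliability_flag(rse)}")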

      Standard errors of proportions and percentages

       A measure of sampling error can be calculated for proportions formed from the ratio of two estimates.

      For proportions where the denominator (y) is an estimate of the number of persons in a group and the numerator (x) is the number of persons in a sub-group of the denominator, the formula to approximate the RSE is given below. The formula is only valid when x is a subset of y:

      \(\large RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^{2}-[RSE(y)]^{2}}\)
       

      Comparison of estimates

      Published estimates can be used to calculate the difference between two survey estimates. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them.

      An approximate SE of the difference between two estimates (x and y) may be calculated by the following formula:

      \(\large SE(x-y) \approx \sqrt{[SE(x)]^{2}+[SE(y)]^{2}}\)


      While the above formula will only be exact for differences between unrelated characteristics of sub-populations, it is expected that it will provide a reasonable approximation for other data comparisons.

      Margins of error

       Another measure of sampling error is the Margin of Error (MOE). This describes the distance from the population value that the sample estimate is likely to be within. It is specified at a given level of confidence. Confidence levels typically used are 90%, 95% and 99%.

      For example, at the 95% confidence level, the MOE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MOE from the population value (the figure obtained if the whole population had been enumerated). The 95% MOE is calculated as 1.96 multiplied by the SE:

      \(\large MOE(y) \approx \frac{RSE(y) \times y}{100} \times 1.96\)


      The MOEs in this publication are calculated at the 95% confidence level. This can easily be converted to a 90% confidence level by multiplying the MOE by:

      \(\large\frac{1.645}{1.96}\)


      or to a 99% confidence level by multiplying the MOE by:

      \(\large\frac{2.576}{1.96}\)

      Confidence intervals

      A confidence interval expresses the sampling error as a range in which the population value is expected to lie at a given level of confidence. The confidence interval can easily be constructed from the MOE by taking the estimate plus or minus the MOE of the estimate.
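      A minimal sketch of deriving a 95% MOE from a published RSE, forming the confidence interval, and converting to other confidence levels (the proportion and RSE below are hypothetical):

      # Margin of error and confidence interval from an estimate and its RSE
      # (illustrative figures only).
      estimate = 42.0   # proportion, per cent
      rse = 6.0         # relative standard error, per cent

      se = rse * estimate / 100
      moe_95 = 1.96 * se
      ci_95 = (estimate - moe_95, estimate + moe_95)

      moe_90 = moe_95 * 1.645 / 1.96   # 90% confidence level
      moe_99 = moe_95 * 2.576 / 1.96   # 99% confidence level

      print(f"95% MOE = {moe_95:.1f}; 95% CI = {ci_95[0]:.1f} to {ci_95[1]:.1f}")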

      Significance testing

      When comparing estimates between surveys or between populations within a survey, it is useful to determine whether apparent differences are 'real' differences or simply the product of differences between the survey samples.

      One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates (x and y) and using that to calculate the test statistic using the formula below:

      \(\Large\frac{|x-y|}{S E(x-y)}\)


      where

      \(\large SE(y) \approx \frac{RSE(y) \times y}{100}\)


      If the value of the statistic is greater than 1.96, we can say there is good evidence of a statistically significant difference at 95% confidence levels between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
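      A minimal sketch of this test, deriving each SE from its RSE and comparing the test statistic with 1.96 (the estimates and RSEs below are hypothetical):

      # Significance test for the difference between two estimates
      # (illustrative figures only).
      import math

      def se_from_rse(estimate: float, rse_percent: float) -> float:
          return rse_percent * estimate / 100

      x, rse_x = 38.0, 7.0   # estimate and RSE for population 1
      y, rse_y = 31.0, 8.0   # estimate and RSE for population 2

      se_diff = math.sqrt(se_from_rse(x, rse_x) ** 2 + se_from_rse(y, rse_y) ** 2)
      test_statistic = abs(x - y) / se_diff

      print(f"test statistic = {test_statistic:.2f}; "
            f"significant at the 95% level: {test_statistic > 1.96}")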

      Glossary


      Quality declaration - summary

      Institutional environment

      Relevance

      Timeliness

      Accuracy

      Coherence

      Interpretability

      Accessibility
