Patient Experiences in Australia: Summary of Findings methodology

Reference period: 2018-19 financial year
Released: 12/11/2019

Explanatory notes

Introduction

This publication contains results from the Patient Experience Survey, a topic on the Multipurpose Household Survey (MPHS) conducted throughout Australia from July 2018 to June 2019. The MPHS, undertaken each financial year by the Australian Bureau of Statistics (ABS), is a supplement to the monthly Labour Force Survey (LFS) and is designed to collect statistics for a number of small, self-contained topics.

The survey collected information from people about their experiences with selected aspects of the health system in the 12 months before their interview, including access to, and barriers to accessing, a range of health care services. Respondents were asked about their experiences with medical professionals, the frequency of their visits, waiting times and barriers to accessing care, as well as their self-assessed health status, long-term health conditions and private health insurance. Data were also collected on aspects of communication between patients and health professionals, and on labour force characteristics, education, income and other demographics.

Scope and coverage

The scope of the Patient Experience Survey was restricted to people aged 15 years and over who were usual residents of private dwellings, and excluded:

  • members of the Australian permanent defence forces
  • certain diplomatic personnel of overseas governments, customarily excluded from Census and estimated resident population counts
  • overseas residents in Australia
  • members of non-Australian defence forces (and their dependants)
  • persons living in non-private dwellings such as hotels, university residences, boarding schools, hospitals, nursing homes, homes for people with disabilities, and prisons
  • persons resident in the Indigenous Community Strata (ICS).

The scope for MPHS included households residing in urban, rural, remote and very remote parts of Australia, except the ICS.

In the LFS, rules are applied which aim to ensure that each person in coverage is associated with only one dwelling, and hence has only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more detail.

Data collection

The Patient Experience Survey is one of a number of small, self-contained topics on the Multipurpose Household Survey (MPHS), conducted throughout Australia from July 2018 to June 2019. The MPHS is a supplement to the monthly LFS. In 2018–19, the MPHS topics were:

  • Patient Experiences in Australia
  • Crime Victimisation
  • Barriers and Incentives to Labour Force Participation
  • Retirement and Retirement Intentions
  • Qualifications and Work
  • Income (Personal, Partner's, Household).

Each month, one eighth of the dwellings in the LFS sample were rotated out of the survey and selected for the MPHS. After the LFS had been fully completed for each person in scope and coverage, a usual resident aged 15 years or over was selected at random (based on a computer algorithm) and asked the additional MPHS questions in a personal interview.

In the MPHS, if the randomly selected person was aged 15 to 17 years, permission was sought from a parent or guardian before conducting the interview. If permission was not given, the parent or guardian was asked the questions on behalf of the 15 to 17 year old (proxy interview).

Data were collected using Computer Assisted Interviewing (CAI), whereby responses were recorded directly onto an electronic questionnaire in a notebook computer, with interviews conducted either face-to-face or over the telephone. The majority of interviews were conducted over the telephone.

Sample size

After taking into account sample loss, the response rate for the Patient Experience Survey was 71.8%. In total, information was collected from 28,719 fully responding persons. This includes 477 proxy interviews for people aged 15 to 17 years, where permission was not given by a parent or guardian for a personal interview.

Weighting, benchmarks and estimation

Weighting

Weighting is the process of adjusting results from a sample survey to infer results for the total 'in-scope' population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates the number of persons in the population represented by the sample person.

The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 600, then the person would have an initial weight of 600 (that is, they represent 600 people).
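As an illustration of this step only (the probabilities below are hypothetical, and this is not the ABS weighting system), a minimal Python sketch of deriving initial weights:

```python
# Hypothetical selection probabilities for three responding persons.
selection_probabilities = [1 / 600, 1 / 450, 1 / 600]

# The initial (design) weight is the inverse of the probability of selection.
initial_weights = [1 / p for p in selection_probabilities]
print([round(w) for w in initial_weights])  # [600, 450, 600] -- persons represented by each respondent
```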

Benchmarks

The initial weights were calibrated to align with independent estimates of the population of interest, referred to as 'benchmarks'. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distribution of the population rather than the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over or under-enumeration of particular categories of persons/households which may occur due to either the random nature of sampling or non-response.
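The sketch below illustrates the idea with a single hypothetical benchmark category (sex). It is a simplified post-stratification style adjustment with made-up figures, not the ABS production calibration:

```python
# Illustrative sketch only: calibrate initial weights so that weighted counts
# match hypothetical population benchmarks for one category (sex).
initial_weights = {"person_1": 600.0, "person_2": 450.0, "person_3": 600.0}
sex = {"person_1": "F", "person_2": "M", "person_3": "F"}

# Hypothetical benchmark counts for the in-scope population.
benchmarks = {"F": 1500.0, "M": 500.0}

# Sum of initial weights within each benchmark category.
weighted_counts = {}
for person, w in initial_weights.items():
    weighted_counts[sex[person]] = weighted_counts.get(sex[person], 0.0) + w

# Scale each person's weight so the weighted counts match the benchmarks.
final_weights = {
    person: w * benchmarks[sex[person]] / weighted_counts[sex[person]]
    for person, w in initial_weights.items()
}
print(final_weights)  # weighted female count now sums to 1500, male count to 500
```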

The survey was benchmarked to the Estimated Resident Population (ERP) living in private dwellings in each state and territory at December 2018. People living in Indigenous communities were excluded. These benchmarks are based on the 2016 Census.

While LFS benchmarks are revised every five years to take into account the five-yearly rebasing of the ERP following the latest Census, the benchmarks used for the supplementary surveys and the MPHS (from which the statistics in this publication are taken) are not. Small differences will therefore exist between the civilian population aged 15 years and over reflected in the LFS estimates and in the estimates from this and other supplementary household surveys, as well as over time.

Estimation

Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest.
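For example (hypothetical weights and a hypothetical data item, for illustration only):

```python
# Estimate a population count by summing the calibrated weights of respondents
# who reported the characteristic of interest (hypothetical records).
records = [
    {"weight": 750.0, "saw_gp_last_12_months": True},
    {"weight": 500.0, "saw_gp_last_12_months": False},
    {"weight": 750.0, "saw_gp_last_12_months": True},
]

estimated_count = sum(r["weight"] for r in records if r["saw_gp_last_12_months"])
print(estimated_count)  # 1500.0 -- estimated number of persons with the characteristic
```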

Confidentiality

To minimise the risk of identifying individuals in aggregate statistics, a technique is used to randomly adjust cell values. This technique is called perturbation. Perturbation involves a small random adjustment of the statistics and is considered the most satisfactory technique for avoiding the release of identifiable statistics while maximising the range of information that can be released. These adjustments have a negligible impact on the underlying pattern of the statistics.

After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values to derive a total will not necessarily give the same result as published totals. The introduction of perturbation in publications ensures that these statistics are consistent with statistics released via services such as TableBuilder.

Perturbation has been applied since 2013–14. Data from previous cycles (2009 to 2012–13) have not been perturbed.

Reliability of estimates

All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error. For more information refer to the Technical Note.

Data quality

Information recorded in this survey is 'as reported' by respondents, and may differ from that which might be obtained from other sources or via other methodologies. This factor should be considered when interpreting the estimates in this publication.

Information was collected on respondents' perception of their health status and experiences with services. Perceptions are influenced by a number of factors and can change quickly. Care should therefore be taken when analysing or interpreting the data.

The definition of 'need' (in questions where respondents were asked whether they needed to use a particular health service) was left to the respondents' interpretation.

For some questions which called for personal opinions, such as self-assessed health or whether waiting times were felt to be unacceptable, responses were not collected in proxy interviews.

Data comparability

Comparability of time series

When comparing data from different cycles of the survey, users are advised to consult the questionnaires (available from the Data downloads section), check whether question wording or sequencing has changed, and consider whether this may have had an impact on the way questions were answered by respondents.

The data item 'Whether seen an other health professional for own health in the last 12 months' was collected in 2018–19, but not in 2017–18.

All data items shown in time series tables are comparable between the survey cycles presented.

Comparability with other ABS surveys

Caution should be exercised when comparing results across ABS surveys and with administrative by-product data that address access to and use of health services. Estimates from the Patient Experience Survey may differ from those obtained from other surveys (such as the National Aboriginal and Torres Strait Islander Health Survey, National Aboriginal and Torres Strait Islander Social Survey, National Health Survey, General Social Survey and Survey of Disability, Ageing and Carers) due to differences in survey mode, methodology and questionnaire design.

Comparability to monthly LFS statistics

Since the Patient Experience Survey is conducted as a supplement to the LFS, data items collected in the LFS are also available in this publication. However, there are some important differences between the two surveys. The LFS had a response rate of over 90% compared to the MPHS response rate of 71.8%. The scopes of the Patient Experience Survey and the LFS also differ (refer to the Scope and coverage section above). Due to the differences between the samples, data from the Patient Experience Survey and the LFS are weighted separately. Differences may therefore be found in the estimates for those data items collected in the LFS and published as part of the Patient Experience Survey.

Classifications

Geography

Australian geographic data are classified according to the Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas (cat. no. 1270.0.55.001). Remoteness areas are classified according to the Australian Statistical Geography Standard (ASGS): Volume 5 - Remoteness Structure (cat. no. 1270.0.55.005).

Country of birth

Country of birth data are classified according to the Standard Australian Classification of Countries (SACC) (cat. no. 1269.0).

Industry

Industry data are classified according to the Australian and New Zealand Standard Industrial Classification (ANZSIC), (Revision 2.0) (cat. no. 1292.0).

Occupation

Occupation data are classified according to the Australian and New Zealand Standard Classification of Occupations (ANZSCO) (cat. no. 1220.0).

Education

Education data are classified according to the Australian Standard Classification of Education (ASCED) (cat. no. 1272.0). The ASCED is a national standard classification which can be applied to all sectors of the Australian education system including schools, vocational education and training and higher education. The ASCED comprises two classifications: Level of Education and Field of Education.

Language

Language data are classified according to the Australian Standard Classification of Languages (ASCL) (cat. no. 1267.0).

Socio-Economic Indexes for Areas (SEIFA)

This survey uses the 2016 Socio-economic Indexes for Areas (SEIFA).

SEIFA is a suite of four summary measures that have been created from 2016 Census information. Each index summarises a different aspect of the socio-economic conditions of people living in an area. The indexes provide more general measures of socio-economic status than is given by measures such as income or unemployment alone.

For each index, every geographic area in Australia is given a SEIFA number which shows how disadvantaged that area is compared with other areas in Australia.

The index used in the Patient Experience publication is the Index of Relative Socio-economic Disadvantage, derived from Census variables related to disadvantage such as low income, low educational attainment, unemployment, jobs in relatively unskilled occupations and dwellings without motor vehicles.

SEIFA uses a broad definition of relative socio-economic disadvantage in terms of people's access to material and social resources, and their ability to participate in society. While SEIFA represents an average of all people living in an area, it does not represent the individual situation of each person. Larger areas are more likely to have greater diversity of people and households.


Products and services

Data Cubes containing all tables for this publication in Excel spreadsheet format are available from the Data downloads section. The spreadsheets present tables of estimates and proportions, and their corresponding relative standard errors (RSEs) and/or Margins of Error (MoEs).

As well as the statistics included in this and related publications, the ABS may have other relevant data available on request. Subject to confidentiality and sampling variability constraints, tables can be tailored to individual requirements. A list of data items from this survey is available from the Data downloads section. All enquiries should be made to the National Information and Referral Service on 1300 135 070, or by email to client.services@abs.gov.au.

Acknowledgements

ABS surveys draw extensively on information provided by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated and without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

Privacy

The ABS Privacy Policy outlines how the ABS will handle any personal information that you provide to the ABS.

Next survey

The next Patient Experience Survey will be collected from July 2019 to June 2020.

Technical note - data quality

Reliability of the estimates

The estimates in this publication are based on information obtained from a sample survey. Any data collection may encounter factors, known as non-sampling error, which can impact on the reliability of the resulting statistics. In addition, estimates based on sample surveys are subject to sampling variability; that is, they may differ from those that would have been produced had all persons in the population been included in the survey. This is known as sampling error.

Non-sampling error

Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers and errors in coding and processing data. Every effort is made to reduce non-sampling error by careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing. It is not possible to quantify the non-sampling error.

Sampling error

One measure of sampling error is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate. The RSE is a useful measure in that it provides an immediate indication of the percentage error likely to have occurred due to sampling and therefore avoids the need to also refer to the size of the estimate.

\(RSE\% = \left(\frac{SE}{\text{estimate}}\right) \times 100\)
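A worked example of this formula, using hypothetical figures:

```python
# Hypothetical estimate and standard error.
estimate = 10_000   # estimated number of persons
se = 800            # standard error of that estimate

rse_percent = se / estimate * 100
print(rse_percent)  # 8.0 -- well below the 25% threshold discussed below
```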

Only estimates (numbers or percentages) with RSEs less than 25% are considered sufficiently reliable for most analytical purposes. However, estimates with larger RSEs have been included. Estimates with an RSE in the range 25% to 50% should be used with caution while estimates with RSEs greater than 50% are considered too unreliable for general use. All cells in the Excel spreadsheets with RSEs greater than 25% have been annotated and footnoted.

Another measure of sampling error is the Margin of Error (MOE), which describes the distance from the population value that the sample estimate is likely to be within, and is specified at a given level of confidence. Confidence levels typically used are 90%, 95% and 99%. For example, at the 95% confidence level the MOE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MOE from the population value (the figure obtained if all dwellings had been enumerated). The 95% MOE is calculated as 1.96 multiplied by the SE.

The 95% MOE can also be calculated from the RSE by:

\(MOE(y) \approx \frac{RSE(y) \times y}{100} \times 1.96\)
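For example, using the hypothetical figures above (an estimate of 10,000 persons with an RSE of 8%):

```python
# Hypothetical estimate and RSE (per cent).
estimate = 10_000
rse_percent = 8.0

se = rse_percent * estimate / 100   # recover the SE from the RSE
moe_95 = 1.96 * se                  # margin of error at the 95% confidence level
print(moe_95)                       # 1568.0
```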

The MOEs in this publication are calculated at the 95% confidence level. This can easily be converted to a 90% confidence level by multiplying the MOE by:

\(\Large{\frac{1.645}{1.96}}\)

or to a 99% confidence level by multiplying by a factor of:

\(\Large{\frac{2.576}{1.96}}\)

A confidence interval expresses the sampling error as a range in which the population value is expected to lie at a given level of confidence. The confidence interval can easily be constructed from the MOE of the same level of confidence by taking the estimate plus or minus the MOE of the estimate.
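Continuing the same hypothetical example, the conversion factors above and the confidence interval construction look like this:

```python
# Hypothetical estimate and its published 95% MOE.
estimate = 10_000
moe_95 = 1568.0

moe_90 = moe_95 * 1.645 / 1.96   # rescale to the 90% confidence level
moe_99 = moe_95 * 2.576 / 1.96   # rescale to the 99% confidence level

ci_95 = (estimate - moe_95, estimate + moe_95)
print(round(moe_90), round(moe_99), ci_95)  # 1316 2061 (8432.0, 11568.0)
```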

Estimates of proportions with an MOE more than 10% are annotated to indicate they are subject to high sample variability and particular consideration should be given to the MOE when using these estimates. Depending on how the estimate is to be used, an MOE greater than 10% may be considered too large to inform decisions. In addition, estimates with a corresponding standard 95% confidence interval that includes 0% or 100% are annotated to indicate they are usually considered unreliable for most purposes.

The Excel spreadsheets in the Data downloads section contain all the tables produced for this release and the calculated RSEs and/or MOEs for each of the estimates.

Calculations of standard errors

Standard errors can be calculated using the estimates (counts or percentages) and the corresponding RSEs. See What is a Standard Error and Relative Standard Error, Reliability of estimates for Labour Force data for more details.

Standard errors of proportions and estimates

Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when x is a subset of y:

\(RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^{2} - [RSE(y)]^{2}}\)
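A worked example with hypothetical RSEs, where x is a subset of y:

```python
import math

# Hypothetical RSEs (per cent) for the numerator and denominator estimates.
rse_x = 12.0
rse_y = 5.0

rse_proportion = math.sqrt(rse_x**2 - rse_y**2)
print(round(rse_proportion, 1))  # 10.9 -- approximate RSE of the proportion x/y
```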

Comparisons of estimates

The difference between two survey estimates (counts or percentages) can also be calculated from published estimates. Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

\(SE(x-y) \approx \sqrt{[SE(x)]^{2} + [SE(y)]^{2}}\)

While this formula will only be exact for differences between separate and uncorrelated characteristics or sub populations, it provides a good approximation for the differences likely to be of interest in this publication.
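For example, with hypothetical standard errors for two estimates:

```python
import math

# Hypothetical standard errors of two estimates x and y.
se_x = 800.0
se_y = 600.0

se_diff = math.sqrt(se_x**2 + se_y**2)
print(se_diff)  # 1000.0 -- approximate SE of the difference (x - y)
```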

Significance testing

A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Comparisons of estimates section. This standard error is then used to calculate the following test statistic:

\(\frac{|x-y|}{SE(x-y)}\)

where:

\(SE(y) = \frac{RSE(y) \times y}{100}\)

If the value of this test statistic is greater than 1.96 then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic.
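Putting the pieces together with hypothetical figures:

```python
import math

# Hypothetical estimates and RSEs (per cent) for two groups.
x, rse_x = 10_000.0, 8.0
y, rse_y = 12_500.0, 6.0

# Recover the SEs from the RSEs, then the SE of the difference.
se_x = rse_x * x / 100
se_y = rse_y * y / 100
se_diff = math.sqrt(se_x**2 + se_y**2)

test_statistic = abs(x - y) / se_diff
print(round(test_statistic, 2), test_statistic > 1.96)  # 2.28 True -> significant at the 95% level
```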

