Participation in Sport and Physical Recreation, Australia methodology

Reference period: 2013–14 financial year
Released: 18/02/2015

Explanatory notes

Introduction

1 The statistics presented in this publication were compiled from data collected in the Australian Bureau of Statistics 2013–14 Multipurpose Household Survey (MPHS). The MPHS is conducted each financial year throughout Australia from July to June as a supplement to the ABS' monthly Labour Force Survey (LFS) and is designed to provide annual statistics for a number of small, self-contained topics.

2 In 2013–14 the topics were: 

  • Attendance at selected cultural venues and events 
  • Participation in selected cultural activities 
  • Participation in sport and physical recreation 
  • Work related injuries 
  • Crime victimisation 
  • Patient experience.
     

3 For all topics, general demographic information such as age, sex, labour force characteristics, education and income is also available. 

4 This publication covers the Participation in Sport and Physical Recreation topic and presents details about the number and characteristics of people aged 15 years and over who participated in a range of sports and physical recreational activities.

5 Data for other 2013–14 topics will be released in separate publications. Survey microdata from selected 2013–14 topics will also be released through the TableBuilder product. For more details, refer to the TableBuilder information, Microdata: Participation in Sport and Physical Recreation, Australia (cat. no. 4177.0.55.002).

Scope

6 The scope of the Labour Force Survey (LFS) is restricted to people aged 15 years and over and excludes the following:

  • members of the permanent defence forces
  • certain diplomatic personnel of overseas governments usually excluded from census and estimated resident populations
  • overseas residents in Australia
  • members of non-Australian defence forces (and their dependants).
     

7 In addition, the 2013–14 MPHS excluded the following from scope:

  • households in Indigenous communities
  • people living in non-private dwellings (e.g. hotels, university residences, students at boarding schools, patients in hospitals, inmates of prisons and residents of other institutions (e.g. retirement homes, homes for persons with disabilities)).

Coverage

8 In the LFS, coverage rules are applied which aim to ensure that each person is associated with only one dwelling and hence has only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more details.

Data collection

9 The 2013–14 MPHS was conducted as a supplement to the monthly LFS. Each month, one eighth of the dwellings in the LFS sample were rotated out of the survey, and all of these dwellings were then selected for the MPHS. In these dwellings, after the LFS had been fully completed for each person in scope and coverage, a person aged 15 years or over was selected at random (based on a computer algorithm) and asked the MPHS questions in a personal interview. If the randomly selected person was aged 15 to 17 years, permission was sought from a parent or guardian before conducting the interview. If permission was not given, the parent or guardian was asked the questions on behalf of the 15 to 17 year old. Data were collected using Computer Assisted Interviewing (CAI), whereby responses were recorded directly onto an electronic questionnaire on a notebook computer, usually during a telephone interview.
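The random person selection step can be illustrated with a minimal sketch: one person aged 15 years or over is chosen at random from the in-scope members of a responding household. The household data and the selection routine below are hypothetical; the ABS uses its own computer algorithm within its collection systems.

```python
import random

# Illustrative only: pick one in-scope person (aged 15+) at random from a household.
# The household members are made up for this example.
household = [
    {"name": "Person A", "age": 42},
    {"name": "Person B", "age": 39},
    {"name": "Person C", "age": 16},
    {"name": "Person D", "age": 9},
]
in_scope = [person for person in household if person["age"] >= 15]
selected = random.choice(in_scope)
print(selected["name"])
```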

10 For the 2013–14 MPHS, the sample was accumulated over a twelve month period from July 2013 to June 2014.

11 The publication Labour Force, Australia (cat. no. 6202.0) contains information about survey design, sample redesign, scope, coverage and population benchmarks relevant to the monthly LFS, which also applies to supplementary surveys. It also contains definitions of demographic and labour force characteristics, and information about telephone interviewing relevant to both the monthly LFS and supplementary surveys.

Sample size

12 The initial sample for the Participation in Sport and Physical Recreation topic was around 24,400 private dwellings, from which one person was randomly selected. Of the 20,800 private dwellings that remained in the survey after sample loss (e.g. vacant or derelict dwellings, dwellings under construction and dwellings selected in the survey that had no residents in scope for the LFS), 15,988 or 76.8% fully responded to the questions on participation in sport and physical recreation.

Weighting, benchmarking and estimation

Weighting

13 Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each covered sample unit, which for the MPHS can be either a person or a household. The weight is a value which indicates how many population units are represented by the sample unit.

14 The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 600, then the person would have an initial weight of 600 (i.e. they represent 600 people).
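A minimal sketch of this step, using the hypothetical 1-in-600 selection probability from the example above:

```python
# Sketch: the initial (design) weight is the inverse of the selection probability.
# The 1-in-600 probability is the hypothetical example given in the text.
selection_probability = 1 / 600
initial_weight = 1 / selection_probability
print(initial_weight)   # 600.0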

Benchmarking

15 The initial weights were then calibrated to align with independent estimates of the population of interest, referred to as 'benchmarks', in designated categories of sex by age by area of usual residence. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distribution of the population rather than the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over or under-enumeration of particular categories of persons/households which may occur due to either the random nature of sampling or non-response.

16 For person estimates, the MPHS was benchmarked to the projected population in each state and territory, at March 2014. The MPHS estimates do not (and are not intended to) match estimates for the total Australian population obtained from other sources.
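The calibration step can be illustrated with a simple post-stratification sketch: within each benchmark cell, initial weights are scaled so that their weighted total matches the benchmark population. The cells, weights and benchmark figures below are hypothetical, and this simple scaling stands in for the more general calibration method the ABS applies.

```python
import pandas as pd

# Illustrative post-stratification to hypothetical sex-by-age-by-region benchmarks.
sample = pd.DataFrame({
    "cell": ["F_15-24_NSW", "F_15-24_NSW", "M_15-24_NSW", "M_15-24_NSW"],
    "initial_weight": [580.0, 620.0, 540.0, 600.0],
})
benchmarks = {"F_15-24_NSW": 1300.0, "M_15-24_NSW": 1050.0}  # made-up populations

# Scale initial weights within each cell so they sum to the benchmark population.
cell_totals = sample.groupby("cell")["initial_weight"].transform("sum")
sample["calibrated_weight"] = (
    sample["initial_weight"] * sample["cell"].map(benchmarks) / cell_totals
)
print(sample)
```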

Estimation

17 Survey estimates of counts of persons or households are obtained by summing the weights of persons or households with the characteristic of interest. Estimates of non-person counts (e.g. days spent attending an event) are obtained by multiplying the characteristic of interest by the weight of the reporting person/household and aggregating.
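A minimal sketch of the estimation step, using made-up weights and responses:

```python
import numpy as np

# Hypothetical person weights and responses, for illustration only.
weights = np.array([600.0, 450.0, 520.0, 610.0])   # person weights
participated = np.array([1, 0, 1, 1])              # 1 = participated in sport
times_played = np.array([24, 0, 52, 10])           # times participated (a non-person count)

participant_estimate = np.sum(weights * participated)   # weighted count of participants
times_estimate = np.sum(weights * times_played)         # weighted total times participated
print(participant_estimate, times_estimate)
```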

18 To minimise the risk of identifying individuals in aggregate statistics, a technique is used to randomly adjust cell values. This technique is called perturbation. Perturbation involves small random adjustment of the statistics and is considered the most satisfactory technique for avoiding the release of identifiable statistics while maximising the range of information that can be released. These adjustments have a negligible impact on the underlying pattern of the statistics. After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values to derive a total will not necessarily give the same result as published totals. The introduction of perturbation in publications ensures that these statistics are consistent with statistics released via services such as TableBuilder.

Reliability of estimates

19 All sample surveys are subject to error which can be broadly categorised as either:

  • sampling error
  • non-sampling error.

Sampling error

20 Sampling error is the difference between the published estimates, derived from a sample of persons, and the value that would have been produced if the total population (as defined for the scope of the survey) had been included in the survey. For more information refer to the Technical Note.

Non-sampling error

21 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers and errors in coding and processing data. Every effort is made to reduce non-sampling error by careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing.

Interpretation of results

22 This publication presents details of persons who participated in a sport or physical recreation activity at least once during the 12 months prior to interview in 2013–14. The term 'participant' is defined as a player, competitor or person who takes part in some other physically active role. Information on involvement by people who participated solely as coaches, umpires or club officials is excluded from the participant data but included in the data on involvement in organised sport and physical activity.

23 Information on the frequency of participation was collected by asking respondents how many times they participated during the year, for each activity in which they participated.

24 No information was collected on the intensity with which the sport or physical activity was undertaken. For example, a cycling participant may have undertaken this activity at a leisurely pace or vigorously.

25 Activities such as gardening, housework, manual labouring and other forms of occupational physical activity were excluded from the data.

Data comparability

26 The ABS has previously collected data on participation in sport and physical recreation in various surveys from 1993 to 2000 using the Population Survey Monitor, in 2002, 2006 and 2010 using the General Social Survey, in 2001, 2004, 2007 and 2010 using the Involvement in Organised Sport and Physical Activity survey, and in 2005–06, 2009–10 and 2011–12 using the MPHS. Comparisons can only be made with the 2005–06, 2009–10 and 2011–12 surveys because of methodological differences with the other surveys.

Comparability with the 2005–06, 2009–10 and 2011–12 Participation in Sport and Physical Recreation Survey

27 While the 2013–14 Participation in Sport and Physical Recreation Survey is generally comparable with the 2005–06, 2009–10 and 2011–12 iterations, there are some differences. The 2005–06 survey provided information on the motivators and constraints of non-participants and low-level participants. This information was replaced in 2009–10 and 2011–12 with questions on the types of facilities used for sport and physical activities. However, the types of facilities asked about in 2011–12 differ from the facilities covered in 2009–10, and therefore these estimates are not comparable.

28 Care should be taken when comparing data for the Northern Territory (NT) across the surveys, as sample sizes limit the reliability of the estimates particularly at a detailed level.

29 There have been changes to the definitions of several sports and physical recreation activities. Estimates for Dancing/Ballet and Gymnastics should not be compared with previous surveys. The following sports and physical recreation activities have been separately identified since the 2011–12 survey: Aerobics; Fitness/Gym; Football sports; Scuba diving/Snorkelling; and Weightlifting/Bodybuilding. These activities should not be compared with previous surveys. For further information, see the Glossary.

Comparability with monthly LFS statistics

30 Due to differences in the scope and sample size of the MPHS and that of the LFS, the estimation procedure may lead to some small variations between labour force estimates from this survey and those from the LFS.

Future surveys

31 The ABS will conduct the MPHS again during the 2014–15 financial year. The 2014–15 MPHS topics are:

  • Education and Household income (core)
  • Household use of IT
  • Barriers and incentives to labour force participation
  • Retirement and retirement intentions
  • Patient Experience
  • Crime Victimisation
  • Qualifications and work outcomes.

Acknowledgement

32 ABS surveys draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated. Without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

Related publications

33 Current publications and other products released by the ABS are available from the ABS website. The ABS also issues a daily upcoming release advice on the website that details products to be released in the week ahead.

Technical note - data quality

Reliability of the estimates

1 The estimates in this publication are based on information obtained from a sample survey. Any data collection may encounter factors, known as non-sampling error, which can impact on the reliability of the resulting statistics. In addition, estimates based on sample surveys are subject to sampling variability. That is, the estimates may differ from those that would have been produced had all persons in the population been included in the survey.

Non-sampling error

2 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers and errors in coding and processing data. Every effort is made to reduce non-sampling error by careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing.

Sampling error

3 One measure of the likely difference is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

4 Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate.

\(RSE\% = \left(\frac{SE}{\text{estimate}}\right) \times 100\)

5 RSEs for count estimates have been calculated using the Jackknife method of variance estimation. This involves the calculation of 30 'replicate' estimates based on 30 different subsamples of the obtained sample. The variability of estimates obtained from these subsamples is used to estimate the sample variability surrounding the count estimate.
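The following sketch illustrates a delete-a-group Jackknife with 30 replicate groups. The data, the group assignment and the simple re-scaling of the remaining weights are all hypothetical; the ABS forms its replicate estimates within its own sample design, so this only conveys the general idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
weights = rng.uniform(400, 800, n)        # final survey weights (made up)
participated = rng.random(n) < 0.6        # characteristic of interest (made up)
groups = rng.integers(0, 30, n)           # assign each unit to one of 30 replicate groups

full_estimate = np.sum(weights * participated)

# Form 30 replicate estimates, each omitting one group and re-scaling the rest.
replicates = []
for g in range(30):
    keep = groups != g
    scale = weights.sum() / weights[keep].sum()
    replicates.append(np.sum(weights[keep] * scale * participated[keep]))
replicates = np.array(replicates)

# Jackknife variance: (G - 1)/G times the sum of squared deviations from the full estimate.
variance = (30 - 1) / 30 * np.sum((replicates - full_estimate) ** 2)
se = np.sqrt(variance)
rse_pct = 100 * se / full_estimate
print(f"estimate={full_estimate:,.0f}  SE={se:,.0f}  RSE={rse_pct:.1f}%")
```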

6 The Excel spreadsheets in the Data downloads section contain all the tables produced for this release and the calculated RSEs for each of the estimates.

7 Only estimates (numbers or percentages) with RSEs less than 25% are considered sufficiently reliable for most analytical purposes. However, estimates with larger RSEs have been included. Estimates with an RSE in the range 25% to 50% should be used with caution while estimates with RSEs greater than 50% are considered too unreliable for general use. All cells in the Excel spreadsheets with RSEs greater than 25% contain a comment indicating the size of the RSE. These cells can be identified by a red indicator in the corner of the cell. The comment appears when the mouse pointer hovers over the cell.

Calculation of standard error

8 Standard errors can be calculated using the estimates (counts or percentages) and the corresponding RSEs. See What is a Standard Error and Relative Standard Error, Reliability of estimates for Labour Force data for more details.
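A minimal sketch of this calculation, with hypothetical published figures:

```python
# Recover the SE from a published estimate and its RSE (illustrative figures only).
estimate = 1_200_000      # published count
rse_pct = 4.5             # published RSE, per cent
se = estimate * rse_pct / 100
print(se)                 # 54000.0
```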

Proportions and percentages

9 Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when x is a subset of y:

\(RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^{2}-[RSE(y)]^{2}}\)
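A minimal sketch of the approximation, with hypothetical RSE values:

```python
import math

# RSE(x/y) ≈ sqrt(RSE(x)^2 - RSE(y)^2), valid only when x is a subset of y.
rse_x, rse_y = 8.0, 3.0   # per cent (illustrative)
rse_proportion = math.sqrt(rse_x ** 2 - rse_y ** 2)
print(round(rse_proportion, 1))   # 7.4
```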

Differences

10 The difference between two survey estimates (counts or percentages) can also be calculated from published estimates. Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

\(SE(x-y) \approx \sqrt{[SE(x)]^{2}+[SE(y)]^{2}}\)
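A minimal sketch of the formula, with hypothetical SEs:

```python
import math

# SE(x - y) ≈ sqrt(SE(x)^2 + SE(y)^2), using illustrative standard errors.
se_x, se_y = 30_000, 25_000
se_diff = math.sqrt(se_x ** 2 + se_y ** 2)
print(round(se_diff))   # 39051
```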

11 While this formula will only be exact for differences between separate and uncorrelated characteristics or sub populations, it provides a good approximation for the differences likely to be of interest in this publication.

Significance testing

12 A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Differences section. This standard error is then used to calculate the following test statistic:

\(\left(\frac{x-y}{SE(x-y)}\right)\)

13 If the value of this test statistic is greater than 1.96 then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic.
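A minimal sketch of the test, using hypothetical estimates and standard errors:

```python
import math

# Significance test described above, with illustrative figures.
x, y = 535_000, 455_000           # two survey estimates
se_x, se_y = 30_000, 25_000       # their standard errors
se_diff = math.sqrt(se_x ** 2 + se_y ** 2)
test_statistic = abs(x - y) / se_diff
print(test_statistic > 1.96)      # True: significant at the 95% level of confidence
```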

Glossary


Quality declaration - summary

Institutional environment

Relevance

Timeliness

Accuracy

Coherence

Interpretability

Accessibility

Abbreviations

