RELIABILITY OF ESTIMATES
Although care has been taken to ensure that the results of this survey are as accurate as possible, certain factors affect the reliability of the results and no adequate adjustments can be made for them. These are known as sampling error and non-sampling error. These factors, which are discussed below, should be kept in mind when interpreting the results of the survey.
Comparisons between estimates from surveys conducted in different periods, for example, comparison of 2006 TUS estimates with 1997 TUS estimates, are also subject to the impact of any changes made to the way the survey was conducted (see Chapter 5 'Changes from previous surveys').
Sampling error
Sampling error is a measure of the variability that occurs by chance because a sample, rather than the entire population, is surveyed. Since the estimates in the Time Use Survey publication are based on information obtained from occupants of a sample of dwellings, they are subject to sampling variability. That is, they may differ from the figures that would have been produced if all dwellings had been included in the survey. One measure of sampling variability is the standard error (SE). There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings in the population described had been included in the survey, and about nineteen chances in twenty that the difference will be less than two SEs.
Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate:

RSE%(x) = (SE(x) / x) × 100
The RSE is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate.
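As an illustration, the short Python sketch below (using hypothetical figures, not results from the survey) computes an RSE and an approximate 95% interval from an estimate and its SE.

```python
def rse(estimate: float, se: float) -> float:
    """Relative standard error: the SE expressed as a percentage of the estimate."""
    return se / estimate * 100

# Hypothetical figures, not taken from the survey.
estimate = 500_000
se = 15_000

print(f"RSE: {rse(estimate, se):.1f}%")  # RSE: 3.0%
# About nineteen chances in twenty that the full-enumeration figure
# lies within two SEs of the sample estimate:
print(f"Approximate 95% interval: {estimate - 2 * se:,} to {estimate + 2 * se:,}")
# Approximate 95% interval: 470,000 to 530,000
```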
RSEs for estimates from the 2006 Time Use Survey are published for the first time in 'direct' form. Previously, a statistical model was produced that related the size of estimates to their corresponding RSEs, and this information was displayed in an 'SE table'. For 2006, RSEs for TUS estimates have been calculated and published individually for each estimate. The group jackknife method of variance estimation was used for this process, which involved the calculation of 60 'replicate' estimates based on 60 different subsamples of the original sample. The variability of estimates obtained from these subsamples is used to estimate the sampling variability surrounding the main estimate.
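The exact replicate weighting scheme is not described here, but a minimal Python sketch of the delete-a-group jackknife variance formula, assuming 60 replicate estimates and the common (G - 1)/G scaling factor, might look like the following.

```python
import math

def group_jackknife_se(full_estimate: float, replicate_estimates: list[float]) -> float:
    """SE from a delete-a-group jackknife: the spread of the replicate
    estimates around the full-sample estimate, scaled by (G - 1) / G.
    The scaling factor is a common choice, assumed here for illustration."""
    g = len(replicate_estimates)  # 60 replicate groups in the 2006 TUS
    variance = (g - 1) / g * sum((r - full_estimate) ** 2 for r in replicate_estimates)
    return math.sqrt(variance)
```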
In the tables in this publication, only estimates (numbers, percentages, participation rates and means) with RSEs less than 25% are considered sufficiently reliable for most purposes. However, estimates with large RSEs (between 25% and 50%) have been included and are marked with a cell comment to indicate they have a relative standard error of 25% to 50% and should be used with caution. Estimates with RSEs of 50% or more are marked with a cell comment to indicate that they are subject to sampling variability too high for most practical purposes.
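These thresholds can be expressed directly in code; the small, purely illustrative helper below reflects the cut-offs quoted above.

```python
def reliability_flag(rse_percent: float) -> str:
    """Classify an estimate according to the RSE thresholds used in this publication."""
    if rse_percent < 25:
        return "sufficiently reliable for most purposes"
    elif rse_percent < 50:
        return "use with caution (RSE of 25% to 50%)"
    return "too unreliable for most practical purposes (RSE of 50% or more)"
```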
Standard errors of proportions and percentages
Proportions and percentages formed from the ratio of two estimates are also subject to sampling error. The size of the error depends on the accuracy of both the numerator and the denominator. The RSE of a proportion or percentage can be approximated using the formula:

RSE(x/y) ≈ √([RSE(x)]² - [RSE(y)]²)
This formula is only valid when x is a subset of y.
The SE of an estimated percentage or rate, computed by using sample data for both numerator and denominator, depends on the size of both numerator and denominator. However, the formula above shows that the RSE of the estimated percentage or rate will generally be lower than the RSE of the estimate of the numerator.
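A brief sketch of this approximation (with hypothetical RSEs; the function name is illustrative) also shows why the RSE of the percentage is generally lower than the RSE of the numerator.

```python
import math

def rse_of_proportion(rse_x: float, rse_y: float) -> float:
    """Approximate RSE (in percent) of the ratio x/y, valid when x is a subset of y."""
    return math.sqrt(rse_x ** 2 - rse_y ** 2)

# Hypothetical RSEs: 10% for the numerator, 4% for the denominator.
print(f"{rse_of_proportion(10, 4):.1f}%")  # 9.2%, below the numerator's 10%
```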
Standard errors of differences
The difference between two survey estimates (of numbers or percentages) is itself an estimate and is therefore subject to sampling variability. The SE of the difference between two survey estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates can be calculated using the formula:

SE(x - y) ≈ √([SE(x)]² + [SE(y)]²)
While this formula will only be exact for differences between separate and uncorrelated (unrelated) characteristics or sub-populations, it is expected to provide a good approximation for all of the differences likely to be of interest in this publication.
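A minimal sketch of this approximation, with hypothetical SEs:

```python
import math

def se_of_difference(se_x: float, se_y: float) -> float:
    """Approximate SE of (x - y); exact only when x and y are uncorrelated."""
    return math.sqrt(se_x ** 2 + se_y ** 2)

# Hypothetical SEs of 15,000 and 12,000:
print(round(se_of_difference(15_000, 12_000)))  # 19209
```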
Testing for statistically significant differences
For comparing estimates between surveys or between populations within a survey, it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between the two estimates (x and y) and using it to calculate the test statistic:

(x - y) / SE(x - y)
If the value of the test statistic is greater than 1.96 in absolute value, then we may say, with 95% confidence, that there is a statistically significant difference between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
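Putting the two steps together, a purely illustrative sketch of the test, assuming uncorrelated estimates and hypothetical figures, might be:

```python
import math

def is_significant(x: float, se_x: float, y: float, se_y: float) -> bool:
    """Two-sided test at the 95% level of confidence, assuming x and y are uncorrelated."""
    test_statistic = (x - y) / math.sqrt(se_x ** 2 + se_y ** 2)
    return abs(test_statistic) > 1.96

# Hypothetical estimates: 520,000 (SE 15,000) versus 470,000 (SE 12,000).
print(is_significant(520_000, 15_000, 470_000, 12_000))  # True: |2.60| > 1.96
```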
Non-sampling error
Lack of precision due to sampling variability should not be confused with inaccuracies that may occur for other reasons such as errors in response and recording. Inaccuracies of this type are referred to as non-sampling error. This type of error is not specific to sample surveys and can occur in a census enumeration. The major sources are:
- errors related to scope and coverage;
- response errors such as incorrect interpretation or wording of questions;
- interviewer bias;
- non-response bias; and
- processing errors.
Errors related to scope and coverage
Some dwellings may have been inadvertently included or excluded because, for example, the distinctions between private and special dwellings were unclear. Some persons may have been wrongly included or excluded because of difficulties applying the coverage rules concerning, for instance, household visitors, or scope rules concerning persons excluded from the survey. Particular attention was paid to question design and interviewer training to ensure such cases were kept to a minimum.
Response errors
Response errors may have arisen from three main sources:
- deficiencies in questionnaire design and methodology;
- deficiencies in interviewing technique; and
- inaccurate reporting by respondents.
Errors may be caused by ambiguous or misleading questions or, in the case of diaries, by ambiguous column headings or example pages, by inadequate or inconsistent definitions of terminology, or by poor questionnaire sequence guides causing some questions to be missed. The questionnaire and diary were tested thoroughly before their format was finalised, to overcome problems in content, design and layout.
Lack of uniformity in interviewing also results in non-sampling error. Thorough training programs, a standard Interviewer's Manual, the use of experienced interviewers and checking of interviewers' work were methods employed to achieve and maintain uniform interviewing practices and a high level of accuracy in recording answers on the survey questionnaire.
A respondent's perception of the personal characteristics of the interviewer can be a source of error. The age, sex, appearance or manner of the interviewer may influence the answers obtained.
In addition to the response errors described above, inaccurate reporting by respondents may occur due to misunderstanding of the question, inability to recall the required information and deliberately incorrect answering to protect personal privacy.
Non-response bias
One of the main sources of non-sampling error is non-response, which occurs when persons resident in households selected for the survey cannot be contacted or, if contacted, are unable or unwilling to participate. Non-response can affect the reliability of results and can introduce bias. The magnitude of any bias depends upon the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not. For the 2006 TUS, some of the non-response resulted from the logistical difficulty of aligning interview times with allocated diary days rather than from the unwillingness of selected household members to participate in the survey.
As it was not possible to quantify accurately the nature and extent of the differences between respondents and non-respondents in the survey, every effort was made to reduce the level of non-response. The estimation procedures used make some adjustment for non-response.
Processing errors
Processing errors may occur at any stage between the initial collection of the data and the final compilation of statistics. There are four stages at which errors may occur:
- coding, where errors may have occurred during the coding of various items by office processors;
- data transfer, where errors may have occurred during the transfer of data from the questionnaires to the data file;
- editing, where computer editing programs may have failed to detect errors which reasonably could have been corrected; and
- manipulation of data, where inappropriate edit checks, inaccurate weights in the estimation procedure and incorrect derivation of new items from raw survey data can also introduce errors into the results.
Steps to minimise errors
A number of steps were taken to minimise errors at various stages of processing. These included:
- thorough training of staff;
- providing detailed coding instructions and regular checking of work performed;
- computer edits designed to detect reporting or recording errors;
- validation of the data file using tabulations to check the distribution of persons for different characteristics; and
- investigation of unusual values on the data file.