ACCURACY
The fourth dimension of quality in the ABS DQF is Accuracy. Accuracy refers to the degree to which the data correctly describe the phenomenon they were designed to measure. It is an important component of quality because it reflects how well the data portray reality, which has clear implications for how useful and meaningful the data will be for interpretation or further analysis. When using administrative data in particular, it is important to remember that statistical outputs are generally not the primary reason the data were collected.
Accuracy should be assessed in terms of the major sources of error that can cause inaccuracy. Any factors that could affect the validity of the information for users should be described in quality statements.
The dimension of Accuracy can be evaluated by considering a number of key aspects:
- Coverage error: this occurs when a unit is incorrectly excluded from or included in the collection, or is duplicated in it (e.g., a field interviewer fails to interview a set of households, or some people within a household). Coverage of the statistical measures can be assessed by comparing the population included in the data collection with the target population (a simple coverage check is sketched after this list).
- Sampling error: where sampling is used, the impact of sampling error can be assessed using information about the total sample size and the sample size at key output levels (e.g., the number of sample units in a particular geographical area), the sampling error of the key measures, and the extent of any changes or deficiencies in the sample that could affect accuracy (a standard error calculation is sketched below).
- Non-response error: this refers to incomplete information from a respondent (e.g., some data are missing, or the respondent has not answered all questions or provided all required information). Assessment should be based on non-response rates, or the percentage of estimates imputed, and on any statistical adjustments made to the estimates to address the bias from missing data (simple rate calculations are sketched below).
- Response error: this refers to error caused by respondents intentionally or accidentally providing inaccurate or incomplete responses. It occurs not only in statistical surveys, but also in administrative data collection where forms, or concepts on forms, are not well understood by respondents. Response errors are usually gauged through comparison with alternative data sources and through follow-up procedures (a source-comparison sketch is given below).
- Other sources of error: any other serious accuracy problems with the statistics should be considered. These may include errors caused by incorrect processing of data (e.g. erroneous data entry or recognition), alterations made to the data to protect the confidentiality of respondents (e.g. by adding "noise" to the data; a random-rounding sketch follows the list), and rounding applied during collection, processing or dissemination, together with the quality assurance processes used to detect and control such errors.
- Revisions to data: the extent to which the data are subject to revision or correction, in light of new information or following rectification of errors in processing or estimation, and the time frame in which revisions are produced.
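As a concrete illustration of the coverage comparison described above, the following Python sketch checks a set of collected units against a target population. The unit identifiers and counts are hypothetical; a real assessment would work from frame and benchmark population data.

```python
# Minimal coverage check: compare the units actually collected against
# the target population. Identifiers are hypothetical.
target_population = {"U001", "U002", "U003", "U004", "U005"}
collected_units = ["U001", "U002", "U002", "U004", "U006"]  # note U002 twice

covered = set(collected_units)
undercoverage = target_population - covered   # units missed entirely
overcoverage = covered - target_population    # units wrongly included
duplicates = {u for u in collected_units if collected_units.count(u) > 1}

coverage_rate = len(target_population & covered) / len(target_population)
print(f"Coverage rate: {coverage_rate:.0%}")  # 60%
print(f"Undercoverage: {sorted(undercoverage)}, overcoverage: {sorted(overcoverage)}")
print(f"Duplicated units: {sorted(duplicates)}")
```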
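For sampling error, the standard error of an estimate can be reported alongside the estimate itself, and relative standard errors (RSEs) are a common way of expressing it. The sketch below assumes a simple random sample drawn without replacement from a known population; the income figures and population size are illustrative only.

```python
import math

def mean_and_se(sample, population_size=None):
    """Sample mean and its standard error under simple random sampling.

    Applies the finite population correction when population_size is given.
    """
    n = len(sample)
    mean = sum(sample) / n
    variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = math.sqrt(variance / n)
    if population_size is not None:
        se *= math.sqrt(1 - n / population_size)  # finite population correction
    return mean, se

incomes = [52, 61, 47, 58, 66, 49, 55, 60]  # hypothetical values ($'000)
mean, se = mean_and_se(incomes, population_size=1000)
rse = se / mean                              # relative standard error
print(f"mean = {mean:.1f}, SE = {se:.2f}, RSE = {rse:.1%}")
```

Larger RSEs at fine output levels (e.g., small geographic areas) are one signal that estimates at those levels should be used with caution.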
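Non-response rates are straightforward to compute once unit and item non-response are distinguished. A minimal sketch follows, using hypothetical records in which None marks a missing value and an assumed flag marks imputed values.

```python
# Minimal non-response rates. Records, field names and the imputation
# flag are all hypothetical.
records = [
    {"id": 1, "income": 52.0, "imputed": False},
    {"id": 2, "income": None, "imputed": False},  # item non-response
    {"id": 3, "income": 58.5, "imputed": True},   # value filled by imputation
    {"id": 4, "income": 61.0, "imputed": False},
]
selected_units = 5  # one selected unit did not respond at all

unit_response_rate = len(records) / selected_units
item_nonresponse_rate = sum(r["income"] is None for r in records) / len(records)
imputation_rate = sum(r["imputed"] for r in records) / len(records)

print(f"Unit response rate: {unit_response_rate:.0%}")         # 80%
print(f"Item non-response rate: {item_nonresponse_rate:.0%}")  # 25%
print(f"Proportion of values imputed: {imputation_rate:.0%}")  # 25%
```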
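Where an alternative data source is available for matched units, response error can be gauged by a simple discrepancy count, as sketched below. The matched values and the 5% consistency tolerance are assumptions for illustration.

```python
# Compare collected values with an alternative source for matched units.
# All values, and the 5% tolerance, are hypothetical.
collected = {"U001": 52.0, "U002": 61.0, "U003": 47.0, "U004": 58.0}
alternative = {"U001": 52.0, "U002": 58.0, "U003": 47.5, "U004": 70.0}

tolerance = 0.05  # treat differences within 5% as consistent
discrepant = [
    uid for uid, value in collected.items()
    if abs(value - alternative[uid]) > tolerance * alternative[uid]
]

print(f"Discrepant units: {discrepant}")                            # ['U002', 'U004']
print(f"Discrepancy rate: {len(discrepant) / len(collected):.0%}")  # 50%
```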
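Confidentiality adjustments take many forms; one simple, widely used technique is unbiased random rounding of small cell counts to a fixed base. The sketch below uses base 3 purely for illustration and is not a description of any particular agency's method.

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def random_round(count, base=3):
    """Randomly round a cell count to a multiple of `base`.

    Rounds up with probability remainder/base, which leaves the expected
    value of each count unchanged (unbiased rounding).
    """
    remainder = count % base
    if remainder == 0:
        return count
    if rng.random() < remainder / base:
        return count + (base - remainder)  # round up
    return count - remainder               # round down

cell_counts = {"Region A": 17, "Region B": 4, "Region C": 21}
print({region: random_round(n) for region, n in cell_counts.items()})
```

Users comparing published counts should expect small discrepancies (e.g., rows not summing exactly to totals) where such perturbation has been applied.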
To assist in evaluating the Accuracy dimension of a dataset or statistical product, some suggested questions are provided below.
Suggested questions to assess Accuracy:
- Are there particular questions which are hard to understand and to which respondents may give an inaccurate response?
- To what extent are there procedures in place to manage processing error?
- Are any areas of the population unaccounted for in data collection?
- Are there particular questions which are sensitive and which respondents are less likely to answer?
- Have the data been adjusted in any way to account for non-response?
- Have the data been adjusted to ensure confidentiality of responses? If so, what methods have been used?
- What is the organisation's revision policy? How quickly are revisions produced and disseminated?
- Have the data been rounded at any stage in the collection or dissemination process?
- Has the sampling method changed for this data collection compared with previous cycles of data collection?
- Have weights been applied to the dataset? What are the benchmarks with which the weights align?
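On the last question, a minimal post-stratification sketch is shown below: each sample unit receives a weight so that weighted category totals align with external benchmark counts (for instance, population estimates by age group). The categories and counts are hypothetical.

```python
from collections import Counter

# Hypothetical age-group sample and benchmark population counts.
sample = ["15-34", "15-34", "35-64", "35-64", "35-64", "65+"]
benchmarks = {"15-34": 300, "35-64": 500, "65+": 200}

counts = Counter(sample)
# Post-stratification: weight = benchmark count / sample count per category,
# so weighted sample totals reproduce the benchmarks exactly.
weights = {cat: benchmarks[cat] / counts[cat] for cat in benchmarks}

for cat, w in weights.items():
    print(f"{cat}: weight = {w:.1f}, weighted total = {w * counts[cat]:.0f}")
```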