Measuring crime victimisation: the impact of different collection methodologies

This paper examines the effects of the different methodologies used to collect information on crime victimisation.

Released
5/02/2004

Introduction

There are a number of ways in which individuals, the community and governments know about crime, and there are a number of different sources of statistics on crime. Users may ask which of the various statistics available are the 'right' ones. However, it is not a simple process to reduce such a complex social issue to a single set of numbers.

Statistics need to be well understood in order for them to be useful in making informed decisions. Part of this understanding comes from a knowledge of the process by which particular events are recorded and eventually reported back to users. The Australian Bureau of Statistics (ABS) maintains national collections on crime victimisation sourced from two different areas: administrative records obtained from state and territory police agencies and victimisation data obtained through surveys of individuals in the Australian community. In some instances the results may provide a different picture of crime in the community, with administrative data indicating a trend in one direction and personal experience indicating the opposite.

The expectation that different sources of crime victimisation statistics should produce similar figures leads to the conclusion that, where they do not, one source or the other must be wrong. Such expectations arise from the false belief that different data sources always measure the same thing using the same methodologies. In reality, the different data sources often use different methods and produce different sets of crime victimisation indicators.

The full extent of crime is unlikely to ever be captured, and it is for this reason that we often hear of the 'dark figure' of crime (Coleman & Moynihan, 1996). The 'dark figure' refers to the volume of crime that is not officially recorded. For example, one person's interpretation of an action as a crime may not be the interpretation of another, or someone may be a victim of a crime but never notify anyone of their experience of the crime. Therefore, it is difficult to quantify all differences, but qualitative assessments can aid in understanding differences between sources of crime victimisation data.

The main aim of this paper is to increase community understanding of the nature of crime measurement in Australia and why the findings from different data sources may differ:

  • section 2 outlines national crime victimisation statistics available from several different sources in the Australian context
  • section 3 draws comparisons between the statistics from these sources
  • section 4 describes methodological differences between survey sources
  • section 5 describes the possible impacts of the methodological differences between the survey vehicles.


Through sections 3, 4 and 5 the scope and definitions of the data from the various collections are progressively adjusted to assist in comparisons.

The paper concludes with a summary of the main points raised throughout.

Crime victimisation data sources

There are many national collections which present statistics on crime victimisation. Broadly speaking, these collections can be divided into: 

  • statistics recorded on administrative systems by agencies (referred to as administrative data)
  • surveys of individuals in the community regarding their experiences of crime


This section outlines a number of the key collections and presents the objectives, scope and characteristics for each. Although the focus is on Australian sources, the International Crime Victims Survey (ICVS) is discussed below as it produces data at the national level for Australia and has been referred to by some users in the context of providing national crime victimisation data.

Administrative data

Administrative data on crime victimisation can be derived from a number of sources including police, hospitals and community service agencies. The ABS has two relevant national collections based on administrative data: Recorded Crime - Victims and Causes of Death.

Recorded crime

The annual Recorded Crime statistics collection (RCS) provides indicators of the level and nature of recorded crime in Australia and a basis for measuring change over time in criminal victimisation. The collection is a census of all victims of selected crimes reported to, or detected by, police, whose details are subsequently recorded on police administrative systems. Data have been provided by state and territory police agencies on a calendar year basis since 1993. National standards and classifications are applied to all aspects of the collection.

Causes of death

The annual Causes of Death collection brings together statistics and indicators for all deaths, including perinatal deaths, registered in Australia. The statistics are compiled from data made available to the ABS by the Registrar of Births, Deaths and Marriages in each state and territory. Deaths from external causes are attributed to the event leading to the fatal injury, as described in Chapter 20 of the International Classification of Diseases, 10th Revision (ICD-10). The Causes of Death collection has been compiled and published on a calendar year basis since 1964. National standards and classifications are applied to all aspects of the collection.

The ICD-10 category of death through external causes can be further disaggregated to the category of death through assault which can be compared to murder/manslaughter in the RCS collection.

Survey data

One of the limitations with administrative data on crime victimisation is that incidents may never come to the attention of authorities such as police, or the victim may never speak of the incident to anyone else. Surveys of individuals in the community provide a way of asking people directly about their experiences of crime, and therefore the victimisation rates from surveys are generally considerably greater than rates from administrative systems.


Comparison of crime victimisation rates

This section compares statistics on crime victimisation from various sources, including survey and administrative data.

Comparisons using administrative data

Differences between various sources of official statistics may arise from any number of the following:

  • source agency (e.g. police, hospital, welfare agency)
  • whether the individual chooses to report (an individual's decision to report in a hospital may be quite different from his/her decision to report to police)
  • whether the agency chooses to record the information on their recording systems (e.g. whether the information was mandatory to record, whether it followed investigation first, whether there was any other basis for recording)
  • when the information is recorded (e.g. recording of an incident as it comes to the attention of the agency or at some later stage after investigation or other formal process)
  • what reference date is used for recording (e.g. date of incident, date of notification to agency, date of action by agency)
  • what counting unit is used for recording (e.g. individual, individual each time they come to the attention of the agency, the types/numbers of crimes/criminal incidents).


Differences may also occur within the same source of administrative data (e.g. state/territory police agencies), due to factors such as:

  • differences in legislation between states and territories
  • differences in recording practices between relevant state/territory agencies (e.g. police)
  • differences in local standards and classifications that may impact on their ability to map to official statistical standards.


To provide an indication of the numeric differences, the following table illustrates homicide rates sourced from the two administrative datasets considered in this paper.

1. Number and rates per 100,000 of homicide in Australia, 1997-2001

Year      Recorded Crime       Causes of Death
          Number     Rate      Number     Rate
1997      360        1.9       329        1.8
1998      332        1.8       307        1.6
1999      386        2.0       302        1.6
2000      363        1.9       313        1.7
2001      340        1.8       300        1.6
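
The rates in the table are victims per 100,000 resident population. A minimal sketch of the calculation, using the 1999 victim counts from the table; note the resident population is an assumed round figure (roughly Australia's population in 1999), not a figure from this paper:

```python
def rate_per_100k(victims, population):
    """Victimisation rate expressed per 100,000 resident persons."""
    return victims / population * 100_000

# 1999 victim counts from table 1; the population is an assumed
# round figure used only to make the arithmetic concrete.
population_1999 = 19_000_000
print(round(rate_per_100k(386, population_1999), 1))   # ~2.0 (Recorded Crime)
print(round(rate_per_100k(302, population_1999), 1))   # ~1.6 (Causes of Death)
```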


The figures show that homicide rates according to RCS are consistently higher than those from the Causes of Death collection, and in 1999 the rates differed by 0.4 deaths per 100,000 persons. There are two main differences between these two sets of figures that make direct comparisons difficult:

  • The collections use different reference dates. Causes of Death figures are classified either by the date of death or the date on which the death is registered. RCS figures are classified according to the date the incident was reported to or discovered by the police, which for a small proportion of homicides may be a substantial time after the death occurred. There is no way to reconcile this difference between the two collections.
  • The collections are measuring different things. In the Causes of Death collection, deaths are classified as homicides based on the coroner's determination of the 'final intent' of the incident causing death. The RCS collection homicide classification is based on the initial recording of criminal incidents by state and territory police agencies. If an initial report is eventually shown to be incorrect, the RCS record may not necessarily be adjusted. Likewise, if the coroner has not finalised a case at the time of data extraction the case will have a default value of 'accidental' death. Therefore, delays in finalising coroner's reports can impact on the categorisation of a relevant death as homicide.
     

Measurement of a single crime category using administrative and survey data

To illustrate the differing statistics between administrative data and surveys, and surveys themselves, the following section focuses on assault victimisation statistics. Assault was chosen because:

  • aside from Causes of Death, each of the collections identified in section 2 collects data on victimisation related to assault
  • assault is one of the crime victimisation categories that shows considerable differences between data sources.
     

2. Unadjusted assault prevalence rates as published by collection

                               2002 GSS     2002 NCSS    2002 RCS   1996 WSS     2000 ICVS(a)
Assault victims (number)       1,312,000    717,900      159,948    404,400      1,258,939
Assault victims (prevalence)   9.0%         4.7%         0.8%       5.9%         9.0%
95% confidence intervals       (8.3, 9.7)   (4.5, 4.9)   na         (5.3, 6.5)   (7.6, 10.4)

na not applicable
a. Australian component only.
 


The above table indicates that the prevalence of assault victimisation ranges from 0.8% (2002 RCS) to 9.0% of the population (2002 GSS and 2000 ICVS). The rates differ significantly between the surveys, with the exception of the ICVS and GSS, which are not significantly different from each other.
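
These significance statements can be checked against the published confidence intervals. A rough sketch: it assumes the intervals are symmetric normal approximations and treats the survey estimates as independent:

```python
def se_from_ci(lo, hi):
    """Standard error implied by a symmetric 95% confidence interval."""
    return (hi - lo) / (2 * 1.96)

def differ_significantly(p1, ci1, p2, ci2):
    """Approximate two-sample z-test at the 5% level, treating the
    two survey estimates as independent."""
    se_diff = (se_from_ci(*ci1) ** 2 + se_from_ci(*ci2) ** 2) ** 0.5
    return abs(p1 - p2) > 1.96 * se_diff

# Prevalence rates (per cent) and 95% CIs as published.
gss  = (9.0, (8.3, 9.7))
ncss = (4.7, (4.5, 4.9))
icvs = (9.0, (7.6, 10.4))

print(differ_significantly(*gss, *ncss))   # True: GSS and NCSS differ
print(differ_significantly(*gss, *icvs))   # False: GSS and ICVS do not
```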

The smaller victimisation rates for the RCS collection are to be expected as it is well documented that only a small proportion of victimisation is subsequently reported to and recorded by police (e.g. ABS (a), 2002; Statistics Canada, 1997). A crime must progress through a series of victim, and then police, decisions before it may be recorded by police in official administrative systems. These decisions include:

  • victim recognition that a crime has occurred
  • victim decision to notify police
  • police notification/detection (by victim or other person)
  • police data entry of incident
  • police data entry of crime associated with the incident.
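
The attrition through these decision points can be sketched as a chain of conditional proportions. Every proportion below is invented purely for illustration (the paper does not quantify these stages); the point is only to show how a survey-measured prevalence can shrink to the much smaller administratively recorded rate:

```python
# Hypothetical attrition chain from an experienced assault to a
# police-recorded crime. All proportions are illustrative only.
survey_prevalence = 0.047            # an NCSS-style survey prevalence

stages = {
    "victim recognises the incident as a crime":  0.80,
    "victim (or another person) notifies police": 0.40,
    "police record the incident":                 0.70,
    "a crime is coded against the incident":      0.75,
}

recorded_prevalence = survey_prevalence
for stage, proportion in stages.items():
    recorded_prevalence *= proportion

# The compounded losses leave only a fraction of survey-measured
# victimisation visible in administrative data.
print(f"{recorded_prevalence:.2%}")
```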


Looking just at the surveys, there are some basic scope differences between the collections relating to the age and sex of victims which will affect the victimisation rates. These include:

  • scope of age of victim (NCSS includes persons aged 15 years and over, ICVS includes persons aged 16 years and over and GSS and WSS include persons aged 18 years and over)
  • scope of sex of victim (WSS is limited to females only).


It is possible to adjust for these age and sex scope differences. The table below presents prevalence rates for a comparable scope of females aged 18 years and over.

3. Assault prevalence rates, adjusted only for comparable respondent group (females aged 18 years and over)

Offence category               2002 GSS     2002 NCSS    1996 WSS(a)     2000 ICVS
Total population (number)      7,327,000    7,309,200    6,880,500       6,790,319
Assault victims (number)       528,000      294,200      404,400         627,635
Assault victims (prevalence)   7.2%         4.0%         5.9%            9.2%
95% confidence intervals       (6.5, 7.9)   (3.7, 4.3)   (5.3, 6.5)(b)   (7.8, 10.6)

a. Includes all females who experienced an incident involving physical violence by either a male or female perpetrator.
b. Confidence intervals used available RSEs which were based on full population of persons aged 16 years and over. Therefore, confidence intervals will be these figures or greater.
 


However, even with the output amended for comparability on the age and sex of victims, the assault rates are still significantly different. The assault rates for the GSS, WSS and ICVS are significantly higher than the NCSS rate, and the ICVS assault rate is significantly higher than the WSS rate. Therefore, consideration must be given to other factors within each of the collections where differences may arise and ultimately impact on an individual's decision to report being a victim of assault. These are outlined in detail in section 4, and the impact these differences may have is discussed in section 5.

Comparing survey methodologies

This section describes methodological differences between survey sources. 

Sample design and selection

Decisions on the appropriate sample size, distribution and method of selection are dependent on a number of considerations. These include the aims and content of the survey, the level of disaggregation and accuracy at which the survey estimates are required, and the costs and operational constraints of conducting the survey.

The General Social Survey (GSS), National Crime and Safety Survey (NCSS), Women's Safety Survey (WSS) and the International Crime Victims Survey (ICVS) aim to provide national estimates of the nature and extent of victimisation for selected crimes, with assault being the crime type collected in all four surveys.


Scope and coverage

Inclusions

The scope and coverage for each of the above surveys were based on the usual residents of the selected household and included:

  • for the GSS interviews were conducted with one person aged 18 years or over
  • for the NCSS information was collected on all persons aged 15 years or over
  • for the ICVS interviews were conducted with one person aged 16 years or over
  • for the WSS interviews were conducted with one woman aged 18 years or over.
     

Exclusions

The following persons were specifically excluded from the GSS, NCSS and WSS:

  • members of non-Australian defence forces (and their dependants) stationed in Australia
  • certain diplomatic personnel of overseas governments, customarily excluded from censuses and surveys
  • overseas residents in Australia
  • residents of special dwellings, such as hospitals, retirement villages, refuges, prisons, etc.


In addition, the NCSS also excluded members of the permanent defence forces.

For the ICVS, where a person selected as a respondent was absent for the duration of the survey period or was unable to respond (for example, due to deafness, illness, disability or advanced age), another person in the household was selected.

Questionnaire

Each of the surveys used a different introductory statement and different questionnaire format.


Survey procedures

Survey procedures relate to the mode of the data collection and any special procedures required (e.g. communicating with respondents who did not speak English). The procedure used for each of the surveys is outlined below.


Response rate

Response rates varied between the surveys. The details for each are outlined below. Fully completed refers to forms that were not missing significant amounts of information.


Impact of differing survey methodologies

Steps are taken in all aspects of development and conduct of surveys to ensure that data collected are as accurate as possible. However, as with all surveys, information collected can be affected by various types of errors. Sampling error can usually be quantified; however, non-sampling error is difficult to quantify.

The impacts of various methodological differences are discussed below. Where possible, the assault victimisation data (or break-in data for NCSS/GSS comparisons only) from each survey is used to illustrate the extent of the differences.

Sampling error

Sampling errors occur in sample surveys because a sample, rather than the entire population, is surveyed. Various factors that can influence the size of the sampling error include sample design, sample size and population variability. The extent of this variability can be determined by calculating the relative standard error (RSE) of the estimates, which expresses the standard error of the estimate as a percentage of the estimate's value. The standard error is a measure of the variation among the estimates from all possible samples, and thus a measure of the precision with which an estimate from a particular sample approximates the average results of all possible samples.

At the national level, RSEs for the assault victimisation rates from the GSS, NCSS, ICVS and WSS were less than 25%. However, when assault victimisation rates were disaggregated by other data items (e.g. age, sex, state/territory) the RSEs increased (in some cases to over 50%). Where RSEs are greater than 50% the estimates are considered too unreliable for general use.
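
The RSE calculation is simply the standard error expressed relative to the estimate. A short sketch; the 0.36 percentage point standard error is implied by the GSS's published interval of (8.3, 9.7), while the small disaggregated cell is hypothetical:

```python
def relative_standard_error(estimate, standard_error):
    """RSE: the standard error as a percentage of the estimate."""
    return standard_error / estimate * 100

# National GSS assault rate: an SE of ~0.36 percentage points is
# implied by the published 95% CI of (8.3, 9.7).
print(round(relative_standard_error(9.0, 0.36), 1))   # 4.0 - well under 25%

# A hypothetical small disaggregated cell: the same absolute SE on a
# much smaller estimate pushes the RSE past the 50% threshold,
# making the estimate too unreliable for general use.
print(round(relative_standard_error(0.6, 0.36), 1))   # 60.0
```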

Non-sampling error

Errors made in collecting, recording and processing data can occur regardless of whether a sample or complete enumeration of a population is undertaken. These errors are referred to as non-sampling errors. Specific types of non-sampling errors and their possible impact on assault victimisation data are discussed in detail below.

Respondent errors

Respondent errors may arise through ambiguous or misleading questions, inadequate or inconsistent definitions of terms, complex questionnaire sequence guides that cause some questions to be missed, long recall periods, or respondent behaviours in response to particular types of questions or situations.

Respondent error may also occur due to respondents answering incorrectly to protect their personal integrity, their personal safety or to protect another person. For example, respondents may not have reported incidents they experienced, particularly if the perpetrator was somebody close to them, such as a partner or family member. The WSS attempted to minimise this effect by conducting interviews alone with respondents. The GSS (which reported a higher assault victimisation rate for females than the WSS, and referred to 2002 rather than 1996) did conduct personal interviews with respondents, although there was no requirement for respondents to be interviewed alone and it is unknown what proportion of respondents were interviewed in the presence of a third party. The NCSS involved leaving questionnaires to be mailed back, which allowed respondents to complete forms in private or in the presence of other persons, or even for a single member of the household to complete the NCSS on behalf of other in-scope respondents. The ICVS is a telephone-based survey with no requirement for respondents to be alone, so, as with the GSS and NCSS, it is unknown whether another person was present when the respondent completed the interview.

Recent research on surveys conducted in the USA, Great Britain and Germany found that between 37% and 57% of respondents were interviewed face-to-face in the presence of a third party (cited in Zipp & Toth, 2002), and that when the respondent is married, his or her spouse is present in approximately one of every three interviews. Comparable findings are not available in the Australian context. One could infer that for the GSS, NCSS and ICVS between 40% and 60% of respondents reported in the presence of a third party, which may have introduced some level of respondent error to protect personal integrity, personal safety or the safety of another person. However, the true nature and extent of this issue cannot be quantified at this time.

All four surveys asked respondents to recall their experiences of victimisation. An important element of this recall is the time period. Each of the ABS surveys asked about experiences within the previous 12 months from interview date (the WSS also asked about experiences in the last five years). The 2000 ICVS asked about experiences in 2000, 1999 and the last five years respectively, and for publication purposes used the 1999 recall period. Therefore the recall period used for output of results is inconsistent between the ABS surveys and the ICVS, with the ABS surveys using a 0-12 month recall period and the ICVS using a 3-15 month recall period. In addition, it is not possible to identify for the ICVS whether persons reporting as victims did so where victimisation occurred in the three months between the end of the 1999 recall period and the interview dates in March 2000, and therefore whether there was any over-reporting of victimisation.

One study using data from the US National Crime Survey found that victimisation rates for personal crimes based on a three-month recall period were almost 40% higher than victimisation rates based on a 12-month recall period (Bushery, 1981). This research excluded series crime (i.e. two or more similar or related incidents occurring during the same reference period), so the results reflect the recall rates of persons experiencing a single type of incident once in the reference period, either within 3 months or 12 months prior to interview. Further research found that recall rates for crimes by relatives and marital partners were worse than recall rates for crimes by other non-strangers (Skogan, 1990). The GSS questions on crime victimisation are preceded by questions on family contact, whilst the NCSS questions on crime victimisation are preceded by questions on neighbourhood problems. The immediacy of the reference to family contacts in the GSS may have aided recall of crimes by family members, including older incidents in the reference period that are otherwise more difficult to recall.

Non-response bias

Non-response can introduce errors into results when non-respondents have different characteristics and experiences of victimisation from those individuals who did respond. Non-response bias is the difference between the estimate based on the responding sample and the estimate if there were no non-respondents. Non-response bias arises if the respondents in the sample are not representative of the non-respondents with respect to the characteristic under measurement (in this case crime victimisation). Measuring non-response bias is difficult without information about non-respondents, though the likely size of its effect can be considered. 

The proportion of non-respondents for the surveys under discussion ranged from 9% for the GSS, through 22% and 24% for the WSS and NCSS respectively, to 43% for the ICVS. Therefore, a greater impact due to non-response bias is expected for the ICVS than for the NCSS or WSS, and the GSS is expected to have the least impact.

There has been conflicting research regarding the impact of non-response bias on crime victimisation reporting rates. Van Dijk (1990) reviewed the 1989 International Crime Victims Survey and found that there was no clear evidence on the effects of non-response. Strangeland (1995) concluded the same in his survey in regional Spain and went on to state that there appeared to be two counter-balancing effects operating. Firstly, the lifestyle of non-respondents which makes them difficult to locate may also make them more vulnerable to crime. Secondly and alternatively, victims may be more motivated to respond because they have something to tell.

The magnitude of the difference in crime victimisation prevalence between non-respondents and respondents would have to be large for non-response bias to be the primary cause of the discrepancy between survey estimates. This point is illustrated by the following simple example using the NCSS and GSS, which reported the greatest differences in assault victimisation. In the NCSS, which had a sample size of 54,418, the non-response rate was approximately 24%, or 13,248 individuals. The estimated number of respondents who reported being victims of assault was 1,935, calculated as 4.7% of the 41,170 respondents. Non-response is greatest in the younger age groups and reduces as age increases, whilst prevalence of assault decreases with age. If we assume the overall prevalence of assault for non-respondents in the NCSS was in fact 9.4% (double the rate for respondents), the estimated number of assault victims among the non-respondents would be 1,245. The estimate of the total number of victims from the sample would then be 3,180 persons (1,245 + 1,935), producing an overall prevalence estimate of 5.8%, which is still well below the 9.0% prevalence rate in the GSS.
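
The arithmetic of this scenario can be reproduced directly; the only assumed quantity is the doubled non-respondent prevalence of 9.4% posited in the text:

```python
# Reproduce the NCSS non-response scenario described above.
sample_size     = 54_418
non_respondents = 13_248                          # ~24% of the sample
respondents     = sample_size - non_respondents   # 41,170 responding persons

# 4.7% of respondents reported being victims of assault.
respondent_victims = round(respondents * 0.047)   # 1,935

# ASSUMPTION from the text: non-respondents are victimised at
# double the respondent rate.
assumed_nonresp_rate = 0.094
nonresp_victims = round(non_respondents * assumed_nonresp_rate)   # 1,245

total_victims = respondent_victims + nonresp_victims              # 3,180
overall_prevalence = total_victims / sample_size
print(f"{overall_prevalence:.1%}")   # 5.8% - still well below the GSS's 9.0%
```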

One of the benefits of the NCSS is that because it is a supplement to the ABS LFS, selected characteristics of most non-respondents to the NCSS are known (i.e. those characteristics that are collected for the LFS). An analysis was undertaken on a range of socio-demographic characteristics for NCSS respondents and non-respondents to determine if there were any significant differences. There were no significant differences.

The weighting methodology applied in the NCSS and GSS may have also reduced the non-response bias, though the size of this reduction is not possible to measure. Such a reduction would have occurred if, within age by sex by state categories, the non-respondents would have provided similar answers to the respondents.

Hence, even when comparing the two surveys with the greatest difference in assault victimisation prevalence rates (NCSS and GSS), and with a large difference in non-response rates, non-response bias can explain only a small amount of the difference in assault victimisation.

Sample representation

To gain some insight into the representativeness of the sample, the socio-demographic characteristics of survey respondents can be compared to independent estimates of the corresponding population at the time of the survey. Where there are differences in sample representation and these differences are not adjusted for in the final results, the victimisation rates may not be representative of the entire population.

The distribution of GSS, NCSS and WSS survey respondents (classified by geographic location, age and country of birth) conformed reasonably well with the geographic location, age and birthplace distribution of the relevant population at the time of each survey. None of the surveys under discussion sampled from sparsely populated areas. As noted in section 4, the ICVS over-sampled persons aged 65 years or over; however, these respondents were weighted down to produce estimates conforming with the age distribution of the population aged 16 years or over at the time of the survey.

As stated above, the NCSS is a supplementary questionnaire to the ABS LFS. Underenumeration in the NCSS results primarily from non-response (in the NCSS), but also, to a lesser extent, from undercoverage in the LFS. Undercoverage, which is present in all household surveys, occurs when certain individuals in the population do not have a chance of being included in a survey (e.g., because they have no place of usual residence, or they are overseas, or because of imperfections in survey procedures). The NCSS weighting process ensures that estimates conform to the distribution of the population by age, sex and geographical area, thereby compensating for any underenumeration in the survey. Bias will only occur if persons not enumerated in the NCSS have significantly different characteristics to the survey respondents in the same age by sex by geography category.
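
The weighting described above can be sketched as post-stratification: within each age by sex by geography cell, each respondent receives a weight equal to the cell's benchmark population divided by the number of respondents in that cell. All counts below are invented for illustration:

```python
# Post-stratification sketch: weight = benchmark population / number
# of responding persons, within each age x sex x region cell.
# All counts are invented for illustration only.
benchmarks = {            # cell -> independent population benchmark
    ("18-24", "F", "NSW"): 310_000,
    ("18-24", "M", "NSW"): 320_000,
}
respondents = {           # cell -> responding sample count
    ("18-24", "F", "NSW"): 620,
    ("18-24", "M", "NSW"): 400,   # heavier non-response in this cell
}

weights = {cell: benchmarks[cell] / n for cell, n in respondents.items()}

# Each cell's weighted respondent count now equals its benchmark,
# compensating for differential non-response between cells.
weighted_total = sum(weights[c] * respondents[c] for c in respondents)
print(weighted_total == sum(benchmarks.values()))   # True
```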

Mode effects

Mode effect refers to the impact of the survey delivery method on the response to the survey. Different survey modes include self-administered questionnaires (e.g. mailback questionnaires), telephone interviews and face-to-face interviews. Modes can also be different within surveys (e.g. a mailback questionnaire that involves follow-up via telephone interview), and these are often referred to as mixed-mode surveys.

Below is a summary of some of the advantages and disadvantages of using a self-administered questionnaire, a telephone interview and a face-to-face interview.

4. Advantages and disadvantages of different survey modes

Mail / self-administered questionnaire

Disadvantages
- less flexible
- no control over the response process; for example, the researcher has no control over who is present while the survey form is being completed
- cannot be long or complex
- wide variation in the reading and writing ability of respondents
- usually higher non-response

Advantages
- no interviewer effects
- good for sensitive behaviours
- response order and question order effects are reduced
- respondents have more time and room to consider answers

Telephone interview

Disadvantages
- can be affected by the presence of others
- social desirability bias, whereby respondents may want to provide what they see as the socially desirable response
- potential interviewer effects
- limited control over the response process; for example, the researcher may not know if another person is present whilst the interview is being completed
- needs to be short to prevent fatigue
- recency effects due to memory difficulty
- phone conversation rules severely limit information processing/records retrieval time

Advantages
- greater flexibility
- interviewer can motivate or encourage responses
- interaction between respondent and interviewer for clarification/meaning etc.
- interactive editing of data
- establishing trust and building rapport

Face-to-face interview

Disadvantages
- can be affected by the presence of others
- social desirability bias, whereby respondents may want to provide what they see as the socially desirable response
- potential interviewer effects
- third party may be present whilst the interview is being completed

Advantages
- greater flexibility
- enables longer and more complex instruments compared to self-administered questionnaires
- interviewer can motivate or encourage responses
- interaction between respondent and interviewer for clarification/meaning etc.
- interactive editing of data
- establishing trust and building rapport


The mode of the survey is likely to have some impact on the reporting of crime although it would not solely explain the differences between the survey results. Mode effect is costly to investigate, though it can be quantified. An investigation would require the conduct of parallel surveys using different delivery methods. In the case of crime victimisation this has not been done. What follows is a qualitative assessment of the possible impacts of different modes that should be considered when developing surveys on crime victimisation.

Self administered questionnaires (SAQ), the method used for the NCSS, allow respondents to look ahead and go back to earlier items and may therefore reduce the impact of question order. The likely occurrence of this cannot be quantified at this stage. Personal face-to-face and telephone interviews (GSS, WSS and ICVS) do not allow respondents this opportunity.

There are two factors which should be considered when crime questions are completed under different modes: sequencing, and the notion of the word 'threat':

  • The sequencing of the NCSS allows respondents to shorten the questionnaire through their answers. For example, question 54 asks if anyone had tried to use, or threatened to use, force or violence against them; if respondents select 'no', they are directed to question 81 (the end of the questionnaire). Respondents may therefore realise that they do not have to answer as many questions if they answer 'no' to the filter questions.
  • The word 'threat' is highlighted in the respondents way of thinking in the NCSS and they know which type of questions are coming because they can look forward to the questions. With the GSS, WSS and ICVS respondents cannot look ahead to what is coming. Therefore, respondents in these surveys may include incidents of assault being threatened at this question, whereas they may exclude them in the NCSS. Unfortunately, given the limited free text comments on the respondents answers and no post enumeration survey this is unable to be quantified.


The GSS used Computer Assisted Interviewing (CAI) as the means of data collection, and this may also have subtler effects on data quality. Notebook computers may still have a novelty value for many respondents, and their use in the respondent's home may affect the respondent's perception of the interview: a survey may be seen as more important or more objective when computers are used to collect data (Tourangeau & Smith, 1996).

The involvement of an interviewer helps maintain respondent motivation (Fowler, 1993). Therefore, for all interviewer based surveys (i.e. GSS, ICVS and WSS) this may assist in keeping respondents focused on providing satisfactory answers rather than trying to find a way to complete the questionnaire as quickly as possible.

In the mail-back NCSS, more information is given to the respondent, such as the inclusion and exclusion boxes that indicate which types of events to include or exclude for each question. Across the four surveys being considered, the definition of what counts as 'use force or violence against you' is generally unclear for the assault-related questions. Furthermore, respondents may attempt to establish common ground based on previous information, using it to determine the context of subsequent questions (Clark & Schober, 1992). As the preceding topics differ between the surveys, it is important that this information on what to include or exclude be readily available to all respondents.

SAQs (such as the NCSS) allow respondents to change their answers. For example, after being asked question 61 (form A), 'Do you consider this to be a crime?', some respondents may go back and change earlier responses to make them consistent with the survey's title. The quantitative impact of this on NCSS victimisation rates is not known, though it is expected to be minimal.

The NCSS questionnaire included a number of prompts for inclusions and exclusions for the main questions. This is necessary in a SAQ, as the respondent does not have an interviewer present to assist them. These prompts were developed over a number of iterations to cover the key issues related to each question. The GSS, WSS and ICVS questions are asked of the respondent by the interviewer without prompting, which is normal practice for an interviewer based survey. The interviewer will only assist the respondent if asked about inclusions or exclusions, and interviewer instructions and training cater for these circumstances. The NCSS specifically excludes incidents involving name calling, swearing and the like that did not involve a physical threat; in the GSS this was only stated by the interviewer if the respondent specifically asked. GSS, WSS and ICVS respondents may therefore adopt a wider definition of assault than NCSS respondents.

The table below shows that when assault victimisation responses are disaggregated into actual versus attempted assault within each survey, the GSS and NCSS show similar proportions (54% actual in the GSS and 60% actual in the NCSS), whereas the WSS shows a different pattern, with 85% of identified assaults being actual assaults. When prevalence rates for actual assault are compared, the GSS and WSS are not significantly different from each other, though both differ significantly from the NCSS. The different split between actual and attempted assault in the WSS may be due in part to the different response options available: in the GSS and NCSS, respondents answer yes or no to a question regarding attempted/threatened assault, whereas in the WSS respondents answer yes or no to specific examples.

5. Comparison of prevalence rates for assault from ABS surveys, by actual versus attempted assault 

                                              2002 GSS             2002 NCSS            1996 WSS
Offence category                              (All respondents)    (All respondents)    (All respondents)
Total weighted population (number)            14,483,707           15,215,100           6,880,500
Assault(a) victims (number)                   1,312,000            717,900              404,400
                                              (700,600 actual)     (433,912 actual)     (346,900 actual)
Assault(a) victims (prevalence)               9.0% (4.8% actual)   4.7% (2.9% actual)   5.9% (5.0% actual)
95% confidence intervals for actual assault   (4.5, 5.1)           (2.7, 3.1)           (4.4, 5.6)

a. Includes physical violence but excludes sexual violence.
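The significance statements about table 5 can be illustrated directly from the published intervals. The sketch below applies the conservative rule that two estimates whose 95% confidence intervals do not overlap differ significantly; the figures are those published in the table, and the overlap rule is an illustration only, not the ABS's formal significance testing procedure (note that overlapping intervals do not guarantee the absence of a significant difference).

```python
# Conservative significance check: two prevalence estimates whose 95%
# confidence intervals do not overlap are significantly different.
# Intervals below are the published 'actual assault' CIs from table 5.

def ci_overlap(ci_a, ci_b):
    """Return True if the two (lower, upper) intervals overlap."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

gss  = (4.5, 5.1)   # 2002 GSS actual assault prevalence, per cent
ncss = (2.7, 3.1)   # 2002 NCSS
wss  = (4.4, 5.6)   # 1996 WSS

print(ci_overlap(gss, wss))    # True: GSS and WSS not significantly different
print(ci_overlap(gss, ncss))   # False: GSS differs significantly from NCSS
print(ci_overlap(wss, ncss))   # False: WSS differs significantly from NCSS
```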
 


The possible impact of prompts on break-ins was also compared for the NCSS and GSS. The table below indicates that differences in prompts for attempted break-in do not explain the differences in reporting between the two surveys, as there is still a significant difference in the prevalence rates for actual break-in. The possible impact of different population bases (the GSS used a person population whilst the NCSS used a household population) has been addressed by recalculating the GSS rates using households as the base population. This resulted in a slight increase (approximately 3%) in the GSS rate.

6. Comparison of prevalence rates for break-in from ABS surveys, by actual versus attempted break-in

Offence category                                   2002 GSS                   2002 NCSS                  1996 WSS
Total population (number)(a)                       7,495,450                  7,479,200                  na
Break-in victims (number)                          895,801 (561,656 actual)   553,500 (354,000 actual)   na
Break-in victims (prevalence)                      12% (7.5% actual)          7.4% (4.7% actual)         na
95% confidence intervals for actual break-in(b)    (7.0, 8.0)                 (4.4, 5.0)                 na

na not available
a. Population is households for both GSS and NCSS.
b. Standard errors are taken from Household Use of Information Technology, 2001-02 (ABS cat. no. 8146.0).
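The prevalence figures in table 6 are simply victim counts divided by the base population of households. A minimal sketch using the published counts:

```python
# Reproducing the actual break-in prevalence rates in table 6 from the
# published victim counts and household population bases.

def prevalence(victims, population):
    """Prevalence rate as a percentage of the base population."""
    return 100 * victims / population

gss_actual  = prevalence(561_656, 7_495_450)   # 2002 GSS, household base
ncss_actual = prevalence(354_000, 7_479_200)   # 2002 NCSS, household base
print(round(gss_actual, 1), round(ncss_actual, 1))   # 7.5 4.7, matching the table
```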
 

A final difference between self-administered questionnaires and personal interviews is whether the questions are read to the respondent (auditory) or read by the respondent (visual). In the GSS, ICVS and WSS, the personal interview questions are presented aloud, which removes the requirement that the respondent be able to read. This may be important within sub-populations where literacy problems are common. On the other hand, when questions are only read aloud, the respondent has less control over the pace of the interview and may be prone to 'primacy' effects, favouring options presented early in the list of permissible answer categories over those presented toward the end (Tourangeau & Smith, 1996). The latter is unlikely to have affected the GSS and ICVS victimisation questions, as they involve 'yes' or 'no' answers. However, it could be a consideration for the WSS, which used a series of possible responses for the crime victimisation questions.

Context effects

Context effects occur when preceding questions influence responses to subsequent questions (directional effects), or when the order in which questions are administered affects the correlation between the target and context questions. Context effects are difficult to predict because of their subjective nature; their impact varies from person to person.

It is possible that context effects contribute to the differences in reported assault victimisation in the surveys listed above. However, as with mode effects, without an experiment comparing the survey methodologies there is no way of quantifying how much each factor contributes to the overall difference in results. The following is therefore a qualitative assessment of the possible context effects in the different crime victimisation surveys.

Context effects can influence respondents' cognitive processing in two ways: assimilation (respondents interpret new information by associating it with, and including it in, existing knowledge) and contrast (respondents focus on the differences between new information and existing knowledge, and exclude the new information). A respondent's comprehension and interpretation of a question's meaning is affected by the context in which the question is presented. Hence, the more ambiguous the wording, the more the respondent relies on the context of the question to understand it.

One mechanism producing context effects involves retrieval processes. Respondents tap only a small portion of the potentially relevant information when formulating their responses to survey questions. Information that is highly accessible in memory is likely to dominate the response; that is, respondents take what comes to mind easily (Schuman & Presser, 1981).

The NCSS may appear to respondents to be focused on a narrower concept of crime than the other surveys, in particular the GSS. The NCSS is introduced as a crime and safety survey and, because it is a SAQ, respondents can refer to the stated purpose of the survey at any time to give them a context or framework for answering the questions (i.e. they may assimilate or contrast new information with the survey's purpose at any point). Furthermore, the NCSS states that its purpose is crime prevention and forming a basis for community programs. Respondents may only report assaults which they believe meet this purpose.

Interestingly, when the scope of the ABS surveys is confined to females aged 18 years and over, the gap between the assault prevalence rates for the NCSS and the GSS narrows (see table 7 below). Looking at this another way, a greater proportion of those reporting assault victimisation in the GSS are male than in the NCSS. This may be related to the context of the surveys: males may not see themselves as victims of crime and/or may not be interested in crime prevention (the stated purpose of the NCSS), although they do acknowledge having experienced an actual or attempted act of force or violence.
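The male share of GSS assault victims can be derived from the table 7 figures, since the GSS scope is persons aged 18 years and over (so the total minus females aged 18 and over gives males). The same subtraction is not valid for the NCSS, whose scope also covers 15-17 year olds. A rough sketch:

```python
# GSS assault victims from table 7: all respondents vs females aged 18+.
# Because the GSS scope is persons aged 18+, the remainder are males 18+.

gss_total   = 1_312_000   # all GSS assault victims
gss_females =   528_000   # GSS assault victims, females aged 18+

male_share = 100 * (gss_total - gss_females) / gss_total
print(round(male_share))   # about 60% of GSS assault victims are male
```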

The WSS and GSS were both interviewer based surveys and introduced a more general concept of assault. The WSS referred to 'experiences of aggressive or threatening behaviour' and the GSS referred to 'crimes that may have happened to you'. The WSS used specialised procedures to ensure that respondents felt safe and to encourage them to report all incidents of aggressive behaviour, rather than narrowing this to crimes reported to police. The GSS does not mention the topic of crime until the crime module is reached, approximately half way through the interview, and there are no specialised procedures for eliciting information on assault victimisation; nevertheless, the general survey context, along with the specific question order, encourages GSS respondents to adopt a broader definition of crime.

The questions asked prior to the questions under analysis (i.e. questions asked prior to those on assault victimisation) can impact on the response. The NCSS and ICVS ask questions regarding robbery prior to questions regarding assault. Any element of force or violence (assault) that is associated with theft is designed to be counted as part of robbery in these two surveys, and therefore should be excluded from the assault counts. The NCSS asks about sexual assault after assault. Checks on the NCSS questionnaire allow any sexual assault data for females aged 18 years or over to be excluded from the assault counts. The WSS asks about sexual assault before assault, and therefore any assault that was in a sexual context is designed to be excluded from the assault counts. The GSS asks about assault victimisation before any other type of victimisation involving the use of force or violence.

Therefore, the assault victimisation figures from the NCSS should not include sexual assault or assault associated with robbery, and the WSS assault victimisation figures should not include any sexually related assault experiences, whilst the GSS may include both sexual assault and assault associated with robbery in the assault victimisation data. For comparability, all assault related offences are aggregated and presented in the table below for the same age and sex respondent group (note that ICVS data for the scope requested is not available at this stage). The assault-related prevalence rate for NCSS is significantly lower than those for the GSS and WSS, even after adjusting for age, sex and offence scope differences. 

7. Comparison of prevalence rates for victims of assault related offences from ABS surveys, adjusted for age and sex and offence scope differences

Offence category                               2002 GSS       2002 NCSS      1996 WSS
Assault - All respondents
  Victims (number)                             1,312,000(a)   717,900(b)     404,400(c)
  Victims (prevalence)                         9.0%           4.7%           5.9%
  95% confidence interval                      (8.3, 9.7)     (4.5, 4.9)     (5.3, 6.5)
Assault - Females 18 years and over
  Victims (number)                             528,000(a)     294,200(b)     404,400(c)
  Victims (prevalence)                         7.2%           4.0%           5.9%
  95% confidence interval                      (6.5, 7.9)     (3.7, 4.3)     (5.3, 6.5)
Assault related - Females 18 years and over
  Victims (number)                             528,000(a)     335,719(d)     490,400(e)
  Victims (prevalence)                         7.2%           4.6%           7.1%
  95% confidence interval                      (6.5, 7.9)     (4.3, 4.9)     (6.4, 7.8)

a. Asked about experiences of physical violence and therefore may include robbery and sexual assault related victimisation.
b. Includes assault but excludes sexual assault and robbery.
c. Includes physical violence but excludes sexual violence.
d. Includes victims of assault, robbery and sexual assault.
e. Asked about experiences of violence (including physical and sexual).
 

Question wording

What may appear to be slight differences in the wording of a question can have a great impact on responses to that question. For the assault questions in the four surveys being discussed, the wording is the same for the two surveys with the greatest differences in results (GSS and NCSS). For break-in victimisation there is one key difference between the GSS and NCSS. The NCSS question referred to a break-in to your home, with garage and shed listed in an include/exclude box below the question, whereas in the GSS, home, garage or shed were all read out in the question itself. An analysis of Recorded Crime statistics indicates that approximately 8% of residential unlawful entries recorded by police are to outbuildings (i.e. garages and sheds). However, these are less serious matters and are often not reported to police, so the 8% figure should be treated as a guide only. Using Recorded Crime data as an indication, it is likely that at least 8% of break-ins occur in garages or sheds, and the difference in question wording (i.e. not mentioning garages or sheds in the question itself) may therefore have some impact on the break-in results from the GSS and NCSS.

Coding differences

When survey responses are returned to the initiating agency, some forms may contain further detail or comments indicating that the response provided is incorrect. Depending on the proportion of recoding that is completed (or able to be completed), this may change the crime victimisation rates that are actually published.

Of the ABS surveys discussed above (GSS, NCSS and WSS), only the NCSS contained detailed comments (due in part to its being a SAQ). An analysis indicated that the change in prevalence and incidence rates from recoding was not significant (recoding removed 6% of all break-ins and 3% of all assaults). Information available from the NCSS therefore indicates that recoding alone would not significantly change victimisation rates for assault or break and enter.

Conclusion

Sources of crime data can vary on many different levels and for a number of valid reasons. Ultimately, the choice of data source should be an informed decision made by the user, based on an understanding of the purpose of the data source and the methodology behind it. The ABS assesses the fitness-for-purpose of data sources using the ABS Quality Framework (Allen, 2002). The framework contains six dimensions of quality that should be considered when choosing a data source: relevance, accuracy, timeliness, accessibility, interpretability and coherence. Users should also refer to the Explanatory Notes in publications in order to assess the fitness of data for use in their situation.

Both victimisation survey data and police recorded crime data contribute to informing users about the nature and extent of crime victimisation. Data from victimisation surveys can be used to contextualise information from the police recorded crime data. Alternatively, the two types of data sources can be used to test alternative hypotheses related to criminal activity (Statistics Canada, 1997). Neither administrative statistics nor victimisation surveys alone can provide comprehensive information about crime. Each is useful for addressing specific issues.

However, even within a particular method of data collection (i.e. survey or administrative data) there are differences between collections. Administrative collections may have limitations due to different practices between agencies supplying the relevant data. For example, information recorded by police agencies may vary between states and territories due to legislation, recording systems and recording practices. Surveys may have limitations due to differences in methodology and changes between successive cycles.

Following a brief comparison of two ABS administrative collections, Causes of Death and Recorded Crime statistics, this paper has presented comparisons of four surveys to illustrate the nature and extent of differences in survey methodologies:

  • General Social Survey (ABS, 2002)
  • National Crime and Safety Survey (ABS, 2002)
  • International Crime Victims Survey (Australian component (AIC, 2000))
  • Women's Safety Survey (ABS, 1996).


Each collection has its own purpose, such as the comparison and analysis of different social phenomena (e.g. GSS) or the measurement of crime and safety across states and territories over time (e.g. NCSS), and each provides different ways of measuring similar phenomena. As illustrated in section 4, numerous elements combine to make up a single survey methodology. These include:

  • sample design and selection
  • scope and coverage
  • questionnaire format and content
  • survey procedure
  • response rate.


Differences in any one of these elements may impact on statistics from the collection and on response to an individual question within it. This paper's review of the survey methodologies, using assault victimisation results from the four surveys, is summarised in the following table.

8. Summary of review of survey methodology using assault victimisation rates

Methodological element                       Assault victimisation rates

Scope of population sampled                  Expected to impact. The NCSS covers persons aged 15 years or over; the ICVS, persons aged 16 years or over; the GSS, persons aged 18 years or over; and the WSS, females aged 18 years or over. When the scope was made comparable (females aged 18 years or over), significant differences in results remained.
Sample size and/or population variability    At the national level, RSEs were comparable and all less than 25%.
Presence of other persons during interview   There may be some response error between surveys; the WSS was the only survey designed for one-on-one interviews.
Response rates                               There may be some non-response bias, as response rates differed between all four surveys; this cannot be quantified.
Sample representation                        All results are weighted to represent the population, so no impact on results.
Mode of survey                               Expected to impact but not quantifiable. The GSS and WSS were face-to-face interviews, the NCSS was a self administered questionnaire, and the ICVS was a telephone interview.
Survey context                               Expected to impact but not quantifiable.
Question wording                             Expected to impact but not quantifiable.
Re-coding of results                         Expected to impact; NCSS analysis indicates the impact is insignificant.
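The statement about RSEs can be loosely cross-checked from the published estimates and intervals. The sketch below back-calculates RSEs from the actual-assault figures in table 5, assuming the 95% confidence intervals are symmetric and normal (half-width = 1.96 standard errors); this is an approximation for illustration, not the ABS's published RSE methodology.

```python
# Back-calculating a relative standard error (RSE) from an estimate and
# its 95% confidence interval, assuming a symmetric normal interval.

def rse(estimate, ci):
    half_width = (ci[1] - ci[0]) / 2
    se = half_width / 1.96          # 95% CI half-width = 1.96 * SE
    return 100 * se / estimate      # RSE as a percentage of the estimate

print(round(rse(4.8, (4.5, 5.1)), 1))   # 2002 GSS actual assault: ~3.2%
print(round(rse(2.9, (2.7, 3.1)), 1))   # 2002 NCSS actual assault: ~3.5%
```

Both implied RSEs are well under the 25% level noted in the table.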


As the table shows, a number of methodological factors are expected to contribute to the differences between the survey results. However, several of these factors, including the mode of the survey and the context of the survey and its questions, could not be quantified. The ABS is currently investigating crime victimisation survey methodology, with the aim of ensuring the continued collection of quality crime victimisation statistics.

This paper aims to increase community understanding of the nature of crime measurement in Australia, and why findings from data sources may differ. Ultimately, users must decide which measure of crime is fit for their purpose. The information in this paper can help inform that decision.

Further information

For further information about this product, please contact Catherine Andersson on Melbourne 03 9615 7375, email catherine.andersson@abs.gov.au or email crime.justice@abs.gov.au

List of references


Previous catalogue number

This release previously used catalogue number 4522.0.55.001
