ABS Response to Parity Article - The ABS Count of Homeless Youth

                            THE ABS RESPONDS TO PARITY ARTICLE

ABSTRACT

On 27 June 2012 Professor Chris Chamberlain, Director, Centre for Applied Social Research, RMIT University, provided the Australian Bureau of Statistics with a copy of an article titled The ABS Count of Homeless Youth, which he advised would appear in the June edition of Parity.

The ABS takes exception to the false claims, made in Professor Chamberlain’s article, about the ABS. Contrary to the claims made in Professor Chamberlain’s article, the ABS:
  • has not claimed there were only 5,424 homeless youth aged 12 to 18 on Census night 2006. Indeed, in his Parity article Professor Chamberlain quotes many explanations from the published ABS Discussion Paper about where ABS thinks some homeless youth are invisible in the Census dataset, so it is difficult to understand his claim that ABS ‘has released data claiming there were only 5,424 homeless youth aged 12 to 18 on Census night’
  • has not identified 5,424 homeless teenagers aged 12 to 18 on Census night 2006
  • will not replace Chamberlain and MacKenzie’s (C and M) approach with data from the National Census of Population and Housing (the Census), but instead has developed its own more transparent, consistent and reliable methodology
  • does not question whether the national Supported Accommodation Assistance Program (SAAP) data can provide a valid indicator of the proportion of school students in the homeless population, but does question the validity of the methods that C and M apply when they use the SAAP data, and proposes several changes that would be needed to make the C and M approach less inconsistent and more robust
  • has never stated that it found no evidence to suggest that homeless teenagers were either misclassified or undercounted by the Census - indeed, this language used in Professor Chamberlain’s article is not ABS terminology, is misleading, and misrepresents the ABS view. ABS does have serious reservations about the validity of the C and M methodology applied in compiling their Counting the Homeless (CTH) estimates, and their estimates of homeless youth in particular, and has found no evidence to support the plausibility of the levels they estimate
  • has not claimed that the Census provides an accurate ‘count’ of homeless youth, although ABS does claim that it can generate a consistent measure over time for monitoring trends, something that the CTH estimates cannot support.

The ABS has published its assessment that transparent, consistent and repeatable estimates of homelessness in Australia are important to support sound policy and decision making in this area of social concern. This article restates the ABS position in regard to its efforts to establish more robust estimation of homelessness and repeats some of the concerns that it has already published about the methodology C and M use in compiling their CTH estimates, in particular in regard to estimates of homeless youth.


INTRODUCTION

In March 2011 the ABS published the Discussion Paper: Methodological Review of Counting the Homeless, 2006 (cat. no. 2050.0.55.001). An early finding from the review was the need to clearly distinguish between:
  • enumeration in the Census of people who may be experiencing homelessness
  • subsequent analysis of Census records to estimate the number of people enumerated in the Census who are likely to be experiencing homelessness.

Several misleading or false statements have been made about Census enumeration methods, in part to justify estimation methodologies adopted by some researchers. But some confusion has also arisen where the inability to classify a Census record as likely to represent a person who may have been homeless on Census night is incorrectly described as either under-enumeration, or as people who were ‘missed’ by the Census.

The Discussion Paper described the methodology that C and M had used to compile their estimates, and proposed a range of methodological changes that would be needed before consistent, transparent and repeatable official estimates could be made of the number of people enumerated in the Census who were likely to have been homeless on Census night. The Discussion Paper also outlined a process of consultation on the findings, to inform a final methodology.

The Executive Summary of the Discussion Paper noted, on page 9, that there may be some people ‘couch surfing’ for whom a usual residence is reported but to which they cannot return (i.e. their potential homeless state is masked in the Census dataset). As published in the Position Paper - ABS Review of Counting the Homeless Methodology, August 2011 (cat. no. 2050.0.55.002), the consultation on the review findings confirmed the significance of the areas of likely underestimation of homelessness in relation to youth, Indigenous people and people escaping domestic violence. It also noted that no satisfactory alternative estimation methods had yet been identified or developed to address the issues of underestimation of homelessness for these groups.

Guided by its Homelessness Statistics Reference Group, the ABS is continuing to undertake research and development to improve the estimation of homelessness, including youth homelessness. In particular, the ABS is undertaking a quality study to inform the potential development of a nationally representative homeless school students survey.

Until a robust methodology is developed to measure the level of youth homelessness, ABS will focus on producing transparent, consistent and repeatable estimates that can be used to monitor change over time.


TRANSPARENCY

The use of SAAP Data in CTH: Census Night Accommodation in SAAP

There are several overlapping instances, in CTH and in Professor Chamberlain’s article, where the use of SAAP data in compiling estimates of homelessness does not reach the level of transparency expected for statistical analysis.

In their CTH report for 2001, C and M stated that: ‘We decided to replace the census data with national SAAP data for all states except Victoria and the Northern Territory.’ However, with their 2006 CTH report, on page 12 C and M state that: 'The ABS convention is to replace census figures with information from the SAAP National Data Collection if the SAAP figures are higher.’

This is not true. Prior to the commencement of its Methodological Review in 2009, ABS neither had access to nor used SAAP data for any estimation purposes.

When compiling their 2006 CTH estimates, C and M used a support period measure, which covered all people included in any SAAP support period, regardless of how long that period was, provided that:
  • the support period spanned Census night 2006
  • the support period included at least one night of supported accommodation at some time during the period.

The support period measure used by C and M is larger than the Census night SAAP counts provided by the Australian Institute of Health and Welfare to C and M for their estimation of the number of persons accommodated on Census night.

People who were in supported accommodation at some stage over a protracted support period may (indeed, were very likely to) have been counted in another category on Census night. So, not only is the scale of the support measure used by C and M for their CTH estimate wrong, but its composition is also wrong - it will include the wrong people.

C and M did not acknowledge that the method they applied in CTH 2006 was inconsistent with their published methodology. Further, they stated (on page 13 of CTH 2006) that: ‘Overall, the census strategy worked better than in 2001, but Table 3.1 shows that in all states (except Victoria) the census count was lower than the SAAP count’.

This statement masks the fact that the Census data and the SAAP data provided by AIHW for C and M’s CTH report were in broad agreement and significantly lower than the numbers C and M chose to use. While several reasons for using the support period measure were offered by C and M during the Review, none are convincing. No reason has been provided to ABS for C and M remaining silent on the inconsistent methodology when producing their CTH report, and no reason has been provided for C and M misrepresenting the quality of the Census data.

The use of the support period data overstated the SAAP homeless estimate by about 2,500 (15 per cent), and youth homelessness by about 900.

ABS was not able to determine what methodology was used for C and M’s 2001 CTH estimates, nor for their 1996 estimates.

The use of SAAP Data in CTH: ‘estimating up’ Youth Homelessness

Contrary to Professor Chamberlain’s claim in his Parity article, ABS does not question whether the national Supported Accommodation Assistance Program (SAAP) data provide a valid indicator of the proportion of school students in the homeless population, but does question the validity of the methods that C and M apply when they use the SAAP data for this purpose. The ABS Discussion Paper notes a number of reasons why this uprating process may be less than ideal for the particular SAAP measures and the way in which C and M apply them. The SAAP data have a wider scope (including, for example, private school students). The C and M use of support period data may also be inappropriate, as students may be more likely to have shorter support periods than non-students, introducing a bias into the ratio of students to non-students. SAAP publications show that people not in the labour force have much shorter periods of support, on average. This potentially indicates that students presenting to SAAP as clients will, on average, have shorter support periods than youth who are either working or unemployed. The ratio of students to non-students would be higher if SAAP Census night estimates were used rather than support period estimates.

In CTH, C and M are clear about the assumption that readers need to accept when they use SAAP data to ‘estimate up’ their National Census of Homeless School Students (NCHSS) estimate of homeless school children aged 12 to 18 years to represent both the school and non-school homeless youth (page 19, CTH 2006).

'The best source of information about the proportion of school students in the homeless population is the SAAP National Data Collection. The SAAP National Data Collection records information on all clients who use SAAP services throughout Australia.

In order to ‘estimate up’, an assumption was made that the characteristics of the youth population in SAAP reflect the characteristics of the homeless youth population overall.

This assumption has underpinned previous analyses, but it cannot be independently verified. As long as it remains reasonable to assume that the proportion of school students in SAAP is reflective of the broader homeless youth population, then the SAAP data can be used for this purpose.'

Neither in CTH, nor in C and M’s youth homelessness report, nor in Professor Chamberlain’s article, is it explained what attempts, if any, were made to verify, or at least expose the scale of, the assumption as it is used with their particular method and their particular source data (their NCHSS collection).

For example, for transparency purposes, C and M could have noted that the age distribution of SAAP data on client support periods was quite different to the age distribution they obtained in their NCHSS collection and to which they apply their SAAP ‘estimate up’ process.

The table below shows the comparison:

Age ranges                  2006 NCHSS                                SAAP client support data 2005-06,
                                                                      all closed support periods
                            NCHSS              NCHSS adjusted to
                            (Table 3.8 in      age scope of CTH
                            C&M Youth          youth 'estimate        By age      By student
                            report, 2006)      up' process                        status

14 or younger
(in secondary school)       25%                not applicable
12 to 14                                       26%                    8%          77%
15 to 16                    43%                45%                    42%         44%
17 to 18                    28%                29%                    50%         22%
19 or older                 4%
Total                                                                             34%

The casual observer might assume that the age profile of school students would be weighted towards the younger ages, and to some extent accept the age distribution shown in NCHSS as possibly reasonable. They might then wonder why the school participation rate derived from the client based SAAP data, heavily biased towards the older ages, was used in C and M’s CTH and in their youth report. A casual observer might also have assumed that client support period SAAP data should be weighted to the age distribution in NCHSS, as that is the population being ‘estimated up’. This alone would reduce the CTH youth homelessness estimate by 17 per cent (the ‘estimate up’ factor drops from over 3 to 2.59).

However, the 77 per cent school participation rate for the 12 to 14 year olds presenting to SAAP as clients looks low, and AIHW advice was that it may result from some school students who were absent from school being recorded as non-school students. Adjusting the participation rate reduces the CTH estimate further.

The casual reader might also wonder about all the homeless youth in SAAP who were accompanying either their parents or other adults, rather than presenting to the services by themselves, and why these were excluded in CTH when calculating the ‘estimate up’ factor. There are likely to be nearly as many of these youth in the 12 to 18 age range as there were clients of that age (based on the SAAP data for people accommodated on Census night). While school participation status is not reported in SAAP for accompanied children, reasonable assumptions about the likely participation by age raise somewhat the average participation across the 12 to 18 age group, and the CTH ‘estimate up’ factor would drop from three to two.
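
As a minimal sketch of that arithmetic, assume (as above) that accompanied youth aged 12 to 18 are roughly as numerous as youth clients, take the 34 per cent student share for clients from the table above, and assume an illustrative school participation rate for accompanied children (a figure SAAP does not report); the 'estimate up' factor is simplified here as one over the blended student share:

    # Illustrative sketch only. The 34 per cent student share for SAAP youth
    # clients comes from the table above; the participation rate assumed for
    # accompanied children is hypothetical, not a reported SAAP figure.
    clients = 1.0                       # relative number of 12 to 18 year old clients
    accompanied = 1.0                   # assumed roughly equal number of accompanied youth
    client_student_share = 0.34
    accompanied_student_share = 0.66    # assumed, based on likely participation by age

    blended_share = (clients * client_student_share +
                     accompanied * accompanied_student_share) / (clients + accompanied)
    estimate_up_factor = 1 / blended_share   # simplified form of the 'estimate up'

    print(round(blended_share, 2), round(estimate_up_factor, 1))
    # 0.5 2.0 - the factor falls from about three towards two, as noted above.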

As noted in the ABS Discussion Paper, CTH 2006 introduced a new method of using the SAAP data - using a five year average rather than the Census year data. If the data in SAAP fluctuated around the average, there may be some argument for using the average. However, when the SAAP data represent a trend upwards in school participation, and therefore downwards in the C and M ‘estimate up’ factor, the use of the five year average further biases (upward) the CTH ‘estimate up’ factor. This affects not only transparency, but also consistency over time, as the 2001 estimates used the 2001 year SAAP data, not the five year average.
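
The direction of this bias can be illustrated with a stylised sketch, using hypothetical participation shares that trend upward (the actual SAAP series is not reproduced here) and the same simplified factor of one over the student share:

    # Hypothetical, upward-trending school participation shares among SAAP youth
    # clients over five years - for illustrating the averaging bias only.
    shares = [0.28, 0.30, 0.32, 0.34, 0.36]

    census_year_factor = 1 / shares[-1]                   # using the Census-year share
    five_year_factor = 1 / (sum(shares) / len(shares))    # using the five year average

    print(round(census_year_factor, 2), round(five_year_factor, 2))
    # about 2.78 and 3.12 - when the share is rising, the five year average inflates
    # the factor, and differs from the 2001 approach, which used the 2001 data alone.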

Week Long Estimates

Professor Chamberlain’s Parity article explains the week long estimate that schools are asked to report in the NCHSS as follows:

‘...it is unlikely that a student counsellor or an Assistant Principal will know every homeless student on one particular day. The decision to ask schools to count over a week makes sense given the practical difficulties of identifying homeless students.’

No attempt is made, in CTH, in C and M’s youth report, or in Professor Chamberlain’s Parity article, to account for the overestimation that results.

Professor Chamberlain’s Parity article justifies the CTH approach by claiming that: ‘The ABS set the precedent for this approach when it recommended that Census collectors count the primary population over a number of days (Harvie 2001, p.12).’

This argument might appear valid to readers with no knowledge of Census processes.

The week long rough sleeper approach adopted by ABS in some locations, in special circumstances, when conducting the Census, consists of the following:
  • the use of services staff, recruited and employed by the ABS to assist the enumeration, so that their expertise is used both to find their clients who are sleeping rough and to ensure those clients are not inadvertently counted twice.

The use of a third party report (i.e. by the school staff) in the NCHSS has none of the features of the Census field practice, and its design will overestimate, even if the schools report perfectly, with perfect knowledge.

Analysis of the SAAP data provides an indication of the scale of the overestimation. In 2005–06, 52 per cent of all closed SAAP support periods lasted for one week or less (26 per cent lasted for less than one day), 17 per cent lasted from one week to one month, 18 per cent from one month to three months, 7 per cent from three months to six months and 6 per cent for longer than six months. The median length of support Australia-wide was six days; the mean was 48 days. Clearly, if this high turnover rate for SAAP clients (and accompanying children) also applies to homeless school students, it is highly likely that a measure of all school students who were homeless at any time during a week will be very much larger than the number who are homeless on any one night.
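
A stylised steady-state calculation (a sketch only, not a model of the NCHSS) illustrates why a 'homeless at any time during the week' measure exceeds a single night measure when stays are short. It assumes a constant daily inflow (a hypothetical figure) and a uniform stay length equal to the six day median quoted above:

    # Stylised illustration: constant arrivals and a uniform length of stay.
    # Under these assumptions the single-night count is arrivals_per_day * stay_days,
    # while the count of everyone present at some time during a seven day window
    # adds the new arrivals over that window.
    arrivals_per_day = 10    # hypothetical
    stay_days = 6            # the median SAAP support length quoted above
    window_days = 7          # the NCHSS reporting week

    single_night = arrivals_per_day * stay_days
    any_time_in_week = arrivals_per_day * (stay_days + window_days)

    print(single_night, any_time_in_week, round(any_time_in_week / single_night, 1))
    # 60 130 2.2 - with this much turnover a week long measure is roughly double a
    # single-night measure, before any reporting error is considered.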

Vague Question Wording in the NCHSS

There are several instances where the NCHSS methodology is questionable, and where statements about it are incorrect. Page five of Chamberlain and MacKenzie’s report, Youth Homelessness in Australia, 2006, discusses the letter sent to Principals to recruit them into the NCHSS and states that: ‘The letter indicated that the ABS was implementing a special strategy to count homeless people at the 2006 census, and that the census of homeless school students was part of this strategy.’

The ABS has played no part in the design, development, or enumeration of the NCHSS, nor in its estimation. It is not part of any ABS ‘strategy to count homeless people’. ABS has not been able to access either the data collected in the NCHSS (whether identified or anonymous) or its detailed methodological documentation to assess how it might best be used.

Principals may have been misled into cooperating with the NCHSS if they thought it was part of an ABS strategy, and any concerns about its methods that may have occurred to them may have been allayed by false claims.

On page three of Professor Chamberlain’s Parity article he introduces the discussion of ABS’s concerns with the following statements:

‘Schools were also asked to provide information on any students who ‘have ceased to be homeless’ but who have been ‘homeless within the last three months and need continuing support’.

'Altogether 99 per cent of schools completed a census return. Schools reported 7,035 homeless students using the cultural definition and 2,354 students who had previously been homeless but were still in need of continuing support. Only students who were homeless according to the cultural definition were used to build the estimate of the homeless population'.

Neither of these paragraphs is correct. While ABS is not aware of the detail of the methodology applied in processing the NCHSS data, the wording of the questions put to schools appears to leave some room for further overestimation. Schools are asked to report:

‘To the best of your knowledge, how many homeless students are currently attending your school? [Include those who have found accommodation, but who are in need of continuing support]’

Schools are also asked to report: ‘Longer term arrangements. These students were homeless within the last 3 months and need continuing support.’

It is unclear if, or how, the reports from NCHSS respondents for current homelessness are then adjusted by C and M to exclude students who were reported as having been homeless at some time more than three months ago but who are in need of ongoing support.

The second quoted paragraph from Professor Chamberlain’s Parity article also includes a statement which is not true. Schools did not report 7,035 homeless students, and probably did not report 2,354 students who had previously been homeless but were still in need of continuing support. Table 3.1 of C and M’s Youth Homelessness, Australia, 2006 report shows estimates of homeless students for 2001 and 2006. That table reports that for 2001 schools sent C and M data that totalled 11,461 school students who were either homeless at some time during Census week or who had found accommodation but were in need of continuing support. The number reported by C and M in that table for 2006 is 7,795, a fall of 32 per cent.

C and M’s table 3.1 goes on to show that they adjust the totals they calculate from school reports up by 6.7 per cent in 2001 and 20.5 per cent in 2006, to account for schools ‘undercounting’ their homeless students. It is not stated, nor clearly deducible from their Youth Homelessness report, whether the adjustment is applied uniformly to the reported totals of homeless students plus those no longer homeless but in need of ongoing support, or whether separate adjustments are made by C and M for the homeless and the formerly homeless. Transparency in method would be useful to help readers assess the method and the plausibility of the authors’ resulting estimates.


CONSISTENCY

In addition to noting the five year averaging issue, which destroys consistency between the 2001 and 2006 estimates, the Review also noted that the tripling of the undercount adjustment in the NCHSS is surprising when the ‘raw’ NCHSS results fell very dramatically. C and M’s Youth Homelessness report for 2006 states that the follow up method used by C and M changed between 2001 and 2006. In 2001 they spoke to about 500 schools to assess the quality of teacher reports and made documented assumptions about the adjustments they applied based on school size.
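
A sketch of how the falling raw reports interact with the rising adjustment, applying the quoted rates to the combined reported totals from table 3.1 (applying them this way is an assumption; as discussed above, C and M do not make the application explicit):

    # Raw school-reported totals and aggregate adjustment rates quoted above from
    # C and M's table 3.1. Applying each rate to the combined reported total is an
    # assumption, since the report does not state how the adjustment was applied.
    raw = {2001: 11461, 2006: 7795}
    adjustment = {2001: 0.067, 2006: 0.205}

    adjusted = {year: round(raw[year] * (1 + adjustment[year])) for year in raw}
    raw_fall = 1 - raw[2006] / raw[2001]
    adjusted_fall = 1 - adjusted[2006] / adjusted[2001]

    print(adjusted, round(raw_fall, 2), round(adjusted_fall, 2))
    # {2001: 12229, 2006: 9393} 0.32 0.23 - the tripled adjustment converts a
    # 32 per cent fall in school reports into a 23 per cent fall after adjustment.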

In 2006 C and M report that they visited a ‘purposive sample’ of 173 schools that had reported three or more homeless students in the NCHSS. They also report that two thirds of the schools satisfied them that there had been no undercounting. C and M then only report an aggregate adjustment of 20.5 per cent for 2006.

Presumably, if two thirds of schools were assessed by C and M to have reported with no ‘undercounting’, then the entire underreporting adjustment of about 1,600 homeless school students is attributable to the one third that C and M assess as having underreported. Assuming the schools that did underreport, and those that did not, were, on average, about the same size and with the same proportion of homeless students, that suggests that the C and M adjustment is an upward adjustment of 60 per cent to the reported numbers for the schools they assess as having underreported.
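
The arithmetic behind that 60 per cent figure can be sketched with the figures quoted above and the same simplifying assumption, namely that the one third of schools assessed as underreporting account for about one third of the reported numbers:

    # Figures quoted above from C and M's 2006 report.
    reported_total = 7795       # school-reported total for 2006
    adjustment_rate = 0.205     # aggregate 'undercount' adjustment for 2006

    adjustment = reported_total * adjustment_rate        # about 1,600 students
    underreporting_schools_share = reported_total / 3    # assumed share of reports from
                                                         # the underreporting third of schools
    implied_uplift = adjustment / underreporting_schools_share

    print(round(adjustment), round(implied_uplift, 3))
    # 1598 0.615 - an upward adjustment of roughly 60 per cent to the reported
    # numbers of the schools assessed as having underreported.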

It may also be that no adjustment was made for the schools not contacted and not in scope of the purposive sample (that is, those that reported fewer than three homeless school students); the C and M methodology for 2001 made no adjustment for smaller schools. If this were the case, the undercount adjustment would be higher than 60 per cent for the population of schools in scope for the adjustment. Seeing the detail of the adjustment would help in assessing its plausibility.

Any undercount by schools is likely to have been larger in 2001, when the school reports of homelessness were very much higher, and the time series is very likely highly distorted by the changed undercount adjustment methodology applied by C and M.

ABS rejects the claim made on page five of Professor Chamberlain’s Parity article that:

‘...the ABS accepts that the overall homeless youth figure must be higher than the number of homeless school students (7,035), but they estimate up by multiplying by two, whereas C and M estimated up by multiplying by three.’

In the quoted paragraph ABS was simply addressing the inappropriateness of the ‘estimate up’ process used by C and M, and this should be considered in light of the fundamental problems with the school based estimates that C and M create using their NCHSS before they apply their ‘estimate up’ process. Table two on page six of Professor Chamberlain’s Parity article may also imply to some readers that ABS has accepted the NCHSS based estimates - this is not correct.


REPEATABILITY

The methods used in NCHSS are unlikely to be repeatable in a reliable way.

The ABS rejects the statement on page six of Professor Chamberlain’s Parity article that: ‘The key to understanding the ABS position is their conviction that there was no evidence to suggest that homeless teenagers were either misclassified or undercounted by the Census (ABS 2011, p.46).’

ABS has been very transparent that for some homeless people either they are not enumerated in the Census or the Census provides insufficient information to support assumptions about their likelihood of having been homeless on Census night. Youth are one such group, which is why ABS is both pursuing several alternative approaches to estimating the level of youth homelessness, and creating methods that consistently report those youth who appear to be homeless in the Census records, so that changes over time can be reported from the Census.

ABS has been clear in stating that there is no evidence to support the level of youth homelessness estimated by C and M, and sufficient evidence to conclude that their estimates are very wide of the mark as well as being very inconsistent over time.

BROADER VALIDATION OF YOUTH HOMELESSNESS ESTIMATES

A cursory examination of the SAAP data shows the very high number of clients approaching and using services over a period of time compared to the much smaller number of clients at any one time. A ratio of between six to one and eight to one is likely. It is the higher number of clients accessing services over time that resonates with many service providers as the better reflection of the scale of youth homelessness.

The Methodological Review noted that while ABS could not be definitive about the level of youth homelessness on Census night, it was likely to be much lower than the CTH estimates, because those much higher estimates implied that the number of youth aged 12 to 18 experiencing homelessness over a year would be 150,000 or more. No evidence was brought forward during the review or in Professor Chamberlain’s Parity article to substantiate such high estimates.
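
As a rough illustration of that reasoning, take the six-to-one to eight-to-one ratio of clients over a period to clients at a point in time noted above (a sketch only; the exact ratio for homeless youth is not known) and ask what Census-night level an annual figure of 150,000 would imply:

    # Sketch: the point-in-time level implied by an annual figure of 150,000,
    # for flow-to-stock ratios in the range suggested by the SAAP data above.
    annual_estimate = 150_000
    for ratio in (6, 7, 8):
        point_in_time = annual_estimate / ratio
        print(ratio, round(point_in_time))
    # 6 25000
    # 7 21429
    # 8 18750
    # An annual figure of 150,000 or more therefore implies a Census-night level of
    # roughly 19,000 to 25,000 homeless 12 to 18 year olds - the scale of the CTH
    # estimates for which the Review could find no supporting evidence.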

On pages seven to nine of Professor Chamberlain’s Parity article, Professor Chamberlain argues that a proportion of the youth visiting other households, and with a usual address reported for them in the Census, may have been homeless on Census night. He argues that one quarter to one third of the 19,100 youth visiting on their own on Census night might be hidden homeless.

The 19,100 estimate reported by Professor Chamberlain adds together the separately reported numbers in the ABS Discussion Paper of 3,100 youth visiting a lone person household on Census night and about 16,000 youth visiting predominantly resident families. Of the 3,100 youth visiting older lone persons, ABS reported half of these lone persons as being aged 36 to 55 years and a quarter as aged over 55 years. These numbers appear to be broadly consistent with the large numbers of youth who are reported in nationally representative surveys as staying overnight with either a parent living elsewhere or with a grandparent, although it is possible that some may be homeless.

Of the 16,000 other youth, about half are visiting holiday destinations, and the usual address for them is elsewhere. ABS conservatively concluded, in its Discussion Paper, that some may be homeless, but not many. For the balance, a similar conclusion is drawn: while most will be undertaking the usual sleepover activity (because mum and/or dad are away, for school work, a family celebration etc.), some may be homeless. The scale of homelessness proposed by Professor Chamberlain is not likely and there is no evidence to support estimates on that scale.

Similar arguments apply for the 3,600 youth visiting with their mothers. While these youth do not accord with the arguments in CTH for a hidden youth homelessness population, it is possible, as argued in the ABS Discussion Paper, that some of the women, and their children, are escaping domestic violence, but that cannot be quantified in past Census data and is unlikely to be on the scale suggested by Professor Chamberlain.

Of the 3,500 youth visiting with children, most will be visiting in the normal course of family life. While some may be homeless, the rate of one quarter to one third suggested by Professor Chamberlain is highly unlikely.

Overall, if Professor Chamberlain’s assumptions were to be believed, the 12 to 18 year old group would be the most stay at home age group of all (other than for the huge numbers reclassified as homeless in his estimates).

On page nine of Professor Chamberlain’s Parity article he notes that: ‘Altogether, there were about 37,500 teenagers aged 12 to 18 who were not counted by the Census for various reasons. If one-quarter to one-third of them were homeless, this would account for 9,400 to 12,500 young people.’

The scale of the assumption by Professor Chamberlain is not supported by any evidence.

There were 6,276 people enumerated in the 2006 Census Post Enumeration Survey (PES) for whom a Census record was not found (ABS cat. no. 2940.0, table 4.1). Some 97 per cent of these respondents to the PES were usual residents of their PES address. Only 3 per cent were visitors at the time of the PES. About three quarters of the 4,796 PES respondents who thought that they had been counted in the Census three weeks earlier also thought that they would have been counted at the PES address, i.e. at home. They were not visitors to that address at either the time of the Census or the time of the PES. For these people with a common usual address at both the Census and the PES, the reasons for them being missed during the Census are not known.

Common reasons include simple error on the part of householders completing the form (for example, assuming the person is being enumerated elsewhere because they were away from home on Census night, or the person being accidentally left off the Census form). If they were in a specialist homeless services dwelling at the time of the Census and have since returned home, they would have been included in the SAAP component of homelessness in the Census dataset. If they had been in a boarding house at the time of the Census and were missed in a boarding house on Census night, their usual address elsewhere in Australia (their PES address) would have meant they would need to be excluded from any homeless estimate. Similarly, if they were missed while sleeping rough, had they been enumerated they should also have reported a usual address elsewhere in Australia and been excluded from any homelessness estimates.

The other 25 per cent of this group of 4,796 PES respondents who thought they had been counted nominated another address where they thought they were enumerated in the Census. About 50 per cent of these people (546) nominated a Census night dwelling address that was missed by Census collectors; that is, they were not people who chose not to participate, nor were they in dwellings that the Census collectors could not make contact with. They were in dwellings that the Census collector simply missed, and their omission from the Census enumeration has no impact on homelessness estimates.
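
The PES figures quoted above can be tallied as a simple decomposition (a sketch only, using the counts and rounded proportions given in the text):

    # Decomposition of 2006 PES respondents for whom no Census record was found,
    # using the counts and rounded proportions quoted above.
    missed_total = 6276                            # no Census record found
    usual_residents = round(missed_total * 0.97)   # usual residents of their PES address
    thought_counted = 4796                         # subset who thought they had been counted

    at_home = round(thought_counted * 0.75)        # thought counted at the PES address
    elsewhere = thought_counted - at_home          # nominated another address
    in_missed_dwellings = 546                      # of those, in dwellings the Census
                                                   # collector missed entirely

    print(usual_residents, at_home, elsewhere, round(in_missed_dwellings / elsewhere, 2))
    # 6088 3597 1199 0.46 - roughly half of those nominating another address were in
    # dwellings missed by collectors, with no bearing on homelessness estimates.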

While 97 per cent of PES respondents reported their PES address as their usual address, and most of those reported that it was also their Census night address, there may be some people enumerated in the PES who were not counted in the Census because they were temporarily absent and homeless at the time of the Census, but had returned home in the three weeks since the Census. Such circumstances may include youth or people escaping domestic violence and staying temporarily with other households on Census night but not being recorded on the Census form for that household.

As the PES does not approach non-private dwellings (or people sleeping out), it does not generate direct estimates of undercount for people in those circumstances at the time of the PES. However, the final undercount estimates are weighted to account for the entire population, including people in non-private dwellings and people who were not in dwellings.

Overall, while an argument can be made that some of the undercount of youth in the Census is related to homelessness, it is likely to be a small component and cannot justify the level of assumption made by Professor Chamberlain.


THE WAY FORWARD

In addition to the homeless school students quality study, the ABS has developed a work program of both surveys and the Census which will improve the measurement of homelessness in Australia, including for youth homelessness.

Until a robust methodology is developed to measure the level of youth homelessness, ABS will focus on producing transparent, consistent and repeatable estimates that can be used to monitor change over time. Because the ABS methods are transparent, users can assess whether there is any evidence to suggest that the components of homelessness that cannot yet be estimated reliably are likely to be moving differently over time to those elements that can be measured.

Without transparency and consistency, users cannot know the likely quality of the level estimates of homelessness, nor interpret movements over time, nor understand the composition of the homeless population. The risks of misdirected policy are significant without reliable evidence.


The latest Parity publication, containing Chris Chamberlain's article 'The ABS Count of Homeless Young People', can be accessed at http://www.chp.org.au/parity/index.html