In developing its 2010 Review methodologies for assessing State fiscal capacities, the Commonwealth Grants Commission (CGC) commissioned a consultancy study on the effects of government policy, as distinct from non-policy influences, on student enrolments.
The consultant used econometric methods to estimate those effects. The report is available from the CGC website: http://cgc.gov.au/method_review2/2010_review_documents2/2010_review_consultancy_report/2009_-_modeling_of_post-compulsory_enrolments
In its report the consultant identified and used a small number of non-policy variables, and then used State dummy variables, to estimate enrolments by a classification of statistical areas.
The consultant derived estimates and drew conclusions from them:
- it attributed the State dummies to the effects of State policies; and
- it claimed that policy effects were 20% and non-policy effects 80%, by dividing the explained variance between the part explained by the selected non-policy variables and the part picked up by the residual State dummies.
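That style of decomposition can be illustrated with a minimal sketch on synthetic data (the variable names, numbers, and data-generating process below are hypothetical illustrations, not the consultant's actual model): regress the outcome on the non-policy variables alone, then add the State dummies, and split the explained variance between the two.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_areas = 8, 50
state = np.repeat(np.arange(n_states), n_areas)  # State of each statistical area

# Hypothetical non-policy driver (e.g. a socio-economic index) and an
# unobserved State-level effect that the dummies will absorb.
x = rng.normal(size=state.size)
state_effect = rng.normal(size=n_states)[state]
y = 2.0 * x + state_effect + rng.normal(scale=0.5, size=state.size)

def r2(X, y):
    """R-squared from an OLS fit of y on X (intercept added here)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

dummies = np.eye(n_states)[state][:, 1:]   # drop one State to avoid collinearity
r2_nonpolicy = r2(x[:, None], y)           # non-policy variables only
r2_full = r2(np.column_stack([x[:, None], dummies]), y)  # plus State dummies

# A 20/80-style split divides the explained variance between the selected
# non-policy variables and the residual State dummies.
share_dummies = (r2_full - r2_nonpolicy) / r2_full
print(round(r2_nonpolicy, 3), round(r2_full, 3), round(share_dummies, 3))
```

The sketch makes the mechanics concrete; the criticisms below concern what that residual dummy share actually measures.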
There are some issues with that approach. Firstly, as with any econometric estimation, there is explained variance and unexplained variance. To what should the unexplained variance be attributed: policy effects, non-policy effects, or both, and in what proportions?
Secondly, even within the explained part, the State dummy variables are residual in nature: they pick up whatever differences between the States remain after the effects of the identified, selectively chosen non-policy variables.
Intuitively, in any econometric estimation the variables actually used are only a subset of those that genuinely affect the outcome in question. Inevitably, some real influences, or parts of their effects, are left out for various reasons.
Under the current scenario, the residual State dummies also capture the effects of the variables left out of the estimation. It is therefore categorically incorrect to attribute the effects of the dummies wholly to State policy.
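The point can be demonstrated with a hypothetical simulation in which State differences come entirely from an omitted non-policy variable (say, remoteness) and there is no policy effect at all; the State dummies nonetheless come out large, because they soak up the omitted effect. All names and coefficients here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_areas = 8, 200
state = np.repeat(np.arange(n_states), n_areas)

# Hypothetical data: the true model has NO policy effect. State differences
# are driven by a remoteness variable that is omitted from the regression.
remoteness_by_state = rng.uniform(0, 1, n_states)  # omitted non-policy variable
x_included = rng.normal(size=state.size)           # the one modelled variable
y = (1.5 * x_included
     - 3.0 * remoteness_by_state[state]
     + rng.normal(scale=0.3, size=state.size))

# Fit y on the included variable plus State dummies (State 0 as base).
dummies = np.eye(n_states)[state][:, 1:]
X = np.column_stack([np.ones(len(y)), x_included, dummies])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
dummy_coefs = beta[2:]

# The dummy coefficients track the omitted remoteness differentials,
# even though policy plays no role whatsoever in generating the data.
print(np.round(dummy_coefs, 2))
print(np.round(-3.0 * (remoteness_by_state[1:] - remoteness_by_state[0]), 2))
```

Reading those dummy coefficients as "policy effects" would here attribute 100% of the State differences to policy when the true policy effect is zero.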
In addition, the consultant acknowledged that the Census data used suffered from undercount, and assumed that the effect of the undercount was the same across all States.
It is highly likely that the undercount in the Census data was strongly biased, being more serious in remote areas than in cities or other urban areas. This means that States with larger shares of remote population are likely to show higher enrolment rates in the Census than their actual rates, and vice versa.
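A small hypothetical calculation shows how a differential undercount can distort measured enrolment rates. The deliberately stark assumption here is that every person missed by the Census is non-enrolled; the population sizes, rates, and undercount figures are invented for illustration only.

```python
def measured_rate(pop, true_rate, undercount_rate):
    """Enrolment rate the Census would record if all missed people were
    non-enrolled (a stark, purely illustrative assumption)."""
    enrolled = pop * true_rate              # all enrolled people are counted
    counted = pop * (1 - undercount_rate)   # part of the population is missed
    return enrolled / counted

# Same true enrolment rate (60%) in both areas, but a much heavier
# undercount in the remote area inflates its measured rate more.
urban_rate = measured_rate(pop=100_000, true_rate=0.60, undercount_rate=0.02)
remote_rate = measured_rate(pop=100_000, true_rate=0.60, undercount_rate=0.15)
print(round(urban_rate, 3), round(remote_rate, 3))
```

Under these assumptions two populations with identical true enrolment behaviour show different Census rates purely because of where the undercount falls, which is exactly the cross-State bias at issue.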
The assumption of equal undercount effects across the States was therefore highly problematic, and results derived under it were unreliable.
This was especially the case in the Australian context, where two small States have drastically, indeed diametrically, different shares of remote population.
The ACT has an almost 100% urban population, because it is essentially a single city. The Northern Territory, on the other hand, has the highest share of remote and very remote population, reflecting the very large share of Indigenous people in its total population.