The silent tragedy of NAPLAN: students reported in misleading bands

The release of international reports on education, as well as NAPLAN, has placed teachers under much pressure. Most of this pressure arises from innuendo, or what statisticians call correlations. Is this pressure warranted?

NAPLAN reporting of student abilities is unreliable, and this is likely to have tragic effects for some students. These individual tragedies are largely silent, felt only by the student. The dubious accuracy of NAPLAN results calls into question the fairness of recent media reports that label students as big improvers, coasters, and strugglers.

NAPLAN reports student results as dots within bands numbered from 1 to 10. That these dots are solid conveys a sense of certainty, a certainty not matched by the mathematics. It is normal practice in statistics to show a confidence interval. For example, a 90% confidence interval would show a range in which we are 90% confident a student’s ability is located. NAPLAN does not report these confidence intervals for individual students.
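To make this concrete, here is a minimal sketch of the arithmetic. The numbers are purely illustrative: NAPLAN does not publish a per-student standard error, so both the score and the error below are assumptions, not official figures.

```python
# Purely illustrative numbers: NAPLAN does not publish a per-student
# standard error of measurement (SEM), so both values are assumptions.
score = 500   # hypothetical reported scale score
sem = 25      # assumed SEM in scale points

z90 = 1.645   # 95th percentile of the standard normal (two-sided 90% coverage)
low, high = score - z90 * sem, score + z90 * sem
print(f"90% confidence interval: [{low:.0f}, {high:.0f}]")  # about [459, 541]
```

An interval more than 80 points wide can easily straddle a band boundary. That is exactly the information a single solid dot hides.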

Margaret Wu (2016) finds that if NAPLAN included confidence intervals, it would not be possible to confidently locate a student in a particular band: the interval around a score is wide relative to the bands themselves. The result is that around one in ten students is reported in the wrong band. This effect is random, and it potentially has tragic consequences.
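Wu derives her figure from NAPLAN’s actual measurement error; the simulation below only illustrates the mechanism, with made-up parameters (a 1000-point scale, bands 100 points wide, an assumed standard error of 25). The share landing in the wrong band depends entirely on the ratio of error to band width.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up parameters, for illustration only: abilities on a 0-1000 scale,
# bands 100 points wide, measurement noise with an assumed SEM of 25.
n = 1_000_000                                   # roughly the real cohort size
true_ability = rng.uniform(300, 700, n)         # true abilities, spanning several bands
observed = true_ability + rng.normal(0, 25, n)  # reported score = ability + random error

band_width = 100
true_band = true_ability // band_width          # band the student belongs in
reported_band = observed // band_width          # band the dot is printed in

wrong = np.mean(true_band != reported_band)
print(f"Share reported in the wrong band: {wrong:.1%}")  # about 20% with these values
```

With these assumed values roughly one in five students lands in the wrong band; Wu’s one-in-ten figure reflects NAPLAN’s real error and band structure. Either way, the misplacement is random, so neither the student nor the reader can correct for it.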

Over one million students sit the NAPLAN tests, so there are over one million stories. Once the unreliability is considered, new stories emerge for our improvers, coasters and strugglers. Improvers, for example, could simply be students who were reported below their true level one year and above it the next; in statistics, this is regression to the mean. Most students would be coasters.
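A short simulation makes regression to the mean visible. The parameters are again assumptions: true abilities are held fixed between two sittings, so any apparent ‘improvement’ among flagged strugglers is pure measurement noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed values for illustration. Abilities never change between sittings,
# so any movement in the scores below is measurement noise, not learning.
n = 100_000
true_ability = rng.normal(500, 70, n)        # fixed true abilities
year1 = true_ability + rng.normal(0, 25, n)  # first noisy measurement
year2 = true_ability + rng.normal(0, 25, n)  # second noisy measurement

# Flag the bottom 10% of year-1 scores as "strugglers"...
strugglers = year1 < np.quantile(year1, 0.10)

# ...and watch them "improve" the next year despite learning nothing.
print(f"Flagged strugglers, year-1 mean: {year1[strugglers].mean():.0f}")
print(f"Same students, year-2 mean:      {year2[strugglers].mean():.0f}")
```

With these made-up numbers the flagged group appears to gain about 15 scale points across a year in which, by construction, nothing was learned. Stories of improvers and decliners built on two noisy measurements deserve that much scepticism.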

While most students would receive a NAPLAN result close to their true level, about 10%, or more than 100,000 students, receive a misleading message. This includes students who may have tried hard to improve, only to be randomly reported below their real level. It also includes students who are coasting but are randomly reported as excelling. Both kinds of misleading message affect student motivation. That these little tragedies are occurring in large numbers is likely to be undermining Australia’s international performance.

NAPLAN doesn’t assess the curriculum; its tests only “broadly reflect aspects of literacy and numeracy within the curriculum in all jurisdictions” (ACARA, 2016). If teachers were to teach only ‘aspects of curriculum’, and provide student feedback in the haphazard fashion of NAPLAN, they would be ridiculed.

Teachers are being held accountable to dubious statistics. For example, the American Educational Research Association (2015) strongly cautions against the use of value-added models. Yet Australia reports student progress on the My School website (myschool.edu.au) without reservation or qualification. This is not in the interest of students, teachers, or schools. In whose interest this reporting occurs remains opaque.

Australia’s education measurement industry is plagued with vested interests. With over 300,000 Australian teachers, everybody wants a piece of the pie. Teacher training, teacher supply, and teacher development all provide commercial opportunities. This feeding frenzy is a disgrace and should stop.

Addendum: link to my rejoinder to ACARA’s Twitter response

ACARA. (2016). NAPLAN achievement in reading, writing, language conventions and numeracy: National report for 2016. Sydney: ACARA.

American Educational Research Association. (2015). AERA statement on use of value-added models (VAM) for the evaluation of educators and educator preparation programs [Press release].

Wu, M. (2016). What national testing data can tell us. In B. Lingard, G. Thompson, & S. Sellar (Eds.), National testing in schools: An Australian assessment. London: Routledge.
