Education experts have warned that computer glitches during the latest NAPLAN tests may render this year's data unreliable.
Dr Jessica Holloway and Dr Steven Lewis – researchers from Deakin University’s Centre for Research for Educational Impact (REDI) – said widespread technical disruption on the first day of the online testing had compromised the validity of the results.
“We saw a lot of lost time during the tests and additional pressure for students having to rush to finish or re-sit the test another day,” Dr Holloway said.
“There is a great deal of emotional stress involved in the testing process and these problems would have created further anxieties for students who were mentally prepared to take the test one day, but, after encountering technical glitches, needed to re-sit the test on another day.”
More than one million students in Years 3, 5, 7 and 9 sat the NAPLAN tests in May, and about half of all schools completed the tests online as part of the Australian Curriculum, Assessment and Reporting Authority's plans to move the entire testing process online by 2021.
“We know that significant numbers of students across several states experienced delays ranging from several minutes to much longer but it is not possible to adequately identify all the students who were impacted,” she said.
Dr Holloway said the technical glitches further undermined NAPLAN's aim of creating a 'level playing field' for academic comparison across Australia.
“This year's problems further erode the claim that NAPLAN enables comparisons between schools,” Dr Holloway said.
“It is not a level playing field if some students sat the tests online and some completed paper tests. And how do you compare the progress of a cohort if they took the paper test last time and the online test this year?”
Dr Lewis said it was not clear what the tests were actually measuring.
“NAPLAN and similar tests around the world tell us more about equity issues than the quality of teachers or schools,” Dr Lewis said.
He said that while these assessments are notionally about measuring students' literacy and numeracy competencies, the results are strongly influenced by differences in advantage and disadvantage between groups of students, and are not an accurate way of assessing the performance of students, schools or teachers.
“They may provide data that is potentially useful to schooling systems but these one-off tests fail to meaningfully capture the complex work of schools and teachers, or comprehensively reflect the learning and abilities of students,” Dr Lewis said.