“Automatic scoring engines” are becoming all the rage among education “reformers” as a tool for grading writing assessments. What are they? “Robo-grader” is a more understandable description. Some may point out that computers have been used to grade tests for years. Yes, but the tests of the past were normed, standardized, multiple-choice tests with right-or-wrong answers. Now we have assessments with open-ended response and essay questions.
We have been told repeatedly that the old multiple-choice tests are inaccurate and inferior, and that essay questions help assess higher-order thinking skills.
So please explain: why are my child’s higher-order thinking skills being assessed by a computer that has absolutely no higher-order thinking skills of its own?
Les Perelman, a research affiliate at the Massachusetts Institute of Technology, has been critical of robo-graders. One example is the Educational Testing Service’s (ETS’s) “e-rater engine,” part of its Criterion online writing evaluation service; essays written by students taking the Graduate Record Examination (GRE) are scored by this engine. Other companies, such as Pearson Educational Technologies, are developing similar products. Pearson’s is called “WriteToLearn.”
Perelman says the problem is that the computer cannot discern truth from falsehood; it can only evaluate surface features such as the length and difficulty of words, the lengths of paragraphs, adherence to grammatical rules, and other programmable elements of writing. An essay could contain glaring factual errors or complete nonsense yet still follow the required writing conventions and earn a high score.
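Perelman’s point can be illustrated with a toy scorer. The sketch below is purely hypothetical and is not ETS’s actual e-rater algorithm; it simply rewards the kinds of surface features he describes (word length, essay length, paragraph count), which makes it blind to whether an essay is true or even coherent.

```python
# Hypothetical surface-feature scorer (NOT the real e-rater): it measures
# only form -- average word length, paragraph count, overall length -- and
# knows nothing about meaning or factual accuracy.

def surface_score(essay: str) -> float:
    words = essay.split()
    if not words:
        return 0.0
    # Longer words stand in for "difficult" vocabulary.
    avg_word_len = sum(len(w) for w in words) / len(words)
    # Blank lines stand in for paragraph breaks.
    n_paragraphs = essay.count("\n\n") + 1
    # Longer essays earn a capped bonus.
    length_bonus = min(len(words) / 50, 2.0)
    return round(avg_word_len + n_paragraphs + length_bonus, 2)

coherent = ("The American Civil War began in 1861 after decades of sectional "
            "conflict over slavery and states' rights, culminating in secession.")
nonsense = ("The American Civil War commenced in 1492 after elaborate magnetic "
            "disagreements concerning interplanetary agriculture and cheese.")

# Both essays use comparably long words, so their scores are close -- and the
# factually absurd one can even score higher, because its words are longer.
print(surface_score(coherent), surface_score(nonsense))
```

Because nothing in the scorer touches meaning, the nonsense essay about “interplanetary agriculture” fares at least as well as the accurate one, which is exactly the failure mode Perelman describes.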
The Educational Testing Service is already using the e-rater to score essays from students taking the GRE to enter graduate school. In the future, this type of robo-grading technology could come to Washington State K-12 schools: it was referenced in the Memorandum of Understanding between Washington State and the U.S. Department of Education when Washington State signed on as the lead state in the Smarter Balanced Assessment Consortium.