
Essays graded by a robo-grader don’t need to make sense (Part 2 of a series)

November 2, 2014

As we said before, robo-graders simply evaluate whether a piece of writing contains complex sentences and long words, observes punctuation and grammar conventions, and shows other surface features that can be programmed into a computer. The computer can’t tell whether the writer has made factual errors. According to Les Perelman, Director of Writing at the Massachusetts Institute of Technology, “E-Rater doesn’t care if you say the War of 1812 started in 1945.”
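To make that concrete, here is a toy sketch in Python of a scorer that rewards only surface features. It is emphatically not E-Rater’s actual code: the `surface_score` function, the features it counts, and the weights are all invented for illustration.

```python
import re

def surface_score(essay: str) -> float:
    """Toy surface-feature scorer (illustrative only, not E-Rater)."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)          # vocabulary "sophistication"
    avg_sentence_len = len(words) / len(sentences)                  # sentence complexity
    long_word_ratio = sum(len(w) >= 7 for w in words) / len(words)  # share of big words
    # Arbitrary weighted mix of surface features; nothing here checks facts or logic.
    return round(2.0 * long_word_ratio + 0.3 * avg_word_len + 0.05 * avg_sentence_len, 2)

# A factually absurd but wordy sentence outscores a short, accurate one,
# because the scorer never looks at what the words actually claim.
print(surface_score("The War of 1812 began in 1812."))
print(surface_score("The catastrophic commencement of multitudinous hostilities "
                    "in 1945 unequivocally inaugurated the notorious War of 1812."))
```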

To prove his point, he and three students from Harvard and MIT created an app that generates essays the robo-grader will deem well-written, according to the algorithms of its programming. They call their program BABEL, the Basic Automatic B.S. Essay Language Generator.

Read their hilarious essay, which received a top score of 6 points.

Now, because of Perelman’s criticisms, which the assessment company cannot refute, the Educational Testing Service is refusing to cooperate in further verification trials. See the article about Mr. Perelman being censored.
