
Articles tagged with: testing

A Case for REAL Math Teaching

Math abilities of US public school students were mediocre even before the introduction of Common Core math in 2010. This is due in large part to the constructivist method of teaching math used since the 1990s. Administrators believe this method leads to “equity,” but it actually makes matters worse. Ted Nutting, a retired Seattle high school math teacher, explains the problems with this teaching method in the following commentary.
Read more…

Would you let your children reveal their innermost feelings and thoughts to a perfect stranger? No?
Yet when they take the Common Core assessments, they are asked to reveal their thoughts and beliefs to a computer that will track, record, and retain every response. In fact, schools need not rely on assessments alone to mine this information; any assignment done on a computer or tablet, or even an “education game,” can serve as a stealth assessment.

Please be aware that the Common Core assessments are not “standardized tests” as we knew them in the past. The Common Core assessments are adaptive to each child, so each child receives different questions; this is not “standardization.” (We have explained the difference between standardized tests and assessments in a previous post.)
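To see concretely why an adaptive assessment gives each child different questions, here is a toy sketch in Python of how such a test might branch. Everything in it, the item bank, the starting level, the up-or-down rule, is a hypothetical illustration, not the actual logic of Smarter Balanced or PARCC.

```python
# Toy sketch of computer-adaptive item selection. The item bank and
# branching rule are invented for illustration; they are not drawn from
# Smarter Balanced, PARCC, or any real testing engine.
ITEM_BANK = {
    1: ["easy-1", "easy-2"],
    2: ["medium-1", "medium-2"],
    3: ["hard-1", "hard-2"],
}

def adaptive_test(answers_correct: list[bool]) -> list[str]:
    """Return the items asked: move up after a correct answer, down after a miss."""
    level, asked = 2, []          # every child starts at the middle level
    for i, correct in enumerate(answers_correct):
        asked.append(ITEM_BANK[level][i % 2])
        level = min(3, level + 1) if correct else max(1, level - 1)
    return asked

# Two children answering differently see different questions:
print(adaptive_test([True, True, False]))   # ['medium-1', 'hard-2', 'hard-1']
print(adaptive_test([False, False, True]))  # ['medium-1', 'easy-2', 'easy-1']
```

Because the sequence of items depends on each child’s own answers, no two children need see the same test, which is exactly why these assessments are not “standardized” in the traditional sense.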

Education expert Mercedes Schneider comments on the use and abuse of power surrounding the assessments. She refers to the PARCC assessments, but her comments apply equally to the Smarter Balanced assessment used in Washington State. Read her blog post, “The Powerful, Enforced Silence around Standardized Testing.”

The GED, the General Educational Development test, is produced by the American Council on Education. In 2011, the Council partnered with Pearson, and the GED was aligned with the Common Core State Standards. Pearson is heavily involved with both the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), the two consortia developing assessments for the Common Core State Standards.

Beginning in 2014, students taking the GED took the revised test aligned with the Common Core State Standards.

According to the e-newsletter SCENE, about 540,000 students earned their GED in 2013. In 2014, after the changes to the GED, that number dropped to about 55,000. Read the article.

Some states have stopped requiring the GED and have begun requiring alternative tests. See the map.

The GED’s new alignment with the Common Core means that even students who were homeschooled must become knowledgeable about the Common Core standards if they want to earn their GED.

The new GED test will not only flunk more students but will also discourage many from even attempting the exam.

As we said before, robo-graders just evaluate whether a piece of writing contains complex sentences and long words, observes punctuation and grammar conventions, and has other features that can be programmed into a computer. The computer cannot tell if the writer has made factual errors. According to Les Perelman, Director of Writing at the Massachusetts Institute of Technology, “E-Rater doesn’t care if you say the War of 1812 started in 1945.”

To prove his point, he and three students from Harvard and MIT created an app that generates essays the robo-grader will deem well-written, according to the algorithms of its programming. They call their program BABEL, the Basic Automatic B.S. Essay Language Generator.

Read their hilarious essay. The essay received a top score of 6 points.
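The idea behind such a generator can be sketched in a few lines of Python. This toy version is our own illustration, not the actual BABEL code; the word lists and sentence template are invented, but the principle, stringing impressive vocabulary into grammatical nonsense, is the same.

```python
import random

# Toy illustration of the BABEL idea: string together long, impressive
# words in a grammatical frame. The word lists and template are invented
# for this sketch; this is not the actual BABEL generator.
NOUNS = ["epistemology", "postulate", "assimilation", "agronomist", "orator"]
ADJECTIVES = ["quixotic", "perfidious", "gregarious", "insidious", "vociferous"]
VERBS = ["promulgates", "expostulates", "accentuates", "adjudicates"]

def babble_sentence() -> str:
    return (f"The {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
            f"{random.choice(VERBS)} the {random.choice(ADJECTIVES)} "
            f"{random.choice(NOUNS)}.")

def babble_essay(sentences: int = 5) -> str:
    return " ".join(babble_sentence() for _ in range(sentences))

print(babble_essay())
# Grammatical, polysyllabic, and meaningless: exactly the surface
# features a robo-grader rewards.
```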

Now, because of Perelman’s criticisms, which it cannot refute, the Educational Testing Service is refusing to cooperate in further verification trials. See the article about Mr. Perelman being censored.

“Automatic Scoring Engines” are all the rage among education “reformers” as a tool for grading writing assessments. What are they? “Robo-grader” is a more understandable description. Some may point out that computers have been used to grade tests for years. Yes, but the tests of the past were normed, standardized, multiple-choice tests with right or wrong answers. Now we have assessments with open-ended response and essay questions.

We have been told repeatedly that the old multiple-choice tests are inaccurate and inferior, and that essay questions help assess higher-order thinking skills.

So please explain: why are my child’s higher-order thinking skills being assessed by a computer that has absolutely no higher-order thinking skills?

Les Perelman, a research affiliate at the Massachusetts Institute of Technology, has been critical of robo-graders. One example is the Educational Testing Service’s (ETS’s) “e-rater Engine,” part of the Criterion online writing evaluation service, which is used to evaluate the essays of students taking the Graduate Record Exam. Other companies, such as Pearson Educational Technologies, are developing similar capabilities; Pearson’s is called “WriteToLearn.”

Perelman says the problem is that the computer cannot discern truth from falsehood; it can only evaluate the length and difficulty of the words in a response, the length of its paragraphs, its adherence to grammatical rules, and other programmable elements of writing. An essay could contain glaring factual errors or complete nonsense yet still follow the required writing conventions.
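A minimal sketch in Python makes this concrete. The features and weights below are hypothetical, invented for illustration rather than taken from e-rater or any real engine, but they show why a factually false sentence can score exactly as well as a true one: no feature measures truth.

```python
import re

# Purely illustrative surface-feature scorer. The features and weights
# are hypothetical, not taken from e-rater or any real engine.
def surface_features(essay: str) -> dict:
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "long_word_ratio": sum(len(w) >= 8 for w in words) / max(len(words), 1),
    }

def toy_score(essay: str) -> float:
    f = surface_features(essay)
    raw = (0.5 * f["avg_word_len"]          # reward longer words
           + 0.1 * f["avg_sentence_len"]    # reward longer sentences
           + 4.0 * f["long_word_ratio"])    # reward "difficult" vocabulary
    return min(6.0, raw)  # cap at a 6-point scale

# A factually false claim scores the same as a true one,
# because nothing in the scorer checks facts:
true_claim = "The War of 1812 started in 1812 and reshaped American politics."
false_claim = "The War of 1812 started in 1945 and reshaped American politics."
print(toy_score(true_claim) == toy_score(false_claim))  # True
```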

The Educational Testing Service is using the e-rater for students taking the Graduate Record Exam to enter grad school. In the future, this type of robo-grading technology could come to Washington State K-12 schools: it was referenced in the Memorandum of Understanding between Washington State and the federal Department of Education when Washington State signed on as the lead state in the Smarter Balanced Assessment Consortium.

Read the article, “Facing a Robo-Reader? Just Keep Obfuscating Mellifluously”.

Explanation of the difference between assessments and tests, the terms “valid” and “reliable,” and other information.

The following opinion piece was sent as a press release on June 4, 2001, from the offices of Sen. Harold Hochstatter (R – Moses Lake) and Sen. Val Stevens (R – Arlington). Although the specific supporting statistics cited are somewhat dated, the problem they illustrate has not been solved. The WASL is still biased against boys.

Did you know the WASL is an “assessment,” not a “test”? What is the difference between “valid” and “reliable” in an assessment or test? Find out more.

The WASL is a scam. The scores are easily manipulated. Commentary written in 2000.
