A new automated reader developed by the Educational Testing Service (ETS) can grade an impressive 16,000 student essays in 20 seconds. And, according to experts like Mark Shermis, dean of the College of Education at the University of Akron, the robot reader is just about as accurate as a human reader at assessing a student’s prose. Humans are great at grading essays, but we can only slog through about 30 per hour.
For some in the world of testing, ETS’s “e-Rater” is the wave of the future. It’s fast, accurate and never needs a coffee break. Humans, however, have one major plus: they’re able to detect truth and nuance, qualities of the written word that are totally lost on a computer, which can only measure more quantifiable properties like word difficulty and essay length.
Critics say that if students figure out the computer’s algorithms, they can easily game the system and earn a stellar grade with an essay full of wordy nonsense. Should speed trump the human element when it comes to the tedious task of grading piles of essays? Is there a way to combine the old technology and the new?
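The worry about “quantifiable properties” can be made concrete with a toy sketch. This is emphatically not ETS’s e-Rater algorithm; it is a hypothetical scorer, with made-up weights, that rewards only essay length and average word length, showing why surface-level metrics let wordy nonsense outscore concise prose.

```python
# Hypothetical illustration only -- NOT ETS's e-Rater.
# Scores an essay using two surface features critics point to:
# total word count (essay length) and average word length
# (a crude proxy for "word difficulty"). Weights are arbitrary.

def toy_essay_score(essay: str) -> float:
    words = essay.split()
    if not words:
        return 0.0
    essay_length = len(words)                                 # word count
    avg_word_length = sum(len(w) for w in words) / essay_length
    return 0.1 * essay_length + 1.0 * avg_word_length         # made-up weights

concise = "Tests measure skill."
verbose = ("Multitudinous standardized assessments quantifiably "
           "operationalize pedagogical accomplishment metrics ") * 3

# Longer, wordier nonsense outscores the clear, concise sentence.
print(toy_essay_score(verbose) > toy_essay_score(concise))  # True
```

A human reader would instantly flag the verbose example as meaningless, but a scorer limited to these two features cannot, which is exactly the gaming scenario the critics describe.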
GUESTS
Mark Shermis, Ph.D., Dean of the College of Education, University of Akron; study author, “Contrasting State-of-the-Art Automated Scoring of Essays: Analysis”
Les Perelman, Director of Writing Across the Curriculum in the Writing & Humanistic Studies program at MIT (Massachusetts Institute of Technology); Chair of the Consortium for Research and Evaluation of Writing