Here’s an interesting (and somewhat disturbing) article on a new technology from the Educational Testing Service, the same folks who invented the SAT and other tests and made standardized testing a nationwide phenomenon. Apparently, their new technology can grade some 16,000 essays a minute. But what are the consequences for true creative expression through writing? Will such automated grading improve students’ creativity or make it conform to the rules of the machine? Check it out:
Hey prof, grade my essay? There’s (kind of) an app for that.
The e-Rater, an automated essay-grading system developed by the Educational Testing Service, can grade up to 16,000 essays a minute. For educators across America, such a creation could mean a far easier job or even spell disaster in the form of “u-n-e-m-p-l-o-y-m-e-n-t.”
But we’re talking essays here, not math problems. When it comes to composition, right and wrong answers aren’t always objective, nor do they always exist. So how effective are these robo-graders? And should they be trusted?
Incorporating robo-grading into academia will, in time, alter the way students write: they will be taught to fool a machine rather than to build a compelling argument in a creative way.
A recent study by the University of Akron College of Education compared the ratings of man and machine for some 22,000 short essays and found little difference in the final grades awarded.
“In terms of being able to replicate the mean [ratings] and standard deviation of human readers, the automated scoring engines did remarkably well,” Mark Shermis, the study’s lead author, said in an interview with Inside Higher Ed.
Are these robo-graders really this smart? Or has the general ability of students to write well become so predictably shallow that grading is a formulaic cakewalk? There’s definitely room for skepticism.
For the full article by Samuel Cleary, see “E-Raters will kill creativity.”