When the College Board announced on March 5 that it would be revising several elements of its SAT tests, including entirely scrapping the essay section—effectively reversing course on the biggest change in the last half century in the college placement exams—it might have seemed like a perfect opportunity to investigate the role of tests in education, and how well they live up to their claims of being objective evaluators of student progress. For much of the US media, though, the SAT changes served mostly as an opportunity to beat the drum of “Is our children learning?”
The revisions, announced with fanfare by College Board president David Coleman, are certainly dramatic. The essay test, introduced in 2005, will become optional, restoring the SAT’s traditional 1,600-point scale: 800 for math, 800 for English. Excessively obscure vocabulary words will be removed and replaced by terms more common in college courses. And the longstanding penalty for wrong guesses will be eliminated, so that a wrong answer will count the same as no answer at all.
For some reporters and pundits, especially on the right, the main concern here was whether the new tests would be dumbed-down compared to the old ones: “While some have praised the test redesign for better aligning with what’s actually taught in schools, others say there are risks of lowering standards” (USNews.com, 3/10/14).
“When the going gets tough, well, why not just make the going easier?” asked conservative Washington Post columnist Kathleen Parker (3/7/14) in a widely syndicated op-ed, warning that the College Board was just concerned about inflating scores to reflect “the gradual degradation of pre-college education.” Forbes.com op-ed contributor George Leef (3/13/14) complained that the new SAT rules would be “like playing The Masters from the ladies’ tees.”
USA Today (3/6/14), meanwhile, took the opposite tack, calling the changes a move “to toughen the test,” under the headline “Sharpen Those Pencils, Kids: The SAT Is Getting Harder.”
The stated goal of the SAT redesign, though, wasn’t primarily to make the tests easier or harder—after all, students are still competing for the same number of college admissions slots. Instead, it was supposed to make the test fairer. The College Board said as much in its press statement (3/5/14), and some coverage dutifully repeated it: NBC News (3/5/14) described the changes as intended to “level the playing field among those taking the exam,” citing the $4.5 billion-a-year test-prep industry that gives students with money to spend an advantage.
It was a curious statement in one regard: Why would the SAT sections that were eliminated be particularly easy for students to game with test prep? Shouldn’t essay questions, at least in theory, be a better gauge of actual academic skills than the multiple-choice questions that make up the rest of the exam?
The answer points to what many testing experts say is the Achilles’ heel of standardized tests: how they’re scored. Whereas multiple-choice questions are often criticized for being mindless fill-in-the-bubble work, essay tests have a different problem: score reliability. With millions of test papers having to be scored every year, testing companies have turned to what FairTest public education director Robert Schaeffer (Minneapolis City Pages, 2/23/11) has called “essay-scoring sweatshops” to process them all quickly.
As former test scorer Dan DiMaggio wrote in his Monthly Review essay “The Loneliness of the Long-Distance Scorer” (12/10), it’s not possible to make a considered assessment of essay quality when “at 30 cents per paper, you have to score 40 papers an hour to make $12 an hour.” As a result, he wrote, “for all the months of preparation and the dozens of hours of class time spent writing practice essays, a student’s writing probably will be processed and scored in about a minute.”
To address this, test companies provide “rubrics” that test scorers can use to make scoring easier. Unfortunately, these lists of scoring shortcuts—say, one point for providing a “relevant opinion” and one for providing each of two examples from the cited text, as former test scoring leader Todd Farley described in his tell-all memoir Making the Grades—barely take into account whether an essay makes any sense.
As MIT writing professor Les Perelman told the New York Times (3/6/14), “You can tell them the War of 1812 began in 1945” and still get a good score. Boston Globe columnist Joanna Weiss (3/14/14) summarized Perelman’s advice for SAT essay writers:
Write long, he advised. Use big, fancy words—“myriad” is a winner—and don’t worry about using them correctly. Include a quotation, even if it has nothing to do with the subject at hand. (“My favorite,” he wrote, “is ‘the only thing we have to fear is fear itself.’”)
These are tricks most easily learned in test-prep courses—something that helps explain why high-income students, who can afford to pay as much as $10,000 for one-on-one tutoring (Dallas Morning News, 8/17/11), score much higher on average than their low-income peers.
The Washington Post’s Wonkblog (3/5/14) published charts showing that students from families with more than $200,000 in income score nearly 400 points higher than those with below $20,000—and scores also rise with the number of times students take the preparatory PSATs, which are more commonly offered at wealthier schools. And the disparities are almost as big for the retained math and reading sections as they are for the now-discarded essay test.
Perelman, who helped inspire Coleman’s SAT changes, is not overly hopeful about the new essay-free test either, noting that a Maryland study found that family income and education levels are both tightly correlated with scores on the multiple-choice sections as well (Baltimore Sun, 9/26/13). “SAT tests correlate so highly with income,” Perelman tells Extra!, “that in reality you don’t really need to give the test—all you need is the parent’s income tax statement.”
It was the rare news outlet that explored this deeper question of economic bias, and rarer still one that investigated whether the revisions to the test would do much to address it. On NPR (3/6/14), Education Week’s Erik Robelen summarized the problem:
You know, these days kids from more affluent families, they can afford these high-priced test preps, workshops and seminars, and this is an idea to help level the playing field. There are going to be some barriers, though, to how much that will actually level the playing field though, I would imagine.
The Washington Post (3/6/14) quoted Wake Forest sociologist Joseph Soares—“There’s no reason to think that fiddling with the test is in any way going to increase its fairness”—but didn’t further investigate the issue in its long article on the SAT changes.
Criticisms like these point to one persistent problem with the SAT: It isn’t very good at predicting the one thing it’s supposed to predict, which is how well a student will succeed in college. As Bard College president Leon Botstein wrote in a Time.com op-ed (3/7/14):
The blunt fact is that the SAT has never been a good predictor of academic achievement in college. High school grades adjusted to account for the curriculum and academic programs in the high school from which a student graduates are.... The richer one is, the better one does on the SAT. Nothing that is now proposed by the College Board breaks the fundamental role the SAT plays in perpetuating economic and therefore educational inequality.
Time’s own coverage of the SAT announcement (3/5/14), meanwhile, merely quoted Coleman’s widely cited soundbite: “If there are no more secrets, it’s very hard to pay for them.” The biggest secret of the SAT, though, is that no matter its changes, it’s likely to remain a test less of the power of students’ brains than of the size of their parents’ wallets.