With competing political agendas ready to balkanize public education, reporting on test scores is no less important than reporting on campaign contributions, corporate balance sheets or federal budget numbers. More so, in fact, because the “facts” of test numbers are often used to adorn arguments that cannot otherwise justify attention.
A case in point is the spate of stories that reported higher test scores for African-American students at voucher-funded schools. Filling their back-to-school pages in August and September 2000, the media jumped to report the findings of Paul Peterson, a Harvard professor and fellow at the conservative Hoover Institute.
The press, which has become accustomed to using test scores to measure the pulse of education, relied uncritically on Peterson’s word for two weeks. “Voucher Study Finds Gain for Black Students,” headlined the Los Angeles Times (8/28/00). “Vouchers Raise the Bar,” said the Atlanta Journal and Constitution (8/30/00). “School Vouchers Help Black Kids, Says Study,” reported the New York Daily News (8/28/00), the same day that USA Today announced, “Test Scores Rise for Pupils with Vouchers.”
The Dayton Daily News (8/28/00) gave op-ed space to Peterson himself, where he wrote, “If the trend observed over the first two years continues, the black/white test gap could be eliminated in subsequent years of education for black students who used a voucher to switch from public to private school.” According to the Columbus Dispatch (9/10/00), “While this research is not definitive proof…it adds to a growing body of evidence that voucher programs offer hope to thousands of low-income, inner-city pupils trapped in failing public schools.”
Columnists sang a dirge for public schools. For New York Times columnist William Safire (8/31/00), it was “hard evidence” that could be used in the fight against “the government’s near monopoly of public education.” Accusing Vice President Al Gore of “ignoring voucher facts,” Rachelle Cohen asserted in the Boston Herald (8/31/00), “What researchers found–and document with enough facts and figures and test scores to dazzle even the most wonkish of candidates–is that vouchers work.”
Before facts crept in, the press used these claims to justify diverting students from impoverished public schools into allegedly superior, voucher-funded alternatives. “Black students perform better when given a voucher than when in small classes,” opined the Atlanta Journal (9/8/00). Their conclusion, at odds with most educational research, came down to a checkbook solution: “Reducing class size is enormously expensive; public scholarship programs may cost nothing extra.” (The vouchers given each student amounted to $1,700.)
After years of reporting education gaps between black and white, rich and poor, public- and private-school students, the media embraced these findings as a welcome relief. Numbers, percentiles, averages rounded to one decimal point would prove to skeptics what right-wing interest groups had not: Vouchers could rescue failing students, a flawed curriculum, misguided educators and crumbling schools. The data were clear, the implications specific. But were they real?
Handle with care
While Hoover Institute fellows and pro-voucher columnists shaped media perceptions for two weeks, David Myers was chafing. Myers was Peterson’s partner on the New York City research, working at Mathematica Policy Research, Inc., a statistical research outfit. Until Kate Zernike’s New York Times story (9/15/00) picked up on Mathematica’s repudiation of Peterson’s spin, Myers had been ignored.
Myers pointed to the highly uneven distribution of the benefits found in the voucher study. White and Latino students, who were also included in the study, showed no significant gains, and the benefits for black students were concentrated in only one of the three cities studied, Washington, D.C. In New York, students in one grade showed improvement, while three other classes did not. “Because the gains are so concentrated in this single group, one needs to be very cautious in setting policy based on the overall modest impacts on test scores,” Myers told the Times.
The Times pointed out the crucial information that vouchers were turned down by many families that were offered them–as many as 47 percent in D.C.–and that those who did accept them tended to have higher incomes and higher educational levels. Peterson dismissed these serious problems with the findings by insisting that “an average is an average.”
The New York Times‘ questions sparked a new round of stories, including pieces from the Associated Press (9/16/00) and the Washington Post (9/19/00). But while the original, uncritical stories ran in dozens of papers across the country, the skeptical follow-ups appeared in a relative handful. Had reporters bothered to look at the data instead of a press release early on, they would have known when an average was not an average; when facts, not opinion, should have driven a story.
Phyllis Vine is a freelance journalist and historian.