It was “artificial intelligence history,” wrote CBSNews.com (6/9/14). We had crossed, according to New York Daily News columnist Harry Siegel (6/10/14), “the latest line we’d drawn separating man from machine.” The news that a supercomputer in England had, for the first time, passed the famous “Turing test” by tricking a panel of judges into believing that it was actually a human being sparked superlatives and excited speculation on both sides of the Atlantic.
For anyone familiar with the computing field, however, something didn’t add up. First off, the contest had little to do with actual artificial intelligence (Daily Beast, 6/10/14): The winner, a program called “Eugene Goostman,” was a chatbot, more along the lines of Apple’s Siri than an actual supercomputer. Its gimmick, which dates back to the 1966 “therapist” program Eliza, is a simple one: Combine a few stock phrases with occasional references to keywords in the questioner’s statements, add in occasional misdirection (Eugene Goostman claimed to be a Ukrainian teen speaking in English, helping explain the occasional awkward answer) and hope that someone is fooled.
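The Eliza-style gimmick described above is simple enough to sketch in a few lines. The following is a minimal illustration of the pattern, not Eliza's or Goostman's actual scripts: the rules and phrases here are invented for demonstration.

```python
import random
import re

# Stock deflections used when no keyword matches (illustrative only).
STOCK_PHRASES = [
    "That is an interesting question.",
    "I can't make a choice right now. I should think it out later.",
    "Why do you ask?",
]

# Keyword rules: a regex paired with a reply template that echoes
# part of the questioner's own statement back at them.
KEYWORD_RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\byou (.+)", re.IGNORECASE), "We were discussing you, not me."),
]


def respond(statement, rng=random):
    """Reply to a statement by keyword echo, or fall back to misdirection."""
    for pattern, template in KEYWORD_RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    # No keyword matched: change the subject and hope nobody notices.
    return rng.choice(STOCK_PHRASES)
```

Note the failure mode this produces: a question like "Which is bigger, a shoebox or Mount Everest?" matches no keyword rule, so the bot can only deflect with a stock phrase, which is exactly how Aaronson's question exposed Goostman.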

“The ‘standard interpretation’ of the Turing test, in which player C, the interrogator, is tasked with trying to determine which player, A or B, is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.” (Wikipedia)
Moreover, this was hardly the first time that a computer program claimed to have passed the Turing test, first proposed by the pioneering British computer scientist Alan Turing in 1950. An annual Loebner Prize has been awarded since 1990 to the bot that best fools judges into thinking it could be human—in fact, Goostman itself had won a similar Turing competition in 2012, although by fooling only 29 percent of judges into thinking it was human as opposed to 33 percent this year, clearing what Mashable (6/9/14) called the “official Turing test threshold.” (In fact, Turing only predicted that a computer could fool human judges 30 percent of the time by the year 2000; he never said that that would mean “passing” the test—Guardian, 6/11/14.)
Any curious journalist had an easy way of checking claims of an unprecedented superintelligent computer: Go chat with Goostman itself at the website princetonai.com. (It has since been taken down.) Apparently none of the initial reporters did so before writing their stories—or, at least, none tried to stump it with questions as simple as that posed by MIT computer science professor Scott Aaronson (NPR, 6/9/14), who asked, “Which is bigger, a shoebox or Mount Everest?” and received the answer: “I can’t make a choice right now. I should think it out later.”
Instead, the vast majority of media outlets merely repeated the claims made in the press release. CBC News (6/9/14) quoted the University of Reading as calling it a “historic milestone in artificial intelligence,” while NBCNews.com (6/9/14) cited the test organizer’s assertion that, in NBC’s paraphrase, “a computer that can think and act like a person will be an asset to battling cyber-crime.”
Even a little digging would have revealed that Kevin Warwick, the British engineering professor who arranged the test—and the press release—was, as TechDirt (6/9/14) put it, “somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question”:
All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world’s first “cyborg” for implanting a chip in his arm. There was even a — since taken down — Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have “the first human infected with a computer virus.”
The Goostman affair was embarrassing for news outlets that had to walk back their initial excitement, but more worryingly, it pointed up the dangers of “stenography journalism”: the tendency for journalists to merely repeat what they are told, whether by powerful individuals or impressive-sounding press releases, regardless of whether or not it’s true. (One memorable example: Washington Post reporter Paul Kane’s defense of not calling Sen. Olympia Snowe on a blatant inconsistency because “that’s what she said” and “we are not opinion writers whose job is to play some sorta gotcha game with lawmakers”—Media Matters, 4/9/09.)
This shortcut is certainly not new, but it may be on the rise with Web outlets’ desire for a constant stream of traffic—as when ESPN sports business reporter Darren Rovell was revealed to have posted multiple items that were thinly rewritten press releases (Deadspin, 5/21/14, 5/28/14), including a segment for ABCNews.com (5/15/14) that consisted entirely of the New Era clothing company having Rovell bat against retired pitcher Mariano Rivera as a promotion for its baseball caps.
This recycling of marketing materials straight to the news pages has gotten so ubiquitous, in fact, that a PR executive in Florida wrote to the editor of the Jacksonville Daily Record (JimRomenesko.com, 6/24/14) to demand unironically that its staffers get a byline on reprinted press releases, declaring, “When you publish our work with your name, that is plagiarism.” Daily Record editor Marilyn Young replied that she’d edited a two-page release down to five paragraphs, and properly credited it as “according to a news release”—though she didn’t indicate how running a press release, even when accurately sourced, was supposed to serve readers.
One might expect that this behavior would be less common in the world of science, where it’s easy enough to find experts who can confirm or deny the relevance of a press release’s claims. Yet as MIT’s Knight Science Journalism website (2/14/14) revealed recently, the Washington Post’s science section regularly ran lightly rewritten press releases from such sources as the University of Zurich (on female viewers rating winning Tour de France racers more highly on their looks) and Stanford University (on groups that gossip fostering better cooperation) without seeking to determine whether the findings were valid or whether other scientists held dissenting views.
Real journalism, needless to say, requires questioning outrageous claims, not merely reprinting them, which seems like a job that could be done by…well, a robot. Maybe Goostman would make a good Washington Post hire—after all, he too lacks interest in playing gotcha games about whether Mount Everest is bigger than a shoebox.