In high school, my teacher Ms. Sargon imagined that I could be a science editor at the New York Times. My career took a different path, but I'm grateful for the time I spent on the high school newspaper (it taught me many lessons about writing), and I have always thought that journalism would have been a good alternative. Lately, however, I've been appalled by several instances of shoddy reporting. I realize that there have always been good and bad reporters, as there are in any profession, but I think that the profession as a whole has become too lax in accepting four recurring flaws.
Here are some examples of the four problems.
Google has almost twice as many search ad clickthroughs as runner-up Yahoo. In December, Google had 16.5 trillion ad clickthroughs, compared with Yahoo's 9 trillion, according to Nielsen/NetRatings.
To me, 16.5 trillion ad clicks in a month is obviously an error, and I thought that anyone with a sixth-grade education could immediately see that. Fortunately, I have a sixth-grader in my house, so I asked her. "No way, Dad; none of my friends click on the ads," she said. I pointed out that somebody must click on them; they did bring in $6 billion this year. She pondered that and asked, "How many people in the world?" I told her about 6 billion, and she was able to estimate that maybe a third of them had computers and half of those clicked on the ads, leaving 1 billion people to generate 16.5 trillion clicks, which comes to 16,500 clicks per user in December. "That is so not happening, Dad," she concluded, thereby demonstrating a minimum requirement for reporting: the ability to ask the right question ("How many people in the world?"), do some simple arithmetic, and follow through to a conclusion.
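For concreteness, here is her back-of-the-envelope arithmetic as a few lines of Python; the population and usage figures are just the rough guesses from our conversation, not data:

```python
world_population = 6e9                    # rough 2006 figure
with_computers = world_population / 3     # her guess: a third have computers
ad_clickers = with_computers / 2          # her guess: half of those click on ads
reported_clicks = 16.5e12                 # the December figure being questioned

clicks_per_user = reported_clicks / ad_clickers
print(f"{clicks_per_user:,.0f} clicks per user in one month")  # 16,500
```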
Another way to come to that conclusion would be to estimate how much revenue those 16.5 trillion clicks generate. A quick look at the Google AdWords FAQ shows that the minimum price for an ad is $0.01, so these clicks would produce at least $165 billion in revenue. But the very next paragraph of the InformationWeek article says:
Google earned $3.64 billion from U.S. online ad revenues in 2005, representing 69% of all paid search advertising, according to eMarketer.
How could they make more than $165 billion in December, but only $3.64 billion for the year?
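Spelled out in Python, using the AdWords minimum price quoted above as a floor:

```python
reported_clicks = 16.5e12   # InformationWeek's December figure
minimum_cpc = 0.01          # AdWords minimum cost-per-click at the time

revenue_floor = reported_clicks * minimum_cpc
print(f"At least ${revenue_floor / 1e9:,.0f} billion in one month")  # $165 billion
print("versus $3.64 billion for all of 2005")
```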
Yet another way to see this is to do one more minute of research. If you search for [what is the GDP of the USA] you see the answer $11,750,000,000,000. Now search for [Google "the average cost-per-click is"] and you'll see a wide variety of estimates, mostly from $0.10 to $1.00. A front-page article in the Wall Street Journal on 18 Jan. 2006 estimates Google's average cost-per-click as $0.50. With that estimate the 16.5 trillion clicks in December translate to $8.25 trillion, more than 70% of the US GDP for the whole year. That can't be right.
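The same one-minute check in Python; the only inputs are the Wall Street Journal's cost-per-click estimate and the GDP figure from the search above:

```python
reported_clicks = 16.5e12   # InformationWeek's December figure
average_cpc = 0.50          # Wall Street Journal estimate, 18 Jan. 2006
us_gdp = 11.75e12           # from the [what is the GDP of the USA] search

monthly_spend = reported_clicks * average_cpc
print(f"${monthly_spend / 1e12:.2f} trillion in one month,")       # $8.25 trillion
print(f"or {monthly_spend / us_gdp:.0%} of a full year's US GDP")  # 70%
```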
Finally, another report on Nielsen/NetRatings says that "The activity at more than 60 search sites makes up the total search volume upon which percentages are based -- 5.1 billion searches in this month." Of these, Google gets a 46.3% share of the searches. 46.3% of 5.1 billion is 2.4 billion, so Nielsen/NetRatings (as quoted by these two reporters) is saying that each Google search results in about 7,000 ad clicks.
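Putting the two reported Nielsen/NetRatings numbers together:

```python
monthly_searches = 5.1e9    # Nielsen/NetRatings total across 60+ sites
google_share = 0.463        # Google's reported share of searches
reported_clicks = 16.5e12   # the clickthrough figure under scrutiny

google_searches = monthly_searches * google_share   # about 2.4 billion
print(f"{reported_clicks / google_searches:,.0f} ad clicks per search")  # roughly 7,000
```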
Exercise for the reader: is the Nielsen/NetRatings estimate of 2.4 billion Google searches per month (which translates to 80 million per day) accurate? Consider research such as ["million google searches per day"] and note the year those estimates were made.
[Addendum, 2008:] This article was originally written in 2006, but a good example came up in 2008. As the United States was debating a $700 billion "bailout" plan, we saw many dozens of stories discussing a "$700 million" bailout. Factors of 1000 matter!
Why am I making such a big deal about one or two bad numbers? To point out that it is easy to correct these mistakes before they are published, if you are willing to do a little thinking rather than just parroting.
The disturbing part of the article comes when Atkin says:
Riedel's sales pitch was given independent corroboration last year when Kari Russell, a food science student at the University of Tennessee, published some research into wine glasses. Her test was comparatively simple. She poured samples of Merlot into three different shaped glasses (champagne flute, a Y-shaped martini glass and a broad, but tapered Bordeaux glass). As she did so, she noticed that the concentration of one particular phenol, gallic acid, increased in all three. Twenty minutes later it had dropped significantly in the Bordeaux glass, producing a rounder wine. (emphasis mine -- PN)
Was this corroboration? Russell showed that the gallic acid content changes, but if Atkin had taken the trouble to actually read Russell's research (or had talked to her), he would have found that she concluded the change had no measurable effect on taste:
In the second phase of her study, Russell asked a sensory panel, made mostly of students, to sample wine that had been held in different shaped glasses. While the majority of panelists didn't notice any difference in taste, a professor did seem to be able to taste a difference.
So only one subject (it doesn't say out of how many, but to have any chance at significance you'd want at least 10 subjects in your study) claimed he could taste a difference. But instead of calling the metaphorical wine glass 90% empty, Atkin (and several other reporters) concluded that there was a difference in taste, when they should have concluded that there was not.
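To see why a single positive response proves so little, consider a sketch under assumptions that are mine, not the story's: say there were 10 panelists, and each had a one-in-three chance of registering a lucky "hit" by pure guessing (the guess rate in a standard triangle test). Then at least one apparent "taster" is all but guaranteed:

```python
# Assumed, not from the story: 10 panelists, each with a 1-in-3 chance of a
# lucky "hit" by guessing alone.
n_panelists = 10
p_lucky_hit = 1 / 3

# Probability that at least one pure guesser appears to taste a difference
p_at_least_one = 1 - (1 - p_lucky_hit) ** n_panelists
print(f"P(at least one lucky 'taster') = {p_at_least_one:.2f}")  # about 0.98
```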
This story was cited in Seth Godin's new book All Marketers Are Liars. I accept Godin's premise, but I hope that reporters could be truth-tellers, rather than dupes of the liars. Unfortunately, it seems that Atkin wanted to believe that he could have a better wine-drinking experience, so he deceived himself into believing Riedel's pitch and misrepresenting Russell's research.
Shortly thereafter I read another Globe article about an inventor who said it was important to be creative and spontaneous, and that therefore he always started each day by shaving a different side of his face first. I wrote again to point out that, assuming he has only two sides to his face, this was no more spontaneous than shaving the same side first every day. (I didn't know about Kolmogorov complexity at the time, but I had the right idea.) The Globe ignored me again.
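A toy way to see the point, using compressed size as a rough, computable stand-in for Kolmogorov complexity (the 1000-day horizon and the two-letter encoding are arbitrary choices of mine):

```python
import os
import zlib

# Compressed length is a crude proxy for Kolmogorov complexity (which itself
# is uncomputable): simpler sequences compress to fewer bytes.
days = 1000
constant = "L" * days                                 # same side first, every day
alternating = "LR" * (days // 2)                      # the inventor's "spontaneity"
coin_flips = "".join("LR"[b & 1] for b in os.urandom(days))  # genuinely random

for name, seq in [("constant", constant),
                  ("alternating", alternating),
                  ("random", coin_flips)]:
    print(f"{name:11s} -> {len(zlib.compress(seq.encode()))} bytes")
```

The constant and alternating schedules compress to about the same handful of bytes; only the genuinely random schedule resists compression.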
Around 2000, there was a news report stating that most men should not be ashamed about the length of their penis, because a scientific study had shown that 95% of men were within the range of average. This result was presented as a finding about penis size (thereby making it newsworthy), but in fact it had nothing at all to do with penises; it was merely a restatement of the statistical fact that about 95% of the data points on a bell curve fall within two standard deviations of the mean. But I suppose you wouldn't attract as many readers with the headline "95% of data points within two s.d. of mean!" (Note: I apologize that I don't have a link to this news article, but as anyone with any familiarity with the Internet knows, there are a lot of pages mentioning "penis", and I couldn't find the article.)
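The 95% figure is a one-line property of the normal distribution, independent of what is being measured:

```python
from math import erf, sqrt

# For a normal distribution, P(|X - mean| < 2 standard deviations) is fixed,
# no matter what quantity is being measured.
p_within_2sd = erf(2 / sqrt(2))
print(f"{p_within_2sd:.1%}")  # 95.4%
```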
In a positive development, President Bush stressed math and science education in his 2006 State of the Union address. Ironically, the Forbes coverage of the speech says:
Bush proposes to spend $5.9 billion in fiscal 2007 on a plan the White House has dubbed the "American Competitiveness Initiative." Two-thirds of the money - $4.6 billion - would be used to pay for tax credits ...

Last I checked, 2/3 of $5.9 billion was $3.9 billion, not $4.6 billion.
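The check takes one line:

```python
budget = 5.9e9              # total proposed spending
claimed_two_thirds = 4.6e9  # the figure Forbes reported

print(f"2/3 of $5.9 billion is ${budget * 2 / 3 / 1e9:.1f} billion")      # $3.9 billion
print(f"$4.6 billion is {claimed_two_thirds / budget:.0%} of the total")  # 78%
```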
These examples annoyed me, but I admit they may not be particularly important. The next example, however, is clearly of great importance to international public policy.
You'd always go out and find the two sides of the story. So never mind there were nine hundred environmentalists -- experts -- who believed in global warming, to every two who didn't. In a story of four minutes, it almost looked like it was fifty-fifty: "There's a debate over whether there's global warming." But journalists, in an attempt not to come down on one side, would portray to the audience: "There's this raging battle over global warming."

Rick Piltz, senior associate with the Climate Change Science Program, resigned in March 2005 "because of a number of differences with the Bush administration's approach to climate change and climate science." In an interview he was asked "Do journalists do a good job of investigating and explaining?" and stated:
Well, how many [U.S.] newspapers even have a science section, other than the New York Times? Even if they have a science reporter, they don't give the reporter much space. Then there's this concept of balance that's framed in such a way as to really enable intellectual superficiality. Give a quote from each side. Then you can do the "he said--she said." There's no sense that it's the journalist's job to dig and see if either side has more merit. At least provide some context. You can't get that from the media.
The consensus among climate researchers is outlined by the report of the Intergovernmental Panel on Climate Change:
Human activities ... are modifying the concentration of atmospheric constituents ... that absorb or scatter radiant energy. Most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.

This conclusion (or very similar wording) is endorsed by the National Academy of Sciences, the American Meteorological Society, the American Geophysical Union and its parent organization, the American Institute of Physics, the national science academies of the G8 nations, Brazil, China, and India, and the American Association for the Advancement of Science, which wrote:
The scientists at the briefing said global warming is not as controversial as sometimes portrayed by skeptics. Those who study the issue agree that human industrial activity is having an impact on climate, they said, and policy makers need to understand the depth of scientific consensus on key issues.

The consensus was examined in a Science study by Prof. Naomi Oreskes (Dec. 2004), in which she surveyed 928 scientific journal articles that matched the search [global climate change]. Of these, according to Oreskes, 75% agreed with the consensus view (either implicitly or explicitly), 25% took no stand one way or the other, and none rejected the consensus. Even reporters who are predisposed to covering "both sides" of a story, like a sporting event, can see that when all the scientific organizations and 75% of 928 papers are on one side, with zero scientific organizations or scientific papers on the other side (at least from this sample), the game is over.
But before calling the game, let's consider some criticism of Oreskes and its coverage in the press. In a letter to Science, social anthropologist Benny Peiser claimed that when he attempted to replicate Oreskes' study, he found 34 articles that "reject or doubt" the consensus, thereby falsifying Oreskes' results. Science chose not to publish the letter; Peiser claimed bias on the part of Science and was the subject of sympathetic articles in The Telegraph (1 May 2005) and CNSNews.com (7 Dec. 2004).
Certainly nobody wants to stifle legitimate scientific debate, so was Science wrong to reject the letter? Were the Telegraph and CNSNews right to publish these articles? My feeling is that it hinges not on whether Peiser is complaining, but on whether he has a legitimate complaint. It should be the reporter's job to determine this, and any reporter could do it with about two hours of work. That's what I did: I looked at the evidence myself (see my separate web page for details). I read 59 scientific article abstracts and classified them into the categories Oreskes used. My analysis was consistent with the Oreskes article. I found that only two of the 34 papers Peiser cites as rejecting the consensus actually do reject it, and that these were both opinion pieces rather than scientific articles; that's why they were not included in Oreskes' study. Therefore I conclude that Science was right to reject Peiser's letter, and the Telegraph and CNSNews were wrong to print articles on Peiser without critical evaluation. Update (June 2007): Peiser has backed off his claims, and now says that only one of the 34 papers actually rejects the consensus, and that one is an editorial, not a scientific paper.
You might also want to check out another case of "scurrilous parroting" (their phrase, not mine) on global warming.