Saturday, December 22, 2012

The Downside of Rating Systems In Political Fact Checking

http://www.marketplace.org/sites/default/files/styles/primary-image-610x340/public/list-truth-chalkboard.jpg

In his farewell message to fans of political fact checking, departing FactCheck.org director Brooks Jackson reflects on the growth of the fact checking industry, the merits of fact checking, criticisms of fact checkers, and some legitimate pitfalls of fact-checking sites. Among the pitfalls he discusses, the one that most caught my eye dealt with the rating systems fact checkers use so often:
Rating statements with devices such as “truth-o-meters” or “Pinocchios” are popular with readers, and successful attention-grabbers. But such ratings are by their nature subjective — the difference between one or two “Pinocchios” is a matter of personal judgment, and debatable. Some statements are clearly true, and some provably false, but there’s no agreed method for determining the precise degree of mendacity in any statement that falls somewhere in between. Rating systems have also led to embarrassment. A senator who said a “majority” of Americans are conservative was rated “mostly true” (and later “half true”) even though the statement was false. The story cited a poll showing only 40 percent of Americans rated themselves conservative. That’s more than said they were moderate (35 percent) or liberal (21 percent) but still far from a majority. The senator had a point, but stated it incorrectly, thereby exaggerating. A simple “truth-o-meter” had no suitable category for that. Our approach would have been to say that it was false. But we would also note that the senator would have been correct to say Americans are more likely to call themselves conservative than moderate, or liberal, when given those three choices.
While I disagree that rating systems are entirely subjective (most have specific rules for each category), I do agree that the organization of categories is not rigorous. Some statements obviously do not fit neatly into any given category. Furthermore, these rating systems can be a distraction, giving the reader an incentive to look only at the rating and ignore the actual fact checking. Although I understand not everyone has time to read an entire article for every claim that has been checked, simple summaries (such as the ones FactCheck.org uses) at least give the reader a basic idea of what was right and/or wrong with the checked claim. I will admit rating systems have doubtless contributed to the rising popularity of fact checking. But it isn't clear whether they actually do more harm than good.

In addition to discussing the pitfalls of fact checking, Jackson also made some very good points about the actual purpose of fact checking in political discourse:
"Complaining that fact-checkers failed to stop politicians from lying is like complaining that a firefighter failed to prevent an arsonist from starting a fire.
Furthermore, it seems to me that anyone who asks the very political operatives behind the 2012 falsehoods to rate our performance is pretty much interviewing the arsonists about the merits of the firefighters. We don’t write to impress politicians or their hirelings. We write to help the voters — and we don’t expect to get an invitation to dinners at the White House. We can’t stop politicians from trying to bamboozle voters. But we can make voters harder to fool."
Indeed, there is an extremely important role for fact checkers to play in political discourse, and Jackson sums it up quite nicely. Fact checkers apply scientific skepticism to politics, and that same skepticism has a role in nearly every aspect of life. In this spirit I thank Mr. Jackson for the quality work he and his team have done over the past nine years. FactCheck.org is my favorite fact checking site, and I wish him the best of luck in the future!
