Monday, January 16, 2012

Politifact: Selection Bias or Bad Statistics

In a recent post where I critiqued a list of issues that should "keep discerning readers from trusting Politifact", I was alerted to an article examining potential selection bias in how Politifact chooses which statements to rate. The article, entitled "Selection Bias? PolitiFact Rates Republican Statements as False at 3 Times the Rate of Democrats," was written by Eric Ostermeier, "Research Associate at the Humphrey School's Center for the Study of Politics and Governance." This article has also been cited in a recent attack on fact checkers by The Weekly Standard (3-post critique: 1,2,3). Given the attention this article received, I thought it would be useful to see if this article actually gives any good reason to justify suspicion of selection bias at Politifact.
First off, it should be noted that selection bias occurs whenever a sample taken for statistical analysis has not been chosen randomly. It should be clear, given the title as well as a few other indicators shown later, that this article focuses on party-related selection bias. However, we should not jump to conclusions yet. It is important to actually read the article to see if the title is even justified.

MISLEADING TITLES

" [TITLE:] Selection Bias? PolitiFact Rates Republican Statements as False at 3 Times the Rate of Democrats.
[BODY:] PolitiFact assigns "Pants on Fire" or "False" ratings to 39 percent of Republican statements compared to just 12 percent of Democrats since January 2010 "
The difference between the title and body is subtle, but incredibly important. The title suggests the trend of "false" ratings is an overall trend throughout the 4 1/2 year history of Politifact. However, the actual article looks only at a 13-month period, mostly 2010. The reader should take note that this is not a random sample. In fact, as I will show later in this article, this distinction seriously challenges any possible conclusion of party-related selection bias.
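To make the statistical point concrete, here is a minimal Python sketch. Every rate in it is invented purely for illustration (none of these numbers are Politifact's), but it shows how a sample restricted to one time window can badly misstate an overall trend even when nothing partisan is going on in the sampling itself:

    import random

    random.seed(0)

    # Hypothetical population of rated statements across two periods.
    # In period A both parties earn "False" ratings at the same rate;
    # in period B one party's rate spikes. All rates are invented.
    population = []
    for _ in range(500):
        population.append(("A", "R", random.random() < 0.16))
        population.append(("A", "D", random.random() < 0.16))
        population.append(("B", "R", random.random() < 0.39))
        population.append(("B", "D", random.random() < 0.12))

    def false_rate(sample, party):
        rated = [s for s in sample if s[1] == party]
        return sum(1 for s in rated if s[2]) / len(rated)

    # A random sample mirrors the overall record...
    rand = random.sample(population, 400)
    # ...but a sample restricted to period B does not.
    period_b = [s for s in population if s[0] == "B"]

    for label, sample in [("random", rand), ("period B only", period_b)]:
        print(label, round(false_rate(sample, "R"), 2),
              round(false_rate(sample, "D"), 2))

The window-restricted sample faithfully reports what happened inside the window, but it says nothing about the overall record, which is exactly the gap between Ostermeier's title and his body text.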

A BIT OF BACKGROUND
" PolitiFact, the high profile political fact-checking operation at the St. Petersburg Times, has been criticized by those on the right from time to time for alleged bias in its grading of statements made by political figures and organizations. "
At least in my experience, these allegations are often incredibly poor (also see linked critiques for analysis of other articles alleging bias within Politifact).

" The organization (and now its more than a half dozen state offshoots) grades statements made by politicians, pundits, reporters, interest groups, and even the occasional comedian (anyone 'driving the political discourse') on a six point "Truth-O-Meter" scale: True, Mostly True, Half True, Barely True, False, and Pants On Fire for "ridiculously" false claims.
But although PolitiFact provides a blueprint as to how statements are rated, it does not detail how statements are selected.
For while there is no doubt members of both political parties make numerous factual as well as inaccurate statements - and everything in between - there remains a fundamental question of which statements (by which politicians) are targeted for analysis in the first place. "
Politifact actually does now provide a description of how it chooses its statements:  

"Every day, PolitiFact staffers look for statements that can be checked. We comb through speeches, news stories, press releases, campaign brochures, TV ads, Facebook postings and transcripts of TV and radio interviews. Because we can't possibly check all claims, we select the most newsworthy and significant ones. 
In deciding which statements to check, we ask ourselves these questions:
  • Is the statement rooted in a fact that is verifiable? We don’t check opinions, and we recognize that in the world of speechmaking and political rhetoric, there is license for hyperbole.
  • Is the statement leaving a particular impression that may be misleading?
  • Is the statement significant? We avoid minor "gotchas" on claims that obviously represent a slip of the tongue.
  • Is the statement likely to be passed on and repeated by others?
  • Would a typical person hear or read the statement and wonder: Is that true?"

This page was not available when Ostermeier wrote his article, so I cannot criticize him for failing to notice it. However, it should be noted that Ostermeier's statement is no longer true (through no fault of his own, of course).

At this point, the article outlines its findings, drawing no conclusions at the moment.

IS IT APPROPRIATE TO DRAW CONCLUSIONS FROM 2010 ALONE?

Before we go any further, let us return to a point I made earlier. Remember that the sample in Ostermeier's article included all posts from January 2010 through January 2011. So we should ask ourselves if the chosen dates are truly arbitrary. To help answer this question, I took a look at data from 2007 to see if it differed from 2010 (note that when I say 2010 in this section, I'm including January 2011 to reflect Ostermeier's article):

 "Let's compare these results to the findings of Ostermeier for 2010:
  •  Out of 39 statements rated "False" or "Pants on Fire," each main party received approximately 50% of the rulings. This is much different from 2010, where Republicans received 75.5% of these ratings.
  • Politifact devoted approximately equal attention to Republicans (124 statements, 52%) and Democrats (110, 46%). This is nearly the same as 2010.
  • Republicans were graded in the "False" or "Pants on Fire" categories 16% of the time (39% in 2010), while Democrats were rated in these categories 16% of the time as well (12% in 2010). This is absolutely nowhere close to the "super-majority" found in 2010.
  • 73% of Republican statements in 2007 received one of the top 3 ratings (47% in 2010), as opposed to 75% of Democratic statements (75% in 2010). When looking at "True" statements alone, Republicans beat Democrats, with 5% more of their statements receiving the coveted rating. It is hard to see how one party was considered more or less truthful than the other. It depends on how much credit you give the "Half True" rating, as opposed to the "True" rating.
  • Republicans received a slightly larger percentage of "Mostly False" ratings than Democrats (1.79%). This is the same as in 2010. However, this only results in a 2% difference between Republicans and Democrats for the bottom 3 ratings. This is MUCH different from the 28% difference in 2010. 
As you can see, the results from 2007 can seriously undermine many possible conclusions that a person could draw from the 2010 data."
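For readers who want to verify the bottom-three arithmetic in the last two bullets, here is a quick Python check using only the top-three percentages quoted above:

    # Top-3 shares ("True"/"Mostly True"/"Half True") quoted above.
    top3 = {
        2007: {"R": 0.73, "D": 0.75},
        2010: {"R": 0.47, "D": 0.75},
    }

    for year, shares in top3.items():
        bottom_r = 1 - shares["R"]  # whatever isn't top-3 is bottom-3
        bottom_d = 1 - shares["D"]
        print(year, "bottom-3 gap: {:.0%}".format(bottom_r - bottom_d))
    # Prints a 2% gap for 2007 and a 28% gap for 2010,
    # matching the figures cited in the bullets.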

OSTERMEIER STARTS POKING AT CONCLUSIONS

However, Ostermeier then starts suggesting certain conclusions, and not others, without justification.
" What is particularly interesting about these findings is that the political party in control of the Presidency, the US Senate, and the US House during almost the entirety of the period under analysis was the Democrats, not the Republicans.
And yet, PolitiFact chose to highlight untrue statements made by those in the party out of power. "(emphasis mine)
It sounds like Ostermeier has already come to the conclusion that Politifact is essentially guilty of party-related selection bias since they "chose to highlight untrue statements made by those in the party out of power." This does not necessarily mean Politifact is choosing these statements because they came from Republicans, just that they are the party out of power. The data from 2007 does not provide us with any hint as to the truth of this possibility. Neither party was really "in power" during that time. Democrats controlled both the House and Senate while Republicans controlled the White House. However, Ostermeier fails to ask a few questions:
  • Do parties out of power tend to be more likely to spread falsehoods, possibly in order to discredit the party in power (the party in power may not need to spread falsehoods as often, since they are already in power)?
  • Was there a sensational movement (such as the TEA Party) that may have caused more air time to go towards the party out of power, increasing the chance of falsehoods (see first bullet)?
  • Did the party out of power decide its number one priority should be to oust the current party from power? Do these kinds of campaigns tend to produce falsehoods?
  • Did the party out of power feel the need to attack legislation (Obamacare) coming from the party in power, regardless of factual accuracy (See first bullet)?
  • Was the party out of power often given uncritical air time from incredibly popular news outlets?
These are just some of the questions Ostermeier should have posed to readers before coming to the conclusion that Politifact "chose to highlight untrue statements made by those in the party out of power." Essentially, he neglected to ask whether the party out of power (or the media) highlighted its own untrue statements.

OSTERMEIER DOES ACTUALLY MAKE A GOOD POINT

" An examination of the more than 80 statements PolitiFact graded over the past 13 months by ideological groups and individuals who have not held elective office, conservatives only received slightly harsher ratings than liberals. "

One would think this alone would challenge the suspicion of a liberal bias, maybe even the suspicion of Democrat-centered bias. Is Politifact biased against Republicans, but not conservatives? That sounds like it would take quite a conspiracy-theory-style rationale to justify. In fact, are there other possibilities that could explain this kind of trend in 2010?
  •  Were there sensational primary elections within the Republican Party that may have contributed to extra coverage? Did these elections involve a bloc of furious voters, who may have been more prone to falsehoods?
  • Could it be that at least a few former politicians became political commentators (also remember the party-out-of-power points above)?

FINDING ANSWERS
" When PolitiFact Editor Bill Adair was on C-SPAN's Washington Journal in August of 2009, he explained how statements are picked: 
"We choose to check things we are curious about. If we look at something and we think that an elected official or talk show host is wrong, then we will fact-check it."

If that is the methodology, then why is it that PolitiFact takes Republicans to the woodshed much more frequently than Democrats?

One could theoretically argue that one political party has made a disproportionately higher number of false claims than the other, and that this is subsequently reflected in the distribution of ratings on the PolitiFact site. "
First off, it should be noted that Bill Adair has essentially admitted Politifact's selection of statements is not actually random. However, since his selection criteria do not appear to depend on political affiliation, the selection can still be treated as effectively random with respect to party.

Ostermeier does suggest a possible reason why Republican politicians are getting the brunt of bad ratings. In fact, I can imagine a Democrat saying Ostermeier's data "shows that Republicans are more prone to falsehoods than Democrats." Now, as our 2007 data shows, whatever trend Ostermeier found in 2010 did not exist in 2007. We could ask ourselves if something changed in the Republican Party (like the TEA Party) that may have made the party more likely to embrace falsehoods. Notice that the stats for Democratic statements were roughly the same in 2007 as they were in 2010. The big difference came in Republican statements, possibly suggesting a change in the Republican Party. Now, I'm not saying this is definitely the answer. However, I see no evidence that eliminates this possibility in favor of any possibility involving party-related selection bias.
" However, there is no evidence offered by PolitiFact that this is their calculus in decision-making. "
I'm not 100% sure why he says "However." If one political party is making "a disproportionately higher number of false claims than the other," wouldn't one expect a large random sample of those statements to also contain a disproportionately higher number of false claims (leading to "false" ratings from fact checkers like Politifact)? I'm not sure what the discrepancy is. If it truly were the case that Republicans were making more false claims than Democrats, and Politifact had an approximately equal number of false claims for each party, would this not itself be an indication of party-related selection bias? Politifact would be trying, possibly with some kind of "fairness doctrine," to make Republicans look more honest than they truly are. As Bill Adair noted, "we choose to check things we are curious about. If we look at something and we think that an elected official or talk show host is wrong, then we will fact-check it." No mechanisms are in place to ensure some kind of contrived fairness when one political party is more likely than the other to play fast and loose with the truth.
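To make that expectation concrete, here is a minimal simulation. The 30% and 15% false-claim rates are invented purely for illustration; the point is only that a party-blind random sample of a lopsided population stays lopsided:

    import random

    random.seed(1)

    # Invented population: party A makes false claims at twice the
    # rate of party B. None of this is real Politifact data.
    claims = ([("A", random.random() < 0.30) for _ in range(5000)]
              + [("B", random.random() < 0.15) for _ in range(5000)])

    sample = random.sample(claims, 500)  # random, party-blind selection

    for party in ("A", "B"):
        picked = [c for c in sample if c[0] == party]
        rate = sum(1 for c in picked if c[1]) / len(picked)
        print(party, len(picked), round(rate, 2))
    # The sampled false-claim rates track the population rates.
    # Near-equal "False" counts per party would be the real surprise.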
"Nor does PolitiFact claim on its site to present a 'fair and balanced' selection of statements, or that the statements rated are representative of the general truthfulness of the nation's political parties or the elected officials involved. "
This is correct. However, as the Washington Post's Glenn Kessler pointed out in a recent interview with NPR, "I don't really keep track of, you know, how many Democrats or how many Republicans I'm looking at until, you know, at the end of the year, I count it up." There is no indication Politifact does anything different. So it may not be the case that Politifact highlights any explicit methods for ensuring fairness (contrived or not). However, as professional journalists, is it possible they employed standard journalistic practices of objectivity so common-sense they didn't see the need to mention them?

Is it possible that, given the tendency of journalists to be personally liberal, we should expect party-related selection bias as a default? It is a possibility (like many others). However, unless the entire staff is liberal (a claim that would require evidence), one has to wonder whether even a few conservative journalists might be enough to keep their liberal colleagues from falling prey to party-related selection bias. Remember, Politifact articles are reviewed by a panel of at least 3 editors, further decreasing the probability of party-related selection bias.
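As a rough illustration of the panel point, consider this back-of-the-envelope sketch. The probability p is an assumption invented for the example, and the independence assumption is doing all the work; real editors are not independent coin flips:

    # Purely illustrative: p is an assumed probability that any single
    # editor fails to notice a partisan statement selection.
    p = 0.3
    for panel_size in (1, 2, 3):
        print(panel_size, round(p ** panel_size, 3))
    # 1 -> 0.3, 2 -> 0.09, 3 -> 0.027: under independence, each added
    # reviewer sharply cuts the chance a biased selection slips through.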

In addition, a person with that hypothesis would have to explain why ratings did not favor politicians from either party over the other in 2007. Did Democrats lie more in 2007, causing party-related selection bias to make it seem like Democrats were just as honest as Republicans? Or was Politifact simply not biased back then, as opposed to now? At least one of these would need to be investigated before drawing any conclusions.

Overall, although Politifact does not explicitly call itself "fair and balanced" (one may actually be skeptical of news organizations that do), one cannot come to the conclusion that they aren't actually fair and balanced. This would be a classic "argument from ignorance."
"And yet... 
In defending PolitiFact's "statements by ruling" summaries - tables that combine all ratings given by PolitiFact to an individual or group - Adair explained: 
"We are really creating a tremendous database of independent journalism that's assessing these things, and it's valuable for people to see how often is President Obama right and how often was Senator McCain right. I think of it as like the back of a baseball card. You know - that it's sort of someone's career statistics. You know - it's sort of what's their batting average." (C-SPAN Washington Journal, August 4, 2009) "
I will admit Bill Adair does oversell the importance of Politifact's summaries and tables. Since the selected statements are not truly random, only so much can be extrapolated from them. It may be a good idea for Politifact to provide a disclaimer stating that statements from individual politicians are not randomly chosen. They are instead chosen via the criteria outlined in their "Principles." However, with no evidence that party-related selection bias is a factor in their decision making, there would be no reason to post any disclaimer to suggest it is a factor.

HOW FAR SHOULD THE MEDIA GO TO APPEAR UNBIASED?
" Adair is also on record for lamenting the media's kneejerk inclination to treat both sides of an issue equally, particularly when one side has the facts wrong. 
In an interview with the New York Times in April 2010, Adair said: 
"The media in general has shied away from fact checking to a large extent because of fears that we'd be called biased, and also because I think it's hard journalism. It's a lot easier to give the on-the-one-hand, on-the-other-hand kind of journalism and leave it to readers to sort it out. But that isn't good enough these days. The information age has made things so chaotic, I think it's our obligation in the mainstream media to help people sort out what's true and what's not." "
Bill Adair makes a good point. Why should the media present both sides equally when one side is clearly wrong? In addition, why should the media present both parties as being equally truthful when one is actually playing fast and loose with the truth? It would seem odd that a political party that once lambasted the Fairness Doctrine would have any problem with what Bill Adair is saying.

SHIFTING THE BURDEN OF PROOF
" The question is not whether PolitiFact will ultimately convert skeptics on the right that they do not have ulterior motives in the selection of what statements are rated, but whether the organization can give a convincing argument that either a) Republicans in fact do lie much more than Democrats, or b) if they do not, that it is immaterial that PolitiFact covers political discourse with a frame that suggests this is the case. "

This is an interesting quote. My original critique of this quote may have been a bit over-simplified.1 As a result, I thought it would be better to do a more thorough job on this quote, given the implications it contains:

Notice how his last statement claims that Politifact frames Republicans in a way that suggests they lie more than Democrats. Before judging the validity of the rest of the statement, it would be a good idea to see if Politifact even does this at all. First off, notice that Politifact has never published an overall Dem vs. GOP report card. Ostermeier was the one who did that. So certain conclusions one may be tempted to draw from Ostermeier's study are not ones people would necessarily draw from Politifact itself. All Ostermeier did was total up all statements made by Democrats and Republicans and compare them. Of course, he could have just as easily done the same with any arbitrary breakdown of statements: men versus women, majorities versus minorities, southerners versus northerners, etc. If we found similar disparities in those breakdowns, would we be accusing Politifact of covering "political discourse with a frame that suggests this is the case"?

The ultimate problem is that Politifact rates individuals (and specific groups), not overall groupings of those individuals. In fact, Adair's earlier statement about his report cards shows that he is really only focusing on individuals. To infer any trend about an entire grouping of those individuals, one would not only need a sufficient sample of individuals within those groups, but would also need to filter out individuals with only a few statements to their name. It should be clear to anyone with common sense that, when you only have a small number of statements from an individual, you cannot come to any conclusion about that individual's honesty with any decent level of certainty. As a result, people with only a few grades to their names would have to be ignored. For example, if you eliminated candidates with fewer than 5 statements to their name in 2010, you would have to eliminate Jim DeMint, Kevin McCarthy, Mike Prendergast, Dan Coats, and a whole host of others... This would probably leave you with a very small sample of individuals. Seeing as there are literally thousands of Republican politicians at all levels of government, you cannot reach a halfway reasonable level of certainty about the honesty of Republican politicians as a whole from Politifact's ratings alone.

These kinds of limitations should be pretty intuitive to anyone looking at Politifact's articles and report cards. As a result, Politifact can hardly be blamed for creating a framework suggesting Republicans lie more when it has never actually done such a thing, unlike Ostermeier. So does Politifact have the burden of giving Republicans a convincing argument that "Republicans in fact do lie much more than Democrats"? At the very least, this is a straw man, since Politifact never even implicitly made that claim. It was an observation based on a context-ignoring analysis performed by Ostermeier himself.
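Here is a minimal sketch of the filtering step described above, using an invented report-card mapping (all names and counts are hypothetical, not Politifact's actual data):

    # Hypothetical report cards: politician -> number of rated statements.
    report_cards = {
        "Politician A": 24,
        "Politician B": 3,
        "Politician C": 11,
        "Politician D": 2,
    }

    MIN_STATEMENTS = 5  # below this, a "batting average" means very little

    usable = {name: count for name, count in report_cards.items()
              if count >= MIN_STATEMENTS}
    print(usable)  # only A and C survive; the usable sample shrinks fast

Even this toy example cuts the usable sample in half; applied to Politifact's actual report cards, the surviving sample of individuals would be tiny next to the thousands of Republican officeholders nationwide.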

But let's say Politifact did look at a large number of claims from a large number of politicians, enough to gauge the honesty of enough individuals from each party to conclude that one party does in fact lie more than the other. If Politifact is merely presenting the data it found, without coming to any conclusion, does Politifact really have the burden of proof? Whatever conclusions are made about the data must be supported by the person making the conclusion. So if we are stuck in some dichotomy where either Politifact is biased or Republicans lie more, should Politifact be the one with the burden of proof? As we can see, the data is agnostic to either claim. So if Politifact is making no claims, yet someone wants to claim it is biased, is it Politifact's burden to prove that Republicans lie more in order to keep these people from being justified in their belief in Politifact's bias? As Ostermeier points out, it is not a question of whether Politifact can prove outright that it does not have "ulterior motives." However, if Politifact fails to answer either of Ostermeier's questions, what conclusion are readers left to draw? Ostermeier gives Republicans no questions of their own to answer before coming to a conclusion. And as Ostermeier's last paragraph shows, he is comfortable supporting one particular conclusion over any others:


OSTERMEIER CONCLUDES BIAS
" In his August 2009 C-SPAN interview, Adair explained how the Pants on Fire rating was the site's most popular feature, and the rationale for its inclusion on the Truth-O-Meter scale: 
"We don't take this stuff too seriously. It's politics, but it's a sport too." 
By levying 23 Pants on Fire ratings to Republicans over the past year compared to just 4 to Democrats, it appears the sport of choice is game hunting - and the game is elephants. " (emphasis mine)
Up until this point, Ostermeier has been pretty good about not owning the conclusion that his data suggest Democrat-favored selection bias at Politifact. He has appeared sympathetic to concerns about this sort of bias by specifically highlighting them while not investigating other possible explanations. This last paragraph, however, indicates he does actually believe there is some kind of bias of this sort at Politifact. His logic regarding "Pants on Fire" ratings suffers from the same problems readers face when concluding in favor of any kind of party-related selection bias at Politifact (see "Conclusion" below). He fails to investigate other possible causes, instead shifting the burden of proof onto Politifact to demonstrate it is not guilty. In fact, if he were to look back at 2007, he would see a nearly polar opposite pattern of "Pants on Fire" assignments. Democrats received over half of the "Pants on Fire" ratings in 2007. They received twice as many as Republicans that year, despite having slightly fewer total statements rated. Granted, there were not many "Pants on Fire" ratings that year, but it is clear the 2010 trend was absolutely nonexistent in 2007. Shouldn't that make Ostermeier wonder what changed in 2010? Was it Politifact or the Republican Party? To my knowledge, no evidence has been given (or even suggested) for the former, yet the rise of the TEA Party in 2010 shows the latter certainly changed.

CONCLUSION

It is hard to draw many conclusions from Ostermeier's data sets, and a few questions still need to be investigated before settling on any (some of these come from a previous post):
  •  Could it be that many of these statements came from a single issue (or small collection of issues) where Republicans may have been more likely to spread falsehoods? A month-by-month breakdown could be useful in this regard. This kind of breakdown could also help investigate the question as to whether the Republican primaries had anything to do with the trends in 2010, as I pointed out earlier.
  • Could it be that the perceived rise of anti-intellectualism in the Republican Party has made it less likely to care about factual accuracy? 
  • Could the Republican party be more ideological and less prone to checking facts that could challenge their ideology? 
  • Could it be that the Republican tendency to call neutral fact-checking sites "biased" creates a tendency to play fast and loose with the truth, knowing their constituents already distrust any site that could shed light on their claims? 
These are all possibilities (although I am not necessarily claiming any of them to be true). So did the article control for or check any of these, or alert readers to these possibilities? NOPE.2

Throughout most of his article, Ostermeier presents interesting data about Politifact in 2010. Although there are many possible explanations of the data, Ostermeier only really focuses on one, and he dismisses at least one alternative for reasons that apply just as well to the explanation he favors. Ostermeier fails to ask many questions, such as whether the trends he found in 2010 even existed in previous years. A quick analysis of Politifact in 2007 showed this was not the case for at least one year. In the end, Ostermeier makes it clear he thinks Politifact is guilty of party-related selection bias, succumbing to the same fallacies any reader would if he or she came to the same conclusion from the information presented in his article alone (not that other articles I've seen do any better).

Overall, this article demonstrates a particularly error-prone way of detecting bias. A mere look at rating totals cannot tell a reader much about bias. Even if one were to tally up all of Politifact's articles, few conclusions could be reached. Of Politifact's 4 1/2 years of fact checking, 3 have been under a Democratic president and 3 1/2 under a Democratic-controlled Congress. Given the possibility that the party out of power is more prone to spreading falsehoods, one cannot instantly jump to conclusions if Republicans end up with more bad ratings than Democrats. I admit that demonstrating bias can be very tough. However, if someone is having THAT much trouble demonstrating bias, maybe they should wonder whether they are even justified in suspecting bias in the first place. If Ostermeier had done this, maybe his article would have been more useful. Instead, it is just another failed attempt to poison the well against Politifact.

1 - Update 1-31-12: Original critique: "Why should the organization have to give a convincing argument that Republicans lie more than Democrats in order to convince Republicans they do not have ulterior motives? Essentially this is the argument that, unless Politifact gives evidence it does not have ulterior motives, Republicans are justified in believing it does. This, again, is a textbook argument from ignorance (see link above). Shouldn't Republicans have to ask for somewhat definitive evidence that Politifact in fact does have ulterior motives before accepting that claim as true? In fact, couldn't the data found in this entire article point to Republican politicians spreading more falsehoods than Democratic politicians in 2010 just as much as it could point to Politifact having ulterior motives? In fact, the data from 2007 significantly challenges the latter while having practically no effect on the former."

2 - Update 1-31-12: I took this bulleted list from the "SHIFTING THE BURDEN OF PROOF" section and placed it in the conclusion, to help with the revised flow of the critique.

Sunday, January 15, 2012

Supposed Politifact Bias In 2010 Non-Existent In 2007


In a recent post where I critiqued a list of issues that should "keep discerning readers from trusting Politifact", I was alerted to an article examining potential selection bias in how Politifact chooses which statements to rate. The article, entitled "Selection Bias? PolitiFact Rates Republican Statements as False at 3 Times the Rate of Democrats," was written by Eric Ostermeier, "Research Associate at the Humphrey School's Center for the Study of Politics and Governance." This article has also been cited in a recent attack on fact checkers by The Weekly Standard (3-post critique: 1,2,3). Given the attention this article received, I thought it would be useful to see if this article actually gives any good reason to justify suspicion of selection bias at Politifact.

First off, it should be noted that selection bias occurs whenever a sample taken for statistical analysis has not been chosen randomly. As any reader can plainly see, this article does not use a random sample of statements from Politifact: Ostermeier chose statements from January 2010 through January 2011. If these dates were truly arbitrary, then it would not matter that the sample was not chosen randomly. However, as I intend to show in this post, choosing different dates can lead to remarkably different results. (I will do a point-by-point critique of Ostermeier's article in another post.)


IS IT APPROPRIATE TO DRAW CONCLUSIONS FROM 2010 ALONE?

It is interesting to read Ostermeier's findings and see just how heavily Republican officeholders are overrepresented in the lower rating categories during 2010 (note that when I say 2010 in this post, I'm including January 2011 to reflect Ostermeier's article). However, remember what I said earlier about how these findings are not based on a truly random sample. This should raise the question of whether the trend existed in previous years. I decided to do a quick analysis of the 8 months of 2007 during which Politifact first rated statements from politicians. I chose this time period because it was relatively short and easy to analyze. My results turned out to be quite a bit different from Ostermeier's:



 Let's compare these results to the findings of Ostermeier for 2010:
  •  Out of 39 statements rated "False" or "Pants on Fire," each main party received approximately 50% of the rulings. This is much different from 2010, where Republicans received 75.5% of these ratings.
  • Politifact devoted approximately equal attention to Republicans (124 statements, 52%) and Democrats (110, 46%). This is nearly the same as 2010.
  • Republicans were graded in the "False" or "Pants on Fire" categories 16% of the time (39% in 2010), while Democrats were rated in these categories 16% of the time as well (12% in 2010). This is absolutely nowhere close to the "super-majority" found in 2010.
  • 73% of Republican statements in 2007 received one of the top 3 ratings (47% in 2010), as opposed to 75% of Democratic statements (75% in 2010). When looking at "True" statements alone, Republicans beat Democrats, with 5% more of their statements receiving the coveted rating. It is hard to see how one party was considered more or less truthful than the other. It depends on how much credit you give the "Half True" rating, as opposed to the "True" rating.
  • Republicans received a slightly larger percentage of "Mostly False" ratings than Democrats (1.79%). This is the same as in 2010. However, this only results in a 2% difference between Republicans and Democrats for the bottom 3 ratings. This is MUCH different from the 28% difference in 2010. 
As you can see, the results from 2007 seriously undermine many possible conclusions a person could draw from the 2010 data. The fact that the results depend on the dates chosen shows this is not a random sample. In fact, focusing solely on 2010 to hint at the possibility of Democrat-centered selection bias within Politifact would be a great example of cherry-picking data, a well-known fallacy.

This is a great example of why every reader should remember his/her critical thinking cap when analyzing statistics. Don't leave home without it!

Saturday, January 14, 2012

NPR gives anti-fact-checkers a new voice



On the January 10th, 2012 episode of NPR's Talk of The Nation, host Neal Conan, The Weekly Standard's Mark Hemingway, and The Washington Post's Fact Checker Glenn Kessler discussed fact-checking in American politics. The episode appeared to be based on Hemingway's recent article entitled "Lies, Damned Lies, and ‘Fact Checking’." I thought it would be useful to provide a critique of the statements and claims given in this episode. However, many of Hemingway's statements have already been addressed in my recent post "The Weekly Standard Attacks Fact Checkers" [Parts 1, 2, 3], so I will not cover them here. If you listened to the program and heard a claim not addressed in this particular post, check my post about Hemingway's article.

A SPECTRUM OF TRUTH
"CONAN: There have been others, including Mark Hemingway, but other stories about fact checkers in the past few weeks that have come to the conclusion that, well, among other things, it's hard to find absolute assertions of facts by politicians...
[KESSLER:]...Now, you know, I have this gradation where you get one to four Pinocchios...you know, that's a reflection of the fact that there's a gradation there. You know, there are facts that are vaguely true but are taken out of context, or there are facts that are not very illustrative of the point you're trying to make."
This is actually a very important point. Statements can be true but misleading. Statements can be technically false but true in context (for example, a cited number can be off by a bit). And in a statement with multiple claims, some can be true while others are false.
"HEMINGWAY: ...I don't think anybody's against checking the actual fact. It's just that it comes down to, you know, like what you mentioned before, where you have situations where, you know, you have debates that are far too nuanced to say this is, you know, correct or this is incorrect."
This is exactly why Politifact has a graded scale of ratings, from "True" to "Pants on Fire," precisely to reflect this sort of reality. In fact, I'm not aware of any fact checkers that grade based solely on absolute truth or falsehood. So this complaint may be completely contrived.

ON TO THE CALLS

Soon after, Conan began taking calls. His first call was from Brian.
"BRIAN:  ...But when I started rethinking the fact-checking model was the PolitiFact Lie of the Year of 2011 about, you know, Republicans voting to end Medicare. And that was an issue where there was so much gray area, and it was so opinion-based that for them to call that the lie of the year as an absolute statement just absolutely shot their credibility with me."
I have yet to weigh in on this issue. However, Kessler's response is worth noting:
"KESSLER: That's right, that's right. And I think the case of the - you know, what the issue at hand was the Democrats saying that the Republicans were planning to kill Medicare, which they then illustrated with television ads that included literally tossing granny over the cliff... And, you know, when you get down to it, you can have an argument about whether or not what the House Republicans want to do with Medicare was a radical change or not, but it was not killing the program." (emphasis mine)
It seems Kessler hit the nail right on the head. Hemingway then chimed in:
"HEMINGWAY: Well, I think this PolitiFact Lie of the Year thing actually is a very useful conversation because it illustrates in a very good way where the fact-checking things can really muck up the debate... I think that PolitiFact was fairly egregious when they pronounced this the Lie of the Year... So yes, I mean, yes, Paul Ryan is, you know, maybe killing is too strong and unhelpful, but it's also not helpful to say that he's not changing the fundamental nature of the program either."
Once again, Politifact already had this covered. In the first paragraph of the "Lie of the Year 2011" article, Politifact mentioned: "Introduced by U.S. Rep. Paul Ryan of Wisconsin, the plan kept Medicare intact for people 55 or older, but dramatically changed the program for everyone else by privatizing it and providing government subsidies" (emphasis mine). There is no way Hemingway missed this. It was in the FIRST PARAGRAPH of the article he is criticizing. In fact, a fair chunk of space is devoted to outlining these changes.

Of course, Hemingway also had to bring up past instances where Republicans suffered the "Lie of The Year" humiliation.
"HEMINGWAY: I would also point out that just two years ago, PolitiFact's Lie of the Year was Sarah Palin referring to death panels, which refers to something, the Independent Payment Advisory Board, which is the part of the Patient Protection and Affordable Care Act that deals with Medicare. And they declared that the Lie of the Year. And, you know, personally I think it was obvious that Sarah Palin was indulging in a bit of rhetorical hyperbole"
Sarah Palin didn't just call them "Death Panels." She also provided a whole slew of false claims about what these "Death Panels" do: "The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama's 'death panel' so his bureaucrats can decide, based on a subjective judgment of their 'level of productivity in society,' whether they are worthy of health care. Such a system is downright evil. (emphasis mine)" As Politifact then explained, "There is no panel in any version of the health care bills in Congress that judges a person's "level of productivity in society" to determine whether they are "worthy" of health care (emphasis mine)." In fact, Politifact pointed out the health care bill explicitly said ""Nothing in this section shall be construed to permit the Commission or the Center to mandate coverage, reimbursement, or other policies for any public or private payer." In other words, comparative effectiveness research will tell you whether treatment A is better than treatment B. But the bill as written won't mandate which treatment doctors and patients have to select. (emphasis mine)" As you can see, this goes beyond mere rhetoric. She made a specific claim, which was determined to be not only false, but outrageously false. Once again, this raises the question of whether Hemingway actually reads the articles he talks about.

WHAT DOES FACT CHECKING SAY ABOUT TRADITIONAL JOURNALISM?

Neal Conan then points out another criticism of fact checkers, to which Hemingway responds:
"CONAN: One of the criticisms, Mark Hemingway, that's been raised is that in fact fact-checkers undermine their own organizations, and elsewhere in the newspaper, aren't those reporters supposed to be checking facts, too?
HEMINGWAY: Yeah, and I think this is a big problem here is what we have is the major media outlets have so given themselves over to analysis and other things like that that what happens is that now...
CONAN: As a result of the changing news business.
HEMINGWAY: Yes, as a result of the changing news business, a lot of factors, have so given themselves over to analysis that people read newspapers anymore, and they're like, well, where's the basic information that I want? So then when we get into these complex matters or disputes, along come the fact-checkers to answer these questions because it wasn't resolved in the initial, you know, reporting."
Hemingway assumes that standard reporting is actually supposed to check the facts. But we run into an issue here. For clarity, let's turn to his article in The Weekly Standard: "Aside from fact-checking debates afterward... the Washington Post and Bloomberg... actually took the novel tack of running “fact checks” on what the candidates were saying in real time. While presidential candidates should not be above being held accountable for what they say in such a forum, there is good reason to be skeptical that instantaneous evaluations will ever prove useful or fair. (emphasis mine)" So on one hand, he blames the media for not fact checking its reporting. However, reporting tends to happen very fast, often in real time. Would this not make the problems outlined in his quote much more common? I personally would love to see more fact-checking in normal reporting. However, I understand there are downsides. As a result, Hemingway's criticism may be no more than wishful thinking. Luckily, Glenn Kessler points out the need for fact checking in the real world of journalism.
"CONAN: And, you know, Glenn, you wrote a piece that said wait a minute, we're not supposed to be replacing journalism, we're complimentary.
KESSLER: That's right, I view it as a supplement, and in fact when I was a political reporter, I was often frustrated that I would be covering the day-to-day statements of the candidates and never really had an opportunity to step back and really examine what the truth was behind that statement. And what I try to do with these columns is not only focus on a particular statement but also give resources for readers to go do their own research. I provide links to all my documentation, to all the reports that I've looked at to reach my conclusions, and so I view it as in part, you know, an education process for everyone involved."
Kessler does a good job detailing the need for fact checkers in today's reporting. The process can often be long (unless the claims have been fact checked already). Fact-checking columns provide a place where reporters can focus solely on the accuracy of one or a few claims, without getting bogged down in the standard time constraints and other reporting duties that are unnecessary to the practice of fact checking.

THE US IS BANKRUPT?! RUN FOR THE HILLS!

Another caller, named Matt, called in to complain about a fact-checking column (assumed to be Kessler's) where Ron Paul was awarded "four Pinocchios" for claiming the US is bankrupt:
"MATT: ...And I just - and I found it totally unhelpful. It's kind of what you were talking about. You're just quibbling about definitions. I think anybody who is following the economic situation would say that we're in dire trouble, and so to say that - you know, and also Ron Paul's been very clear what he means by bankrupt. So if you wanted to talk about that, you could look into exactly what he means. He's been talking about it for 30 years...
CONAN: And bankrupt, I think the argument, if you use the term bankrupt, you should say what you mean.
KESSLER: Exactly. I mean, that's a very strong term. And the fact of the matter is, you know, U.S. Treasury bonds are the gold standard around the world, and you can say the United States is in economic distress or is headed towards bankruptcy or something like that, but to make a flat declaration that the United States is bankrupt I think is incorrect."
This is an interesting little question. As Kessler pointed out in his article, "Under no definition is the United States bankrupt." This means Ron Paul is using the term with his own personal definition. As Conan and Kessler point out, "bankrupt" is a very strong term and needs clarification. Ron Paul may have been clarifying this to his fans for the last 30 years, but he had no way of knowing that his debate audience had seen these "clarifications." He made this statement during an Iowa Republican debate, at a time when fewer than 10% of Republicans supported him. So what are the chances that a significant number of viewers knew what he meant by the word "bankrupt"? Remember that fact checkers have to grade these statements based on how a reasonable person could be expected to interpret them.

I should also note that issues arise when people decide to use their own special vocabularies. Why did Ron Paul specifically use the term "bankrupt"? If, in a public speech, I called someone a "racist" and later pointed to earlier writings where I used the term to mean "anyone that uses the term 'black' for anything," does that excuse the fact that, as people understood it, I was saying that person is bigoted toward people of other races? Is Ron Paul attempting to defeat the purpose of language as a means of communication by using his own personal dictionary? Hemingway continued:
"I mean, does any American not know what Ron Paul means when he says America's bankrupt? I mean, the national debt has increased $4 trillion, about 40 percent in three years. We have, you know, things like $30 trillion unfunded Medicare liabilities. We have no money."
While all of these issues are definitely pressing, they hardly constitute a situation where "we have no money" and/or "we are bankrupt." Those terms have specific meanings. One could argue that, since we are both in debt and running a deficit, we have no money. However, this situation is in no way unique in American history. Nonetheless, to my knowledge Hemingway has not provided evidence that a significant number of reasonable people would even interpret Ron Paul's phrase in any way other than how it is defined in the dictionary.1 Hemingway continued:

"Yes, in the specific sense, there is no extra-national legal court that we can go to and file something, you know, Chapter 11 with the United Nations or something like that. But everyone knows we're out of money, and everyone knows what Ron Paul means, and to go in and sort of nitpick that just isn't helpful to the political dialogue, I think."
Once again, Hemingway asserts that "everyone knows what Ron Paul means." Has he ever provided evidence for this? He is probably right that most people do not think Ron Paul meant the US has filed "Chapter 11 with the United Nations or something like that." But he forgets that someone can use the term bankrupt outside of the legal sense: "Financially ruined; impoverished." As Kessler pointed out in his article, "the United States is able to pay its debts, and its bonds are still regarded as the gold standard in the financial markets." In addition, Ron Paul made this statement in regard to the S&P downgrade. However, Politifact points out this downgrade is no sign of bankruptcy: "The United States had a AAA credit rating, which S&P defines as having "extremely strong capacity to meet its financial commitments." It's the highest rating you can get. After the downgrade, the United States has a AA+ credit rating. It's just one notch down from AAA on a scale that has more than 20 notches. S&P now says the United States "has very strong capacity to meet its financial commitments." The "+" shows that the U.S. rating is on the high side of AA. Meanwhile, two other ratings agencies, Moody's and Fitch, didn't change their U.S. ratings — America still has the highest rating they offer. (emphasis mine)" This is hardly bankruptcy.

GIVE POLITICIANS A BREAK!
"HEMINGWAY: Well, yeah, but the other thing is I never say this because, you know, I'm inherently suspicious of politicians; I think all Americans should be. But you also have to give these guys a break. I mean, they're trying to communicate to a mass audience, you know, by talking confidently, which is why they make so many mistakes, which is why they give Glenn plenty of fodder on one hand. But on the other hand, I mean, you know, cut them some slack. I mean, they're trying to push a message here..."
Thank you, Hemingway, for pointing out one major reason we need fact-checking operations. What Hemingway considers "fodder" for fact checkers is also potentially dangerous misinformation for a voting populace. Is Hemingway okay with the idea of misinformed voters? Or should we excuse politicians for "trying to push a message"? If they have to push a message based on factual inaccuracies, should that not give voters reason to question the accuracy of the message itself?

THE REST

The rest of the show dealt with issues I already critiqued in my posts over Hemingway's article (linked at the top of this post).  

Also, it is sad that Kessler did not jump to defend Politifact when Hemingway attacked it (luckily, my previous posts address his claims anyway). Kessler and Politifact do seem to have a friendly working relationship, though. Kessler has even cited Politifact before.




1 - Although citing the dictionary to prove that someone's definition of a word is wrong can be a fallacy, this is not what is being shown here. What we are showing is that, if the definition used by someone is not contained in the dictionary, it is reasonable to assume a significant number of people will not understand the word in the way that person meant it.

Why I don't Like Tebow




What if Tim Tebow were Muslim?

Tuesday, January 10, 2012

The Weekly Standard Attacks Fact Checkers (Part 3)

 

This is part 3 of a 3-part series entitled "The Weekly Standard Attacks Fact Checkers." Part 1 can be read here. Part 2 can be read here.

SEEN IT BEFORE

Hemingway later decides to mention a familiar article.  
While there’s been little examination of the broader phenomenon of media fact checking, the University of Minnesota Humphrey School of Public Affairs recently took a close look at PolitiFact. Here’s what they found:
"A Smart Politics content analysis of more than 500 PolitiFact stories from January 2010 through January 2011 finds that current and former Republican officeholders have been assigned substantially harsher grades by the news organization than their Democratic counterparts. In total, 74 of the 98 statements by political figures judged “false” or “pants on fire” over the last 13 months were given to Republicans, or 76 percent, compared to just 22 statements for Democrats (22 percent)."
You can believe that Republicans lie more than three times as often as Democrats. Or you can believe that, at a minimum, PolitiFact is engaging in a great deal of selection bias, to say nothing of pushing tendentious arguments of its own.

At the very least, this is a nice little false dichotomy. I have already examined this "study" in a previous post1 and found it to be shoddy work. The writer implemented practically no controls in the study. He also dismissed or ignored alternate explanations for reasons that should have given him equal justification to dismiss his own hypothesized explanation. I'd venture to guess this article would be thrown out in a second if submitted to a legitimate peer-reviewed journal.
On August 17, Kessler wrote an item supporting President Obama’s denial at a town hall in Iowa that Vice President Joe Biden had called Tea Party activists “terrorists” in a meeting with congressional Democrats. In the process, Kessler had singled out Politico for breaking the story. ...
After supplying a rudimentary summary of what happened, Kessler reached a conclusion that is at once unsure of itself and sharply judgmental. “Frankly, we are dubious that Biden actually said this. And if he did, he was simply echoing what another speaker said, in a private conversation, as opposed to making a public statement.” In response, Smith unloaded on Kessler. “Either [Biden] said it, or he didn’t. That’s the fact to check here. The way to check it is to report it out, not to attack the people who did report it out and label their reporting ‘dubious’ based on nothing more than instinct and the questionable and utterly self-interested word of politicians and their staffers.” Provoked by Kessler, Politico took the unusual step of actually detailing how the Biden story was nailed down. Politico maintains that Biden’s remarks were confirmed by five different sources in the room with Biden, and that they were in contact with the vice president’s office for hours before the story ran. Biden’s office had ample opportunity to answer the reporters’ account before it ran and didn’t dispute it. ...
But instead of looking at these facts, it appears Glenn Kessler engaged in what his colleague Greg Sargent referred to as all “the usual he-said-she-said crap that often mars political reporting”​—​but with the extra dollop of sanctimony that comes from writing under the “pseudo-scientific banner” of “The Fact Checker.”
To a certain degree, they do have a point. However, Kessler has also pointed out that the article was never meant as an attack on Politico, just "as a guide for readers on how to tell the difference between verified fact and journalistic rumor." Such a distinction is necessary for readers to make informed decisions about how much credibility to give a story. He reported that there were conflicting accounts of what happened, which should be no surprise given the unreliability of eyewitness testimony. Politico's sources could easily have misinterpreted what Biden said, which is all too plausible. If I said "dealing with you is like dealing with a child," does that mean I actually called you a "child"? A few people overhearing my conversation may have thought so, even if that was not my intent. One could argue I implicitly called you a child, but it should be very clear how easily they could be constructing a straw man. Joe Biden agreeing with the idea that "negotiating with Tea Partiers is reminiscent of negotiations with terrorists" is a far cry from him explicitly calling them "terrorists." The difference is crucial to any adult who has matured past the age of accusing friends of "saying an icky word!" These are the kinds of issues brought up by Kessler, and an examination of the available evidence submitted by Politico, as well as by Kessler (in his original article), shows one should be skeptical of any headline that reads “Biden: Tea Stands for ‘Terrorist.’” It should also be noted that Kessler even admitted he did a poor job of writing the article.

AND HERE COMES THE BIAS

Well, that is the last of the examples given by Hemingway to support his barrage of claims. After a few more paragraphs of unsubstantiated claims, Hemingway finished with this little doozy:
So with 2012 just around the corner, brace yourself for a fact-checking deluge. Just remember: The fact checker is less often a referee than a fan with a rooting interest in the outcome.
So if it wasn't completely clear before, Hemingway is claiming these fact checkers are biased. And given that all the instances mentioned in his article involve rulings against Republicans, it's clear he is accusing them of being left-biased. The evidence provided in the article has been shown to be shoddy at best. In addition, one would expect that, if these people REALLY had a vested interest in liberal success, they wouldn't post articles that damage strong liberal talking points.
  • One popular talking point against Obama this election season is that he has failed to keep his promises. Obama tried to counter this by saying he has kept 60% of his promises, to which Politifact gave him a "False." The real number is 30%. He has kept more of his promises than people give him credit for, and has only broken about 10%, so there was a grain of truth to his claim. However, since he doubled the actual number, he rated a "False." Remember how Hemingway complained when Rand Paul was given a "False" for quadrupling his numbers? I wonder why there wasn't equal outrage from Hemingway over a relatively less-deserved "False" rating for President Obama. Is it because Obama isn't conservative and thus does not fit into his liberal-bias paradigm?
  • Politifact, likely the biggest target of liberal-bias claims among these sites, often cites conservative/libertarian think tanks such as the Cato Institute, the Heritage Foundation, and the American Enterprise Institute (by the way, also notice the many popular Democratic talking points that are debunked in these articles).
  • The 2011 "Lie of The Year" was the incredibly popular Democratic campaign claim that "Republicans plan on eliminating Medicare." This claim had already helped propel at least one Democrat to victory in a heavily Republican district. Seeing just how valuable this message is to Democrats in the coming election, it seems highly unlikely it would be debunked by anyone remotely interested in Democratic success, let alone given the less-than-coveted "Lie of The Year" award.
And of course, these are only a few examples that show just how absurd Hemingway's liberal-bias claim really is. Could these fact checking sites be producing such ratings just to make themselves look non-partisan? Outside the subjective choice of "Lie of the Year," that would be quite the claim to substantiate, and it is itself an example of the popular ad hoc rescue: "evidence against bias is evidence for bias." Without this kind of conspiratorial reasoning, the best explanation is that these fact checkers are, in fact, not fans "with a rooting interest in the outcome."

SO WHAT'S THE BOTTOM LINE?

None of this is to say that fact checkers are immune to subjectivity or mistakes. They are human, after all. At the very least, this should underscore the need for readers never to suspend a reasonable amount of skepticism, even when reading fact checks. However, unlike everyday political pundits, fact checkers still attempt to remain accurate and objective. If politicians and media pundits did the same, perhaps fact checkers would no longer be so sorely needed.

Overall, the central thesis of Hemingway's article appears to be that, if a politician labels a claim as opinion, it should be free from the scrutiny of fact checkers. However, the practice of fact checking is essentially a form of skepticism. Fact checkers assume the role of non-partisan skeptics who concern themselves with reality first and foremost. A common practice of skeptics is to highlight common fallacies so others can spot them, since a person can come to accept a false belief through fallacious reasoning just as easily as through false facts. For example, Politifact pointed out that Rand Paul failed to provide context for his claim, which is essentially a form of cherry-picking. It is also the job of the skeptic to help others identify shoddy or less-than-optimal evidence, which should inspire doubt in skeptically minded individuals; Glenn Kessler did just this when he pointed out the difference between a public statement and private rumor.

In general, if a statement contains fallacies, can be verified true or false, or leaves out important context, there is no reason skeptics should not be able to critique it. Should the breadth of a false or fallacious statement mean it must be left alone? Should skeptics take off their critical thinking caps when reading the Opinion page? Any good skeptic knows that all statements are up for scrutiny, because statements that do not reflect reality deserve no place in public discourse. As practicing skeptics, political fact checkers should be expected to hold to this same principle, exposing bunk in political discussion wherever it occurs.




1 - Update 1-16-12: I have completed a comprehensive critique of this article as well.

The Weekly Standard Attacks Fact Checkers (Part 2)


This is part 2 of a 3-part series entitled "The Weekly Standard Attacks Fact Checkers." Part 1 can be read here.

DEFENDING THE TRUTH-O-METER

The AP's Fact Check was not the only target of this article. What is a good attack on the practice of Fact Checking without a good shot at Politifact?
Here’s a not-atypical case study. On November 7, 2010, newly elected Senator Rand Paul appeared on ABC’s This Week with Christiane Amanpour. One of the topics of discussion was pay for federal workers. “The average federal employee makes $120,000 a year,” Paul said. “The average private employee makes $60,000 a year.” Given that the news these days often boils down to debates over byzantine policy details, Paul’s statement is about as close to an empirically verifiable fact as you’re likely to hear a politician utter. And the numbers are reasonably clear. According to the latest data from the Bureau of Economic Analysis​—​yes, that’s a government agency​—​federal workers earned average pay and benefits of $123,049 in 2009 while private workers made on average $61,051 in total compensation...
(section removed due to lack of relevance)
Not only is what Senator Paul said about federal pay verifiably true, his simple recitation of the most basic facts of the matter doesn’t even begin to illustrate the extent of the problem. Yet PolitiFact rated Senator Paul’s statement “false.” According to PolitiFact’s editors, because Paul did not explicitly say the figures he was citing include pay and benefits, he was being misleading. The average reader would assume he was only talking about salary. “BEA found that federal civilian employees earned $81,258 in salary, compared to $50,464 for private-sector workers. That cuts the federal pay advantage almost exactly in half, to nearly $31,000,” writes PolitiFact. So the average federal employee makes a mere $31,000 more a year in salary than the average private sector worker​—​but also gets a benefits package worth four times what the average private sector worker gets. PolitiFact further muddies the waters by suggesting that the discrepancy between public and private sector averages isn’t an apples-to-apples comparison. Again, Andrew Biggs, the former Social Security Administration deputy commissioner for policy, and Jason Richwine of the Center for Data Analysis, writing in these pages (“Yes, They’re Overpaid: The Truth About Federal Workers’ Compensation,” February 14, 2011), observed that the most favorable studies of federal worker compensation “controlling for age, education, experience, race, gender, marital status, immigration status, state of residence, and so on” still find federal workers are overpaid by as much as 22 percent.
At the very least, this criticism underscores the principle that most falsehoods contain a small grain of truth. However, Politifact has to rate Rand Paul's statement based on the expected interpretation of a reasonable person. Since he cited figures that include both pay and benefits without saying so, this alone arguably justifies a poor rating, as a viewer could easily overestimate how much money a federal worker makes. In addition, Rand Paul makes it seem as though federal pay is DOUBLE private sector pay. Politifact notes that, once the comparison is apples-to-apples, the gap is closer to $7,000, or a little more than 10% above comparable private sector jobs. How does Hemingway respond? He retorts that "the most favorable studies" (if you read the article, you will notice these studies are either from conservative think tanks or based on cherry-picked government jobs) put the federal premium at up to 22 percent. But 22% is still a far cry from 100%. So, even by Hemingway's own standards, the actual difference is much smaller than what Rand Paul implies. How does quadrupling the difference in pay not warrant a major downgrade in rating?
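To make the "quadrupling" arithmetic concrete, here is a minimal sketch in Python using only the figures quoted above. The variable names, and the use of Hemingway's 22 percent as a ceiling, are my own illustration, not anything Politifact published.

```python
# Figures quoted above (BEA, 2009, average pay plus benefits).
federal_total = 123_049
private_total = 61_051

# The premium implied by Rand Paul's framing: federal workers make
# "double" what private workers make, i.e. roughly a 100% premium.
implied_premium = federal_total / private_total - 1
print(f"Implied premium: {implied_premium:.0%}")  # ~102%

# Hemingway's own ceiling: even the studies most favorable to his case
# find federal workers overpaid "by as much as 22 percent."
hemingway_ceiling = 0.22

# Paul's implied premium overstates even that ceiling by a factor of
# roughly 4.5, which is the "quadrupling" referred to above.
print(f"Overstatement: {implied_premium / hemingway_ceiling:.1f}x")  # ~4.6x
```

Even taking Hemingway's most generous number at face value, Paul's framing inflates the federal pay premium several times over.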

THE AP IS NOT OFF THE HOOK YET

Hemingway then returns to his attacks on the AP's fact checking operations. During the nomination of Justice Elena Kagan, the AP ran a Fact Check article dealing with claims that she was anti-military. Hemingway contends the AP was fact checking opinion rather than actual facts.
Again, here are the facts: Kagan was a dean at a law school that had banned ROTC over what she referred to as the military’s “repugnant” ban on openly gay service. This was, not surprisingly, an issue raised when she was nominated for her current position on the Supreme Court. The AP’s own fact check even noted that she filed a legal brief in support of colleges that wanted to uphold their policies restricting military recruiters on campus, though she opted not to join the lawsuit. Whether the fact that Kagan valued making a statement about gay rights over supporting the vital national security effort of military recruitment amounts to being “antimilitary” is quite obviously a matter of opinion, as is the charge that she’s an “ivory tower peacenik.”
Actually, the prefix "anti" literally means "opposed to," so it stands to reason that "anti-military" means "opposed to the military." That is a factual claim, not an opinion, and it is demonstrably false. The AP points out: "In the heat of the recruitment debate, and after, Kagan praised military service as 'the noblest of all professions' and a 'socially valuable career path' that should be open to all. 'I know how much my security and freedom and indeed everything else I value depend on all of you,' she told West Point cadets." She even sent out an email defending "the school's earlier decision to set aside restrictions on the armed forces and to allow - not ban - military recruitment" (emphasis mine). Maybe Hemingway can explain how supporting the military can possibly be consistent with being anti-military.

It was actually Newt Gingrich who made the anti-military claim, on "Fox News Sunday." Fact checkers have to evaluate statements based on how a reasonable person would interpret them, and there should be no argument that someone who heard Gingrich's statement, and nothing else, would conclude Elena Kagan is actually against the military. Furthermore, "Whether the fact that Kagan valued making a statement about gay rights over supporting the vital national security effort of military recruitment amounts to being 'antimilitary' is quite obviously a matter of opinion" is not a standard I suspect Hemingway would apply consistently. If Democrats said Republicans are utterly opposed to fiscal responsibility because they favor avoiding new taxes over balancing the budget, I doubt Hemingway would treat such a claim as mere opinion.
Revealingly, the inflammatory phrase “ivory tower peacenik” was never actually used by Kagan’s critics​—​it was from the AP headline and the first sentence of its fact check: “Elena Kagan is no ivory-tower peacenik.” Here the AP pulled off a seriously impressive feat of yellow journalism. By caricaturing the tone of the actual criticisms, the AP set up a straw man for its “fact check” to knock down before the reader even got past the headline.
It should be noted that the AP never claimed Republicans called Kagan an "ivory-tower peacenik." For this actually to be a straw man, the AP would have had to criticize that characterization of Justice Kagan as though it were her critics' claim, and nowhere in the article did the AP do that. The irony is that Hemingway himself has essentially criticized a straw man instead of what the AP actually wrote. Hemingway should also recognize that "ivory-tower peacenik" is merely a catchy title. And seeing as nowhere in his article does Hemingway actually diagnose fact checkers as liars, as his own title suggests, one could ask whether he is, by his own standards, guilty of yellow journalism. Since I doubt he would stoop to that level of hypocrisy, I doubt he can fairly claim the AP is guilty of it either.

The Weekly Standard's attack on the AP Fact Checking operation continued:
At the most basic level, the media’s new “fact checkers” remain obdurately unwilling to let opinions simply be opinions. Earlier this year the AP fact checked a column by former GOP presidential candidate Tim Pawlenty in which the former Minnesota governor asserted that “Obamacare is unconstitutional.” Contra Pawlenty, the AP intoned, “Obama’s health care overhaul might be unconstitutional in Pawlenty’s opinion, but it is not in fact unless the Supreme Court says so.” The AP aligns itself here with the myth of judicial supremacy, namely the mistaken idea that the Supreme Court has a monopoly on deciding what is and is not constitutional. But aside from this amateur-hour excursion into legal theory, the AP betrays a more basic problem of reading comprehension: Pawlenty’s USA Today column appeared in a section of the newspaper clearly labeled OPINION in large, bold letters.
Hemingway has completely confused me. The AP clearly admitted that Pawlenty was merely expressing his opinion. So why is Hemingway accusing them of faulty reading comprehension? Did Hemingway forget the quote he JUST posted in the preceding paragraph?

As to the "myth of judicial supremacy": the AP may simply be treating the Supreme Court as the final authority on whether or not Obamacare is constitutional. In effect, it may merely be reminding readers that the Supreme Court has not yet ruled on Obamacare, so its unconstitutionality is not a legal fact, just personal opinion. I will admit I do not like the way the AP worded the phrase; it is definitely over-simplified. But Hemingway has failed to justify calling it an "amateur-hour excursion into legal theory."

In addition, when a person makes a statement that can reasonably be verified as true or false, it ceases to be mere opinion and becomes a factual claim. And just because a claim appears on the Opinion page does not make it just opinion; writers commonly present facts on the Opinion page to justify their opinions. If I declared the earth flat on an Opinion page, would that make my claim any less false? As I have noted before, fact checkers have to assess claims based on how they expect a reasonable person to interpret them, and as a result, the AP provided enough information to cover many reasonable interpretations.

“The AP also did an extensive investigation into Obama’s handling of the Gulf spill, and concluded it ‘shows little resemblance to Katrina,’ ” writes Sargent. “As [liberal Washington Monthly blogger] Steve Benen noted in lauding this effort, the AP definitively debunked a key media narrative as ‘baseless.’ ” One could ask whether the BP oil spill was being compared with Katrina simply because of its relative proximity and public opinion that the Obama administration handled the crisis similarly poorly. But why bother? The very idea of fact checking a broad comparison should send readers who give a damn about facts screaming for the exits.

Unlike individual facts, broad comparisons present a much larger task for a fact checker. Although one could theoretically draw a nearly unlimited set of potential comparisons, the overwhelming majority of them are likely to be trivial. As a result, many broad comparisons can be reduced to a small, manageable set of significant, comparable facts, and when that is the case, broad comparisons are actually quite useful. For instance, it is possible to compare the significant aspects of Romneycare and Obamacare to determine just how similar those bills actually are. It is also possible to determine whether comparisons of a given politician to Hitler are justified. To be fair, some broad comparisons cannot be reduced to a manageable set of significant, checkable facts. But a reader who runs "screaming for the exits" merely because they are confronted with a broad comparison has likely fallen for the fallacy that any comparison involving more than a single fact must be impossibly large to check. Such a fallacy has no place in the practice of fact checking.



To be continued in part 3 of "The Weekly Standard Attacks Fact Checkers"