In a recent post where I critiqued a list of issues that should "keep discerning readers from trusting Politifact", I was alerted to an article examining potential selection bias in how Politifact chooses which statements to rate. The article, entitled "Selection Bias? PolitiFact Rates Republican Statements as False at 3 Times the Rate of Democrats," was written by Eric Ostermeier, "Research Associate at the Humphrey School's Center for the Study of Politics and Governance." It has also been cited in a recent attack on fact checkers by The Weekly Standard (3-post critique: 1,2,3). Given the attention the article received, I thought it would be useful to see whether it actually gives any good reason to suspect selection bias at Politifact.
First off, it should be noted that selection bias occurs whenever a sample used for statistical analysis has not been chosen randomly. As any reader can plainly see, Ostermeier's article does not use a random sample of statements from Politifact: he chose statements from January 2010 through January 2011. If those dates were truly arbitrary, the lack of random sampling would not matter. However, as I intend to show in this post, choosing different dates can lead to remarkably different results. I will do a point-by-point critique of Ostermeier's article in another post; here, I simply want to see whether choosing different dates makes a difference to his results.
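To make the point concrete, here is a toy simulation, using entirely made-up data rather than anything from Politifact, of how an estimate computed from a non-random time window can drift away from the true overall rate when the underlying rate changes over time:

```python
import random

random.seed(0)

# Toy population: 1000 statements in time order. The chance that a statement
# is rated "false" drifts from 10% early in the period to 40% late in it.
# These numbers are invented purely for illustration.
population = [(t, random.random() < 0.10 + 0.30 * t / 999) for t in range(1000)]

# Overall false rate across the whole period.
overall = sum(is_false for _, is_false in population) / len(population)

# Non-random sample: only the latest 20% of the period.
window = [is_false for t, is_false in population if t >= 800]
windowed = sum(window) / len(window)

print(f"overall false rate:  {overall:.0%}")
print(f"windowed false rate: {windowed:.0%}")
```

The windowed estimate comes out well above the overall rate, not because the rater changed, but because the window was not chosen randomly.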
IS IT APPROPRIATE TO DRAW CONCLUSIONS FROM 2010 ALONE?
It is interesting to read Ostermeier's findings and see just how heavily Republican officeholders are over-represented in the lower ratings categories during 2010 (note that when I say 2010 in this post, I am including January 2011 to match Ostermeier's article). However, remember what I said earlier: these findings are not based on a truly random sample. This should raise the question of whether the trend existed in previous years. I decided to do a quick analysis of the eight months of 2007 during which Politifact first started rating statements from politicians. I chose this time period because it was relatively short and easy to analyze. My results turned out to be quite a bit different from Ostermeier's.
Let's compare these results to the findings of Ostermeier for 2010:
- Out of 39 statements rated "False" or "Pants on Fire," each major party received approximately 50% of the rulings. This is markedly different from 2010, when Republicans received 75.5% of these ratings.
- Politifact devoted approximately equal attention to Republicans (124 statements, 52%) and Democrats (110 statements, 46%). This is nearly the same as 2010.
- Republicans were graded in the "False" or "Pants on Fire" categories 16% of the time (versus 39% in 2010), while Democrats were rated in these categories 16% of the time as well (versus 12% in 2010). This is nowhere close to the Republican "super-majority" found in 2010.
- 73% of Republican statements in 2007 received one of the top 3 ratings (47% in 2010), compared with 75% of Democratic statements (75% in 2010). Looking at "True" ratings alone, Republicans actually beat Democrats, with 5% more of their statements receiving the coveted rating. It is hard to see how either party could be considered more or less truthful than the other; it depends on how much credit you give a "Half True" rating as opposed to a "True" rating.
- Republicans received a slightly larger percentage of "Mostly False" ratings than Democrats (by 1.79 percentage points), as they did in 2010. However, this produces only a 2% difference between Republicans and Democrats across the bottom 3 ratings. This is MUCH different from the 28% difference in 2010.
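To put the contrast in one place, the key percentages can be lined up side by side. This is just arithmetic on the "False"/"Pants on Fire" shares quoted in this post, not a fresh pull of Politifact's data:

```python
# Shares of each party's statements rated "False" or "Pants on Fire,"
# as quoted in this post for the two time windows.
periods = {
    "2007 (first 8 months)": {"Republican": 0.16, "Democrat": 0.16},
    "2010 (Jan 2010 - Jan 2011)": {"Republican": 0.39, "Democrat": 0.12},
}

for period, rates in periods.items():
    # Gap between the parties' shares of bottom-two ratings in this window.
    gap = rates["Republican"] - rates["Democrat"]
    print(f"{period}: R-D gap in False/Pants on Fire share = {gap:+.0%}")
```

The gap is essentially zero in the 2007 window and large in the 2010 window, which is exactly why the choice of dates matters so much.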
This is a great example of why every reader should put on his/her critical thinking cap when analyzing statistics. Don't leave home without it!