Sunday, January 8, 2012

Politifact Bias (or not) [Defending the Truth-O-Meter]

The blog "Politifact Bias" seems to be a pretty popular blog for conservatives who want to confirm their poorly-justified belief that the neutral fact-checking site known as Politifact is actually left-biased (at least according to Google search results). I have yet to look at this blog directly since they rarely publish original posts. However, I thought I may as well check out their "PolitiFact 2011: A review" post to get an idea of what they considered to be significant evidence of the diagnosed liberal bias of Politifact in 2011.1 As far as I can tell, this blog merely collects criticisms of Politifact they find around the internet. In my experience, these criticisms can be extremely poor. They listed "a rundown of the issues that should keep discerning readers from trusting PolitiFact:"
1)  PolitiFact persistently ignores the effects of selection bias.  It simply isn't plausible that editors who are very probably predominantly liberal will choose stories of interest on a neutral basis without some systematic check on ideological bias.  PolitiFact, for example, continues to publish candidate report cards as though selection bias has no effect on the report card data.
Basically, the linked article4 is a great example for a subject I was planning on writing about, entitled "The Wrong Ways To Detect Bias." One of the methods I was planning to discuss is exactly what these people did. They just compiled a list of articles on Politifact over the course of a year and came to the conclusion that, since the worst ratings tend to favor republicans more than democrats, they must be biased in their selection in favor of the left. The problem with this is that their conclusion simply does not follow. There are other possible reasons for this kind of result that have nothing specifically to do with Politifact being biased.
  1. To give them the most credit, it is possible there may be some liberal selection bias at work here, but not in the way they think. Politifact often gets ideas for statements to check from their readers. It is possible most of their readers are liberal, which sounds quite plausible given the fact that conservatives seem so ready to accuse Politifact of bias. This is no fault of Politifact, nor does it show any bias on their part. The fault lies squarely with Republicans who chose not to submit statements to Politifact and instead waste their time complaining about perceived bias (a vicious cycle). It should also be noted that the definition of selection bias is "a type of bias caused by choosing non-random data for statistical analysis." Well duh! The article itself mentions that Politifact's Bill Adair noted, "We choose to check things we are curious about. If we look at something and we think that an elected official or talk show host is wrong, then we will fact-check it." That right there should show that there is selection bias, though not liberal selection bias. However, the article intends to show that the bias is toward the left, instead of toward important statements. If the article were intended merely to accuse Politifact of selection bias, it would basically be attacking a straw man, since the main goal of Politifact is not to tally true and false statements to figure out which candidates are more honest (although this does happen occasionally). It should also be noted that the selection of Politifact articles used in this calculation is very much non-random. As the article mentioned, the time period selected was one when Democrats were in power. This could have easily affected the results, as I will get into later.

  2. Could it be that Republicans were telling falsehoods more often during that time period? The article does mention this. "One could theoretically argue that one political party has made a disproportionately higher number of false claims than the other, and that this is subsequently reflected in the distribution of ratings on the PolitiFact site. However, there is no evidence offered by PolitiFact that this is their calculus in decision-making." Yet they don't look for evidence that their explanation is the better one. To show the fallacy in this kind of thinking, let me present an analogy: Let's say I have two dogs that roam free in my house all day, Rex and Dex. I come home one day to find my couch clawed to pieces. I cannot find evidence that Rex did it, so I assume Dex did it (even though I found no evidence specifically pointing to Dex). This is exactly the same fallacy they employ. They don't specifically have evidence of a Republican tendency towards falsehoods, so they assume it must be a Politifact tendency towards liberal bias. Like I noted earlier, the article did not randomly choose articles. Could it be that, since the Republican Party was out of power, they had a greater tendency towards falsehoods? Could it be that the ratings success of Fox News created a podium for more Republican falsehoods? Could it be that, with the rise of the right-wing Tea Party, extreme views have tended to produce more falsehoods? Could it be that the possible rise of anti-intellectualism in the Republican Party has made it more likely to back falsehoods? Could it be that the Republican Party itself is more prone to being dishonest? Could the Republican Party be more ideological and less prone to checking facts that could challenge its ideology? Could it be that the Republican tendency to call neutral fact-checking sites "biased" creates a tendency to play fast and loose with the truth?
These are all possibilities (although I am not necessarily claiming any of them to be true). So did the article control for or check any of these? NOPE. They could have checked other neutral fact-checking sites to see if they came to similar conclusions as Politifact. They could have compared these results with articles from before Obama was elected. Instead, they dismissed these hypotheses with no better reason than they would have had for dismissing their own.

  3. It could be that the significance of the truer statements was much higher than that of the false statements. Not all statements are equal. Chris Christie's statement about the city of Newark's graduation rate has far less of an impact on party image than the Democratic claims about Republicans trying to end Medicare. An overall look at a large number of graded statements can give you a decent look at a candidate's honesty, but you need to actually check the statements to see how significant they are. If Politifact is giving Democrats more high-profile "False" ratings, then it doesn't matter that there are fewer of them. The primary focus of Politifact, once again, is for readers to read about the ratings they give these statements, not to tally them. So a handful of high-profile falsehoods for Democrats can look much worse than a bunch of inconsequential falsehoods for Republicans.

It should be pretty clear that merely tallying Politifact ratings doesn't give you anything remotely conclusive about the theoretical bias of a site.
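To make the selection-bias point concrete, here is a toy Python simulation. Every number in it is invented purely for illustration; it does not model Politifact's actual process or any real data. Both simulated parties are equally honest, but reader tips catch one party's false statements more reliably, so the share of "False" ratings on the resulting report card skews against that party even though there is no editor bias and no honesty gap:

```python
import random

random.seed(1)

# Toy model -- every rate here is invented for illustration and does NOT
# reflect Politifact's actual process or any real data.
# Both parties make false statements at the same 30% rate, but reader tips
# catch false "R" statements more reliably than false "D" statements.

FALSE_RATE = 0.30
P_TIP_IF_FALSE = {"R": 0.8, "D": 0.4}  # hypothetical tip accuracy per party
P_TIP_IF_TRUE = 0.1                    # true claims occasionally get flagged too

def report_card(n_statements=20000):
    """Tally ratings for the statements that actually get checked."""
    card = {"R": [0, 0], "D": [0, 0]}  # [True ratings, False ratings]
    for _ in range(n_statements):
        party = random.choice("RD")
        is_false = random.random() < FALSE_RATE
        p_tip = P_TIP_IF_FALSE[party] if is_false else P_TIP_IF_TRUE
        if random.random() < p_tip:    # only tipped statements get checked
            card[party][1 if is_false else 0] += 1
    return card

card = report_card()
for party, (true_n, false_n) in card.items():
    share = false_n / (true_n + false_n)
    print(f"{party}: {false_n} False ratings ({share:.0%} of checked statements)")
```

Running this with different seeds gives the same qualitative result: the party whose false claims are flagged more reliably ends up with a visibly worse report card. That is all selection bias needs in order to produce skewed tallies.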
2)  PolitiFact continues to publish obviously non-objective stories without employing the journalistic custom of using labels like "commentary," "opinion" or even "news analysis."  Readers are implicitly led to believe that stories like an editorial "Lie of the Year" selection are objective news stories.
Any evidence for this? A link or two, maybe? If they wrote an article about this, it would be useful at this point. I may search for an article in their archives and come back to this later. As it stands, this is an unsubstantiated statement. I will, however, agree that "Lie of the Year" is a bad title, since there is a difference between people who lie (know their statements are false) and people who are careless with the truth (don't necessarily know their statements are false). However, Politifact does have some leverage here, since the 2009, 2010, and 2011 Lies of the Year have been tackled by Politifact, as well as other fact-checking sites, so often that it is hard to imagine these politicians do not know their claims have been debunked. It is also good to note that Politifact lets readers know the "Lie of the Year" is just a consensus opinion of their editors and reporters, not objective news. So readers are actually not led to believe the Lie of the Year is an objective news story.
3)  PolitiFact continues to routinely apply its principles of analysis unevenly, as with its interpretation of job creation claims (are the claims assumed to refer to gross job creation or net job creation?).
The link provided goes to a re-post of this exact same blog entry. I'm interested to see what they are talking about. If I find evidence I will post my critique.3
4)  PolitiFact has yet to shake its penchant for snark.  Snark has no place in objective reporting (see #2 above).  Unfortunately, PolitiFact treats it like a selling point instead of a weakness, and PolitiFact's intentional use of it has apparently influenced Annenberg Fact Check to follow suit.
I'm not 100% sure what he means by "snark."2 However, I will assume he means the use of sarcasm or malice in speech. I'm not sure I have ever seen sarcasm. But I fail to see the problem here. Politifact is in the business of spreading awareness of fact-checking. Snark helps keep readers interested and coming back for more (although it's nowhere near as common as in Glenn Kessler's articles). Overall, it has no negative effect on the actual research. In fact, it can help get readers fired up about falsehoods and dishonesty. Other than personal taste, I see no problem with this. Any case of "snark" I have ever seen has been quite subtle, meaning I cannot see it honestly turning away any readers who care more about reality than ideology.

It seems as though none of these "issues" should "keep discerning readers from trusting Politifact" so long as they care about quality fact-checking to help ensure they don't subscribe to false beliefs. Only one of these "issues" even had a linked article, and the content of that linked article came nowhere near demonstrating bias at Politifact. I will emphasize that showing the existence of bias is no easy task (usually). But if you are having THAT much trouble exposing bias, maybe you should wonder if the bias really exists at all...


1 - Update 1-10-12: The writers of "Politifact Bias" have informed me that this post was meant to be just a summary, and was not meant to give significant evidence of bias. However, this does not excuse the fact that they did list "the issues that should keep discerning readers from trusting PolitiFact," without providing evidence for a few of them. This does not change the rest of this post because all this post does is check to see if these "issues" significantly exist at all.

2 - Update 1-10-12: The writers of "Politifact Bias" have informed me that, by snark, they mean that Politifact is conveying their own biases. When I asked for an example, they provided me with a quote from a July 2011 Politifact article talking about Newt Gingrich: "With the 2012 election in their sights, Republican candidates spend most of their time trying to prove that President Barack Obama and the Democrats will make the economy worse." However, this is a terrible example, given that this is nothing but an uncontroversial observation. There is no hint of the reporter's feelings, biases, or prejudices. Nor is there any hint of unnecessary partisan adjectives. This example fails to substantiate their claim that Politifact is guilty of "snark."


Update 1-10-12: I have edited this post to enhance readability. No significant changes were made to the content.
 
3 - Update 1-10-12: A writer of "Politifact Bias" has informed me that the link was to a keyword search on one of his blogs (which seems a bit unnecessarily distracting and cryptic). He verified that a certain post did a good job underscoring his complaint. In this post he mostly just lists each case and states whether it was helped or harmed by the inclusion/exclusion of "gross jobs." The problem is he mostly fails to investigate context. There are some statements where it is appropriate to look at net jobs, such as a statement that looks at "jobs added" as a whole. Conversely, there are statements where it is appropriate to ignore lost jobs. When a politician states the stimulus did not create one job, it doesn't matter how many jobs were already lost as a result of the recession, just so long as at least one job was directly created by the stimulus. Since the mechanism for lost jobs and the mechanism for gained jobs are different, it is more than appropriate to ignore what has nothing to do with the mechanism under question. For example, if I went to a village under attack by militants and pulled a single child to safety, thereby saving its life, should I not be credited with saving that kid's life just because more than one life was lost in that village from the attack? Any reasonable person would find this absurd. Granted, he attempted an investigation into the context of choosing based on the word "created." I will give him some credit for that, but he did not investigate whether or not Politifact was consistent when the mechanism that "created" jobs was the same as the mechanism that caused jobs to be lost. He simply focused on the word "created." As the stimulus example shows, his list definitely ignores important context. It should also be noted that Politifact has actually made the context used in their decision making well known (see last link).

4 - Update 1-16-12: I have completed a comprehensive critique of this article as well.

45 comments:

  1. You're pretty good at jumping to conclusions. Don't fret. More later. I'm busy. :-)

    ReplyDelete
  2. Absolutely: I've been busy conducting a study that objectively verifies PolitiFact's ideological bias. I've got all the information for the national operation collected and sorted, and I'm in the process of readying it for publication.

    Oh, you meant an example of you jumping to conclusions. Sure. You wrote:

    "As far as I can tell, this blog merely collects criticisms of Politifact they find around the internet."

    You could have easily found content at PolitiFact Bias produced entirely by the authors, including (but not limited to) criticisms of PolitiFact posted at our independent blogs.

    How about another? You wrote (concerning Dr. Ostermeier's research):

    "They just compiled a list of articles on Politifact over the course of a year and came to the conclusion that, since the worst ratings tend to favor republicans more than democrats, they must be biased in their selection in favor of the left."

    I pointed out to you in a commentary thread how you misreported Ostermeier's method. Ostermeier pointed out that ratings for officeholders and former officeholders for each party were fairly equally divided as to the total number. If the selection process was blind to party, then we should expect the proportion of ratings in each category to turn out fairly even over time. And if one party was guilty of more dishonesty then that would show up in the total number of ratings rather than as a disparity in the average truthfulness of one party compared to the other.

    You're one among many who totally ignored that aspect of Ostermeier's work.

    (original content at PFB):
    http://politifactbias.blogspot.com/2011/12/bill-adair-you-who-criticize-us-are-in.html

    ReplyDelete
  3. "You could have easily found content at PolitiFact Bias produced entirely by the authors, including (but not limited to) criticisms of PolitiFact posted at our independent blogs."

    I was talking about the Politifact Bias blog itself. Most of what I saw was just reports of other people's stuff with a little bit of commentary. You are correct that I shouldn't have used the word "merely." As far as I can tell, the word "mostly" would be a better fit. Just pointing to an article and saying one or two things seems to be a pretty common practice on your site. And one or two small comments on an article someone writes hardly constitutes much, if any, original work. Not that this is a bad thing. I do this quite a bit on my blog as well. However, it makes me see no reason to go through and dedicate too much time to critiquing your site. And, after the conversation I had with you on your blog, I see even less of a reason. Honestly, how often do I need to reinvent the wheel and explain to you the most basic of concepts? Other than a post covering your errors in calling Politifact inconsistent over "net" vs. "gross" jobs, I see no reason to continue reading the drivel on your site.

    "I pointed out to you in a commentary thread how you misreported Ostermeier's method. Ostermeier pointed out that ratings for officeholders and former officeholders for each party were fairly equally divided as to the total number. If the selection process was blind to party, then we should expect the proportion of ratings in each category to turn out fairly even over time. And if one party was guilty of more dishonesty then that would show up in the total number of ratings rather than as a disparity in the average truthfulness of one party compared to the other."
    That was not the intent of the entire article. He did indeed make a point early on that the number of statements was equal for both parties. But I see no place where he uses that fact anywhere else to make a point. I also see nowhere that he makes the specific point you are making. Where is it?

    ReplyDelete
  4. In fact, although he did mostly steer away from supporting certain conclusions, he did occasionally endorse a few:
    "PolitiFact chose to highlight untrue statements made by those in the party out of power."


    "One could theoretically argue that one political party has made a disproportionately higher number of false claims than the other, and that this is subsequently reflected in the distribution of ratings on the PolitiFact site.

    However, there is no evidence offered by PolitiFact that this is their calculus in decision-making.
    "
    Where does he make any points here relating to the number of reviewed officeholders?

    His response is that Politifact doesn't make any direct indication they are trying to be fair and balanced in their decision making, which, as I pointed out in my comprehensive critique, cannot alone show bias without succumbing to an argument from ignorance.

    "The question is not whether PolitiFact will ultimately convert skeptics on the right that they do not have ulterior motives in the selection of what statements are rated, but whether the organization can give a convincing argument that either a) Republicans in fact do lie much more than Democrats"
    He does actually endorse an argument from ignorance here, where Politifact is given the task of proving they have no "ulterior motives" before Republicans can reject the idea that they do.

    "By levying 23 Pants on Fire ratings to Republicans over the past year compared to just 4 to Democrats, it appears the sport of choice is game hunting - and the game is elephants."
    This is an explicit endorsement of the party-related selection bias explanation. And of course, one look at 2007 and this conclusion seems quite ridiculous.

    ReplyDelete
  5. I have actually gone through and done a comprehensive critique on his article:
    http://contentinreality.blogspot.com/2012/01/politifact-selection-bias-or-bad.html


    "Absolutely: I've been busy conducting a study that objectively verifies PolitiFact's ideological bias. I've got all the information for the national operation collected and sorted, and I'm in the process of readying it for publication."
    Given what I've seen of your work, you may excuse me if I am less than impressed.


    "If one party was guilty of more dishonesty then that would show up in the total number of ratings rather than as a disparity in the average truthfulness of one party compared to the other."
    This looks to be an argument you are making yourself. However, I am unconvinced as to why this would be the case. It seems like you would have to start with the assumption that Politifact's methods are good for finding false claims. Given the fact that, in 2007, ~75% of all ratings were in the top 3 ratings, I see no reason to think this is the case. Plus, the repeating of falsehoods already debunked (many of Politifact's ratings are for already debunked statements) could skew the results as well. Either way, your premise is far from intuitive, so you may need to back it up.

    ReplyDelete
    "You are correct that I shouldn't have used the word 'merely.'"

    Ok, that's one leap to a conclusion.

    The second ...

    "That was not the intent of the entire article. He did indeed make a point early on that the number of statements was equal for both parties. But I see no place where he uses that fact anywhere else to make a point. I also see nowhere that he makes the specific point you are making."

    You're correct that Ostermeier doesn't offer a specific explanation, but it's obvious to a researcher. An even number of overall ratings for each party suggests random sampling (it's the type of number one should expect from a designed random sample intended to gauge party truthfulness). Ostermeier also quoted Adair saying that PF editors look for statements that are incorrect. Why the much higher failure rate with Democratic Party subjects? Does their ability to detect error partially short out?

    But I see you've already revisited the subject in more detail in another post, backing off your assumption that Ostermeier "just compiled a list of articles on Politifact over the course of a year and came to the conclusion that, since the worst ratings tend to favor republicans more than democrats, they must be biased in their selection in favor of the left."

    So that's two. Want another?

    ReplyDelete
  7. "You're correct that Ostermeier doesn't offer a specific explanation, but it's obvious to a researcher."
    Once again, this is YOUR conclusion. Where did HE ever suggest this?

    "An even number of overall ratings for each party suggests random sampling (it's the type of number one should expect from a designed random sample intended to gauge party truthfulness)."
    Unless, that is, you have ever read any article from Politifact, where it is clear they don't take random samples. In fact, it wouldn't make sense for a fact-checking site to randomly select statements, for reasons I explained in our previous conversation on your post.

    "Ostermeier also quoted Adair saying that PF editors look for statements that are incorrect. Why the much higher failure rate with Democratic Party subjects? Does their ability to detect error partially short out?
    I already answered this: "It seems like you would have to start with the assumption that Politifact's methods are good for finding false claims. Given the fact that, in 2007, ~75% of all ratings were in the top 3 ratings, I see no reason to think this is the case. Plus, the repeating of falsehoods already debunked (many of Politifact's ratings are for already debunked statements) could skew the results as well."

    "But I see you've already revisited the subject in more detail in another post, backing off your assumption that Ostermeier "just compiled a list of articles on Politifact over the course of a year and came to the conclusion that, since the worst ratings tend to favor republicans more than democrats, they must be biased in their selection in favor of the left."
    Actually, the conclusion of that article states nearly the same thing:
    "Throughout most of this article, Ostermeier presents interesting data for Politifact in 2010. Although there are many possible explanations of the data, Ostermeier only really focuses on one. In addition, he ignores at least one other for no better reason than he should have for ignoring the one he focuses on as well. Ostermeier fails to ask many questions, such as whether or not the trends he found in 2010 even existed in previous years. A quick analysis of Politifact in 2007 showed this is not the case for at least one year. In the end, Ostermeier makes it clear he thinks Politifact is guilty of party-related selection bias, succumbing to the same fallacies any reader would if he/she came to the same conclusion from the information presented in this article alone"

    "So that's two. Want another?"
    You are still at one.

    ReplyDelete
  8. "I have yet to look at this blog directly since they rarely publish original posts."
    Actually, looking back at this, I think I meant to print "mostly" instead of "merely." I'll still give you this one, but it looks more like a typo than a jump to a conclusion.

    ReplyDelete
  9. "Where did HE ever suggest this?"

    It's implicit in the data. Examine yourself for argumentum ad ignorantiam.

    "You are still at one."

    Not if you don't have a better argument for Ostermeier not considering the distribution of stories other than via the appeal to silence. It is the equivalent of jumping to a conclusion.

    ReplyDelete
  10. "It's implicit in the data."
    Where?

    "Not if you don't have a better argument for Ostermeier not considering the distribution of stories other than via the appeal to silence. It is the equivalent of jumping to a conclusion."

    I'm still waiting for you to demonstrate to me that he made this point at all. You are the one claiming he did so you have the burden of proof. You claim I missed the point yet you cannot show me where he ACTUALLY made this point. Instead you give me special pleading ("it's obvious to a researcher"). Either show me where it is or give it up. Until you do, you are still at one.

    ReplyDelete
  11. "I'm still waiting for you to demonstrate to me that he made this point at all."

    I know that's what you're doing. That's why I explained to you that your approach fits the pattern of argumentum ad ignorantiam.

    "You are the one claiming he did so you have the burden of proof."

    No, you are the one who claimed that all he did was compare the ratings for one party as opposed to the other, which is a claim that he did not take other factors into account. I'm saying that you jumped to conclusions. As you made the original claim the burden rests with you. And, as I've pointed out, you'll need something better than an appeal to silence.

    "You claim I missed the point yet you cannot show me where he ACTUALLY made this point."

    Sure I can (assuming that the fallacy of invincible ignorance will not come into play). But that's a different subject. The present subject is whether or not you jumped to conclusions.

    "Instead you give me special pleading ("it's obvious to a researcher")."

    LMAO. That's not special pleading unless you construct an entirely new context for the remark. And if you do that it's a straw man. Something obvious to a researcher need not be invisible to the layman.

    ReplyDelete
  12. "I know that's what you're doing. That's why I explained to you that your approach fits the pattern of argumentum ad ignorantiam."
    An argument from ignorance only applies if I'm making a claim. However, you are the one making the claim:
    "Ostermeier pointed out that ratings for officeholders and former officeholders for each party were fairly equally divided as to the total number. If the selection process was blind to party, then we should expect the proportion of ratings in each category to turn out fairly even over time."
    I am merely asking you for evidence of the claim, not making a counter claim myself. This is no fallacy.

    "No, you are the one who claimed that all he did was compare the ratings for one party as opposed to the other, which is a claim that he did not take other factors into account."
    Which I provided evidence for by showing every claim he made. You are claiming he said other things. You have the burden of proof. I cannot prove he did not make a claim. This is impossible.

    "I'm saying that you jumped to conclusions. As you made the original claim the burden rests with you. And, as I've pointed out, you'll need something better than an appeal to silence."
    Your counter to my claim is that I missed a claim. This is a claim on your part, for which you have the burden of proof. Once again, I cannot prove a negative.

    "Sure I can (assuming that the fallacy of invincible ignorance will not come into play). But that's a different subject. The present subject is whether or not you jumped to conclusions."
    It is this subject because, in order to substantiate the claim that I jumped to conclusions, you have to show me where he made a claim I did not list.

    "LMAO. That's not special pleading unless you construct an entirely new context for the remark. And if you do that it's a straw man. Something obvious to a researcher need not be invisible to the layman."
    Instead of giving me evidence, you attempted to argue you don't need evidence for this claim because "it's obvious to a researcher." You can't claim something is true because "it's obvious to a researcher." That's the equivalent of saying "you just don't understand because you're not a researcher." Stop dodging the question. If you have a claim, you need to substantiate it or stop saying it.

    ReplyDelete
  13. "An argument from ignorance only applies if I'm making a claim. However, you are the one making the claim"

    The claim you quote represents a change of topic. Our topic is your claim "They just compiled a list of articles on Politifact over the course of a year and came to the conclusion that, since the worst ratings tend to favor republicans more than democrats, they must be biased in their selection in favor of the left." You asked for an example of you jumping to conclusions. That's the example. If you justify your claim by Ostermeier's silence you're very likely guilty of the fallacy of argumentum ad ignorantiam. Dodging won't help.

    "I provided evidence for by showing every claim he made."

    And since you didn't find it you're arguing from silence. No? Is that a good argument, IYO?

    "I cannot prove he did not make a claim. This is impossible."

    Gracious me, it almost looks like you're admitting that you had to assume your conclusion.

    "Your counter to my claim is that I missed a claim."

    If by that you mean that I'm saying the claim is implicit in the post, yes, that was my counterclaim. But I can always leave you with the burden of proof if I like since you made the claim in question (that Ostermeier did not consider factors other than the proportions of harsh grades in drawing his conclusion).

    "It is this subject because, in order to substantiate the claim that I jumped to conclusions, you have to show me where he made a claim I did not list."

    Really. Why can't I point to your reliance on argumentum ad ignorantiam along with your (incorrect, btw) claim that you can't prove a negative?

    "Instead of giving me evidence, you attempted to argue you don't need evidence for this claim because 'it's obvious to a researcher.'"

    Incorrect. Let's review: "You're correct that Ostermeier doesn't offer a specific explanation, but it's obvious to a researcher. An even number of overall ratings for each party suggests random sampling (it's the type of number one should expect from a designed random sample intended to gauge party truthfulness). Ostermeier also quoted Adair saying that PF editors look for statements that are incorrect. Why the much higher failure rate with Democratic Party subjects? Does their ability to detect error partially short out?"

    That's not stonewalling you on evidence, it's explaining to you the inference that a researcher would make. Try again.

    ReplyDelete
  14. "The claim you quote represents a change of topic. Our topic is your claim "They just compiled a list of articles on Politifact over the course of a year and came to the conclusion that, since the worst ratings tend to favor republicans more than democrats, they must be biased in their selection in favor of the left." You asked for an example of you jumping to conclusions. That's the example. If you justify your claim by Ostermeier's silence you're very likely guilty of the fallacy of argumentum ad ignorantiam. Dodging won't help."
    Yes, I made THAT claim. But I also backed it up. I showed where he presented data and I showed where he made the claim. I also pointed out the large number of questions he should have asked. For the purposes of that article, he jumped to conclusions. He never even claimed he did anything more than what he wrote about, so this hardly counts as an argument from silence. If he had said "I investigated all possibilities and came to the conclusion that Politifact is guilty of bias (but they are not listed)" and I concluded "he didn't do that because he never listed those investigated possibilities," then yes, I would be guilty of the argument from silence. However, he never made mention of any of that in the article. He instead wrote as if he came to his conclusion based on the data he presented.

    "And since you didn't find it you're arguing from silence. No? Is that a good argument, IYO?"
    Let me give you an analogy. I go into a 10x10x10 room, look around in every corner, every nook and cranny (aka a comprehensive examination of the room), and I find no giant beasts, so I conclude that none are there. You say, "but there is an 8 foot tall gorilla in the room. And since you didn't find it you're arguing from silence. No? Is that a good argument, IYO?"
    You see how absurd such an argument is.
    Now let's see if this analogy applies: I look at Ostermeier's article, check every paragraph, all data, suggestions, and conclusions (aka a comprehensive examination of the article) and I find no arguments other than the ones I listed, so I conclude that no more exist. You say, "but there is another argument. And since you didn't find it you're arguing from silence. No? Is that a good argument, IYO?"
    I did my due diligence and did a comprehensive analysis of the article. We are in a case where we would expect him to give the evidence in his article.

    "Gracious me, it almost looks like you're admitting that you had to assume your conclusion."
    No, I provided evidence for the conclusion. I obviously cannot prove it, as you want me to do.

    "If by that you mean that I'm saying the claim is implicit in the post, yes, that was my counterclaim. But I can always leave you with the burden of proof if I like since you made the claim in question (that Ostermeier did not consider factors other than the proportions of harsh grades in drawing his conclusion)."
    You have provided no evidence that Ostermeier ever made this claim. You just weaseled out of it by calling it implicit. Show me where he makes any suggestion that this is the claim he makes.

    ReplyDelete
  15. "Really. Why can't I point to your reliance on argumentiam ad ignorantiam along with your (incorrect, btw) claim that you can't prove a negative?"
    You can, but I already showed you how absurd it is to call that a fallacy in this situation. There is an expectation that, if he were factoring other data into his conclusion, he would mention it, just like there is an expectation that if there really were a gorilla in the room, I would be able to see it.

    "Incorrect. Let's review: You're correct that Ostermeier doesn't offer a specific explanation, but it's obvious to a researcher. An even number of overall ratings for each party suggests random sampling (it's the type of number one should expect from a designed random sample intended to gauge party truthfulness). Ostermeier also quoted Adair saying that PF editors look for statements that are incorrect. Why the much higher failure rate with Democratic Party subjects? Does their ability to detect error partially short out?"
    As I pointed out already, this still sounds like it is YOUR conclusion. Just because he presents data and YOU draw a conclusion from it doesn't mean he did. You need to provide evidence that HE made this claim before you can fault me for ignoring it.

    "That's not stonewalling you on evidence, it's explaining to you the inference that a researcher would make."
    Yet you give me no evidence he actually made this claim. If you are trying to say a researcher would make this inference, I don't agree. It requires at least one assumption he cannot make (that Politifact's methods are efficient at finding false claims) and other assumptions he did not point out (e.g., that repetition of false claims was at a minimum). So I am unconvinced he made this inference.

    ReplyDelete
  16. "Yes I made THAT claim. But I also backed it up. I showed where he presented data and I showed where he made the claim"

    Oh, yeah, that was great. That was the time you took his kicker line at the end out of context to make it seem like it was his conclusion. Brilliant stuff. Here's a clue: Sometimes what is written down does not reflect everything that was done. That's your cue to moan again about how you can't prove a negative.

    "Let me give you an analogy."

    So, by analogy are you telling me that you searched completely through Ostermeier's reasoning process (speaking of absurdities)?

    "We are in a case where we would expect him to give the evidence in his article"

    Are you telling me that if he doesn't give the evidence in the article that you are therefore justified in making an assumption? Or do you wish to keep greater distance from the precipice than that?

    Why do you think he mentioned the even breakdown of total ratings according to party? Because it was of no importance?

    "You have provided no evidence that Ostermeier ever made this claim."

    Let's suppose for the sake of argument that you're right. Why would I need to provide any evidence of it unless you had passed to me your burden of proof for showing that it was not part of his reasoning process? If you don't bear your burden of proof then it appears to follow that you jumped to conclusions.

    "You need to provide evidence HE made this claim before you can fault me for ignoring it."

    So, implicit claims are impossible (Gotta be careful here--that could almost pass as an implicit claim that implicit claims are possible!)?

    2
    2
    4

    Not that I'm implying anything.

    "If you are trying to say a researcher would make this inference, I don't agree. It requires at least one assumption he cannot make (Politifact's methods are efficient at finding false claims)."

    lol

    Try again. That assumption isn't necessary. The default assumption (to indicate a lack of bias) is that PF's methods are equally efficient whether Democrats or Republicans are involved. The degree of efficiency doesn't matter. The disparity in the data matters. If everyone gets a "True" there's no evidence of bias. If everyone gets a "False" there's no evidence of bias.
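The "disparity" argument above can be sketched numerically. This is my own illustration, not anything from the thread: a chi-square statistic on a 2x2 table of party vs. harsh/other ratings, using invented counts (not Ostermeier's actual data), to show that the test depends only on the disparity between parties, not on the absolute hit rate.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under the null hypothesis that rating harshness
    # is independent of party.
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Rows: party; columns: (harsh ratings, other ratings).
# These counts are invented purely for illustration.
rep_harsh, rep_other = 74, 26
dem_harsh, dem_other = 38, 62

stat = chi_square_2x2(rep_harsh, rep_other, dem_harsh, dem_other)
print(f"chi-square = {stat:.1f}")  # large value => proportions differ by party
```

If both parties drew the same proportion of harsh ratings (e.g., everyone "True" or everyone "False"), the statistic would be zero, matching the commenter's point that the degree of efficiency drops out and only the cross-party disparity matters.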

    ReplyDelete
  17. "Oh, yeah, that was great. That was the time you took his kicker line at the end out of context to make it seem like it was his conclusion. Brilliant stuff. Here's a clue: Sometimes what is written down does not reflect everything that was done. That's your cue to moan again about how you can't prove a negative."
    You still say I took his line out of context. Evidence Please?

    "So, by analogy are you telling me that you searched completely through Ostermeier's reasoning process (speaking of absurdities)?"
    Absolutely. I dedicated a lengthy blog entry to it if you didn't notice.

    "Are you telling me that if he doesn't give the evidence in the article that you are therefore justified in making an assumption? Or do you wish to keep greater distance from the precipice than that?"

    When he appears to use the evidence in the article to draw his conclusions, then yes. I can't just assume every article I read has some hidden evidence unknown to me and unmentioned by the author. That is absurd.

    "Why do you think he mentioned the even breakdown of total ratings according to party? Because it was of no importance?"
    He never explicitly uses the statistic, but I would venture to guess that if someone were to see a ton of "False"s and PoFs, they might be tempted to think Republicans were just graded more often.

    "Let's suppose for the sake of argument that you're right. Why would I need to provide any evidence of it unless you had passed to me your burden of proof for showing that it was not part of his reasoning process? If you don't bear your burden of proof then it appears to follow that you jumped to conclusions."
    Once again: you made the claim that Ostermeier made this point. If you had merely accused me of an argument from ignorance, that would be one thing (and I have already responded to that). However, you ALSO claimed that Ostermeier had a specific point. THAT IS A CLAIM, NOT JUST a rejection of my claim.

    "So, implicit claims are impossible"
    No, you just did not demonstrate an implicit claim existed. I already addressed your reasoning for thinking one did. Good job in your attack of that straw man.

    "Try again. That assumption isn't necessary. The default assumption (to indicate a lack of bias) is that PF's methods are equally efficient whether Democrats or Republicans are involved. The degree of efficiency doesn't matter. The disparity in the data matters. If everyone gets a "True" there's no evidence of bias. If everyone gets a "False" there's no evidence of bias."
    It seems you misunderstood my use of the word "efficiency." I was not using efficiency in a deterministic way. "Reliability" may be easier to understand. If their methods of ensuring the statements they catch are false statements are not reliable, then you cannot expect any arbitrary categorization of the data (into approximately equal groups) to show approximately equal distributions. In this case of Party categorization, you cannot expect the distributions of each category to be equal either.
    Also, as I noted before, you would have to assume that false claims have not been repeated over and over. If Republicans had repeated false claims over and over, Politifact would likely catch them every time, adding to the count of false claims for Republicans and distorting your assumptions, because efficiency and reliability are both effectively optimal in those cases. This would distort the distribution even if reliability were otherwise known and predictable (not essentially stochastic).
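The repeated-claim effect described here can be made concrete with a toy calculation. This is my own construction with invented numbers, not data from the thread: two hypothetical parties make an identical mix of distinct true and false claims, but one party's false claims get repeated and re-rated each time.

```python
def report_card(distinct_true, distinct_false, repeats_per_false):
    """Share of 'False' ratings on a report card, assuming every
    repetition of a false claim gets re-rated while true claims
    are rated only once (the scenario described above)."""
    false_ratings = distinct_false * (1 + repeats_per_false)
    return false_ratings / (distinct_true + false_ratings)

# Both hypothetical parties make the same mix of distinct claims
# (35 true, 15 false); only the repetition habit differs.
print(report_card(35, 15, 0))  # 0.3 -- no repeats
print(report_card(35, 15, 3))  # ~0.63 -- same honesty, repeated falsehoods
```

Under these assumed numbers the second party's report card looks roughly twice as dishonest even though the underlying rate of distinct false claims is identical, which is the distortion the commenter is pointing at.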

    ReplyDelete
  18. Also, your distributions could be distorted by an influx of new claims whose accuracy is not yet known. The accuracy of some claims may be known, and those claims selected because of that; others possibly not. As a result, it also depends on the nature of the claims. A significant period of consistently known claims can distort the findings, as can a significant period of unknown claims. Whether they are right or wrong can be completely up to chance when selected.

    ReplyDelete
  19. "Evidence Please?"

    Asked and answered.
    http://politifactbias.blogspot.com/2012/01/politifact-2011-review.html#comment-form

    "I dedicated a lengthy blog entry to it"

    I look forward to seeing the mind reading portion of the essay.

    "I can't just assume every article I read has some hidden evidence unknown to me and unmentioned by the reader."

    Okay, so you protect yourself from that error by assuming that you noticed all the evidence. :-)

    "(I)f someone were to see a ton of False' and PoFs, they may be tempted to think maybe Republicans were just graded more often."

    Yes, and? Have I not explained to you that if Adair's description of the selection process is accurate then that is exactly what we should expect if one side lies more than the other?

    "THAT IS A CLAIM, NOT JUST a rejection of my claim."

    True, but if you use my claim as an excuse for not justifying your claim then you are guilty of shifting the burden of proof. What I'm doing is saying "Don't take my claim as true. Let's just stick with your claim." Support your claim.

    (Implicit claims are possible but) "you just did not demonstrate an implicit claim existed."

    Right, but I don't need to bear the burden of proof for showing your claim false. You either support your claim or you lend support for my claim that you jumped to conclusions. So it is not for me to prove that the implicit claim is there. It is for you to show that it isn't. If it is possible you either deal with the possibility or confess to some degree of jumping to conclusions.

    "Good job in your attack of that straw man."

    LMAO. Yeah, you can make it look like a straw man by cutting off the question mark with your edit. Sweet irony in that. We'll throw a ticker-tape parade the day you accurately identify a fallacy.

    "If their methods of ensuring the statements they catch are false statements are not reliable, then you cannot expect any arbitrary categorization of the data (into approximately equal groups) to show approximately equal distributions."

    How is that supposed to follow? Let's suppose that PF's methods for identifying dubious statements are unreliable. And they rate 100 percent of those statements by Republicans "False" and 100 percent of those statements by Democrats "True." And let's suppose that the final ratings are accurate, just for the sake of argument. Are you seriously trying to tell me that we have no evidence of selection bias in those proportions?

    "In this case of Party categorization, you cannot expect the distributions of each category to be equal either."

    Why not? Regardless of whether the method is efficient in identifying dubious statements, using the same method for identification should show a neutral distribution trend. If it doesn't then this suggests that the method isn't the same depending on the party.

    "you would have to assume that false claims have not been repeated over and over."

    Rubbish. It's still up to PolitiFact's editors to decide which claims to rate on the Truth-O-Meter. Suppose that every Republican in Congress states that Obamacare will create "death panels." Do you truly think that we'll see a PF rating for every such claim? Ridiculous. Like we should worry about this anyway if PF isn't efficient at choosing false statements to rate. We might just as well expect them to rate the same statement "True" every time they see it. If PF picks supposedly known false statements for repeated ratings and does not choose known true statements for repeated ratings that simply describes one aspect of the selection bias.

    "The accuracy of some claims may be known and selected because of that."

    That's a selection bias. Why not measure every selection bias together?

    ReplyDelete
  20. "Asked and answered.
    http://politifactbias.blogspot.com/2012/01/politifact-2011-review.html#comment-form"

    This "evidence" is once again you claiming I took it out of context by providing your own analysis and passing it off as his. You have yet to demonstrate he made the point you say he did.

    "I look forward to seeing the mind reading portion of the essay."
    Why do I need to read his mind? Are you saying I cannot critique an essay unless I can read his mind?

    "Okay, so you protect yourself from that error by assuming that you noticed all the evidence."
    I did.

    "Yes, and? Have I not explained to you that if Adair's description of the selection process is accurate then that is exactly what we should expect if one side lies more than the other?"
    I've challenged that because it is based on faulty premises.

    "True, but if you use my claim as an excuse for not justifying your claim then you are guilty of shifting the burden of proof. What I'm doing is saying "Don't take my claim as true. Let's just stick with your claim." Support your claim."
    I did not. I provided evidence for my claim with my blog post, as well as highlights in this very comment thread. So I did not use it as an excuse. You are getting two things mixed up here. So am I to take it you are dropping your claim about this supposed point Ostermeier was making?

    "Right, but I don't need to bear the burden of proof for showing your claim false. You either support your claim or you lend support for my claim that you jumped to conclusions. So it is not for me to prove that the implicit claim is there. It is for you to show that it isn't. If it is possible you either deal with the possibility or confess to some degree of jumping to conclusions."
    If you are wanting to demonstrate my claim false, then you have the burden of proof. If you want to say my claim was unjustified, then you can ask for evidence, which I've already given you. However, the claim that Ostermeier made this specific point you are arguing is a separate claim. You need to demonstrate he actually made that point, or you cannot claim that he did.

    "LMAO. Yeah, you can make it look like a straw man by cutting off the question mark with your edit. Sweet irony in that. We'll throw a ticker-tape parade the day you accurately identify a fallacy."
    The question mark is irrelevant, as you would be asking me a question I didn't imply any support for. Unless, of course, that question was pointless.

    ReplyDelete
  21. "How is that supposed to follow? Let's suppose that PF's methods for identifying dubious statements are unreliable. And they rate 100 percent of those statements by Republicans "False" and 100 percent of those statements by Democrats "True." And let's suppose that the final ratings are accurate, just for the sake of argument. Are you seriously trying to tell me that we have no evidence of selection bias in those proportions?...
    Why not? Regardless of whether the method is efficient in identifying dubious statements, using the same method for identification should show a neutral distribution trend. If it doesn't then this suggests that the method isn't the same depending on the party."

    An extreme case like that would possibly be reason for concern (for other reasons), but I will explain this to you in simpler terms first, since you still do not seem to get where I'm coming from:
    1. If Politifact editors are not reliable or efficient in finding "false" statements (I actually have another point about this I will get to after I'm done responding) then, given a large set of non-arbitrary statements, you cannot predict how many of those will be "true" or "false" (within the spectrum) unless you know enough about the statements to tell whether they are likely to be correctly or incorrectly chosen as "false" by Politifact. I didn't realize it until your last comment, but you were assuming all statements were arbitrary. That is not the case. Take Obamacare, for example. Politifact looked at Obamacare statements all throughout 2009. This means they were well aware of the legislation and were more likely to catch falsehoods, exaggerations, and misleading statements about it. If Republicans had the habit in 2010 of making these kinds of statements, while Democrats had the habit of making arbitrary statements Politifact could misidentify as false in 2010 (due to their unreliable methods for determining false statements), then there's no reason to necessarily expect both distributions to be equal.
    So back to the hypothetical case you gave. If we did see a case where one party had all "False"s and the other all "True"s, that would mean that every claim Republicans made was one Politifact could easily identify as false, and every Democratic claim one that appeared false but was actually true. Since this is highly unlikely, we could be safe in assuming there is something fishy. However, that is not the case here. You cannot come to any conclusion about the claims until you investigate them. Until then, you cannot come to any conclusion about how you would expect the distribution to look.

    ReplyDelete
  22. "Rubbish. It's still up to PolitiFact's editors to decided which claims to rate on the Truth-O-Meter. Suppose that every Republican in Congress states that Obamacare will create "death panels." Do you truly think that we'll see a PF rating for every such claim? "
    Wow, you really didn't think about what I was saying at all, did you? Take the lies of the year:
    The 2010 lie of the year was basically repeated 3 times in 2010 after the initial instance.
    http://www.politifact.com/truth-o-meter/article/2010/dec/16/lie-year-government-takeover-health-care/

    The 2011 Lie of the Year was repeated 8 times in 2011 after the initial instance.
    http://www.politifact.com/truth-o-meter/article/2011/dec/20/lie-year-democrats-claims-republicans-voted-end-me/

    This is the kind of situation I am talking about. If a person presents a false claim, and it is repeated later on (possibly in a slightly different context), then Politifact will be able to recognize it right away and call it false.
    For instance, Politifact hears the claim that Eric Cantor wants to abolish medicare, and they rate it false. Now, once someone else says a week later that the Republican plan will abolish medicare within 10 years, they can be pretty confident it doesn't since they already graded a near identical statement just a week earlier.

    "Ridiculous. Like we should worry about this anyway if PF isn't efficient at choosing false statements to rate. We might just as well expect them to rate the same statement "True" every time they see it."
    Not necessarily. Remember, all Ostermeier could see was Bill Adair's statement that Politifact chose statements that could be wrong. If they know a statement is true, then they would be unlikely to pick it up (there may be rare exceptions).

    "If PF picks supposedly known false statements for repeated ratings and does not choose known true statements for repeated ratings that simply describes one aspect of the selection bias...
    That's a selection bias. Why not measure every selection bias together?"

    Yes, but I already conceded this kind of selection bias exists. It should be apparent from their methods of choosing statements (including Bill Adair's quotes in Ostermeier's article). However, it is not party-related selection bias.

    ReplyDelete
  23. And on to the other point. I just realized that, if we were to assume Politifact did have perfect efficiency, what would we expect the overall distribution to look like? We certainly wouldn't expect all claims to be "False" or PoF. That would mean making the assumption that Politifact will ignore any statement that isn't completely false ("Mostly False," "Half True"). I don't think I need to go into detail over why this expectation would be ridiculous. Remember that "Half True" statements are also essentially half false and would still be picked up. In fact, one could argue that "Mostly True" statements could still be picked up because they are essentially barely false.
    So given an equal number of mostly arbitrary statements, as well as arbitrary reliability for Politifact, you would expect the percentages of "Half True" and below to be close. In 2010, the numbers are a lot closer when you include "Half True"s: 68.5% for Republicans vs. 57.6% for Democrats. If you assume Politifact won't let a statement that is "Mostly True" (barely false) slide, the numbers are closer still: 84.8% and 84.4% for Republicans and Democrats respectively. So if Politifact thinks a statement COULD be wrong, they will check it. Which category it lands in depends completely on how far off the claim actually is from the truth, something completely out of Politifact's control.
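The cumulative-share comparison in this argument can be sketched as follows. This is my own illustration: the rating counts are invented placeholders (not PolitiFact's actual 2010 tallies); only the mechanics of summing everything at or below a cutoff grade match the argument above.

```python
# Truth-O-Meter grades, best to worst.
RATINGS = ["True", "Mostly True", "Half True",
           "Mostly False", "False", "Pants on Fire"]

def share_at_or_below(counts, cutoff):
    """Fraction of a party's ratings at the cutoff grade or worse."""
    idx = RATINGS.index(cutoff)
    worse = sum(counts[r] for r in RATINGS[idx:])
    return worse / sum(counts.values())

# Invented rating counts for one hypothetical party's report card.
gop = {"True": 10, "Mostly True": 12, "Half True": 20,
       "Mostly False": 18, "False": 25, "Pants on Fire": 15}

print(share_at_or_below(gop, "Half True"))    # share "Half True" or worse
print(share_at_or_below(gop, "Mostly True"))  # share "Mostly True" or worse
```

Computing the same cumulative share for each party and comparing them is the calculation behind the 68.5%/57.6% and 84.8%/84.4% figures cited above.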

    ReplyDelete
  24. "This "evidence" is once again you claiming I took it out of context by providing your own analysis and passing it off as his."

    Your summary is wildly inaccurate.

    "But this potential selection bias - if there is one at PolitiFact - seems to be aimed more at Republican officeholders than conservatives per se."
    --Ostermeier

    "The question is not whether PolitiFact will ultimately convert skeptics on the right that they do not have ulterior motives in the selection of what statements are rated, but whether the organization can give a convincing argument that either a) Republicans in fact do lie much more than Democrats, or b) if they do not, that it is immaterial that PolitiFact covers political discourse with a frame that suggests this is the case."
    --Ostermeier

    Ostermeier's point is to let PolitiFact and others know that its presentation gives a misleading appearance of certainty that Republicans lie more than Democrats. And that's why it's quite certain that you took the last line out of context. He's not concluding that the bias exists. He's saying that the evidence makes it look as though the bias exists. There's a difference that you've ignored ("you look about 35 years old" "how old do you think I am?" "Forty-three. I saw your driver's license.")

    "I've challenged that because it is based on faulty premises."

    That's a very charitable way of referring to your nonsensical objections. There's no necessary assumption about accuracy (only that accuracy should be approximately the same for both parties), and choosing statements already checked is just another example of selections that bias the sample.

    "So am I to take it you are dropping your claim about this supposed point Ostermeier was making?"

    Yes, only because it appears to serve to distract you from bearing your burden of proof. In support of your original claim you claimed you examined Ostermeier's entire reasoning process ("(A)re you telling me that you searched completely through Ostermeier's reasoning process (speaking of absurdities)?" "Absolutely."). Please support that claim, since your argument in support of your original claim relies on the appeal to silence if the more recent claim is not supported.

    "Unless, of course that question was pointless"

    If you answer "No" then I can ask you what evidence we should look for to detect an implicit claim and design the argument around your specifications. If you answer "Yes" then you're implicitly admitting to disingenuous argumentation. Socratic method. Rocket science. ;-)

    "The 2011 Lie of the Year was repeated 8 times in 2011 after the first initial instance"

    It was rated only once by PF national.

    "The 2010 lie of the year was basically repeated 3 times in 2010 after the first initial instance."

    PF national never did a rating for it.

    PF national does repeat the same fact check on occasion (more lately), but it's rare, and they do it with Democrats and Republicans both. Your numbers exaggerate the potential effects, and it does not undermine the accuracy of the inference. The one valid point you make is that a study attempting a firm conclusion based on the stated inference should account for repeat fact checks. It's not a significant weakness for Ostermeier's study.

    ReplyDelete
  25. "Your summary is wildly inaccurate.
    Ostermeier's point is to let PolitiFact and others know that its presentation gives a misleading appearance of certainty that Republicans lie more than Democrats."

    Yet Ostermeier puts the burden of proof on Politifact to demonstrate that "either a) Republicans in fact do lie much more than Democrats, or b) if they do not, that it is immaterial that PolitiFact covers political discourse with a frame that suggests this is the case."
    This means Republicans are fine suspecting selection bias is the cause of these ratings until Politifact does so. He focuses solely on Politifact having the burden of proof. This is a fallacy because the data is insufficient to support some kind of selection-bias null hypothesis. He would have to ask many more questions to rule out a whole slew of other possibilities. This makes it clear Ostermeier is in favor of the party-related selection bias hypothesis. The last line clarifies it even more. What is interesting is this:

    You say "He's saying that the evidence makes it look as though the bias exists," which is basically what I said, only I mentioned it is a jump to a conclusion. The evidence makes a lot of possible things apparent, yet he does not point any of them out. I make these other things apparent in the questions I asked. For the only other one he does point out (that Republicans lie more than Democrats), he shifted the burden of proof onto Politifact. Should he not shift the burden of proof onto Republicans? Why did he not ask "whether Republicans can give a convincing argument that they in fact do not lie much more than Democrats, so Politifact's ratings should not reflect as much"?

    If I said "Given that you keep asking questions I already answered in my other post, it looks as though you did not really read my other post," is that not me essentially accusing you of not reading my post thoroughly (even if I am still open to the possibility that I am wrong and you did read it thoroughly)?

    "That's a very charitable way of referring to your nonsensical objections."
    What a cop-out. Brilliant way of avoiding the objections that actually completely nullified your argument. I made an argument; are you going to ignore it or actually show me which premise is wrong?

    ReplyDelete
  26. "There's no necessary assumption about accuracy (only that accuracy should be approximately the same for both parties),"
    And my objections showed conclusively that this is a faulty premise unless you investigate the data. Without this premise, your argument fails. So challenge my objections to the premise, or stop using the argument. I'll copy it down for you again:

    "if we were to assume Politifact did have perfect efficiency, what would we expect the overall distribution to look like? We certainly wouldn't expect all claims to be "false" or Pof. That would mean we had to make the assumption that Politifact will ignore any statement that isn't completely false ("mostly false" "half true"). I don't think I need to go into detail over why this expectation would be ridiculous. Remember that, "half-true" statements are also essentially "half-false" and would still be picked up. In fact, one could argue that "mostly true" could still be picked up because it is essentially "barely false".
    So given an equal number of mostly arbitrary statements, as well as arbitrary reliability for Politifact, you would expect the percentage of "half-true" and below to be close. In 2010, the number is a lot closer when you include "half true"s: 68.5% for Republicans vs 57.6% for Democrats. If you assume Politifact won't let a statement that is "mostly true" (barely false) slide, the number is closer to 84.8 and 84.4 for Republicans and Democrats respectively. So if Politifact thinks a statement COULD be wrong, then they will check it. In what category it goes completely depends on how far off the claims actually are from the truth, something completely out of Politifact's control."
    This means we would expect both parties to have approximately the same proportion of statements that were not "True" (or approximately the same proportion of statements "Half True" or below), which they roughly did in 2010. Add to this another factor to explain the small disparity left over:

    "If Politifact editors are not reliable or efficient in finding "false" statements (I actually have another point about this I will get after I'm done responding) then, given a large set of non-arbitrary statements, you cannot predict how many of those will be "true" or "false" (within the spectrum), unless you know enough information about the statements to be able to tell if they are likely to be correctly or incorrectly chosen as "false" by Politifact. I didn't realize it til your last comment, but you were assuming all statements were arbitrary. But that is not the case. Take Obamacare for example. Politifact looked at Obamacare statements all throughout 2009. This means they were well aware of the legislation and were more likely to catch falses, exaggerations, and misleading statements about it. If Republicans had the habit in 2010 to make these kinds of statements, while Democrats had the habit to make arbitrary statements Politifact could misidentify as false in 2010 as well (due to their unreliable methods for determining false statements), then there's no reason to necessarily expect both distributions to be equal."
    So we wouldn't necessarily expect the disparity to be near 0. It would depend entirely on the nature of the claims, something neither you nor Ostermeier examined.


    and choosing statements already checked is just another example of selections that bias the sample."

    Once again, this is not party-related selection bias and could go either way, as seen with the 2011 Lie of the Year. Given that Politifact often mentions when they have previously rated similar statements, this should be expected to be reflected in their report cards.

    ReplyDelete
  27. "Yes, only because it appears to serve to distract you from bearing your burden of proof."
    You made it a distraction. I gave you my point by point analysis and you gave me this absurd diagnosis of a logical fallacy.

    " In support of your original claim you claimed you examined Ostermeier's entire reasoning process ("(A)re you telling me that you searched completely through Ostermeier's reasoning process (speaking of absurdities)?"
    Absolutely). Please support that claim, "

    My other post serves as evidence. I'm not going to completely re-post it here, unless you need me to. You agreed with me about his conclusion:
    "He's not concluding that the bias exists. He's saying that the evidence makes it look as though the bias exists." The difference between these is not significant, as I pointed out with my analogy. He clearly supports the conclusion, since he ignored that the evidence just as easily makes it look as though Republicans simply told more falsehoods than Democrats. He then dismisses the latter in favor of the former without giving any reason, an argument from ignorance.

    "since your argument in support of your original claim relies on the appeal to silence if the more recent claim is not supported."
    It is only an appeal to silence if I did not thoroughly do my due diligence to find out what Ostermeier said, which I did with my point by point critique. Remember my gorilla in the room analogy?

    "If you answer "No" then I can ask you what evidence we should look for to detect an implicit claim and design the argument around your specifications."

    Look at the evidence I used to show that Ostermeier supports the conclusion of party-related selection bias.

    "It was rated only once by PF national."
    Actually, twice:
    http://www.politifact.com/truth-o-meter/statements/2011/apr/01/americans-united-change/americans-united-change-says-eric-cantor-wants-abo/
    http://www.politifact.com/truth-o-meter/statements/2011/apr/20/democratic-congressional-campaign-committee/democrats-say-republicans-voted-end-medicare-and-c/

    So did Ostermeier ignore the stuff from the state organizations? I see no indication of this in his article.

    "PF national never did a rating for it.

    PF national does repeat the same fact on occasion (more lately), but it's rare, and they do it with Democrats and Republicans both. Your numbers exaggerate the potential effects, and it does not undermine the accuracy of the inference. "

    How rare was it in 2010? Did it happen to Republicans often enough to cause some disparity? I know they do it to both parties, but it matters whether it happened more in 2010. Once again, I see no indication of Ostermeier ignoring the states. Either way, I'm not convinced it is all that rare anyway; I seem to find it quite a bit. Although that is just an anecdote, remember: in order to dismiss it, you have to show that it was rare in 2010 and did not affect the results significantly (in addition to other factors).


    "The one validity in your point is that a study that attempts a firm conclusion based on the stated inference should account for repeat fact checks. It's not a significant weakness for Ostermeier's study."
    It is a point that someone needs to address before building an entire article around how it looks as though Politifact is guilty of party-related selection bias.

    ReplyDelete
  28. "This is a fallacy because the data is insufficient to create some kind of selection bias null hypotheses."

    I thought you had admitted that selection bias is pretty much inevitable without a random selection process? Do we have evidence that PolitiFact does not employ a random selection process or not?

    ReplyDelete
  29. "I thought you had admitted that selection bias is pretty much inevitable without a random selection process? Do we have evidence that PolitiFact does not employ a random selection process or not?"
    *party-related selection bias
    I thought it was obvious, seeing as that is what I've been talking about this whole time.

    ReplyDelete
  30. Please define "Party-related" selection bias. Is it more accurately a selection bias that affects one party more than the other or is it a bias that occurs during the selection process based on party? Be as specific as you are able, please.

    ReplyDelete
  31. Okay, given that you prefer to define "party-related" selection bias as a bias that occurs during the selection process based on party, do you think that the former type of selection bias could significantly affect the outcome of a study in significant ways along party lines?

    ReplyDelete
    The former type could affect one party more than the other, but it depends on the situation. For instance:
    "Take Obamacare for example. Politifact looked at Obamacare statements all throughout 2009. This means they were well aware of the legislation and were more likely to catch falsehoods, exaggerations, and misleading statements about it. If Republicans had the habit in 2010 of making these kinds of statements, while Democrats had the habit of making arbitrary statements Politifact could misidentify as false in 2010 as well (due to their unreliable methods for determining false statements), then there's no reason to necessarily expect both distributions to be equal."
    This kind of selection bias could cause disparities between parties when one party is completely in control of government, and thus more likely to try to pass legislation the other party may find controversial. As a result, the party out of power may try to discredit it, resulting in more statements rated "Half True" and below. The party in power has less incentive to make statements about the legislation, although it may be likely to exaggerate claims, resulting in more "Half True"s and "Mostly True"s.

    So to answer your question: yes, "the former type of selection bias could significantly affect the outcome of a study in significant ways along party lines."
    Although I will add the caveat that the situation could change. If Republicans take over government in 2012, we may see a disparity occur in the opposite direction (think Wisconsin, for example):
    http://www.politifact.com/personalities/state-democratic-party-wisconsin/

    ReplyDelete
  33. Given that the former type of selection bias could significantly affect the outcome of a study in significant ways along party lines, should we trust the impression created by PolitiFact that Republicans lie more?

    ReplyDelete
    Republicans in general, no. Specific Republicans, it depends on other factors, such as the number of statements, etc. But has Politifact ever created a report card that compares the GOP to Democrats overall?

    ReplyDelete
  35. Great, so we shouldn't trust the impression that Republicans lie more (as it comes from PolitiFact). Do you have that impression?

    Has PF ever created a report card that compares the GOP to Democrats overall? Um--what do you think the PolitiFact website is? The answer is yes, with the caveat that PF does not reduce the information to a single article or compact graphic except for mentions during its year-end reviews. Yet the left wing has buzzed about the proof PolitiFact provides that Republicans lie more--and PF has never thought to fact check it.

    ReplyDelete
  36. "Great, so we shouldn't trust the impression that Republicans lie more (as it comes from PolitiFact). Do you have that impression?
    "

    Politifact isn't purposely giving that impression. That impression is based on a flawed analysis of Politifact's ratings. It is possible that Republicans lying more is an explanation, but it is one of many possible explanations for the data. There could be a small collection of Republican politicians contributing to most of these ratings, which means you cannot necessarily extend what you see from some politicians to a party as a whole.

    "Has PF ever created a report card that compares the GOP to Democrats overall? Um--what do you think the PolitiFact website is? The answer is yes, with the caveat that PF does not reduce the information to a single article or compact graphic except for mentions during its year-end reviews."
    So nowhere does Politifact explicitly endorse a breakdown of ratings into GOP versus Dems, which means all of Ostermeier's comments about Politifact's report cards were essentially trivial. It should be obvious to anyone looking at these report cards that you can't come to a conclusion about politicians with few or no ratings. As a result, what is the sample size when you only include politicians that have a large number of ratings? Is it anywhere near large enough to come to a conclusion about Republican politicians as a whole? If not, then someone would not be justified in saying that the ratings on Politifact give any good indication of party honesty. And if the number of politicians is small, there is no reason to justifiably think that Politifact's ratings in 2010 suggest Republican politicians lie more, let alone Republicans in general.
    Will people get the impression from an individual candidate's report card and frequency on the site that he/she is more honest than some other arbitrary candidate? Yes. But can they infer anything about the party as a whole from such a small sample? Nope. Anyone who reads Politifact's articles should be able to tell pretty easily why this is the case. Anyone who has seen blatantly false statements from smaller politicians off the main stage go unchecked should know Politifact isn't randomly sampling statements from arbitrary politicians. So if people are inferring things from Politifact as a whole that Politifact does not endorse, that is their fault. Maybe they should engage in better critical thinking. Statements can be broken up into all sorts of groups: men vs. women, minorities vs. non-minorities, older politicians vs. younger politicians... the sky is the limit. Does Politifact have to explain every disparity that may occur from breaking its ratings down this way?
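    The sample-size worry can be made concrete with a back-of-envelope calculation. The counts here are made up purely for illustration (a hypothetical 20 heavily-rated politicians per party, with 60% of one party's sampled statements rated "Half True" or below):

    ```python
    import math

    def margin_of_error(p, n, z=1.96):
        """95% normal-approximation margin of error for a proportion
        p observed over n sampled units."""
        return z * math.sqrt(p * (1 - p) / n)

    # With only 20 politicians as the units of comparison, an observed
    # 60% bad-rating share carries a margin of error of about +/- 21
    # percentage points: far too wide to separate the parties.
    moe_politicians = margin_of_error(0.6, 20)
    print(round(moe_politicians, 2))
    ```

    A gap of ten percentage points between party report cards would sit comfortably inside that margin, which is the point about not generalizing from a handful of heavily-rated politicians.
    
    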

    "Yet the left wing has buzzed about the proof PolitiFact provides that Republicans lie more--and PF has never thought to fact check it."
    Seeing as how Politifact rarely checks the media anymore, I'm wondering where "the left wing has buzzed about the proof PolitiFact provides that Republicans lie more."

    ReplyDelete
  37. "Politifact isn't purposely giving that impression."

    PF is well aware that people use it that way yet they continue to publish the report cards and year-end summaries with no disclaimer. That by itself is a sin of omission, and I simply don't have your confidence that they don't do it on purpose. Adair knows about Ostermeier.

    "It should be obvious to anyone looking at these report cards that you can't come to a conclusion over politcians with few or no ratings."

    PolitiFact has pointed that out. But that disclaimer implies that report cards with more ratings do help readers reach conclusions about the subjects.

    "(I)f people are inferring things from Politifact as a whole that Politifact does not endorse, that is their fault."

    One might be able to accept this defense of PolitiFact if PolitiFact allowed those it rates on the Truth-O-Meter to use it. By PF's own standard their tales are less than "True."

    "I'm wondering where ... "

    http://www.dailykos.com/story/2011/02/14/942558/-Documented-proof-that-Republicans-are-the-biggest-liars-in-politics

    http://freakoutnation.com/2011/02/14/big-shock-republicans-lie-three-times-more-than-democrats/

    http://www.pensitoreview.com/2011/02/10/its-a-politifact-republicans-lie-a-lot-more-than-democrats/

    ReplyDelete
  38. "PF is well aware that people use it that way"
    There is an issue with those sites you listed:

    1. Politifact rarely if ever directly grades anything from these sites.
    2. All of these articles referred to the Ostermeier study. Attempting to tackle the impressions people got from the Ostermeier study also means Politifact would have to directly tackle claims of bias. I personally have never seen Politifact actually tackle a claim of bias (which is why I do so on this blog). Why don't they? I'm not sure. I'd venture to guess that, since Politifact tries to act as a disinterested critic, they would certainly have conflict-of-interest issues if they attempted to tackle claims about themselves (with trivial exceptions in regard to this point).

    "yet they continue to publish the report cards and year-end summaries with no disclaimer."
    As far as I can tell, there are only references to Ostermeier's study, which has issues of its own when trying to claim Republicans lie more; that is why Politifact has never produced any Dem vs. GOP report cards.

    And on those year end summaries, I'm not sure I find a problem:
    2007 had pretty much nothing but "best of" articles:
    http://www.politifact.com/truth-o-meter/article/2007/dec/
    2008 had pretty much nothing:
    http://www.politifact.com/truth-o-meter/article/2008/dec/
    2009 had mostly "best of" articles, one on the health care bill in particular, and one on 2009 as a whole, which contained a nice disclaimer:
    "Our ratings are journalism, not social science, after all, and the items are chosen based on our news judgment and staffing, not randomly selected."
    http://www.politifact.com/truth-o-meter/article/2009/dec/
    http://www.politifact.com/truth-o-meter/article/2009/dec/22/2009-truth-took-beating/
    2010 had a collection of "best of"s mostly:
    http://www.politifact.com/truth-o-meter/article/2010/dec/
    2011 had a collection of "best of"s mostly:
    http://www.politifact.com/truth-o-meter/article/2011/dec/

    So I'm not sure what you are talking about.

    ReplyDelete
  39. "That by itself is a sin of omission, and I simply don't have your confidence that they don't do it on purpose. Adair knows about Ostermeier."

    As I pointed out above, there are good reasons why they would ignore what people get from Ostermeier's study.

    "PolitiFact has pointed that out. But that disclaimer implies that report cards with more ratings do help readers reach conclusions about the subjects."
    That implication is a non sequitur. All the disclaimer implies is that a large number of ratings is necessary to come to a conclusion, not that it is sufficient. Good ol' contrapositive.

    "One might be able to accept this defense of PolitiFact if PolitiFact allowed those it rates on the Truth-O-Meter to use it. By PF's own standard their tales are less than "True.""
    I've read this over and cannot make heads or tails of how it responds to the statement you copied above.

    ReplyDelete
  40. "There is an issue with those sites you listed"

    The problem is you took them to refer to the first comment rather than the last. Context matters.

    "As far as I can see ..."

    Right, but you're still drifting out of context. Note how the issue is treated in the NYT.
    http://fivethirtyeight.blogs.nytimes.com/2012/01/27/another-check-on-the-campaigns-truthiness/

    ReplyDelete
  41. "The problem is you took them to refer to the first comment rather than the last. Context matters."

    But the context is the same. Those sites you listed were meant as support for your statement "Yet the left wing has buzzed about the proof PolitiFact provides that Republicans lie more--and PF has never thought to fact check it." So I used them to respond to your statement "PF is well aware that people use it that way."

    "Right, but you're still drifting out of context. Note how the issue is treated in the NYT.
    http://fivethirtyeight.blogs.nytimes.com/2012/01/27/another-check-on-the-campaigns-truthiness/"

    Great article, although it does nothing to further your point. They are looking at individual candidates, suggesting those particular candidates have become less engaged with the truth over the course of the campaigns. They make no statements about Republicans in general. They stick to individual candidates. They even have a beautiful disclaimer:

    "In other words, if Mitt Romney says the sky is blue, PolitiFact doesn’t bother grading the statement as true. So there is a sampling bias at play here. Accordingly, the following numbers should be interpreted with caution. They aren’t perfect indicators of the honesty of each candidate, and conclusions like “Candidate X lies the most” or “Candidate Y is the most truthful” should probably not be drawn from the data."

    So what should Politifact have done?

    ReplyDelete
  42. But the context is the same.

    Poppycock. The context isn't the same. It overlaps somewhat, but it isn't the same. PolitiFact isn't aware of the way its readers use its data simply because of responses to Ostermeier's study. It can also rely, for example, on commentary to its Facebook page. This type of reasoning from you is yet another perfect example of why it's a waste of time trying to communicate with you. And, ftm, why you should resist the temptation to blog.

    ReplyDelete
  43. "Poppycock. The context isn't the same. It overlaps somewhat, but it isn't the same."

    ""Yet the left wing has buzzed about the proof PolitiFact provides that Republicans lie more--and PF has never thought to fact check it."
    vs
    "PF is well aware that people use it that way yet they continue to publish the report cards and year-end summaries with no disclaimer."

    Both deal with Politifact "knowing" that liberals use its data to claim Republicans lie more. The only difference is in your criticism of how Politifact responds. However, the difference is trivial, since you only posted articles that deal with your assertion that Politifact "knows" liberals are using its data that way. So the areas where they don't overlap are quite obviously trivial.

    "PolitiFact isn't aware of the way its readers use its data simply because of responses to Ostermeier's study."
    Is this a typo? I want to make sure before wasting time responding to what I think you meant.

    "It can also rely, for example, on commentary to its Facebook page."
    ROFLMAO! So is Politifact supposed to start grading Facebook comments, which often total in the hundreds per day?! When would they have time to grade candidates and political pundits?

    "This type of reasoning from you is yet another perfect example of why it's a waste of time trying to communicate with you."
    Yeah, I call out contrived arguments. It is a waste of time until you can come up with legitimate criticism instead of rhetorical tricks.

    "And, ftm, why you should resist the temptation to blog."
    I don't actually blog that often. But when I do, I tend to write concise, well-thought-out articles and critiques (with the occasional one-comment post), as opposed to the Gish gallop of anti-Politifact propaganda you call your blog.

    ReplyDelete