abramdemski comments on A List of Nuances - Less Wrong

31 Post author: abramdemski 10 November 2014 05:02AM


Comment author: abramdemski 11 November 2014 03:28:54AM 1 point

I think this is an interesting question. If the arguer is cherry-picking evidence, we should largely discount what they present. We are often even justified in updating in the opposite direction of a motivated argument. In the pure mathematical case the motivation no longer matters, so long as we are prepared to check the proof thoroughly. That immunity seems to break down very quickly in any other situation, though.

In principle, the Bayesian answer is that we need to account for the filtering process when updating on filtered evidence. This collides with logical uncertainty when "evidence" includes logical/mathematical arguments. But there is a largely separate question of what we should do in practice when we encounter motivated arguments. It would be nice to have more tools for dealing with this!
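The filter-aware update can be sketched numerically. The following is a minimal Python illustration, with entirely hypothetical numbers (none of this is from the thread): suppose a motivated arguer reports a supporting datum whenever one exists, so the event we condition on is "the arguer found something supportive," not "a randomly sampled datum was supportive."

```python
# Sketch (hypothetical numbers): updating on evidence filtered by a
# motivated arguer who always reports a supporting datum if one exists.

prior_h = 0.5
# Probability that at least one supporting datum exists to be found:
p_support_exists_given_h = 0.9      # true claims are easy to support
p_support_exists_given_not_h = 0.6  # cherry-picking often succeeds anyway

def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule for a binary hypothesis."""
    joint_h = prior * likelihood_h
    joint_not_h = (1 - prior) * likelihood_not_h
    return joint_h / (joint_h + joint_not_h)

# Filter-aware update: condition on "the arguer produced support at all."
p = posterior(prior_h, p_support_exists_given_h, p_support_exists_given_not_h)
print(round(p, 3))  # 0.6 -- a much weaker update than a naive reading gives
```

Because cherry-picking succeeds fairly often even when the claim is false, the likelihood ratio is close to 1 and the presented "evidence" moves the posterior only slightly.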

Comment author: fortyeridania 12 November 2014 05:50:06AM 1 point

Yes, this is an interesting issue. One unusual perspective (at least, I have not seen anyone advocate it seriously elsewhere) is the one mentioned by Tyler Cowen here. The gist is that, in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.

Comment author: RichardKennaway 12 November 2014 09:10:01PM 1 point

The gist is that in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.

Or their position on the issue could be motivated by some other issue you don't even know is on their agenda.

Or... pretty much anything.

Comment author: CCC 12 November 2014 09:24:48AM 1 point

Hmmm. It's better evidence that they want you to believe the claim is correct.

For example, I might cherry-pick evidence to suggest that anyone who gives me $1 is significantly less likely to be killed by a crocodile. I don't believe that myself, but it is to my advantage that you believe it, because then I am likely to get $1.

Comment author: Jiro 12 November 2014 07:47:04PM 0 points

Someone points out in the comments to that post:

The Bayesian point only stands if P(ClimateGate | AGW) > P(ClimateGate | ~AGW). That is the only way you can revise your prior upwards in light of ClimateGate.
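The commenter's condition is just the likelihood-ratio criterion: evidence E raises P(H) exactly when P(E | H) > P(E | ~H). A minimal Python sketch, with hypothetical numbers chosen only to show both directions of the update:

```python
# Sketch (hypothetical numbers): the posterior exceeds the prior
# exactly when the likelihood ratio P(E|H) / P(E|~H) exceeds 1.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a binary hypothesis H given evidence E."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.7
print(posterior(prior, 0.2, 0.1) > prior)  # True: ratio > 1, update upwards
print(posterior(prior, 0.1, 0.2) > prior)  # False: ratio < 1, update downwards
```

If the scandal is just as likely under ~AGW as under AGW, the ratio is 1 and the posterior equals the prior; no revision is licensed in either direction.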