
gjm comments on Open thread, Dec. 21 - Dec. 27, 2015 - Less Wrong Discussion

2 Post author: MrMind 21 December 2015 07:56AM




Comment author: gjm 22 December 2015 07:31:02PM 0 points

All you have to do is not simultaneously use "confirm" to mean both "increase the probability of" and "assign high probability to".
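A toy Bayes update makes the two senses come apart (all numbers here are made up purely for illustration): evidence can raise the probability of a hypothesis, "confirming" it in the first sense, while still leaving that probability low, so it is not "confirmed" in the second sense.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule for a single hypothesis H given evidence E."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# Made-up numbers: E is five times as likely under H as under not-H,
# but H starts out very improbable.
prior = 0.01
post = posterior(prior, likelihood_h=0.5, likelihood_not_h=0.1)

# E "confirms" H in the probability-raising sense...
assert post > prior
# ...yet H still has low posterior probability (about 0.048), so it
# is not "confirmed" in the assign-high-probability sense.
assert post < 0.1
```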

As for throwing out unlikely possibilities to save on computation: that (or some other shortcut) is sometimes necessary, but it's an entirely separate matter from Bayesian confirmation theory or indeed Popperian falsificationism. (Popper just says to rule things out when you've disproved them. In your example, you have a bunch of things near to 10%, and Popper gives you no licence to throw any of them out.)
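The contrast can be sketched numerically (with hypothetical hypotheses and made-up likelihoods): Bayesian updating reweights every hypothesis the data don't rule out, whereas a strict Popperian rule licenses discarding only the outright-falsified ones.

```python
# Hypothetical discrete hypothesis space: each H_i starts near 10%.
priors = {f"H{i}": 0.1 for i in range(10)}

# Made-up likelihoods of the observed evidence under each hypothesis;
# only H9 is outright contradicted by the data.
likelihoods = {f"H{i}": 0.2 for i in range(9)}
likelihoods["H9"] = 0.0

# Bayesian update: reweight and renormalize, keeping every
# hypothesis the data do not rule out.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
z = sum(unnorm.values())
posteriors = {h: w / z for h, w in unnorm.items()}

# Popper licenses discarding only the falsified H9; the other nine
# all survive with nonzero (here equal, 1/9) posterior probability.
assert posteriors["H9"] == 0.0
assert all(abs(p - 1 / 9) < 1e-12
           for h, p in posteriors.items() if h != "H9")
```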

Comment author: FrameBenignly 22 December 2015 08:39:59PM 0 points

Yes, sorry. I'm drawing on multiple sources which I recognize the rest of you haven't read, and trying to condense them into short comments, which I'm probably not the best person to do, so I recognize that the problem I'm talking about may come out a bit garbled. I think the passage from the Morey et al. paper that I quoted above describes the problem best.

Comment author: gjm 22 December 2015 10:03:41PM 0 points

You see how Morey et al. call the position they're criticizing "Overconfident Bayesianism"? That's because they're contrasting it with another way of doing Bayesianism, about which they say "we suspect that most Bayesians adhere to a similar philosophy". They explicitly say that what they're advocating is a variety of Bayesian confirmation theory.

Comment author: FrameBenignly 22 December 2015 10:34:07PM 0 points

The part about deduction from the Morey et al. paper:

GS describe model testing as being outside the scope of Bayesian confirmation theory, and we agree. This should not be seen as a failure of Bayesian confirmation theory, but rather as an admission that Bayesian confirmation theory cannot describe all aspects of the data analysis cycle. It would be widely agreed that the initial generation of models is outside Bayesian confirmation theory; it should then be no surprise that subsequent generation of models is also outside its scope.

Comment author: gjm 24 December 2015 10:41:45AM 0 points

Who has been claiming that Bayesian confirmation theory is a tool for generating models?

(It can kinda-sorta be used that way if you have a separate process that generates all possible models, hence the popularity of Solomonoff induction around here. But that's computationally intractable.)
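A toy sketch of that separate generate-then-weigh process (this is only a Solomonoff-flavoured illustration with made-up conventions, not real Solomonoff induction, which enumerates programs under a universal prior and is incomputable): treat all short bitstrings as candidate models, give shorter ones exponentially more prior mass, and keep those consistent with the data.

```python
from itertools import product

def enumerate_models(max_len):
    """Generate every bitstring up to max_len as a candidate "model",
    with prior weight 4**(-len) so shorter models are preferred.
    The space grows exponentially, which is why this separate
    generation step quickly becomes intractable."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            model = "".join(bits)
            yield model, 4.0 ** (-len(model))

# Keep only models consistent with the observed data; here "model
# extends the observed prefix" stands in for "model predicts the data".
data = "01"
consistent = {m: w for m, w in enumerate_models(4) if m.startswith(data)}
z = sum(consistent.values())
weights = {m: w / z for m, w in consistent.items()}

# Shorter consistent models end up with more posterior mass.
assert weights["01"] > weights["010"] > weights["0101"]
```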

Comment author: FrameBenignly 22 December 2015 10:15:20PM 0 points

As stated in my original comment, confirmation is only half the problem to be considered. The other half is inductive inference, which is what many people mean when they refer to Bayesian inference. I'm not saying one way is clearly right and the other wrong, but that this is a difficult problem for which the standard solution may not be the best.

You'd have to read the Andrew Gelman paper they're responding to in order to see a criticism of confirmation.