khafra comments on Rationality Quotes June 2013 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I believe this is a model-space problem. We're looking at a toy Bayesian reasoner simple enough to be modeled in a human mind, and predicting how it will update its hypotheses about dice in response to evidence like the same number coming up too often. Our toy Bayesian, of course, assigns probability 0 to encountering evidence like "my trusted expert friend says it's loaded," so that evidence wouldn't change its probabilities at all. But that's not a flaw in Bayesian reasoning; it's a flaw in the kind of Bayesian reasoner that can be easily modeled in a human mind.
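To make the toy concrete, here's a minimal sketch (my own illustration, not anything from the original post) of such a reasoner: its model space contains exactly two hypotheses about the die, so it can update on rolls, but evidence outside that space (like expert testimony) simply has no representation.

```python
# Toy Bayesian reasoner over a deliberately tiny model space:
# two hypotheses about the die, nothing else. The priors and
# likelihoods below are made-up illustrative numbers.

priors = {"fair": 0.9, "loaded": 0.1}
likelihood = {
    "fair":   {face: 1 / 6 for face in range(1, 7)},
    # Hypothetical loaded die: 6 comes up half the time.
    "loaded": {**{face: 0.1 for face in range(1, 6)}, 6: 0.5},
}

def update(posterior, roll):
    """One Bayesian update: weight each hypothesis by its
    likelihood for the observed roll, then renormalize."""
    unnorm = {h: p * likelihood[h][roll] for h, p in posterior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = dict(priors)
for roll in [6, 6, 6, 6, 6]:  # the same number coming up too often
    posterior = update(posterior, roll)

print(posterior)  # "loaded" now dominates, despite the 0.9 prior on "fair"
```

After five sixes the posterior on "loaded" is about 0.96 — and note there is no way to feed "my trusted expert friend says it's loaded" into `update` at all; that evidence gets probability 0 by omission from the model, not by calculation.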
This doesn't demonstrate that human reasoning that works lacks a Bayesian core. E.g., I don't know how I would update my probabilities about a die being loaded if, say, my left arm turned into a purple tentacle and started singing "La Bamba." But it does show that even an ideal reasoner can't always out-predict a computationally limited one, if the computationally limited one has access to a much better prior and/or a whole lot more evidence.