Jack comments on Confidence levels inside and outside an argument - Less Wrong
Very interesting principle, and one which I will bear in mind since I very recently had a spectacular failure to apply it.
What happens if we apply this type of thinking to Bayesian probability in general? It seems like we have to assign a small amount of probability to the claim that all our estimates are wrong, and that our methods for coming to those estimates are irredeemably flawed. This seems problematic to me, since I have no idea how to treat this probability: we can't use Bayesian updating on it, for obvious reasons.
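To make the asymmetry concrete, here is a minimal sketch (with made-up numbers, not anything from the comment) of an ordinary Bayes update. The update works fine for a normal hypothesis, but for the hypothesis "my updating procedure itself is broken," any posterior computed this way already presupposes the procedure is sound:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' rule for a binary hypothesis H."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Updating an ordinary hypothesis poses no problem:
posterior = bayes_update(prior=0.5, likelihood_if_true=0.9, likelihood_if_false=0.2)
# posterior = 0.45 / 0.55 ≈ 0.818

# But let H = "this very calculation is irredeemably flawed".
# Running bayes_update on H uses the machinery H says is broken,
# which is the circularity the comment is pointing at.
```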
Anyone have an idea about how to deal with this? Preferably a better idea than "just don't think about it" which is my current strategy.
The issue is basically that the idealized Bayesian agent is assumed to be logically omniscient, and humans clearly are not. It's an open problem in the Bayesian epistemology literature.