XiXiDu comments on Confidence levels inside and outside an argument - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (174)
Assign a probability 1-epsilon to your belief that Bayesian updating works. Your belief in "Bayesian updating works" is determined by Bayesian updating; you therefore believe with 1-epsilon probability that "Bayesian updating works with probability 1-epsilon". The base level belief is then held with probability less than 1-epsilon.
Since holding Bayesian beliefs about believing Bayesianly is recursive, the chain of meta-beliefs can be extended indefinitely, and the probability of the base-level belief tends towards zero.
There is a flaw with Bayesian updating.
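The arithmetic behind the regress above can be sketched in a few lines. This is my illustration, not the commenter's: it assumes each level of meta-belief is held with probability 1 - epsilon and that the levels compound multiplicatively (i.e. are treated as independent), so the base-level belief is held with probability (1 - epsilon)^n after n levels.

```python
def base_level_probability(epsilon: float, depth: int) -> float:
    """Probability of the base-level belief after `depth` levels of
    meta-belief, assuming each level is held with probability
    1 - epsilon and the levels compound independently."""
    return (1 - epsilon) ** depth

# Even a tiny epsilon drives the base-level probability towards zero
# as the chain of meta-beliefs deepens.
for depth in (1, 10, 100, 1000):
    print(depth, base_level_probability(0.01, depth))
```

Whether the levels really compound like this is exactly what is at issue; the sketch only shows that *if* they do, the conclusion follows.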
I think this is just a semi-formal version of the problem of induction in Bayesian terms, though. Unfortunately the answer to the problem of induction was "pretend it doesn't exist and things work better", or something like that.
I'd love to see someone like EY tackle the above comment.
On a side note, why do I get an error if I click on the username of the parent's author?
I'm actually planning on tackling it myself in the next two weeks or so. I think there might be a solution that gives a deductive justification for inductive reasoning. EY has already tackled problems like this, but his post seems to be a much stronger variant of Hume's "it is custom, and it works" - plus a distinction between self-reflective loops and circular loops. That distinction is how I currently rationalise ignoring the problem of induction in everyday life.
Also - I too do not know why I don't have an overview page.