XiXiDu comments on Confidence levels inside and outside an argument - Less Wrong

129 Post author: Yvain 16 December 2010 03:06AM




Comment author: shokwave 18 December 2010 06:29:36PM 0 points

"what if Bayesian updating is flawed"

Assign a probability of 1-epsilon to your belief that Bayesian updating works. Your belief in "Bayesian updating works" is itself determined by Bayesian updating; you therefore believe, with probability 1-epsilon, that "Bayesian updating works with probability 1-epsilon". The base-level belief is then held with probability at most (1-epsilon)^2, which is less than 1-epsilon.

Because holding Bayesian beliefs about believing Bayesianly is recursive, the chain of meta-levels can be extended indefinitely, and the probability of the base-level belief tends toward zero.
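The compounding above can be sketched numerically. This assumes each meta-level's doubt multiplies independently into the base belief (an assumption the comment leaves implicit); the value of epsilon is hypothetical, chosen only for illustration:

```python
EPS = 0.01  # hypothetical per-level doubt, for illustration only

def effective_probability(levels: int, eps: float = EPS) -> float:
    """Probability of the base-level belief after `levels` of
    self-reflection, if each level multiplies in a factor (1 - eps)."""
    return (1 - eps) ** levels

# The probability decays geometrically toward zero as levels grow.
for n in (1, 10, 100, 1000):
    print(n, effective_probability(n))
```

Even a tiny epsilon drives the product toward zero once the recursion is iterated enough times, which is the force of the argument.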

There is a flaw in Bayesian updating.

I think this is just a semi-formal version of the problem of induction, stated in Bayesian terms. Unfortunately, the answer to the problem of induction was "pretend it doesn't exist and things work better", or something like that.

Comment author: XiXiDu 18 December 2010 07:57:15PM *  0 points

I'd love to see someone like EY tackle the above comment.

On a side note, why do I get an error if I click on the username of the parent's author?

Comment author: shokwave 19 December 2010 05:52:20AM *  1 point

I'm actually planning on tackling it myself in the next two weeks or so. I think there might be a solution that gives a deductive justification for inductive reasoning. EY has already tackled problems like this, but his post seems to be a much stronger variant on Hume's "it is custom, and it works" - plus a distinction between self-reflective loops and circular loops. That distinction is how I currently rationalise ignoring the problem of induction in everyday life.

Also - I too do not know why I don't have an overview page.