wedrifid comments on Confidence levels inside and outside an argument - Less Wrong

129 Post author: Yvain 16 December 2010 03:06AM




Comment author: wedrifid 16 December 2010 10:14:02AM 1 point

What happens if we apply this type of thinking to Bayesian probability in general? It seems like we have to assign a small amount of probability to the claim that all our estimates are wrong, and that our methods for coming to those estimates are irredeemably flawed. This seems problematic to me, since I have no idea how to treat this probability: we can't use Bayesian updating on it, for obvious reasons.
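The ordinary update the comment alludes to can be sketched as follows; the numbers are purely hypothetical. It also illustrates the circularity being pointed at: the update is carried out *by* the very machinery whose reliability is in question.

```python
# A minimal sketch of a one-step Bayesian update (hypothetical numbers),
# illustrating why it can't be turned on the method itself.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) given P(H) and the two likelihoods."""
    p_evidence = (prior * p_evidence_given_h
                  + (1 - prior) * p_evidence_given_not_h)
    return prior * p_evidence_given_h / p_evidence

# Updating an ordinary hypothesis works fine:
posterior = bayes_update(prior=0.5,
                         p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.2)
print(posterior)  # 0.8

# But for H = "my updating machinery is irredeemably flawed", any
# posterior this function returns is computed BY that machinery, so
# trusting the output presupposes H is false -- the circularity the
# comment describes.
```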

There is an Eliezer post on just this subject. Anyone remember the title?

Comment author: gjm 17 December 2010 12:55:40PM 0 points

You might be thinking of Ends Don't Justify Means, which considers the question "What if I'm running on corrupt hardware?" It doesn't actually say much about how a (would-be) rational agent ought to adjust its opinion-forming mechanisms to deal with that possibility, though.

[EDITED to remove superfluous apostrophe.]

Comment author: benelliott 17 December 2010 03:38:36PM 0 points

I've been looking through some of Eliezer's posts on the subject, and the closest I've come is "Where Recursive Justification Hits Bottom", which looks at the problem that if you start with a sufficiently bad prior, you will never attain accurate beliefs.

This is a slightly different problem from the one I pointed out (though no less serious; in fact, I would say it's more likely by several orders of magnitude). However, unlike that case, where there really is nothing you can do but try to self-improve and hope you started above the cut-off point, my problem seems like it might have an actual solution; I just can't see what it is.