
orthonormal comments on The Optimizer's Curse and How to Beat It - Less Wrong

44 points · Post author: lukeprog · 16 September 2011 02:46AM



You are viewing a single comment's thread.

Comment author: orthonormal 17 September 2011 01:42:34PM 2 points

I'm thinking of this as "updating on whether I actually occupy the epistemic state that I think I occupy", which one hopes would be less of a problem for a superintelligence than for a human.

It reminds me of Yvain's Confidence Levels Inside and Outside an Argument.
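
A minimal sketch of the adjustment being described, assuming a simple two-hypothesis mixture; the function name and numbers below are illustrative, not anything from the post or the comments. The idea is that an agent's all-things-considered confidence is its inside-view answer weighted by the chance its reasoning is sound, plus a fallback prior weighted by the chance it isn't, so no argument can push overall confidence past the reliability of the arguer.

```python
# Toy model of "updating on whether I actually occupy the epistemic state
# that I think I occupy": mix the inside-view probability with a fallback
# prior, weighted by how likely the reasoning itself is sound.
# (Names and numbers here are illustrative assumptions.)

def adjusted_confidence(p_inside, p_reasoning_sound, p_fallback):
    """All-things-considered probability of a claim.

    p_inside          -- probability the argument itself assigns to the claim
    p_reasoning_sound -- outside-view probability that the argument is valid
    p_fallback        -- probability to fall back on if the argument is broken
    """
    return p_reasoning_sound * p_inside + (1 - p_reasoning_sound) * p_fallback

# An argument claiming one-in-a-million certainty gets capped once you admit
# a 1% chance of being confused about the argument itself:
print(adjusted_confidence(p_inside=0.999999, p_reasoning_sound=0.99, p_fallback=0.5))
# ~0.995, no matter how extreme the inside-view number
```

However large p_reasoning_sound gets, it never reaches 1 for a bounded reasoner, which is why the problem can shrink but not vanish.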

Comment author: NancyLebovitz 17 September 2011 03:17:04PM 2 points

I expect it to be a problem for a superintelligence too, and probably just as serious a one. The universe will always be bigger and more complex than any model of it, and I'm pretty sure a mind can't fully model itself.

Superintelligences will presumably have epistemic problems we can't understand, and probably better tools for working on them, but unless I'm missing something, there's no way to make the problem go away.

Comment author: orthonormal 17 September 2011 03:53:09PM 2 points

Yeah, but at least it shouldn't have all the subconscious signaling problems that compromise conscious reasoning in humans -- at least I hope nobody would be dumb enough to build a superintelligence that deceives itself on account of social adaptations that don't update when the context changes...