orthonormal comments on The Optimizer's Curse and How to Beat It - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (76)
But all you've done after "adjusting" the expected value estimates is produce a new batch of expected value estimates, which just shows either that the original estimates were not made very carefully (if there was an improvement), or that you face the same problem all over again...
Am I missing something?
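For what it's worth, a small simulation makes the curse (and the adjustment) concrete. This is an illustrative sketch, not anything from the original post: the true values, prior, and noise level below are all made up. It shows that the option with the highest raw estimate tends to have an estimate that overshoots its true value, while shrinking each estimate toward the prior mean (the standard Bayesian adjustment) removes that systematic overshoot:

```python
import random

random.seed(0)
N_OPTIONS = 10    # candidate actions; true values drawn from the prior
PRIOR_MEAN = 0.0
PRIOR_SD = 1.0
NOISE_SD = 1.0    # noise in each expected-value estimate
TRIALS = 20000

raw_bias = 0.0
adj_bias = 0.0
# Posterior-mean shrinkage factor for a normal prior and normal noise.
shrink = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)

for _ in range(TRIALS):
    true_vals = [random.gauss(PRIOR_MEAN, PRIOR_SD) for _ in range(N_OPTIONS)]
    estimates = [v + random.gauss(0.0, NOISE_SD) for v in true_vals]

    # Pick the option whose raw estimate is largest: its estimate
    # systematically overshoots its true value (the optimizer's curse).
    i = max(range(N_OPTIONS), key=lambda k: estimates[k])
    raw_bias += estimates[i] - true_vals[i]

    # Shrink each estimate toward the prior mean before choosing.
    adjusted = [PRIOR_MEAN + shrink * (e - PRIOR_MEAN) for e in estimates]
    j = max(range(N_OPTIONS), key=lambda k: adjusted[k])
    adj_bias += adjusted[j] - true_vals[j]

print(raw_bias / TRIALS)  # noticeably positive
print(adj_bias / TRIALS)  # near zero
```

Note that with monotone shrinkage the *same* option gets picked either way; what changes is that the adjusted number is no longer a biased estimate of its value. Which, as you say, only helps if the prior itself is right: the output is still just another batch of estimates.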
I'm thinking of this as "updating on whether I actually occupy the epistemic state that I think I occupy", which one hopes would be less of a problem for a superintelligence than for a human.
It reminds me of Yvain's Confidence Levels Inside and Outside an Argument.
I expect it to be a problem, probably just as serious a one, for superintelligences. The universe will always be bigger and more complex than any model of it, and I'm pretty sure a mind can't fully model itself.
Superintelligences will presumably have epistemic problems we can't understand, and probably better tools for working on them, but unless I'm missing something, there's no way to make the problem go away.
Yeah, but at least it shouldn't have all the subconscious signaling problems that compromise conscious reasoning in humans; at least I hope nobody would be dumb enough to build a superintelligence that deceives itself on account of social adaptations that don't update when the context changes...