Vladimir_Nesov comments on The Optimizer's Curse and How to Beat It

Post author: lukeprog 16 September 2011 02:46AM 44 points

Comment author: Vladimir_Nesov 16 September 2011 09:35:37AM 15 points

But all you've done after "adjusting" the expected value estimates is produce a new batch of expected value estimates, which just shows either that the original estimates were not made very carefully (if there was an improvement), or that you face the same problem all over again...

Am I missing something?

Comment author: orthonormal 17 September 2011 01:42:34PM 2 points

I'm thinking of this as "updating on whether I actually occupy the epistemic state that I think I occupy", which one hopes would be less of a problem for a superintelligence than for a human.

It reminds me of Yvain's Confidence Levels Inside and Outside an Argument.

Comment author: NancyLebovitz 17 September 2011 03:17:04PM 2 points

I expect it to be a problem, probably one just as serious, for a superintelligence. The universe will always be bigger and more complex than any model of it, and I'm pretty sure a mind can't fully model itself.

Superintelligences will presumably have epistemic problems we can't understand, and probably better tools for working on them, but unless I'm missing something, there's no way to make the problem go away.

Comment author: orthonormal 17 September 2011 03:53:09PM 2 points

Yeah, but at least it shouldn't have all the subconscious signaling problems that compromise conscious reasoning in humans. At least, I hope nobody would be dumb enough to build a superintelligence that deceives itself on account of social adaptations that don't update when the context changes...

Comment author: CynicalOptimist 17 November 2016 06:28:11PM 0 points

Well, in some circumstances this kind of reasoning would actually change the decision you make. For example, you might have one option with a high estimate and very high confidence, and another option with an even higher estimate but lower confidence. After applying the approach described in the article, those two options might switch places in the rankings.

BUT: Most of the time, I don't think this approach will make you choose a different option. If all other factors are equal, then you'll probably still pick the option that has the highest expected value. I think that what we learn from this article is more about something else: It's about understanding that the final result will probably be lower than your supposedly "unbiased" estimate. And when you understand that, you can budget accordingly.