whpearson comments on Can we create a function that provably predicts the optimization power of intelligences? - Less Wrong

-7 Post author: whpearson 28 May 2009 11:35AM




Comment author: Eliezer_Yudkowsky 28 May 2009 01:41:31PM 1 point

Suppose I disallow Mu from examining the source code of either p or o. It can examine the behaviors in advance of their execution, but not the source code. And now suppose that if Mu is allowed to know this much, then I am allowed to know the same about Mu. Then what?
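The restriction being proposed is that Mu gets only black-box access: it may observe a program's input-output behavior, but never its source. A minimal sketch of that access model, assuming Python and with all names (`BlackBox`, `source`, the example program `p`) invented here for illustration:

```python
class BlackBox:
    """Wraps a program so only its input -> output behavior is observable."""

    def __init__(self, program):
        # The wrapped program is hidden; calling the box is the only
        # sanctioned way for Mu to learn anything about it.
        self._program = program

    def __call__(self, x):
        # Behavioral access: Mu may run the program on chosen inputs.
        return self._program(x)

    def source(self):
        # Structural access is forbidden under the proposed rules.
        raise PermissionError("Mu may not examine the source code")


def p(x):
    return x * x


o = BlackBox(p)
print(o(3))        # behavior is observable: prints 9
try:
    o.source()
except PermissionError as e:
    print(e)       # structural inspection is blocked
```

The symmetry Eliezer then proposes is that the same interface applies in reverse: he gets only behavioral access to Mu as well.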

Comment author: whpearson 28 May 2009 04:28:31PM 0 points

Optimality through obscurity?

If we are going to allow you a chance to figure out the behavior of Mu, then Mu should be given the chance to find out the behavior of Eliezer (what programs you are likely to produce, etc.). Only then would information parity be preserved.

Mu is standing in for the entire world, and your system is a small part of it, so it is entirely reasonable to expect the world to know more about your system than you know about the behavior of the world. I'm not sure where you are taking this idea, but it is unrealistic in my view.