whpearson comments on Can we create a function that provably predicts the optimization power of intelligences? - Less Wrong
Comments (17)
Suppose I disallow Mu from examining the source code of either p or o. It can examine the behaviors in advance of their execution, but not the source code. And now suppose that if Mu is allowed to know this much, then I am allowed to know the same about Mu. Then what?
Optimality through obscurity?
If we are going to allow you a chance to figure out the behavior of Mu, then Mu should be given the chance to find out the behavior of Eliezer (what programs you are likely to produce, etc.). Only then would information parity be preserved.
Mu is standing in for the entire world, and your system is a small part of it; it is entirely reasonable to expect the world to know more about your system than you know about the behavior of the world. I'm not sure where you are taking this idea, but it seems unrealistic to me.