Psychohistorian comments on Can we create a function that provably predicts the optimization power of intelligences? - Less Wrong

-7 Post author: whpearson 28 May 2009 11:35AM




Comment author: Psychohistorian 28 May 2009 03:21:18PM 8 points

Let me see if I understand this. Basically, you're making a hypothetical in which, if you do something right, a quasi-omniscient being intervenes and makes it not work. If you do something wrong, it lets that happen. Please correct me if I'm wrong, but if this is even close to what you're saying, it seems quite pointless as a hypothetical.

Comment author: whpearson 28 May 2009 07:15:57PM 2 points

Let us say I am an AI and I want to replace myself with a new program. I want to be sure that this new program will perform the tasks I want done better than I do (including creating new, better copies of itself), so I need to be able to predict how good a program is.

I don't want to have to run it and see, since I would have to run it for a very long time to tell whether the program outperforms me far into the future. So I want a proof that the new program is better than me. Are such proofs possible? My argument is that they are not, unless you can constrain the environment so that it cannot reference your proof.
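The argument here has the shape of a diagonalization, and a toy sketch (my illustration, not from the thread; all names are hypothetical) may make it concrete. Suppose a `predictor` ranks two candidate programs. If the environment is allowed to consult that prediction, it can simply reward the program the predictor rated lower, so the prediction fails by construction:

```python
def predictor(program_a, program_b):
    """Stand-in for any fixed, provable ranking of two programs'
    future performance. Returns the program it predicts will win."""
    return program_a  # any fixed choice suffices for the argument


def adversarial_environment(program_a, program_b, predict):
    """An environment that references the prediction and rewards
    the *other* program, defeating the predictor by construction."""
    favored = predict(program_a, program_b)
    return program_b if favored is program_a else program_a


a, b = object(), object()
predicted_winner = predictor(a, b)
actual_winner = adversarial_environment(a, b, predictor)
assert predicted_winner is not actual_winner
```

The assertion holds for any deterministic `predictor`, which is the point: no proof of superiority survives an environment that is free to read the proof and act against it.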

Comment author: MichaelVassar 28 May 2009 07:48:13PM 1 point

Honestly, this is basically a problem with most problems involving Omega and the like.

Comment author: Vladimir_Nesov 28 May 2009 08:54:39PM 0 points

Problems of this kind should be stated precisely, so that each examines a specific aspect of, e.g., decision theories. In this particular case, "doing something right" or "optimization power" is too vague to work with.

Comment author: whpearson 29 May 2009 01:01:09AM *  0 points

If you have a problem with "optimization power", take it up with the person who coined it.

Whatever it is that you expect to go up and up and up during a FOOMing RSI, that is the measure I want to capture and explore. Can you tell me about your hopefully less vague measure?