MichaelVassar comments on Can we create a function that provably predicts the optimization power of intelligences? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (17)
Honestly, this is a problem with most thought experiments involving Omega and the like.
Problems of this kind should be stated clearly, examining a specific aspect of, e.g., decision theories. In this particular case, "doing something right" or "optimization power" is too vague to work with.
If you have a problem with "optimization power", take it up with the person who coined it.
Whatever it is that you expect to go up and up and up during a FOOMing RSI process, that is the measure I want to capture and explore. Can you tell me about your hopefully less vague measure?
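For concreteness, the coined measure being referred to here is, I assume, Yudkowsky's log-improbability definition: the optimization power exerted in achieving an outcome is minus the log (base 2) of the fraction of possible outcomes ranked at least as highly in the optimizer's preference ordering. A minimal sketch under that assumption (the function name and the uniform finite outcome space are illustrative, not from the original discussion):

```python
import math

def optimization_power(outcomes, achieved, preference_key):
    """Log-improbability measure of optimization power (assumed
    Yudkowsky-style definition): -log2 of the fraction of equally
    likely outcomes ranked at least as highly as the achieved one."""
    achieved_rank = preference_key(achieved)
    at_least_as_good = sum(1 for o in outcomes
                           if preference_key(o) >= achieved_rank)
    return -math.log2(at_least_as_good / len(outcomes))

# Example: 1024 equally likely outcomes scored 0..1023.
# Hitting the single best outcome exerts 10 bits of optimization;
# hitting the worst exerts 0 bits.
outcomes = list(range(1024))
print(optimization_power(outcomes, 1023, lambda o: o))  # → 10.0
print(optimization_power(outcomes, 0, lambda o: o))     # → 0.0
```

On this view, "going up and up during a FOOMing RSI" would mean the system reliably hitting outcomes of ever-smaller measure under its preference ordering, which is at least a quantitative target, even if estimating the outcome space in practice remains the hard part.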