andreas comments on Can we create a function that provably predicts the optimization power of intelligences? - Less Wrong

-7 Post author: whpearson 28 May 2009 11:35AM


Comment author: andreas 28 May 2009 01:55:50PM 1 point [-]

Make explicit what you expect from a measure of 'optimization power' and it will be easier to judge whether your criticism applies. If your measure is an expectation, e.g. the expected probability of achieving a goal for some distribution on goals and environments, then your story does not show that such a measure is unreliable overall.
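To make the expectation idea concrete, here is a toy sketch (all names and the success model are hypothetical, not from the thread): estimate an agent's optimization power as its average probability of achieving a goal under some distribution on goals and environments.

```python
import random

# Hypothetical sketch of an expectation-based measure of optimization power.
# `agent` maps (goal, environment parameters) to True/False (goal achieved);
# the distribution over goals/environments is represented by a sampler.

def expected_success(agent, sample_goal_env, n_samples=10000, seed=0):
    """Estimate the expected probability of goal achievement by sampling."""
    rng = random.Random(seed)
    successes = sum(agent(*sample_goal_env(rng)) for _ in range(n_samples))
    return successes / n_samples

# Toy example: a "goal" is hitting a random target within a tolerance,
# and the agent always guesses 0.5.
def sample_goal_env(rng):
    return (rng.uniform(0, 1), 0.25)  # (target, tolerance)

agent = lambda target, tol: abs(0.5 - target) <= tol

score = expected_success(agent, sample_goal_env)  # roughly 0.5 for this toy setup
```

The point of the sketch: an adversarial story about one particular environment does not by itself show that such an averaged measure is unreliable, since pathological environments may carry little probability mass.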

Comment author: whpearson 28 May 2009 02:40:57PM 0 points [-]

I'll add a set goal. I expect that a measure of optimization power would tell us how well a system would do in physically plausible environments. We can't control or know the environment in more detail than that, so we cannot rule out forced self-reference.

Comment author: andreas 28 May 2009 03:01:51PM 0 points [-]

By "how well a system would do in physically plausible environments", do you mean on average, worst case or something else?

Comment author: whpearson 28 May 2009 03:51:37PM *  1 point [-]

Check out the link to efficient optimization at the top of the post; he never said anything about running statistics when measuring intelligence.

What we want is something that will enable a program to judge the quality of another, so that when it picks a new program to rewrite itself to, the new one will always be better, so it can take over the galaxy. Feel free to pick holes in this notion if you want; it is not mine.

I might rewrite the post to use the notion of a strict ordering. A system is better than another if it will do no worse under all physically plausible environments and better in some. Then Mu would allow you to set up two programs that are differently ordered and ask you to bet on their ordering.
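The strict ordering described above is a dominance relation, which can be sketched as follows (a toy illustration with made-up environment scores, not anything from the post):

```python
# Hypothetical sketch of the strict-ordering notion: system A is better than
# system B iff A scores no worse in every environment and strictly better in
# at least one. `scores_*` maps each environment name to a performance number.

def dominates(scores_a, scores_b):
    """True iff A does no worse everywhere and strictly better somewhere."""
    assert scores_a.keys() == scores_b.keys()
    no_worse = all(scores_a[e] >= scores_b[e] for e in scores_a)
    better_somewhere = any(scores_a[e] > scores_b[e] for e in scores_a)
    return no_worse and better_somewhere

a = {"env1": 3, "env2": 5}
b = {"env1": 3, "env2": 4}
c = {"env1": 4, "env2": 4}

dominates(a, b)  # True: a is no worse everywhere and better on env2
dominates(a, c)  # False: a and c are incomparable under this partial order
```

Note this is only a partial order: some pairs of systems (like `a` and `c` above) are incomparable, which is exactly the situation where betting on the ordering becomes problematic.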

You really don't want to do statistics on the first seed AI if it is possible.

Comment author: andreas 28 May 2009 04:34:13PM 1 point [-]

Then you're arguing that, if your notion of "physically plausible environments" includes a certain class of adversarially optimized situations, worst-case analysis won't work because all worst cases are equally bad.

Comment author: whpearson 28 May 2009 04:59:55PM 0 points [-]

They could all be vaporised by a nearby supernova or something similar before they have a chance to do anything, yup.