lessdazed comments on Wanted: backup plans for "seed AI turns out to be easy" - Less Wrong

18 points | Post author: Wei_Dai 28 September 2011 09:54PM




Comment author: iii 29 September 2011 09:52:36PM 0 points

So we could just build a seed AI whose utility function is to produce a human-optimal utility function?

Comment author: lessdazed 29 September 2011 10:49:50PM 1 point

a human-optimal utility function

This could mean several things. What do you mean?

Comment author: iii 01 October 2011 01:55:24PM 0 points

I'm unfamiliar with the state of our knowledge concerning these things, so take this as you will. A perfect utility function can yield many different things, one of which is adherence to "the principle for the development of value(s) in human beings", which isn't necessarily the same as "values that make existing in the universe most probable", "what people want", or "what people will always want". A human-optimal utility function would be something that leads to addressing the human condition as a problem, improving it in the manner and by the method it seeks to improve itself, whether that is survivability or something else. An AI that could do this perfectly right now could always use the same process of extrapolation again for whatever situation may develop.

or "AI which is most instrumentally useful for (all) human beings given our most basic goals"

Comment author: lessdazed 01 October 2011 08:54:03PM 1 point

A perfect utility function

Things are perfect in relation to utility functions, so I still don't understand what a "perfect utility function" would be.

Comment author: iii 02 October 2011 09:00:00PM 0 points

As in producing the intended result; there's nothing stopping us from rounding the 1 and winding up as paperclips.