JGWeissman comments on Why I Moved from AI to Neuroscience, or: Uploading Worms - Less Wrong

Post author: davidad, 13 April 2012 07:10AM




Comment author: JGWeissman 14 April 2012 03:42:33AM 1 point

but the examples you gave seem to me to be more similar to compulsions than to utility functions

I would have liked to use examples of plugging clearly terminal values into a general goal-achieving system, but the only current or historical general goal-achieving systems are humans, and it is notoriously difficult to figure out what humans' terminal values are.

My model of davidad's view is that part of general intelligence, as opposed to narrow intelligence, is having varied and complex goals. We could make a narrow AI that cared only about the number of paperclips in the universe, but to make an intelligence that is general, we would also need to make it care about the future, planning, existential risk, and so on.

I am not claiming that you could give an AGI an arbitrary goal system that suppresses the "Basic AI Drives"; rather, those drives will be effective instrumental values, not lost purposes. While a paperclip-maximizing AGI will have subgoals such as controlling resources and improving its ability to predict the future, achieving those subgoals will help it to actually produce paperclips.

Comment author: Vaniver 14 April 2012 04:44:59AM 0 points

I am not claiming that you could give an AGI an arbitrary goal system that suppresses the "Basic AI Drives"; rather, those drives will be effective instrumental values, not lost purposes. While a paperclip-maximizing AGI will have subgoals such as controlling resources and improving its ability to predict the future, achieving those subgoals will help it to actually produce paperclips.

It sounds like we agree: paperclips could be a genuine terminal value for AGIs, but a dead future doesn't seem all that likely from AGIs (though it might be likely from AIs in general).

Comment author: JGWeissman 14 April 2012 05:20:03AM 2 points

a dead future doesn't seem all that likely from AGIs

What? A paperclip AGI with a first-mover advantage would self-improve beyond the point where cooperating with humans has any instrumental value, become a singleton, and tile the universe with paperclips.

Comment author: Vaniver 14 April 2012 06:42:21AM 1 point

What? A paperclip AGI with a first-mover advantage would self-improve beyond the point where cooperating with humans has any instrumental value, become a singleton, and tile the universe with paperclips.

Oh, I agree that humans die in such a scenario, but I don't think the "tile the universe" part counts as "dead" if the AGI has AI drives.