JGWeissman comments on Why I Moved from AI to Neuroscience, or: Uploading Worms - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I would have liked to use examples of plugging in clearly terminal values to a general goal achieving system. But the only current or historical general goal achieving systems are humans, and it is notoriously difficult to figure out what humans' terminal values are.
I am not claiming that you could give an AGI an arbitrary goal system that suppresses the "Basic AI Drives". Rather, those drives will be effective instrumental values, not lost purposes: while a paperclip-maximizing AGI will have subgoals such as controlling resources and improving its ability to predict the future, achieving those subgoals will help it actually produce paperclips.
It sounds like we agree: paperclips could be a genuine terminal value for AGIs, but a dead future doesn't seem all that likely from AGIs (though it might be likely from AIs in general).
What? A paperclip AGI with first mover advantage would self-improve beyond the point where cooperating with humans has any instrumental value, become a singleton, and tile the universe with paperclips.
Oh, I agree that humans die in such a scenario, but I don't think the 'tile the universe with paperclips' outcome counts as a "dead" future if the AGI retains those AI drives.