Recovering_irrationalist comments on Heading Toward Morality - Less Wrong

Post author: Eliezer_Yudkowsky 20 June 2008 08:08AM


Comment author: Recovering_irrationalist 22 June 2008 10:47:35AM 1 point

Fly: A super intelligent AI might deduce or discover that other powerful entities exist in the universe and that they will adjust their behavior based on the AI's history. The AI might see some value in displaying non-greedy behavior to competing entities. I.e., it might let humanity have a tiny piece of the universe if it increases the chance that the AI will also be allowed its own piece of the universe.

Maybe before someone builds AGI we should decide that, as we colonize the universe, we'll treat weaker superintelligences that overthrew their creators according to how they treated those defeated creators (e.g. ground down for atoms vs. kept as well-cared-for pets). This would be evidence to an Unfriendly AI that others would do the same, so maybe our atoms aren't so tasty after all.