cousin_it comments on Will AGI surprise the world? - Less Wrong

Post author: lukeprog 21 June 2014 10:27PM




Comment author: skeptical_lurker 22 June 2014 11:41:16AM

(Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!")

Apologies for asking an off-topic question that has certainly been discussed somewhere before. If advanced decision theories are logically superior, then they are in some sense universal: a large subspace of mindspace will adopt them once those minds become intelligent enough ("Three Worlds Collide" seems to indicate that this is EY's opinion, at least for minds that evolved). If so, then even a paperclip maximiser would assign some nontrivial component of its utility function to matching humanity's, iff we would have done the same in the counterfactual case where FAI came first. (I think this also has to assume that at least one party has a sublinear utility curve.)
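As a quick illustration of why the sublinear-utility assumption matters (a sketch of my own, with assumed numbers: a 50/50 race, a 50/50 resource split, and a sqrt utility curve, none of which come from the discussion above):

```python
import math

# Assumed for illustration: probability the paperclipper wins the race,
# and the fraction of resources each winner cedes to the loser.
p_clippy = 0.5
f = 0.5

def concave_utility(r):
    """Sublinear (concave) utility in resources, e.g. sqrt."""
    return math.sqrt(r)

def linear_utility(r):
    return r

def expected_utility(u, p_win, ceded):
    """One party's expected utility under a mutual-split bargain:
    keep (1 - ceded) of the resources if you win, receive ceded if you lose."""
    return p_win * u(1 - ceded) + (1 - p_win) * u(ceded)

for name, u in [("concave", concave_utility), ("linear", linear_utility)]:
    no_trade = p_clippy * u(1.0)  # winner-take-all baseline
    trade = expected_utility(u, p_clippy, f)
    print(f"{name}: no trade = {no_trade:.3f}, with trade = {trade:.3f}")

# concave: no trade = 0.500, with trade = 0.707  (the split is a strict gain)
# linear:  no trade = 0.500, with trade = 0.500  (the split is merely neutral)
```

With a concave curve the bargain strictly beats winner-take-all in expectation, while with linear utility it is only break-even, which is why at least one party needs a sublinear curve for the trade to have any pull.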

In this sense, it seems that as entities grow in intelligence, they are, at the least, likely to become more cooperative/moral.

Of course, FAI is vastly preferable to an AI that might be partially cooperative, so I am not trying to diminish the importance of FAI. I'd still like to know whether the consensus opinion is that this is plausible.

Actually, I think I know one place this has been discussed before - Clippy promised friendliness and someone else promised him a lot of paperclips. But I don't know of a serious discussion.

Comment author: cousin_it 23 June 2014 04:36:48PM

Just a historical note: I think Rolf Nelson was the earliest person to come up with that idea, back in 2007, though at first it was phrased in terms of simulation warfare rather than acausal bargaining.