RobinZ comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR | 11 November 2009 03:00AM




Comment author: RobinZ | 16 November 2009 05:30:44PM | 1 point

In the context of a hard-takeoff scenario (a perfectly plausible outcome, on our current view), there will be no community of AIs within which any one AI must act. The pressure to develop a compassionate utility function is therefore absent, and an AI that does not already have such a function will have no reason to produce one.

In the context of a soft-takeoff scenario, a community of AIs may come to dominate major world events in the same sense that humans do now, and that community may develop the various sorts of altruistic behavior selected for in such a community (reciprocal altruism being the obvious one). However, if those AIs are never severely impeded in their actions by competition with human beings, they will never need to develop any compassion for human beings.

Reiterating your argument does not address either of these problems with assumption A, and without assumption A, AdeleneDawner's objection is fatal to your conclusion.