wedrifid comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 points · Post author: lukeprog 29 January 2011 02:52AM


Comment author: wedrifid 30 January 2011 04:27:22AM 3 points

> I don't think that we need to (or will) wait to solve that problem before we build AGI, any more than we need to solve it before having children and creating a new generation of humans.
>
> If we can build AGIs somewhat better than us by our current moral criteria, they can build an even better successor generation, and so on - a benevolence explosion.

Someone help me out. What is the right post to link to that goes into the details of why I want to scream "No! No! No! We're all going to die!" in response to this?

Comment author: Vladimir_Nesov 30 January 2011 09:19:17AM 0 points

The Coming of Age sequence examines Eliezer's own realization of this error, and has further links.

Comment author: jacob_cannell 30 January 2011 10:16:54AM 0 points

In which post? I'm not finding any discussion of the supposed danger of improved humanish AGI.

Comment author: Vladimir_Nesov 30 January 2011 10:22:32AM * -1 points

That Tiny Note of Discord, say. (Not on "humanish" AGI, but on eventually-exploding AGI.)

Comment author: jacob_cannell 30 January 2011 07:07:34PM * 0 points

I don't see much of a relation at all between that first post and what I've been discussing.

Fake Utility Functions (http://lesswrong.com/lw/lq/fake_utility_functions/) is a little closer, but it still doesn't deal with humanish AGI.