timtyler comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 Post author: lukeprog 29 January 2011 02:52AM




Comment author: timtyler 29 January 2011 11:21:55PM 0 points

Aren't you simply assuming that the world is doomed here? It sure looks like it!

Since when is that assumption part of a valid argument?

Comment author: Will_Newsome 09 February 2011 10:54:10AM 0 points

That assumption isn't really a core part of the argument. The general argument — "if specifying human concepts is easy, then come up with a plan for making a seed AI want to stay in a box" — still stands, even if we don't actually want to keep arbitrary seed AIs in boxes.

For the record, I am significantly less certain than most LW or SIAI singularitarians that seed AIs not explicitly coded with human values in mind will end up creating a horrible future, or at least a future more horrible than something like CEV would produce. I do think it's worth a whole lot of continued investigation.