NancyLebovitz comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog 29 January 2011 02:52AM

Comment author: NancyLebovitz 29 January 2011 02:59:54PM 6 points

Isolation is trickier than it sounds. If an AI is created once, then we can assume that humanity is an AI-creating species. What constraints on tech, action, and/or intelligence would be necessary to guarantee that no one makes an AI in what was supposed to be a safe-for-humans region?

Comment author: lukeprog 29 January 2011 04:12:34PM 14 points

Right. I'm often asked, "Why not just keep the AI in a box, with no internet connection and no motors with which to move itself?"

Eliezer's experiments with AI-boxing suggest the AI would escape anyway, but there is a stronger reply.

If we've created a superintelligence and put it in a box, that means others on the planet are just about capable of creating a superintelligence, too. What are you going to do? Ensure that every superintelligence anyone creates is properly boxed? I think not.

Before long, the USA or China or whoever is going to think that their superintelligence is properly constrained and loyal, and release it into the wild in a bid for world domination. You can't just keep boxing AIs forever.

Comment deleted 29 January 2011 04:32:22PM
Comment author: Perplexed 29 January 2011 04:52:58PM 0 points

"You just can't keep AIs boxed forever"?