NancyLebovitz comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Isolation is trickier than it sounds. If AI is created once, then we can assume that humanity is an AI-creating species. What constraints on technology, action, and/or intelligence would be necessary to guarantee that no one makes an AI in what was supposed to be a safe-for-humans region?
Right. I'm often asked, "Why not just keep the AI in a box, with no internet connection and no motors with which to move itself?"
Eliezer's experiments with AI-boxing suggest the AI would escape anyway, but there is a stronger reply.
If we've created a superintelligence and put it in a box, that means that others on the planet are just about capable of creating a superintelligence, too. What are you going to do? Ensure that every superintelligence everyone creates is properly boxed? I think not.
Before long, the USA or China or whoever is going to decide that their superintelligence is properly constrained and loyal, and release it into the wild in a bid for world domination. You can't just keep boxing AIs forever.
"You just can't keep AIs boxed forever"?