ciphergoth comments on Open Thread: February 2010, part 2 - Less Wrong
I mentioned the AI-talking-its-way-out-of-the-sandbox problem to a friend, and he said the solution was to let the AI talk only with people who don't have the authorization to let it out.
I find this intriguing, but I'm not sure it's sound. The intriguing part is that I hadn't thought in terms of a large enough organization to have those sorts of levels of security.
On the other hand, wouldn't the people who developed the AI be the ones who'd most want to talk with it, and learn the most from the conversation?
Temporarily not letting them have the power to give the AI a better connection doesn't seem like a solution. If the AI has loyalty to entities similar to itself (or, let's say, a directive to protect people from unfriendly AI--something it would want to get started on ASAP), it could try to convince people to build a similar AI and let it out.
Even if other objections can be avoided, couldn't an AI which can talk its way out of the box also give the people who can't let it out arguments good enough that they'll convince other people to let it out?
Looking at it from a different angle, could even a moderately competent FAI be developed which hasn't had a chance to talk with people?
I'm pretty sure that natural language is a prerequisite for FAI, and might be a protection from some of the stupider failure modes. Covering the universe with smiley faces is a matter of having no idea what people mean when they talk about happiness. On the other hand, I have no strong opinions about whether AIs in general need natural language.
I am by and large convinced by the arguments that a UFAI is incredibly dangerous and no precautions of this sort would really suffice.
However, once a candidate FAI is built and we're satisfied we've done everything we can to minimize the chances of unFriendliness, we would almost certainly use precautions like these when it's first switched on to mitigate the risk arising from a mistake.
Certainly I'd think Eliezer (or anyone) would have much more trouble with an AI-box game if he had to get one person to convince another to let him out.
Eliezer surely would, but the point of the AI-box experiment was that confident observers were surprised.
In a short, non-technical, and not precisely accurate summary: if people who were very confident can be surprised once, and can then add on extra layers of precaution and become as confident as they were before, then they can be surprised again, indefinitely.