ciphergoth comments on Open Thread: February 2010, part 2 - Less Wrong

10 Post author: CronoDAS 16 February 2010 08:29AM



Comment author: ciphergoth 17 February 2010 12:44:12PM 5 points [-]

I am by and large convinced by the arguments that a UFAI is incredibly dangerous and no precautions of this sort would really suffice.

However, once a candidate FAI is built and we're satisfied we've done everything we can to minimize the chances of unFriendliness, we would almost certainly use precautions like these when it's first switched on to mitigate the risk arising from a mistake.

Comment author: dclayh 17 February 2010 09:32:21PM 1 point [-]

Certainly I'd think Eliezer (or anyone) would have much more trouble with an AI-box game if he had to get one person to convince another to let him out.

Comment author: MichaelVassar 19 February 2010 04:49:23PM 1 point [-]

Eliezer surely would, but the fact that observers were surprised was the point of the AI-box experiment.

As a short, non-technical, and not precisely accurate summary: if people can be surprised once despite being very confident, and can then add extra layers of precaution and become as confident as they were before, they can repeat that cycle forever.