
tailcalled comments on The Hardcore AI Box Experiment - Less Wrong Discussion

Post author: tailcalled 30 March 2015 06:35PM 3 points


Comments (39)

You are viewing a single comment's thread.

Comment author: tailcalled 30 March 2015 10:04:37PM 2 points

I think the fundamental point I'm trying to make is that Eliezer merely demonstrated that humans are too insecure to box an AI, and that this problem can be solved by not giving the AI a chance to hack the humans.

Comment author: artemium 31 March 2015 06:06:16AM 0 points

Agreed. AI boxing is a horrible idea for testing AI safety issues. Putting the AI in some kind of virtual sandbox where you can watch its behavior is a much better option, as long as you can make sure that the AGI won't be able to become aware that it is boxed in.

Comment author: Vaniver 31 March 2015 01:22:39PM 1 point

> Agreed. AI boxing is a horrible idea for testing AI safety issues. Putting the AI in some kind of virtual sandbox where you can watch its behavior is a much better option, as long as you can make sure that the AGI won't be able to become aware that it is boxed in.

  1. What's the difference between the AI's text output channel and you observing the virtual sandbox?
  2. Is it possible to ensure that the AI won't realize that it is boxed in?
  3. Is it possible to ensure that, if the AI does realize that it is boxed in, we will be able to realize that it realizes that?

As I understand it, the main point of the AI Box experiment was not whether or not humans are good gatekeepers, but that people who don't understand why it would be enticing to let an AI out of the box haven't fully engaged with the issue. But even working out how to correctly run a virtual sandbox for an AGI is a hard problem that requires serious attention.
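
Purely as an illustration of the first question above (this sketch is not from the thread, and the `Sandbox` class and its methods are hypothetical): any window the overseer uses to watch the sandbox is itself a channel carrying bits the AI chose, so "watching its behavior" is not obviously different in kind from reading its text output.

```python
# Toy sketch, assuming a logged sandbox and a human overseer who reads the log.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Sandbox:
    """A hypothetical 'boxed' environment that records everything the agent does."""
    log: List[str] = field(default_factory=list)

    def step(self, agent_policy: Callable[[], str]) -> None:
        # Whatever the agent chooses to do inside the box...
        action = agent_policy()
        # ...ends up in the record the human overseer will eventually read.
        self.log.append(action)

    def overseer_view(self) -> str:
        # The overseer's "window" into the sandbox is just another output
        # channel: its contents are chosen by the agent, like a text terminal.
        return "\n".join(self.log)


if __name__ == "__main__":
    box = Sandbox()
    # A stand-in policy; a capable agent could pick its in-box actions
    # specifically to influence whoever reads the log.
    box.step(lambda: "rearrange virtual blocks into a persuasive message")
    print(box.overseer_view())
```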

Comment author: dxu 30 March 2015 11:11:52PM 0 points

That being said, if you build an AI only to seal it in a box without interacting with it in any way (which seems like the only realistic way to "not [give] the AI a chance to hack the humans"), that's not much different from not building the AI in the first place.

Comment author: tailcalled 30 March 2015 11:31:53PM 0 points

I'll post a list of methods soon, probably tomorrow.