
tailcalled comments on On the Boxing of AIs - Less Wrong Discussion

0 Post author: tailcalled 31 March 2015 09:58PM




Comment author: tailcalled 01 April 2015 09:06:48AM 0 points

There are other reasons to box an AI than checking whether it is friendly. An AI, like any other software, would have to be tested pretty thoroughly, and it would be hard to develop an AI at all if we couldn't test it without destroying the world.

Comment author: Slider 01 April 2015 12:55:03PM 0 points

Isn't that just a special case of friendliness testing?

Comment author: tailcalled 01 April 2015 01:24:26PM 0 points

Not really. If you have an AI and you're not sure whether it is completely broken or merely unfriendly, you might want to test it, but without proper boxing you still risk destroying the world in the unlikely case that the AI actually works.