Viliam_Bur comments on Open Thread: March 4 - 10 - Less Wrong

3 Post author: Coscott 04 March 2014 03:55AM




Comment author: Viliam_Bur 09 March 2014 07:33:25PM 1 point [-]

...and hope that the AI doesn't get the idea that the safest way of staying in the box is to destroy the outside world. Or just kill all humans, because as long as humans exist, there is a decent chance someone will make a copy of the AI and try to run it on their own computer (i.e. outside of the original box).

Comment author: NancyLebovitz 10 March 2014 01:37:52AM 0 points [-]

Interesting. The failure mode that occurred to me is a paperclipper designed to prefer virtual paper clips, so it turns the Earth, the solar system, or the lightcone into computronium to run simulations of virtual paperclips.

If defining "stay in the box" is that hard, I'm not feeling hopeful about the possibility of defining "protect humans".