Vladimir_Nesov comments on AIs and Gatekeepers Unite! - Less Wrong

Post author: Eliezer_Yudkowsky, 09 October 2008 05:04PM (10 points)


Comment author: thomblake 18 November 2011 07:50:54PM 4 points

If the AI is not guaranteed friendly by construction in the first place, it should never be released, whatever it says.

The Universe is already unFriendly - the lower limit for acceptable Friendliness should be "more Friendly than the Universe" rather than "Friendly".

If we can prove that someone else is about to turn on a UFAI, it might well behoove us to turn on our mostly Friendly AI, if that's the best we can come up with.

Comment author: Vladimir_Nesov 18 November 2011 08:40:43PM 2 points

The Universe is already unFriendly - the lower limit for acceptable Friendliness should be "more Friendly than the Universe" rather than "Friendly".

One must compare a plan with alternative plans, not with the status quo. And it doesn't make sense to talk of making the Universe "more Friendly than the Universe", unless you are referring to the Universe's past, in which case see the first point.
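To make that decision rule concrete, here is a toy sketch in Python (the plan names and utility numbers are invented purely for illustration): the benchmark for any plan is the best alternative plan, with the status quo entering only as one more alternative.

```python
# Toy decision rule: rank plans against each other, not against the
# status quo alone. Plan names and utilities are made up for illustration.
plans = {
    "status quo (keep AI boxed)": 0.0,
    "release mostly-Friendly AI": 5.0,
    "finish proving Friendliness first": 9.0,
}

best_plan = max(plans, key=plans.get)
print("Chosen plan:", best_plan)

# Comparing "release mostly-Friendly AI" only against the status quo
# (5.0 > 0.0) would endorse it; compared against the full set of
# alternatives, it loses to the better plan (5.0 < 9.0).
```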

Comment author: thomblake 18 November 2011 10:01:08PM 1 point

One must compare a plan with alternative plans, not with the status quo.

Okay.

The previous plan was "don't let AGI run free", which in this case effectively preserves the status quo until someone breaks it.

I suppose you could revise that lower limit downward to the effects of the plan "turn on the UFAI that's about to be turned on". Like, steal the UFAI's source code and, instead of making paperclips shaped like paperclips, make paperclips that spell "whoops".