XiXiDu comments on AIs and Gatekeepers Unite! - Less Wrong

Post author: Eliezer_Yudkowsky 09 October 2008 05:04PM




Comment author: XiXiDu 18 November 2011 08:22:42PM 1 point

> If the AI is not guaranteed friendly by construction in the first place, it should never be released, whatever it says.

What if doom is imminent and we are unable to do something about it?

Comment author: lessdazed 18 November 2011 08:41:18PM 2 points

We check and see if we are committing the conjunction fallacy and wrongly think doom is imminent.

Comment author: Vladimir_Nesov 18 November 2011 08:42:15PM 10 points

> What if doom is imminent and we are unable to do something about it?

We die.

Comment author: wedrifid 01 December 2011 10:29:51AM 1 point

> What if doom is imminent and we are unable to do something about it?

We release it. (And then we still probably die.)