Viliam_Bur comments on [LINK] Another "LessWrongers are crazy" article - this time on Slate - Less Wrong Discussion

9 Post author: CronoDAS 18 July 2014 04:57AM

Comment author: drethelin 18 July 2014 05:38:30PM 2 points [-]

Yeah, the horror lies in the idea that it might be morally CORRECT for an FAI to engage in eternal torture of some people.

Comment author: Viliam_Bur 19 July 2014 09:25:58AM 3 points [-]

There is a problem with human psychology: threatening someone with torture doesn't improve their judgement.

If threatening someone with eternal torture would magically raise their intelligence over 9000, give them the ability to develop a correct theory of Friendliness, and reliably make them build a Friendly AI in five years... then yes, under these assumptions, threatening people with eternal torture could be the morally correct thing to do.

But human psychology doesn't work this way. If you start threatening people with torture, they become more likely to make mistakes in their reasoning. See: motivated reasoning, "ugh" fields, etc.

Therefore, the hypothetical AI threatening people with torture for... well, pretty much for not being perfectly epistemically and instrumentally rational... would decrease the probability of Friendly AI being built correctly. Therefore, I don't consider this hypothetical AI to be Friendly.