drethelin comments on [LINK] Another "LessWrongers are crazy" article - this time on Slate - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (129)
The basilisk gets more compliance from the believers when it puts the innocents into heaven than when it puts them into hell. Also, the debate is not about a UFAI but about an FAI that optimizes a utility function of general welfare using TDT.
This is also the point where you might think about the effect of Eliezer's censorship. His censoring led you and Viliam_Bur to an understanding of the issue where you think it's about a UFAI.
Yeah, the horror lies in the idea that it might be morally CORRECT for an FAI to engage in eternal torture of some people.
There is this problem with human psychology: threatening someone with torture doesn't improve their judgement.
If threatening someone with eternal torture would magically raise their intelligence over 9000 and give them ability to develop a correct theory of Friendliness and reliably make them build a Friendly AI in five years... then yes, under these assumptions, threatening people with eternal torture could be the morally correct thing to do.
But human psychology doesn't work this way. If you start threatening people with torture, they are more likely to make mistakes in their reasoning. See: motivated reasoning, "ugh" fields, etc.
Therefore, the hypothetical AI threatening people with torture for... well, pretty much for not being perfectly epistemically and instrumentally rational... would decrease the probability of a Friendly AI being built correctly. Therefore, I don't consider this hypothetical AI to be Friendly.