thomblake comments on Holden's Objection 1: Friendliness is dangerous - LessWrong
Or for any number of other, non-religious reasons. And it could well be that extrapolating those people's preferences would lead, not to them rejecting their beliefs, but to them wishing to bring their god into existence.
Either people have fundamentally different, irreconcilable values, or they don't. If they do, then the argument I made is valid. If they don't, then CEV(any random person) will give exactly the same result as CEV(humanity).
That means that either calculating CEV(humanity) is an unnecessary inefficiency, or CEV(humanity) will do nothing at all, or CEV(humanity) will lead to a world that is intolerable for at least some minority of people. I actually doubt that anyone from SI would disagree with that (remember the torture vs. dust specks argument).
That may be considered a reasonable tradeoff by the developers of an "F"AI, but it gives those minority groups, to whom the post-AI world would be inimical, equally rational reasons to oppose such a development.
This is a false dilemma: values can be partially shared. If people hold some values in common and differ irreconcilably on others, then CEV(any random person) and CEV(humanity) will give different outputs.
And note that an actual move by virtue ethicists is to exclude sociopaths from "humanity".