TimS comments on Stupid Questions Open Thread - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
As I understand the terminology, AI that only respects some humans' preferences is uFAI by definition. Thus:
is actually unFriendly, as Eliezer uses the term. Thus, the researcher you describe is already a "uFAI researcher."
What do you mean by "representative set of all human values"? Is there any reason to think that the resulting moral theory would be acceptable to implement on everyone?
Absolutely. I used "friendly" AI (with scare quotes) to denote that it's not really FAI, but I don't know if there's a better term for it. It's not the same as uFAI, because Eliezer's personal utopia is not likely to be valueless by my standards, whereas a generic uFAI is terrible from any human point of view (paperclip universe, etc.).
I guess it just doesn't bother me that uFAI includes both indifferent AI and malicious AI. I honestly think that indifferent AI is much more likely than malicious AI (Clippy is malicious, but awfully unlikely), but that's not good for humanity's future either.