rlpowell comments on The human problem - Less Wrong
The problem here is that, as evidenced by SL4 list posts, Phil is serious.
So basically, there is some super-morality or super-goal or something that is "better" by some standard than what humans have. Let's call it woogah. Phil is worried that we're going to build an FAI that can't possibly learn/reach/achieve/understand woogah, because the FAI is based on human values.
As far as I can see, there are three options here:
1. Phil values woogah, which means it's included in the space of human values, which means there's no problem.
2. Phil does not value woogah, in which case we wouldn't be having this discussion, because he wouldn't be worried about it.
3. Phil thinks there's some sort of fundamental/universal morality that makes woogah better than human values, even though woogah can't be reached from a human perspective, at all, ever. This is perhaps the most interesting option, except that there's no evidence, anywhere, that such a thing exists. The is-ought problem does not appear to be solvable; we have only our own preferences out of which to build the future, because they're all we have and all we can have. We could create a mind that doesn't have our values, but the important question is: what would it have instead?
-Robin