
Jack comments on Less Wrong views on morality? - Less Wrong Discussion

1 Post author: hankx7787 05 July 2012 05:04PM



Comment author: Jack 05 July 2012 06:59:50PM 3 points

Perhaps I misunderstood. There are definitely possible scenarios in which metaethics could matter to a paperclip maximizer. It's just that answering "what metaethics would the best paperclip maximizer have?" isn't any easier than answering "what is the ideal metaethics?". Varying an agent's goal structure doesn't change the question.

That said, if you think humans are just like paperclip maximizers except that they're trying to maximize something else, then you're already 8/10ths of the way to moral anti-realism. (Come! Take those last two steps, the water is fine!)

Of course, it's also the case that metaethics probably matters more to humans than to paperclip maximizers. In particular, metaethics matters for humans because of individual moral uncertainty, moral change at both the individual and group level, differences between individual moralities, and the overall complexity of our values. There are probably similar issues for paperclip maximizers -- like how they should resolve uncertainty over what counts as a paperclip, or how to deal with agents that are ignorant of the ultimate value of paperclips -- and thinking about them pumps my anti-realist intuitions.