
Nornagest comments on Less Wrong views on morality? - Less Wrong Discussion

1 Post author: hankx7787 05 July 2012 05:04PM


Comment author: Nornagest, 06 July 2012 02:45:39AM

I'd be extremely surprised if there turned out to be some Platonic ideal of a moral system that we can compare against. But it seems fairly clear to me that the moral systems we adopt influence factors that can be objectively investigated, e.g. happiness in individuals (however defined) or stability in societies, and that moral systems can be productively thought of as commensurable with one another along these axes. Since some aspects of our emotional responses are almost certainly innate, it also seems clear to me that the observable qualities of moral systems depend partly on more or less fixed qualities of their hosts, not solely on the internal architecture of the moral system in question.

However, it seems unlikely to me that all of these fixed qualities are human universals, i.e. that there are universally relevant "is" values from which we can derive solutions to arbitrary "ought" questions. Some points within human mind-design-space are likely to respond differently from others to a given moral system, at least on the object level. Additionally, I think it's unlikely that the observable output of a moral system depends purely on its hosts' fixed qualities: identity maintenance and related processes set up feedback loops, and we can also expect other active moral systems nearby to play a role in their mutual success.

I'd expect, though I can't prove it, that a moral system's success in guaranteeing the happiness of its adherents or the stability of their societies is governed more by local conditions and biology (species-wide or particular to individual humans) and less by game-theoretic considerations. Conversely, I'd expect a moral system's success in handling other moral systems to have more of a game-theoretic flavor, and higher meta-levels to be more game-theoretic still.

I have no idea where any of this places me in the taxonomy of moral philosophy.