FAWS comments on What is Metaethics? - Less Wrong

31 Post author: lukeprog 25 April 2011 04:53PM




Comment author: XiXiDu 26 April 2011 05:01:24PM 7 points

I am increasingly getting the impression that morality/ethics is useless hogwash. I already believed that to be the case before Less Wrong, and I am not sure why I ever bothered to take it seriously again. I guess I was impressed that people concerned with 'refining the art of rationality' talk about it, and concluded that there must, after all, be something to it. But I have yet to come across a single argument that would warrant the use of any terminology related to moral philosophy.

The article Say Not "Complexity" should have been about morality. Say not "morality"...

Consider the following questions:

  • Do moral judgements express beliefs?
  • Do judgements express beliefs?
  • How do we evaluate evidence in the making of a decision?

All three questions ask for the same thing, yet each one is less vague than the previous one.

It is as obvious as it can get that there is no single argument against deliberately building a paperclip maximizer if I want to build one and am aware of the consequences. It is not a question about morality but solely a question about wants.

The whole talk about morality seems to be nothing more than a signaling game.

The only reasons we care about other people are either to survive, i.e. get what we want, or because it is part of our preferences to see other people being happy. Accordingly, trying to maximize happiness for everybody can be framed in the language of volition rather than morality.

Once we get rid of the moral garbage, thought experiments like the trolley problem are no more than a question about one's preferences.

Comment author: FAWS 26 April 2011 05:06:36PM *  0 points

I agree with you that morality can mostly be framed in terms of volition and an adequate decision theory, but I think you are oversimplifying. For example, consider people talking about what other people should want purely for their own good. That might be explainable in terms of projecting their own wants in some way (or perhaps selfish self-delusion), but it doesn't seem like something you could easily predict in advance from reasoning about wants if you were unfamiliar with how people behave toward each other.