TheOtherDave comments on Is Morality a Valid Preference? - Less Wrong

13 Post author: MinibearRex 21 February 2011 01:18AM


Comments (75)


Comment author: TheOtherDave 23 February 2011 01:58:53PM 0 points

I am fairly sure that we aren't talking past each other; I simply disagree with you on some points. Just to try to clarify those points...

  • You seem to believe that a moral theory must, first and foremost, be compelling... if moral theory X does not convince others, then it can't do much worth doing. I am not convinced of this. For example, working out my own moral theory in detail allows me to recognize situations that present moral choices, and identify the moral choices I endorse, more accurately... which lowers my chances of doing things that, if I understood better, I would reject. This seems worth doing, even if I'm the only person who ever subscribes to that theory.

  • You seem to believe that if moral theory X is not rationally compelling, then we cannot come to agree on the specific claims of X except by chance. I'm unconvinced of that. People come to agree on all kinds of things where there is a payoff to agreement, even where the choices themselves are arbitrary. Heck, people often agree on things that are demonstrably false.

  • Relatedly, you seem to believe that if X logically entails Y, then everyone in the world who endorses X necessarily endorses Y. I'd love to live in that world, but I see no evidence that I do. (That said, it's possible that you are actually making a moral claim that having logically consistent beliefs is good, rather than a claim that people actually do have such beliefs. I'm inclined to agree with the former.)

  • I can have a moral intuition that bears clubbing baby seals is wrong, also. Now, I grant you that I, as a human, am less likely to have moral intuitions about things that don't affect humans in any way... but my moral intuitions might nevertheless be expressible as a general principle which turns out to apply to non-humans as well.

  • You seem to believe that things I'm biologically predisposed to desire, I will necessarily desire. But lots of biological predispositions are influenced by local environment. My desire for pie may be stronger in some settings than others, and it may be brought lower than my desire for the absence of pie via a variety of mechanisms, etc. Sure, maybe I can't "will myself to unlove it," but I have stronger tools available than unaided will, and we're developing still-stronger tools every year.

  • I agree that the desire to be rational is a desire like any other. I intended "much of anything else" to denote an approximate absence of desire, not a complete one.

Comment author: rohern 24 February 2011 05:25:07AM 0 points

I think an important part of our disagreement, at least for me, is that you are interested in people generally and in morality as it is now --- at least your examples come from this set --- while I am trying to restrict my inquiry to the most rational type of person, so that I can discover a morality that all rational people can be brought to through reason alone, without need for error or chance. If such a morality does not exist among people generally, then I have no interest in the morality of people generally. To bring it up is a non sequitur in such a case.

I do not see that people coming to agree on things that are demonstrably false is a point against me. This fact is precisely why I am turned off by the current state of ethical thought, as it seems infested with examples of this circumstance. I am not impressed by people who will agree to an intellectual point because it is convenient. I take truth first; at least, that is the point of this inquiry.

I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?

Comment author: TheOtherDave 24 February 2011 02:23:35PM 1 point

You're right, I'm concerned with morality as it applies to people generally.

If you are exclusively concerned with sufficiently rational people, then we have indeed been talking past each other. Thanks for clarifying that.

As to your question: I submit that for that community, there are only two principles that matter:

  1. Come to agreement with the rest of the community about how to best optimize your shared environment to satisfy your collective preferences.

  2. Abide by that agreement for as long as doing so is in the long-term best interests of everyone you care about.

...and the justification for those principles is fairly self-evident. Perhaps that isn't a morality, but if it isn't, I'm not sure what use that community would have for a morality in the first place. So I say: either "of course there is," or there's no reason to care.

The specifics of that agreement will, of course, depend on the particular interests of the people involved, and will therefore change regularly. There's no way to build it without actually knowing about the specific community at a specific point in time. But that's just implementation. It's like the difference between believing it's right not to let someone die, and actually having the medical knowledge to save them.

That said, if this community is restricted to people who, as you implied earlier, care only for rationality, then the resulting agreement process is pretty simple. (If they invite people who also care for other things, it will get more complex.)

Comment author: rohern 25 February 2011 04:17:06AM 0 points

Very well put.

Comment author: Prolorn 25 February 2011 07:23:48AM 0 points

"I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?"

Perhaps you've already encountered it, but your question calls to mind the following piece by Yudkowsky: No Universally Compelling Arguments, which is near the start of his broader metaethics sequence.

I think it's one of Yudkowsky's better articles.
(On a tangential note, I'm amused to find on re-reading it that I had almost the exact same reaction to The Golden Transcendence, though I had no conscious recollection of the connection when I got around to reading it myself.)