dxu comments on Moral Anti-Epistemology - Less Wrong

Post author: Lukas_Gloor 24 April 2015 03:30AM




Comment author: Lukas_Gloor 24 April 2015 12:41:57PM -2 points

What's wrong with that? Not enough concern for non-human animals?

The way most people use it, the slogan would also put all transhumanist ideas outside the space of things to consider. I feel that it is "wrong" in that it prematurely limits your search space, but I guess if someone really did just care about how humans in their current set-up interact with each other, ok...

Does that mean what counts as good epistemology in the context of ethics is specific to the contexts of ethics?

Yes, and I find this non-trivial because it means that "ethics" is too broad for there to be one all-encompassing methodology. For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong; they just have different axioms. The situation seems different when you look at science, where people seem to agree on the criteria for a good scientific explanation (well, at least in most cases).

All variations on deontology?

No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.

Comment author: dxu 24 April 2015 04:13:19PM 1 point

No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.

Well, there are two things I have to say in response to that:

  1. Timeless decision-making is a decision algorithm; you can use it to maximize any utility function you want. In other words, it's instrumental, not terminal. So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.
  2. Timeless decision-making is still based on your estimated degree of similarity to other agents on the playing field. I'll only cooperate in the one-shot Prisoner's Dilemma if I suspect my decision and my opponent's are logically connected. So even if you advocate timeless decision-making, "cooperate in PD-like situations" is still not going to be a universal rule like the Golden Rule.
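Point 2 can be made concrete with a toy sketch (my own illustration, not from the thread): a "clique bot" that uses program identity as a crude proxy for logical correlation, cooperating in the one-shot Prisoner's Dilemma only against agents running the same decision procedure. The names `clique_bot` and `defect_bot` are hypothetical.

```python
def clique_bot(opponent):
    """Cooperate iff the opponent runs the same decision procedure."""
    # Proxy for "my decision and my opponent's are logically connected":
    # identical bytecode guarantees the opponent's choice mirrors ours,
    # so mutual cooperation (C, C) beats the mutual defection (D, D)
    # we'd get by defecting against a copy of ourselves.
    if opponent.__code__.co_code == clique_bot.__code__.co_code:
        return "C"
    return "D"

def defect_bot(opponent):
    # An uncorrelated agent: always defects.
    return "D"

print(clique_bot(clique_bot))  # "C": logically linked, so cooperate
print(clique_bot(defect_bot))  # "D": no logical link, so defect
```

Note how this illustrates dxu's point: the rule is not "always cooperate" (the Golden Rule) but "cooperate conditionally on estimated similarity," which is an instrumental policy, not a terminal value.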
Comment author: afeller08 24 April 2015 09:38:49PM 0 points

I changed my mind midway through this post. Hopefully it still makes sense... I started out disagreeing with you based on the first two thoughts that came to mind, but I'm now beginning to think you may be right.

So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.

I.

This statement doesn't really fit with the philosophy of morality. (At least as I read it.)

Consequentialism distinguishes itself from other moral theories by emphasizing terminal values more than other approaches to morality do. A consequentialist can have "No murder" as a terminal value, but that's different from a deontologist believing that murder is wrong or a Virtue Ethicist believing that virtuous people don't commit murder. A true consequentialist seeking to minimize the amount of murder that happens would be willing to commit murder to prevent more murder, but neither a deontologist nor a virtue ethicist would.

Contractualism is a framework for thinking about morality that presupposes that people have terminal values and their values sometimes conflict with each other's terminal values. It describes morality as a negotiated system of adopting or avoiding certain instrumental goals, so that the people who implicitly negotiate the contract mutually benefit in attaining their terminal values. It says nothing about what kinds of terminal values people should have.

II.

Discussions of morality focus on what people "should" do and what people "should" think, etc. The general idea of terminal values is that you have them and they don't change in response to other considerations. They're the fixed points that affect the way you think about what you want to accomplish with your instrumental goals. There's no point to discussing what kind of terminal values people "should" have. But in practice, people agree that there is a point to discussing what sorts of moral beliefs people should have.

III.

The psychological conditions that cause people to become immoral by most other people's standards have a lot to do with terminal values, but nothing to do with the kinds of terminal values that people talk about when they discuss morality.

Sociopaths are people who don't experience empathy or remorse. Psychopaths are people who don't experience empathy, remorse, or fear. Being able to feel fear is not the sort of thing that seems relevant to a discussion about morality... But that's not the same thing as saying that being able to feel fear is not relevant to a discussion about morality. Maybe it is.

Maybe what we mean by morality is having the terminal values that arise from experiencing empathy, remorse, and fear the way most people experience these things in relation to the people they care about. That sounds like a really odd thing to say, even to me... but it also seems pretty empirically accurate at nailing down what people typically mean when they talk about morality.

Comment author: TheAncientGeek 02 May 2015 09:04:48PM -1 points

Contractualism is a framework for thinking about morality that presupposes that people have terminal values and their values sometimes conflict with each other's terminal values

Instrumental values can clash too. The instrumental-terminal axis is pretty well orthogonal to the morally relevant/irrelevant axis.