Lukas_Gloor comments on Moral Anti-Epistemology - Less Wrong
What's wrong with that? Not enough concern for non-human animals?
Does that mean what counts as good epistemology in the context of ethics is specific to the context of ethics?
All variations on deontology?
The way most people use it, the slogan would also put all transhumanist ideas outside the space of things to consider. I feel that it is "wrong" in that it prematurely limits your search space, but I guess if someone really did just care about how humans in their current set-up interact with each other, ok...
Yes, and I find this non-trivial because it means that "ethics" is too broad for there to be one all-encompassing methodology. For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong, they just have different axioms. The situation seems different when you look at science, where people seem to agree on the criteria for a good scientific explanation (well, at least in most cases).
No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.
Are you sure? That meaning wasn't obvious to me.
There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped. Having multiple epistemologies with equally good answers is something of a disaster.
I still don't know what you think is bad about bad deontology.
In general, you should assume far less often that what is obvious to you is obvious to everybody.
I often got this as an objection to utilitarianism, the other premise being that utilitarianism is impractical for humans. I've talked to lots of people about ethics since I took high school philosophy classes, I study philosophy at university, and I have engaged in more than a hundred online discussions about ethics. The objection actually isn't that bad if you steelman it: maybe people are trying to say that they, as humans, care about many other things and would be overwhelmed by utilitarian obligations. (But there remains the question whether they care terminally about these other things, or whether they would self-modify into a perfect utilitarian robot if given the chance.)
There could be in some cases, if people find out they didn't really believe their axiom after all. But it can just as well be that the starting assumptions really are axiomatic. I think that the idea that terminal values are hardwired in the human brain, and will converge if you just give an FAI good instructions to get them out, is mistaken. There are billions of different ways of doing the extrapolation, and they won't all output the same. At the end of the day, the buck does have to stop somewhere, and where else could that be than where a person, after long reflection and an understanding of what she is doing, concludes that x are her starting assumptions and that's it.
I don't quite agree with the prominent LW opinion that human values are complex. What is complex are human moral intuitions. But no one is saying that you need to take every intuition into account equally. Humans are a very peculiar sort of agent in mind space: when you ask most people what their goal in life is, they do not know, or they give you an answer that they will take back as soon as you point out some counterintuitive implications of what they just said. I imagine that many AI designs would be such that the AIs are always clearly aware of their goals, and thus feel no need to ever engage in genuine moral philosophy. Of course, people do have a utility function in the form of revealed preferences, i.e. what they would do if you placed them in all sorts of situations, but is that the thing we are interested in when we talk of terminal values? I don't think so! It should at least be on the table that some fraction of my brain's pandemonium of voices/intuitions is stronger than the other fractions, that this fraction makes up what I consider the rational part of my brain and the core part of my moral self-identity, and that I would, upon reflection, self-modify into an efficient robot with simple values. Personally I would do this, and I don't think I'm missing anything that would imply that I'm making any sort of mistake. Therefore, the view that all human values are necessarily complex seems mistaken to me.
These different epistemologies have a lot in common. The exercise would always be "define your starting assumptions, then see which moves are goal-tracking and which ones aren't". Ethical thought experiments, for instance, or distinguishing instrumental values from terminal ones, are things that you need to do either way if you think about what your goals are, e.g. how you would want to act in all possible decision-situations.
It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).
It contains discussion stoppers like "rights", even though, when you taboo the term, that just means "harming is worse than not-helping", which is a weird way to draw a distinction, because when you're in pain, you primarily care about getting out of it and don't first ask what the reason for it was. Related: it gives the impression of being "about the victim", but it's really more about the agent's own moral intuitions, and is thus not really other-regarding/impartial at all. This would be okay if deontologists were aware of it, but they often aren't. They object to utilitarianism on the grounds of it being "inhumane", instead of "too altruistic".
Yes, I see that now. I thought I was mainly preaching to the choir and didn't think the details of people's metaethical views would matter for the main thoughts in my original post. It felt to me like I was saying something at risk of being too trivial, but maybe I should have picked better examples. I agree that this comment does a good job at what I was trying to get at.
The same is true of most discussions of consequentialism and utility functions.
No, most consequentialists have a very good idea of how they would deal with probabilistic decision-situations; that's what consequentialism is good at. This is worked out to a much lesser extent in deontology.
I'm not saying that most forms of consequentialism aren't vague at all, if you interpreted me charitably, you would assume that I'm talking about a difference in degree.
An example of "letting people get away with not thinking things through": Consider the entire domain of population ethics. Why is this predominantly being discussed by consequentialists, where it is recognized as a huge problem-area? It's not like analogous difficulties wouldn't turn up in deontology if you went deep enough into the rabbit hole, but how many deontologists have gone there?
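The standard consequentialist recipe for probabilistic decision-situations is expected-utility maximization: score each action by its probability-weighted outcomes and pick the best. A minimal sketch, where the actions, probabilities, and utility numbers are all invented for illustration (not proposed values):

```python
# Expected-utility choice among actions with probabilistic outcomes.
# All probabilities and utilities below are made-up illustrative numbers.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping action name -> list of (p, u) pairs."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# A toy trolley-style dilemma: diverting usually costs one life but
# sometimes fails and costs five; doing nothing costs five for sure.
actions = {
    "divert": [(0.9, -1.0), (0.1, -5.0)],   # EU = -1.4
    "do_nothing": [(1.0, -5.0)],            # EU = -5.0
}

print(best_action(actions))  # -> divert
```

The point of the sketch is only that the consequentialist machinery gives a determinate verdict once the numbers are in; a deontological rule set would need an analogous story about what "don't harm" requires under a 10% failure probability.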
Whereas what it is bad at is combining utility functions.
Do you mean utility functions of different parts of your brain? I agree. But no one says it's necessary to consider every single voice in your mind. If your internal democracy falls into a consequentialist dictatorship because somehow your most fundamental intuition is about altruism, that seems totally fine. Likewise, if you have a lot of strong deontological intuitions and don't want to just overwrite them with a simpler, consequentialist view, that's totally fine as well, as long as you understand what you're doing. I'm only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions; they think they are somehow doing the only right or altruistic thing, when this is non-obvious at best. The "as long as you understand what you're doing" of course also applies to consequentialists: it would be problematic if the main reason someone is a consequentialist is that she thinks utility functions ought to be simple/elegant. (Consequentialism doesn't necessarily have to be simple; complexity of value could well be consequentialist as well. I'm mainly talking about utilitarianism and closely related views here.)
No, I mean combining utilities across individuals, species, etc.
You have missed my point entirely. I meant that it is actually difficult to make consequentialism work, and consequentialists solve the problem by taking it glibly ... your critique of deontology, IOW.
Rightly. Most of the time they are following socially defined rules.
Ah, aggregation. This seems to be mainly a problem for what I would call preference utilitarianism, where you sum up utility functions over individuals. Outside of LW, the standard usage of "utilitarianism" refers to experiential utilitarianism, where the only matter of concern is hedonic tone. Hence my confusion about what you meant. There are still some tricky questions with that, e.g. how many seconds of intense depression in a 24-year-old human are worse than a chimpanzee being burned alive for one second, but at worst these questions require the stipulation of a finite number of tradeoff values. So your objection fails for the (arguably) most popular forms of utilitarianism.
In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision-criteria that cover all conceivable situations. If someone took deontology this seriously, I suspect that they too would run into aggregation problems of some sort somewhere, except if they block aggregation entirely (Taurek) and rely on the view that "numbers never count".
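To make the "stipulation of a finite number of tradeoff values" concrete: once each experience has an intensity, a duration, and a stipulated species weight, the aggregate is just a weighted sum. A toy sketch, where every number (weights, intensities) is an invented placeholder, not a proposed value:

```python
# Toy hedonic aggregation with stipulated tradeoff values.
# All weights and intensities are invented placeholders.

# Stipulated tradeoff value per species: how much one unit of hedonic
# intensity in this kind of being counts in the aggregate.
TRADEOFF = {"human": 1.0, "chimpanzee": 0.8}

def hedonic_value(species, intensity, seconds):
    """Signed hedonic tone (negative = suffering) times duration and weight."""
    return TRADEOFF[species] * intensity * seconds

def aggregate(experiences):
    """experiences: list of (species, intensity, seconds) tuples."""
    return sum(hedonic_value(*e) for e in experiences)

# The comparison from the comment above: prolonged human depression
# vs. a chimpanzee being burned alive for one second.
depression = [("human", -5.0, 10)]
burning = [("chimpanzee", -60.0, 1)]

print(aggregate(depression))  # -50.0
print(aggregate(burning))     # -48.0
```

Once the tradeoff table is fixed, comparisons are mechanical; the philosophical work is entirely in choosing the (finitely many) entries of that table.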
I wasn't objecting to utilitarianism.
Belief isn't the important criterion. The important criterion is whether person B can argue for or against what person A takes as automatic. How do you show objectively that a claim can't be argued for and has to be assumed?
Values are complex. Whether moral values are complex is another story.
That doesn't seem to be an intrinsic problem. You can make a set of rules as precise as you like. It's also not clear that the well-known alternatives fare better. Utilitarianism, in particular, works only in fairly constrained domains, where you're not comparing apples and oranges.
Arguably, that's a feature, not a bug. If people realised how insubstantial ethics is, they would have trouble sticking to it.
I know, my point referred to people using "ethics is from humans for humans" in a way that would also rule out transhumanism.
The burden of proof is elsewhere: how do you overcome the is-ought distinction when you try to justify/argue for a claim? Edit: To rephrase this (I don't know how this could get me downvotes, but I'm trying to make it more clear): if the arguments for the is-ought distinction, which seem totally sound, are correct, it is unclear how you could argue that person A's moral assumptions are incorrect, at least in cases where these assumptions are non-contradictory and not based on confused metaphysics.
Well, there are two things I have to say in response to that:
I changed my mind midway through this post. Hopefully it still makes sense... I started disagreeing with you based on the first two thoughts that come to mind, but I'm now beginning to think you may be right.
I.
This statement doesn't really fit with the philosophy of morality. (At least as I read it.)
Consequentialism distinguishes itself from other moral theories by emphasizing terminal values more than other approaches to morality do. A consequentialist can have "No murder" as a terminal value, but that's different from a deontologist believing that murder is wrong or a Virtue Ethicist believing that virtuous people don't commit murder. A true consequentialist seeking to minimize the amount of murder that happens would be willing to commit murder to prevent more murder, but neither a deontologist nor a virtue ethicist would.
Contractualism is a framework for thinking about morality that presupposes that people have terminal values and that their values sometimes conflict with each other's terminal values. It's a description of morality as a system of adopting/avoiding certain instrumental goals, implicitly negotiated by the people involved for their mutual benefit in attaining their terminal values. It says nothing about what kind of terminal values people should have.
II.
Discussions of morality focus on what people "should" do and what people "should" think, etc. The general idea of terminal values is that you have them and they don't change in response to other considerations. They're the fixed points that affect the way you think about what you want to accomplish with your instrumental goals. There's no point to discussing what kind of terminal values people "should" have. But in practice, people agree that there is a point to discussing what sorts of moral beliefs people should have.
III.
The psychological conditions that cause people to become immoral by most other people's standards have a lot to do with terminal values, but nothing to do with the kinds of terminal values that people talk about when they discuss morality.
Sociopaths are people who don't experience empathy or remorse. Psychopaths are people who don't experience empathy, remorse, or fear. Being able to feel fear is not the sort of thing that seems relevant to a discussion about morality... But that's not the same thing as saying that being able to feel fear is not relevant to a discussion about morality. Maybe it is.
Maybe what we mean by morality is having the terminal values that arise from experiencing empathy, remorse, and fear the way most people experience these things in relation to the people they care about. That sounds like a really odd thing to say to me... but it also sounds pretty empirically accurate as a way of nailing down what people typically mean when they talk about morality.
Instrumental values can clash too. The instrumental-terminal axis is pretty well orthogonal to the morally relevant/irrelevant axis.