Comment author: OrphanWilde 01 May 2015 07:29:47PM 3 points

You're making a mistake in assuming that ethical systems are intended to do what you think they're intended to do. I'm going to make some completely unsubstantiated claims; you can evaluate them for yourself.

Point 1: The ethical systems aren't designed to be followed by the people you're talking to.

Normal people operate by internal guidance through implicit, internalized ethics, primarily guilt; ethical systems are largely and -deliberately- a rationalization game. That's not an accident. Being a functional person means being able to manipulate the ethical system as necessary, and to justify the actions you would have taken anyway.

Point 2: The ethical systems aren't just there to be followed, they're there to see who follows them.

People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people, but also a marker; "normal" people don't -need- ethics. As a rule of thumb, anybody who has strict adherence to a code of ethics is some variant of sociopath. And also as a rule of thumb, some mechanism of taking advantage of these people, who can't know any better, is going to be built into these ethical systems. It will generally take some form akin to "altruism", and is most recognizable when ethical behavior begins to be labeled as selfishness, such as variants of Buddhism where personal enlightenment is treated as selfish, or Comtean altruism.

Point 3: The ethical systems are designed to be flexible.

People who have internal ethical systems -do- need something to deal with situations which have no ethical solution but nonetheless need to be solved. Ethical systems which don't permit considerable flexibility in dealing with these situations aren't useful. But because of sociopaths, who still need ethical systems to keep them in line, you can't just permit anything. This is where contradiction is useful; you can use mutually exclusive rules to justify whatever action you need to take, without worrying about any ordinary crazy person using the same contradictions to their advantage, since they're trying to follow all the rules all the time.

Point 4: Ethical systems were invented by monkeys trying to out-monkey other monkeys.

Finally, ethical systems provide a framework by which people can assert or prove their superiority, thereby improving their perceived social rank (what, you think most people here are arguing with an interest in actually getting the right answer?). A good ethical framework needs to provide room for disagreement; ambiguity and contradiction are useful here, as well, especially because a large point of ethical systems is to provide a framework to justify whatever action you happened to take. This is enhanced by perceptions of the ethical framework itself, which is why mathematicians will tend to claim utilitarianism is a great ethical system, in spite of it being a perfectly ambiguous "ethical system"; it has a superficially mathematical rigor to it, so it appears more scientific, and lends itself to mathematics-based arguments.

See all the monkeys correcting you on trivial issues? Raising meaningless points that contribute nothing to anybody's understanding of anything while giving them a basis to prove their intelligence in thinking about things you hadn't considered? They're just trying to elevate their social status, here measured by karma points. On a site called Less Wrong, descended from a site called Overcoming Bias, the vast majority of interactions are still ultimately driven by an unconscious bias for social status. Although I admit the quality of the monkey-games here is at times somewhat better than elsewhere.

If you want an ethical system that is actually intended to be followed as-is, try Objectivism. There may be other ethical systems designed for sociopaths, but as a rule, most ethical systems are ultimately designed to take advantage of the people who actually try to follow them, as opposed to those who merely pay lip service to them.

Comment author: Lukas_Gloor 02 May 2015 10:53:52AM *  0 points

Good points. My entire post assumes that people are interested in figuring out what they would want to do in every conceivable decision-situation. That's what I'd call "doing ethics", but you're completely correct that many people do something very different. Now, would they keep doing what they're doing if they knew exactly what they're doing and not doing, i.e. if they were aware of the alternatives? If they were aware of concepts like agentyness? And if yes, what would this show?

I wrote down some more thoughts on this in this comment. As a general reply to your main point: Just because people act as though they are interested in x rather than y doesn't mean that they wouldn't rather choose y if they were more informed. And to me, choosing something because one is not optimally informed seems like a bias, which is why I thought the comparison/the term "moral anti-epistemology" has merits. However, under a more Panglossian interpretation of ethics, you could just say that people want to do what they do, and that this is perfectly fine. It depends on how much you value ethical reflection (there is quite a rabbit hole to go down, actually, having to do with the question of whether terminal values are internal or chosen).

Comment author: TheAncientGeek 02 May 2015 08:52:00AM *  1 point

If you care about suffering, you don't stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness. 

I wasn't suggesting giving up on ethics, I was suggesting giving up on utilitarianism.

This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn't possible.

I think there are other approaches that do better than utilitarianism at its weak areas.

I don't see how hybrid theorists would solve the problem of things being "guesswork" either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.

Metaethically, hybrid theorists do need to figure out which theories apply where, and that isn't guesswork.

At the object level, it is quite possible, to a first approximation, to cash out your obligations as whatever society obliges you to do -- deontologists have a simpler problem to solve.

I still don't see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn't hold, because it applies to most or all moral views.

My principal argument is that it ain't necessarily so. You put forward, without any specific evidence, a version of events where deontology arises out of attempts to rationalise random intuitions. I put forward, without any specific evidence, a version of events where widespread deontology arises out of rules being defined socially, and people internalising them. My handwaving theory doesn't defeat yours, since they both have the same, minimal, support, but it does show that your theory doesn't have any unique status as the default or only theory of de facto deontology.

Comment author: Lukas_Gloor 02 May 2015 10:48:02AM *  0 points

I wasn't suggesting giving up on ethics, I was suggesting giving up on utilitarianism.

What I wrote concerned giving up on caring about suffering, which is very closely related to utilitarianism.

I think there are other approaches that do better than utilitarianism at its weak areas.

Maybe according to your core intuitions, but not for me as far as I know.

but it does show that your theory doesn't have any unique status as the default or only theory of de facto deontology.

But my main point was that deontology is too vague to serve as a theory that specifies how you would want to act in every possible situation, and that it runs into big problems (and lots of "guesswork") if you try to make it less vague. Someone pointed out that I'm misunderstanding what people's ethical systems are intended to do. Maybe, but I think that's exactly my point: People don't even think about what they would want to do in every possible situation because they're more interested in protecting certain status quos than in figuring out what it is that they actually want to accomplish. Is "protecting certain status quos" their true terminal value? Maybe, but how would they know if this question doesn't even occur to them? This is exactly what I meant by moral anti-epistemology: you believe things and follow rules because the alternative is daunting/complicated and possibly morally demanding.

The best objection to my view is indeed that I'm imposing arbitrary and unreasonable standards on what people "should" be thinking about. In the end, it is also arbitrary what you decide to call a terminal value, and which definition of terminal values you find relevant. For instance, whether it needs to be something that people reach on reflection, or whether it is simply what people tell you they care about. Are people who never engage in deep moral reasoning making a mistake? Or are they simply expressing their terminal value of wanting to avoid complicated and potentially daunting things because they're satisficers? That's entirely up to your interpretation. I think that a lot of these people, if you were to nudge them towards thinking more about the situation, would at least in some respect be grateful for that, and this, to me, is reason to consider deontology as something irrational with respect to a conception of terminal values that takes into account a certain degree of reflection about goals.

Comment author: ChaosMote 01 May 2015 03:59:57AM 2 points

Not necessarily. You are assuming that she has an explicit utility function, but that need not be the case.

Comment author: Lukas_Gloor 01 May 2015 09:32:31AM *  0 points

Good point. May I ask, is "explicit utility function" standard terminology, and if yes, is there a good reference somewhere that explains it? It took me a long time to realize the interesting difference between humans, who engage in moral philosophy and often can't tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don't understand the orthogonality thesis.
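
To make the contrast concrete, here is a minimal sketch (toy Python; "paperclipper_utility" and the option dictionaries are purely hypothetical, with invented numbers) of the difference I have in mind between an agent with an explicit utility function and a human whose "utility function" can at best be fitted to behaviour after the fact:

```python
# Toy agent with an *explicit* utility function: its goal is a written-down
# function of outcomes, and choosing is just taking the argmax over it.
def paperclipper_utility(outcome):
    return outcome["paperclips"]  # cares about exactly one thing

def choose(options, utility):
    return max(options, key=utility)

options = [
    {"paperclips": 10, "human_welfare": 5},
    {"paperclips": 3, "human_welfare": 9},
]
print(choose(options, paperclipper_utility))  # picks the 10-paperclip outcome

# A human, by contrast, has no such function written down anywhere; we can
# only try to fit one to observed choices after the fact, and different
# fitting procedures (or new dilemmas) can yield different answers.
```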

Comment author: TheAncientGeek 01 May 2015 08:03:34AM *  1 point

only matter of concern is hedonic tone. Hence my confusion about what you meant.

I don't think that fixes the problem, so I didn't think that the distinction was worth making. We can't objectively measure subjective feelings, so aggregating them across species is guesswork.

but at worst these questions require the stipulation of a finite number of tradeoff values. 

That sounds like guesswork to me.

In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision-criteria that cover all conceivable situations.

Inter-species aggregation comes in when you consider vegetarianism, vivisection, etc., which are uncontrived real-world issues.

I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.

Comment author: Lukas_Gloor 01 May 2015 09:19:17AM 0 points

That sounds like guesswork to me.

If you care about suffering, you don't stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness. Things being "arbitrary" or "guesswork" just means that the answer you're looking for depends on your own intuitions and cognitive machinery. This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn't possible.

I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.

I don't see how hybrid theorists would solve the problem of things being "guesswork" either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.

I still don't see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn't hold, because it applies to most or all moral views.

To give more support to my position: Joshua Greene has done a lot of interesting work that suggests that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.

Comment author: TheAncientGeek 29 April 2015 06:08:35PM *  1 point

Do you mean utility functions of different parts of your brain?

No, I mean combining utilities across individuals, species, etc.

Likewise, if you have a lot of strong deontological intuitions and don't want to just overwrite them with a more simple, consequentialist view, that's totally fine as well, as long as you understand what you're doing. 

You have missed my point entirely. I meant that it is actually difficult to make consequentialism work, and consequentialists solve the problem by taking it glibly ... your critique of deontology, IOW.

I'm only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions

Rightly. Most of the time they are following socially defined rules.

Comment author: Lukas_Gloor 29 April 2015 07:54:27PM *  -1 points

Ah, aggregation. This seems to be mainly a problem for what I would call preference utilitarianism, where you sum up utility functions over individuals. Outside of LW, the standard usage of utilitarianism refers to experiential utilitarianism, where the only matter of concern is hedonic tone. Hence my confusion about what you meant. There are still some tricky questions with that, e.g. how many seconds of intense depression in a 24-year-old human are worse than a chimpanzee being burned alive for 1 second, but at worst these questions require the stipulation of a finite number of tradeoff values. So your objection fails for the (arguably) most popular forms of utilitarianism.
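
To illustrate what I mean by "the stipulation of a finite number of tradeoff values", here is a minimal sketch (toy Python; the per-second weights are arbitrary placeholders, not claims about actual moral weights):

```python
# Once tradeoff values have been stipulated, cross-species hedonic
# aggregation reduces to arithmetic. The weights below are invented
# purely for illustration.
TRADEOFF_PER_SECOND = {"human_depression": -1.0, "chimp_burning": -30.0}

def hedonic_sum(experiences):
    # experiences: list of (kind, duration_in_seconds) tuples
    return sum(TRADEOFF_PER_SECOND[kind] * seconds for kind, seconds in experiences)

print(hedonic_sum([("human_depression", 45)]))  # -45.0
print(hedonic_sum([("chimp_burning", 1)]))      # -30.0, so 45s of depression comes out "worse" here
```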

In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision-criteria that cover all conceivable situations. If someone took deontology this seriously, I suspect that they too would run into aggregation problems of some sort somewhere, except if they block aggregation entirely (Taurek) and rely on the view that "numbers never count".

Comment author: TheAncientGeek 28 April 2015 09:16:59AM 2 points

No, most consequentialists have a very good idea of how they would deal with probabilistic decision-situations,

Whereas what it is bad at is combining utility functions.

Comment author: Lukas_Gloor 28 April 2015 10:23:29AM *  0 points

Do you mean utility functions of different parts of your brain? I agree. But no one says it's necessary to consider every single voice in your mind. If your internal democracy falls into a consequentialist dictatorship because somehow your most fundamental intuition is about altruism, that seems totally fine. Likewise, if you have a lot of strong deontological intuitions and don't want to just overwrite them with a more simple, consequentialist view, that's totally fine as well, as long as you understand what you're doing. I'm only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions; they think they somehow do the only right or altruistic thing, when this is non-obvious at best. The "as long as you understand what you're doing" of course also applies to consequentialists: it would be problematic if the main reason someone is a consequentialist is that she thinks utility functions ought to be simple/elegant. (Consequentialism doesn't necessarily have to be simple; complexity of value could well be consequentialist as well. I'm mainly talking about utilitarianism and closely related views here.)

Comment author: VoiceOfRa 26 April 2015 10:01:51PM 2 points

It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).

The same is true of most discussions of consequentialism and utility functions.

Comment author: Lukas_Gloor 26 April 2015 10:54:25PM 0 points

No, most consequentialists have a very good idea of how they would deal with probabilistic decision-situations; that's what consequentialism is good at. This is worked out to a much lesser extent in deontology.
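
As a rough sketch of what I mean (expected-value reasoning with made-up probabilities and utilities; none of the numbers are meant to be realistic), the consequentialist procedure for a probabilistic dilemma can be written down directly:

```python
# A consequentialist handles a probabilistic decision-situation by computing
# the expected utility of each available action and picking the maximum.
# Probabilities and utilities here are illustrative placeholders only.
actions = {
    "pull_lever": [(0.9, -1.0), (0.1, -5.0)],  # list of (probability, utility) pairs
    "do_nothing": [(1.0, -5.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "pull_lever" under these numbers
```

Writing down the analogous procedure for a rule like "do not kill", when every available option carries some probability of harm, is exactly the part that deontology leaves underspecified.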

I'm not saying that most forms of consequentialism aren't vague at all; if you interpreted me charitably, you would assume that I'm talking about a difference in degree.

An example of "letting people get away with not thinking things through": Consider the entire domain of population ethics. Why is this predominantly being discussed by consequentialists, where it is recognized as a huge problem-area? It's not like analogous difficulties wouldn't turn up in deontology if you went deep enough into the rabbit hole, but how many deontologists have gone there?

Comment author: TheAncientGeek 26 April 2015 02:08:07PM *  -2 points

I often got this as an objection to utilitarianism

I wasn't objecting to utilitarianism.

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped.

There could be in some cases, if people find out they didn't really believe their axiom after all.

Belief isn't the important criterion. The important criterion is whether person B can argue for or against what person A takes as axiomatic. How do you show objectively that a claim can't be argued for, and has to be assumed?

don't quite agree with the prominent LW-opinion that human values are complex.

Values are complex. Whether moral values are complex is another story.

still don't know what you think is bad about bad deontology.

It is often vague

That doesn't seem to be an intrinsic problem. You can make a set of rules as precise as you like. It's also not clear that the well-known alternatives fare better. Utilitarianism, in particular, works only in fairly constrained domains, where you're not comparing apples and oranges.

It contains discussion stoppers like "rights",

Arguably, that's a feature, not a bug. If people realised how insubstantial ethics is, they would have trouble sticking to it.

Comment author: Lukas_Gloor 26 April 2015 02:53:47PM *  0 points

I wasn't objecting to utilitarianism.

I know; my point referred to people using "ethics is from humans for humans" in a way that would also rule out transhumanism.

Belief isn't the important criterion. The important criterion is whether person B can argue for or against what person A takes as axiomatic. How do you show objectively that a claim can't be argued for, and has to be assumed?

The burden of proof is elsewhere: how do you overcome the is-ought distinction when you try to justify/argue for a claim? Edit: To rephrase this (don't know how this could get me downvotes, but I'm trying to make this clearer): if the arguments for the is-ought distinction, which seem totally sound, are correct, it is unclear how you could argue that person A's moral assumptions are incorrect, at least in cases where these assumptions are non-contradictory and not based on confused metaphysics.

Comment author: TheAncientGeek 25 April 2015 10:21:41AM *  1 point

The way most people use it,

Are you sure? That meaning wasn't obvious to me.

For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong; they just have different axioms.

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped. Having multiple epistemologies with equally good answers is something of a disaster.

No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.

I still don't know what you think is bad about bad deontology.

In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.

Comment author: Lukas_Gloor 25 April 2015 02:41:19PM *  -1 points

Are you sure? That meaning wasn't obvious to me.

I often got this as an objection to utilitarianism, the other premise being that utilitarianism is impractical for humans. I've talked to lots of people about ethics since I took high school philosophy classes; I study philosophy at university and have engaged in more than a hundred online discussions about ethics. The objection actually isn't that bad if you steelman it: maybe people are trying to say that they, as humans, care about many other things and would be overwhelmed with utilitarian obligations. (But there remains the question of whether they care terminally about these other things, or whether they would self-modify to a perfect utilitarian robot if given the chance.)

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped.

There could be in some cases, if people find out they didn't really believe their axiom after all. But it can just as well be that the starting assumptions really are axiomatic. I think that the idea that terminal values are hardwired in the human brain, and will converge if you just give an FAI good instructions to get them out, is mistaken. There are billions of different ways of doing the extrapolation, and they won't all output the same thing. At the end of the day, the buck does have to stop somewhere, and where else could that be than where a person, after long reflection and an understanding of what she is doing, concludes that x are her starting assumptions and that's it.

I don't quite agree with the prominent LW-opinion that human values are complex. What is complex are human moral intuitions. But no one is saying that you need to take every intuition into account equally. Humans are a very peculiar sort of agent in mind space: when you ask most people what their goal in life is, they do not know, or they give you an answer that they will take back as soon as you point out some counterintuitive implications of what they just said. I imagine that many AI-designs would be such that the AIs are always clearly aware of their goals, and thus feel no need to ever engage in genuine moral philosophy. Of course, people do have a utility function in the form of revealed preferences, what they would do if you placed them in all sorts of situations, but is that the thing we are interested in when we talk of terminal values? I don't think so! It should at least be on the table that some fraction of my brain's pandemonium of voices/intuitions is stronger than the other fractions, and that this fraction makes up what I consider the rational part of my brain and the core part of my moral self-identity, and that I would, upon reflection, self-modify to an efficient robot with simple values. Personally, I would do this, and I don't think I'm missing anything that would imply that I'm making any sort of mistake. Therefore, the view that all human values are necessarily complex seems mistaken to me.

Having multiple epistemologies with equally good answers is something of a disaster.

These different epistemologies have a lot in common. The exercise would always be "define your starting assumptions, then see which moves are goal-tracking, and which ones aren't". Ethical thought experiments, for instance, or distinguishing instrumental values from terminal ones, are things that you need to do either way if you think about what your goals are, e.g. how you would want to act in all possible decision-situations.

I still don't know what you think is bad about bad deontology.

  • It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).

  • It contains discussion stoppers like "rights", even though, when you taboo the term, that just means "harming is worse than not-helping", which is a weird way to draw a distinction, because when you're in pain, you primarily care about getting out of it and don't first ask what the reason for it was. Related: It gives the air of being "about the victim", but it's really more about the agent's own moral intuitions, and is thus not really other-regarding/impartial at all. This would be ok if deontologists were aware of it, but they often aren't. They object to utilitarianism on the grounds of it being "inhumane" rather than "too altruistic".

In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.

Yes, I see that now. I thought I was mainly preaching to the choir and didn't think the details of people's metaethical views would matter for the main thoughts in my original post. It felt to me like I was saying something at risk of being too trivial, but maybe I should have picked better examples. I agree that this comment does a good job of expressing what I was trying to get at.

Comment author: Lumifer 24 April 2015 03:02:47PM 3 points

Therefore, we should expect there to be a lot of moral anti-epistemology.

Um. Epistemology, generally speaking, assumes that there is something stable and objective to be known -- reality. Do you assume moral realism? We don't speak of epistemology (or anti-epistemology) of artistic preferences, do we?

Comment author: Lukas_Gloor 24 April 2015 03:37:56PM *  0 points

Did you read my third paragraph? I'm not assuming moral realism and I'm well aware of the issue you mention. I do think there is a meaningful way a person's reasoning about moral issues can be wrong, even under the assumption of anti-realism. Namely, if people use an argument of form f to argue for their desired conclusion, and yet they would reject other conclusions that follow from arguments of the same form f, it seems like they're deluding themselves. I'm not entirely sure the parallels to epistemology are strong enough to justify the analogy, but it seems worth thinking about.
