Comment author: ChaosMote 01 May 2015 03:59:57AM 2 points [-]

Not necessarily. You are assuming that she has an explicit utility function, but that need not be the case.

Comment author: Lukas_Gloor 01 May 2015 09:32:31AM *  0 points [-]

Good point. May I ask, is "explicit utility function" standard terminology, and if so, is there a good reference somewhere that explains it? It took me a long time until I realized the interesting difference between humans, who engage in moral philosophy and often can't tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don't understand the orthogonality thesis.

Comment author: TheAncientGeek 01 May 2015 08:03:34AM *  1 point [-]

only matter of concern is hedonic tone. Hence my confusion about what you meant.

I don't think that fixes the problem, so I didn't think that the distinction was worth making. We can't objectively measure subjective feelings, so aggregating them across species is guesswork.

but at worst these questions require the stipulation of a finite number of tradeoff values. 

That sounds like guesswork to me,

In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision-criteria that cover all conceivable situations

Inter-species aggregation comes in when you are considering vegetarianism, vivisection, etc., which are uncontrived real-world issues.

I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.

Comment author: Lukas_Gloor 01 May 2015 09:19:17AM 0 points [-]

That sounds like guesswork to me,

If you care about suffering, you don't stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness. Things being "arbitrary" or "guesswork" just means that the answer you're looking for depends on your own intuitions and cognitive machinery. This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn't possible.

I don't think deontology necessarily does a lot better -- I am actually a hybrid theorist -- but I don't think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.

I don't see how hybrid theorists would solve the problem of things being "guesswork" either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.

I still don't see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn't hold, because it applies to most or all moral views.

To give more support to my position: Joshua Greene has done a lot of interesting work that suggests that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.

Comment author: TheAncientGeek 29 April 2015 06:08:35PM *  1 point [-]

Do you mean utility functions of different parts of your brain?

No, I mean combining utilities across individuals, species, etc.

Likewise, if you have a lot of strong deontological intuitions and don't want to just overwrite them with a more simple, consequentialist view, that's totally fine as well, as long as you understand what you're doing. 

You have missed my point entirely. I meant that it is actually difficult to make consequentialism work, and consequentialists solve the problem by taking it glibly ... your critique of deontology, IOW.

I'm only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions

Rightly. Most of the time they are following socially defined rules.

Comment author: Lukas_Gloor 29 April 2015 07:54:27PM *  -1 points [-]

Ah, aggregation. This seems to be mainly a problem for what I would call preference utilitarianism, where you sum up utility functions over individuals. Outside of LW, the standard usage of utilitarianism refers to experiential utilitarianism, where the only matter of concern is hedonic tone. Hence my confusion about what you meant. There are still some tricky questions with that, e.g. how many seconds of intense depression in a 24-year-old human are worse than a chimpanzee being burned alive for one second, but at worst these questions require the stipulation of a finite number of tradeoff values. So your objection fails for the (arguably) most popular forms of utilitarianism.
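
Purely as an illustrative sketch (not part of the original comment): the "stipulation of a finite number of tradeoff values" could be pictured as a weighted sum over experience-moments, where the species weights are exactly the numbers one would have to stipulate. All names and numbers below are hypothetical placeholders, not claims about the correct values.

```python
# Hypothetical sketch of experiential (hedonic) aggregation with stipulated
# tradeoff values. Choosing the weights and intensities is precisely the
# "guesswork" step being debated above.

SPECIES_WEIGHTS = {        # stipulated moral weight per experience-moment
    "human": 1.0,
    "chimpanzee": 0.8,     # placeholder value, not a recommendation
}

def hedonic_utility(experiences):
    """Sum weighted hedonic tone over (species, intensity, duration_seconds) triples.

    Negative intensity denotes suffering, positive denotes pleasure.
    """
    return sum(
        SPECIES_WEIGHTS[species] * intensity * duration
        for species, intensity, duration in experiences
    )

# Example: intense depression in a human vs. a chimpanzee being burned alive
# for one second, under these (arbitrary) stipulations.
depression = [("human", -5.0, 30.0)]        # 30 seconds of intense depression
burning = [("chimpanzee", -100.0, 1.0)]     # 1 second of being burned alive

print(hedonic_utility(depression))  # -150.0
print(hedonic_utility(burning))     # -80.0
```

Whether picking weights like 0.8 is principled or mere guesswork is exactly the point of contention in this exchange.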

In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision-criteria that cover all conceivable situations. If someone took deontology this seriously, I suspect that they too would run into aggregation problems of some sort somewhere, except if they block aggregation entirely (Taurek) and rely on the view that "numbers never count".

Comment author: TheAncientGeek 28 April 2015 09:16:59AM 2 points [-]

No, most consequentialists have a very good idea of how they would deal with probabilistic decision-situations,

Whereas what it is bad at is combining utility functions.

Comment author: Lukas_Gloor 28 April 2015 10:23:29AM *  0 points [-]

Do you mean utility functions of different parts of your brain? I agree. But no one says it's necessary to consider every single voice in your mind. If your internal democracy falls into a consequentialist dictatorship because somehow your most fundamental intuition is about altruism, that seems totally fine. Likewise, if you have a lot of strong deontological intuitions and don't want to just overwrite them with a more simple, consequentialist view, that's totally fine as well, as long as you understand what you're doing. I'm only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions; they think they somehow do the only right or altruistic thing, when this is non-obvious at best. The "as long as you understand what you're doing" of course also applies to consequentialists: it would be problematic if the main reason someone is a consequentialist is that she thinks utility functions ought to be simple/elegant. (Consequentialism doesn't necessarily have to be simple; complexity of value could well be consequentialist as well. I'm mainly talking about utilitarianism and closely related views here.)

Comment author: VoiceOfRa 26 April 2015 10:01:51PM 2 points [-]

It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).

The same is true of most discussions of consequentialism and utility functions.

Comment author: Lukas_Gloor 26 April 2015 10:54:25PM 0 points [-]

No, most consequentialists have a very good idea of how they would deal with probabilistic decision-situations; that's what consequentialism is good at. This is worked out to a much lesser extent in deontology.

I'm not saying that most forms of consequentialism aren't vague at all; if you interpreted me charitably, you would assume that I'm talking about a difference in degree.

An example of "letting people get away with not thinking things through": Consider the entire domain of population ethics. Why is this predominantly being discussed by consequentialists, where it is recognized as a huge problem-area? It's not like analogous difficulties wouldn't turn up in deontology if you went deep enough into the rabbit hole, but how many deontologists have gone there?

Comment author: TheAncientGeek 26 April 2015 02:08:07PM *  -2 points [-]

I often got this as an objection to utilitarianism

I wasn't objecting to utilitarianism.

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped.

There could be in some cases, if people find out they didn't really believe their axiom after all.

Belief isn't the important criterion. The important criterion is whether person B can argue for or against what person A takes as axiomatic. How do you show objectively that a claim can't be argued for and has to be assumed?

I don't quite agree with the prominent LW-opinion that human values are complex.

Values are complex. Whether moral values are complex is another story.

I still don't know what you think is bad about bad deontology.

It is often vague

That doesn't seem to be an intrinsic problem. You can make a set of rules as precise as you like. It's also not clear that the well-known alternatives fare better. Utilitarianism, in particular, works only in fairly constrained domains, where you're not comparing apples and oranges.

It contains discussion stoppers like "rights",

Arguably, that's a feature, not a bug. If people realised how insubstantial ethics is, they would have trouble sticking to it.

Comment author: Lukas_Gloor 26 April 2015 02:53:47PM *  0 points [-]

I wasn't objecting to utilitarianism.

I know, my point referred to people using "ethics is from humans for humans" in a way that would also rule out transhumanism.

Belief isn't the important criterion. The important criterion is whether person B can argue for or against what person A takes as axiomatic. How do you show objectively that a claim can't be argued for and has to be assumed?

The burden of proof is elsewhere: how do you overcome the is-ought distinction when you try to justify/argue for a claim? Edit: To rephrase this (I don't know how this could get me downvotes, but I'm trying to make it more clear), if the arguments for the is-ought distinction, which seem totally sound, are correct, it is unclear how you could argue for person A's moral assumptions being incorrect, at least in cases where these assumptions are non-contradicting and not based on confused metaphysics.

Comment author: TheAncientGeek 25 April 2015 10:21:41AM *  1 point [-]

The way most people use it,

Are you sure? That meaning wasn't obvious to me.

For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong; they just have different axioms.

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped. Having multiple epistemologies with equally good answers is something of a disaster.

No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.

I still don't know what you think is bad about bad deontology.

In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.

Comment author: Lukas_Gloor 25 April 2015 02:41:19PM *  -1 points [-]

Are you sure? That meaning wasn't obvious to me.

I often got this as an objection to utilitarianism, the other premise being that utilitarianism is impractical for humans. I've talked to lots of people about ethics since I took high school philosophy classes; I study philosophy at university and have engaged in more than a hundred online discussions about ethics. The objection actually isn't that bad if you steelman it: maybe people are trying to say that they, as humans, care about many other things and would be overwhelmed with utilitarian obligations. (But there remains the question whether they care terminally about these other things, or whether they would self-modify to a perfect utilitarian robot if given the chance.)

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped.

There could be in some cases, if people find out they didn't really believe their axiom after all. But it can just as well be that the starting assumptions really are axiomatic. I think that the idea that terminal values are hardwired in the human brain, and will converge if you just give an FAI good instructions to get them out, is mistaken. There are billions of different ways of doing the extrapolation, and they won't all output the same result. At the end of the day, the buck does have to stop somewhere, and where else could that be than the point where a person, after long reflection and an understanding of what she is doing, concludes that x are her starting assumptions and that's it.

I don't quite agree with the prominent LW-opinion that human values are complex. What is complex are human moral intuitions. But no one is saying that you need to take every intuition into account equally. Humans are a very peculiar sort of agent in mind space: when you ask most people what their goal in life is, they do not know, or they give you an answer that they will take back as soon as you point out some counterintuitive implications of what they just said. I imagine that many AI-designs would be such that the AIs are always clearly aware of their goals, and thus feel no need to ever engage in genuine moral philosophy. Of course, people do have a utility-function in the form of revealed preferences, what they would do if you placed them in all sorts of situations, but is that the thing we are interested in when we talk of terminal values? I don't think so! It should at least be on the table that some fraction of my brain's pandemonium of voices/intuitions is stronger than the other fractions, and that this fraction makes up what I consider the rational part of my brain and the core part of my moral self-identity, and that I would, upon reflection, self-modify to an efficient robot with simple values. Personally I would do this, and I don't think I'm missing anything that would imply that I'm making any sort of mistake. Therefore, the view that all human values are necessarily complex seems mistaken to me.

Having multiple epistemologies with equally good answers is something of a disaster.

These different epistemologies have a lot in common. The exercise would always be "define your starting assumptions, then see which moves are goal-tracking and which ones aren't". Ethical thought experiments, for instance, or distinguishing instrumental values from terminal ones, are things that you need to do either way if you think about what your goals are, e.g. how you would want to act in all possible decision-situations.

I still don't know what you think is bad about bad deontology.

  • It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).

  • It contains discussion stoppers like "rights", even though, when you taboo the term, that just means "harming is worse than not-helping", which is a weird way to draw a distinction, because when you're in pain, you primarily care about getting out of it and don't first ask what the reason for it was. Related: It gives the air of being "about the victim", but it's really more about the agent's own moral intuitions, and is thus not really other-regarding/impartial at all. This would be ok if deontologists were aware of it, but they often aren't. They object to utilitarianism on the grounds of it being "inhumane", instead of "too altruistic".

In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.

Yes, I see that now. I thought I was mainly preaching to the choir and didn't think the details of people's metaethical views would matter for the main thoughts in my original post. It felt to me like I was saying something at risk of being too trivial, but maybe I should have picked better examples. I agree that this comment does a good job at what I was trying to get at.

Comment author: Lumifer 24 April 2015 03:02:47PM 3 points [-]

Therefore, we should expect there to be a lot of moral anti-epistemology.

Um. Epistemology, generally speaking, assumes that there is something stable and objective to be known -- reality. Do you assume moral realism? We don't speak of epistemology (or anti-epistemology) of artistic preferences, do we?

Comment author: Lukas_Gloor 24 April 2015 03:37:56PM *  0 points [-]

Did you read my third paragraph? I'm not assuming moral realism and I'm well aware of the issue you mention. I do think there is a meaningful way a person's reasoning about moral issues can be wrong, even under the assumption of anti-realism. Namely, if people use an argument of form f to argue for their desired conclusion, and yet they would reject other conclusions that follow from the argument of form f, it seems like they're deluding themselves. I'm not entirely sure the parallels to epistemology are strong enough to justify the analogy, but it seems worth thinking about it.

Comment author: TheAncientGeek 24 April 2015 11:26:13AM *  0 points [-]

morality is from humans for humans

What's wrong with that? Not enough concern for non human animals?

As long as we always make sure to clarify what it is that we're trying to accomplish, it would seem possible to differentiate between valid and invalid arguments in regard to the specified goal.

Does that mean what counts as good epistemology in the context of ethics is specific to the contexts of ethics?

Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics)

All variations on deontology?

Comment author: Lukas_Gloor 24 April 2015 12:41:57PM *  -2 points [-]

What's wrong with that? Not enough concern for non human animals?

The way most people use it, the slogan would also put all transhumanist ideas outside the space of things to consider. I feel that it is "wrong" in that it prematurely limits your search space, but I guess if someone really did just care about how humans in their current set-up interact with each other, ok...

Does that mean what counts as good epistemology in the context of ethics is specific to the contexts of ethics?

Yes, and I find this non-trivial because it means that "ethics" is too broad for there to be one all-encompassing methodology. For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong; they just have different axioms. The situation seems different when you look at science; there people seem to agree on the criteria for a good scientific explanation (well, at least in most cases).

All variations on deontology?

No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.

Moral Anti-Epistemology

0 Lukas_Gloor 24 April 2015 03:30AM

This post is a half-baked idea that I'm posting here in order to get feedback and further brainstorming. There seem to be some interesting parallels between epistemology and ethics.

Part 1: Moral Anti-Epistemology

"Anti-Epistemology" refers to bad rules of reasoning that exist not because they are useful/truth-tracking, but because they are good at preserving people's cherished beliefs about the world. But cherished beliefs don't just concern factual questions, they also very much concern moral issues. Therefore, we should expect there to be a lot of moral anti-epistemology. 

Tradition as a moral argument, tu quoque, opposition to the use of thought experiments, the noncentral fallacy, slogans like "morality is from humans for humans" – all these are instances of the same general phenomenon. This is trivial and doesn't add much to the already well-known fact that humans often rationalize, but it does add the memetic perspective: Moral rationalizations sometimes concern more than a singular instance, they can affect the entire way people reason about morality. And like with religion or pseudoscience in epistemology about factual claims, there could be entire memeplexes centered around moral anti-epistemology. 

A complication is that metaethics is complicated; it is unclear what exactly moral reasoning is, and whether everyone is trying to do the same thing when they engage in what they think of as moral reasoning. Labelling something "moral anti-epistemology" would suggest that there is a correct way to think about morality. Is there? As long as we always make sure to clarify what it is that we're trying to accomplish, it would seem possible to differentiate between valid and invalid arguments in regard to the specified goal. And this is where moral anti-epistemology might cause trouble.

Are there reasons to assume that certain popular ethical beliefs are a result of moral anti-epistemology? Deontology comes to mind (mostly because it's my usual suspect when it comes to odd reasoning in ethics), but what is it about deontology that relies on "faulty moral reasoning", if indeed there is something about it that does? How much of it relies on the noncentral fallacy, for instance? Is Yvain's personal opinion that "much of deontology is just an attempt to formalize and justify this fallacy" correct? The perspective of moral anti-epistemology would suggest that it is the other way around: Deontology might be the by-product of people applying the noncentral fallacy, which is done because it helps protect cherished beliefs. Which beliefs would that be? Perhaps the strongly felt intuition that "some things are JUST WRONG", which doesn't handle fuzzy concepts/boundaries well and therefore has to be combined with a dogmatic approach. It sounds somewhat plausible, but also really speculative.

Part 2: Memetics

A lot of people are skeptical towards these memetic just-so stories. They argue that the points made are either too trivial, or too speculative. I have the intuition that a memetic perspective often helps clarify things, and my thoughts about applying the concept of anti-epistemology to ethics seemed like an insight, but I have a hard time coming up with how my expectations about the world have changed because of it. What, if anything, is the value of the idea I just presented? Can I now form a prediction to test whether deontologists want to primarily formalize and justify the noncentral fallacy, or whether they instead want to justify something else by making use of the noncentral fallacy?

Anti-epistemology is a more general model of what is going on in the world than rationalizations are, so it should all reduce to rationalizations in the end. So it shouldn't be worrying that I don't magically find more stuff. Perhaps my expectations were too high and I should be content with having found a way to categorize moral rationalizations, the knowledge of which will make me slightly quicker at spotting or predicting them.

Thoughts?
