Comments

wuwei · 14y

CEV is not preference utilitarianism, or any other first-order ethical theory. Rather, preference utilitarianism is the sort of thing that might be CEV's output.

wuwei · 14y

Matt Simpson was talking about people who have in fact reflected on their values a lot. Why did you switch to talking about people who think they have reflected a lot?

What "someone actually values" or what their "terminal values" are seems to be ambiguous in this discussion. On one reading, it just means what motivates someone the most. In that case, your claims are pretty plausible.

On the other reading, which seems more relevant in this thread and in the original comment, it means the terminal values someone should act on, which we might approximate as what they would value at the end of reflection. Switching back to people who have in fact reflected a lot (not merely people who think they have), it doesn't seem all that plausible that they are often the most confused about their "terminal values".

For the record, I'm perfectly happy to concede that, in general, speaking of what someone "actually values" or of their present "terminal values" should be reserved for what in fact most motivates them. It is tempting to use that kind of talk to refer to what people should value, because it lets us point to existing mental structures that play a clear causal role in influencing actions, but I think it is ultimately only confusing, because those are the wrong mental structures to point to when analyzing rightness or shouldness.

wuwei · 14y

I suppose I might count as someone who favors "organismal" preferences and objects to confusing the metaphorical "preferences" of our genes with those of the individual. I think your argument against this is pretty weak.

You claim that favoring the "organismal" over the "evolutionary" fails to accurately identify our values in four cases, but I fail to see any problem with these cases.

  • I find no problem with upholding the human preference for foods that taste fatty, sugary and salty. (Note that, consistently applied, the "organismal" preference would be for the fatty, sugary and salty taste, not for foods that are actually fatty, sugary and salty. E.g., we like drinking diet Pepsi with Splenda almost as much as Pepsi, roughly in proportion to how well Splenda mimics the taste of sugar. We could even go one step further and drop the actual food part, valuing just the experience of [seemingly] eating fatty, sugary and salty foods.) This doesn't necessarily commit me to valuing an unhealthy diet all things considered, because we also have many other preferences, e.g. for our health, which may outweigh this true human value.
  • The next two cases (fear of snakes and enjoying violence) can be dealt with similarly.
  • The last one is a little trickier but I think it can be addressed by a similar principle in which one value gets outweighed by a different value. In this case, it would be some higher-order value such as treating like cases alike. The difference here is that rather than being a competing value that outweighs the initial value, it is more like a constitutive value which nullifies the initial value. (Technically, I would prefer to talk here of principles which govern our values rather than necessarily higher order values.)

I thought your arguments throughout this post were similarly shallow and uncharitable to the side you were arguing against. For instance, you go on at length about how disagreements about value are present and intuitions are not consistent across cultures and history, but I don't see how this is supposed to be any more convincing than talking about how many people in history have believed the earth is flat.

Okay, you've defeated the view that ethics is about the values all humans throughout history unanimously agree on. But what about views that extrapolate not from perfectly consistent, unanimous and foundational intuitions or preferences, but from dynamics in human psychology that tend to shape initially inconsistent and incoherent intuitions into more consistent and coherent ones? The end result of iterating such dynamics can be hard to predict, and any given application of them can be mistaken, just as any given application of the belief-level dynamic of favoring the simplest hypothesis consistent with the evidence can be mistaken.
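
To make the kind of dynamic I have in mind a bit more concrete, here is a minimal toy sketch. It is purely my own illustration, not anything from your post: the options, the weights and the particular revision rule are all made up. It represents intuitions as weighted pairwise preferences and iterates a simple dynamic, revising away the weakest intuition in any preference cycle, until the remaining set is coherent.

```python
# Toy model (purely illustrative; all names, weights and the revision rule
# are hypothetical): intuitions are weighted pairwise preferences, and the
# "dynamic" repeatedly revises away the weakest intuition in any preference
# cycle until the remaining preferences are coherent (contain no cycle).

from itertools import permutations


def find_cycle(prefs):
    """Return the edges of some preference cycle, or None if prefs are acyclic."""
    options = {x for pair in prefs for x in pair}
    for length in range(2, len(options) + 1):
        for candidate in permutations(options, length):
            edges = list(zip(candidate, candidate[1:] + (candidate[0],)))
            if all(edge in prefs for edge in edges):
                return edges
    return None


def reflect(strength):
    """Iterate the dynamic: drop the weakest preference in each cycle found."""
    prefs = set(strength)
    while (cycle := find_cycle(prefs)) is not None:
        weakest = min(cycle, key=lambda edge: strength[edge])
        prefs.remove(weakest)  # revise the least strongly held intuition
    return prefs


# Inconsistent starting intuitions: A > B, B > C, C > A (a preference cycle),
# held with different strengths.
strength = {("A", "B"): 0.9, ("B", "C"): 0.6, ("C", "A"): 0.4}
print(reflect(strength))  # C > A, the weakest intuition, gets revised away
```

Even in this trivial model, which coherent set you end up with depends on the relative strengths and on which cycle gets repaired first; that is the sense in which the end result of iterating such a dynamic can be hard to predict in advance, and any particular revision can be a misapplication.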

By the way, I don't mean to claim that your conclusion is obviously wrong. I think someone favoring my type of view about ethics has a heavy burden of proof that you hint at, perhaps even one that has been underappreciated here. I just don't think your arguments here provide any support for your conclusion.

It seems to me that when you try to provide illustrative examples of how opposing views fail, you end up merely attacking straw men. Perhaps you'd do better if you tried to establish that all opposing views must share some property and that this property dooms them to failure, or that opposing views must take one of two mutually exclusive and exhaustive routes in response to some central dilemma and that both routes doom them to failure.

I really would like to see the most precise and cogent version of your argument here as I think it could prompt some important progress in filling in the gaps present in the sort of ethical view I favor.

wuwei · 14y

Hi.

I've read nearly everything on Less Wrong, but except for a couple of months last summer I generally don't comment because a) I feel I don't have time, b) my perfectionist standards make me anxious about meeting and maintaining the high standards of discussion here, and c) very often someone has already said what I would have wanted to say, or I anticipate from experience that someone soon will.

wuwei · 14y

"There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click."

I can find a number of blog posts from you clearly laying out the arguments in favor of each of those clicks except the consequentialism/utilitarianism one.

What do you mean by "consequentialism" and "utilitarianism" and why do you think they are not just right but obviously right?

wuwei · 14y

d) should be changed to the sparseness of intelligent aliens and limits to how fast even a superintelligence can extend its sphere of influence.

wuwei · 14y

Interesting. What about either of the following:

A) If X should do A, then it is rational for X to do A.

B) If it is rational for X to do A, then X should do A.

wuwei · 14y

I'm a moral cognitivist too, but I'm becoming quite puzzled as to what truth conditions you think "should" statements have. Maybe it would help if you said which of these you think are true statements.

1) Eliezer Yudkowsky should not kill babies.

2) Babyeating aliens should not kill babies.

3) Sharks should not kill babies.

4) Volcanoes should not kill babies.

5) Should not kill babies. (sic)

The meaning of "should not" in 2 through 5 is intended to be the same as the common usage of the words in 1.

wuwei · 14y

"I don't think we anticipate different experimental results."

I find that quite surprising to hear. Wouldn't disagreements about meaning generally cash out in some sort of difference in experimental results?

wuwei · 14y

On your analysis of should, paperclip maximizers should not maximize paperclips. Do you think this is a more useful characterization of 'should' than one in which we should be moral and rational, etc., and paperclip maximizers should maximize paperclips?
