Comment author: Wei_Dai 03 June 2010 09:43:44PM 12 points

I can see why you might feel that way if this were just a technical flaw in CEV that could be fixed with a simple patch. But I've had a growing suspicion that the main philosophical underpinning of CEV, namely preference utilitarianism, is seriously wrong, and this story was meant to offer more evidence to that effect.

Comment author: wuwei 03 June 2010 11:18:33PM *  2 points

CEV is not preference utilitarianism, or any other first-order ethical theory. Rather, preference utilitarianism is the sort of thing that might be CEV's output.

Comment author: mattnewport 03 May 2010 11:31:05PM 4 points

The problem is using actions to infer terminal values. In order to determine your terminal values, you have to think about them; reflect on them. Probably a lot. So in order for the actions of a person to be a reliable indicator of her terminal values, she must have done some reflecting on what she actually values. For most people, this hasn't happened.

I disagree. People who believe they have thought about their terminal values are often the most confused about what they actually value. Human values as judged by observing how people act rather than by what they claim to think are more self-consistent and more universal than the values professed by people who think they have discovered their own terminal values through reflection. Your conscious beliefs are but a distorted echo of the real values embodied in your brain.

Comment author: wuwei 04 May 2010 05:08:07AM 0 points

Matt Simpson was talking about people who have in fact reflected on their values a lot. Why did you switch to talking about people who think they have reflected a lot?

What "someone actually values" or what their "terminal values" are seems to be ambiguous in this discussion. On one reading, it just means what motivates someone the most. In that case, your claims are pretty plausible.

On the other reading, which seems more relevant in this thread and the original comment, it means the terminal values someone should act on, which we might approximate as what they would value at the end of reflection. Switching back to people who have reflected a lot (not merely think they have), it doesn't seem all that plausible to suppose that people who have reflected a lot about their "terminal values" are often the most confused about them.

For the record, I'm perfectly happy to concede that in general, talk of what someone "actually values" or what their present "terminal values" are should be reserved for what in fact most motivates them. It is tempting to use that kind of talk to refer to what people should value, because it lets us point to existing mental structures that play a clear causal role in influencing actions. But I think it is ultimately only confusing, because those are the wrong mental structures to point to when analyzing rightness or shouldness.

Comment author: wuwei 27 April 2010 12:40:16AM *  16 points

I suppose I might count as someone who favors "organismal" preferences over confusing the metaphorical "preferences" of our genes with those of the individual. I think your argument against this is pretty weak.

You claim that favoring the "organismal" over the "evolutionary" fails to accurately identify our values in four cases, but I fail to see any problem with these cases.

  • I find no problem with upholding the human preference for foods which taste fatty, sugary and salty. (Note that, consistently applied, the "organismal" preference is for the fatty, sugary and salty taste, not for foods that are actually fatty, sugary and salty. E.g., we like diet Pepsi with Splenda almost as much as Pepsi, roughly in proportion to how well Splenda mimics the taste of sugar. We could even go one step further and drop the actual food part, valuing just the experience of [seemingly] eating fatty, sugary and salty foods.) This doesn't necessarily commit me to valuing an unhealthy diet all things considered, because we also have many other preferences, e.g. for our health, which may outweigh this true human value.
  • The next two cases (fear of snakes and enjoying violence) can be dealt with similarly.
  • The last one is a little trickier, but I think it can be addressed by a similar principle in which one value gets outweighed by another. In this case, it would be some higher-order value such as treating like cases alike. The difference here is that rather than being a competing value that outweighs the initial value, it is more like a constitutive value which nullifies the initial value. (Technically, I would prefer to talk here of principles which govern our values rather than of necessarily higher-order values.)

I thought your arguments throughout this post were similarly shallow and uncharitable to the side you were arguing against. For instance, you go on at length about how disagreements about value persist and how intuitions are inconsistent across cultures and history, but I don't see how this is supposed to be any more convincing than pointing out how many people in history have believed the earth is flat.

Okay, you've defeated the view that ethics is about the values all humans throughout history unanimously agree on. Now what about views that extrapolate not from perfectly consistent, unanimous and foundational intuitions or preferences, but from dynamics in human psychology that tend to shape initially inconsistent and incoherent intuitions into ones that are more consistent and coherent? The end result of such dynamics can be hard to predict when they are iteratively applied, and they can be misapplied in any given instance, in a way analogous to misapplications of the belief-forming dynamic of favoring the simplest hypothesis consistent with the evidence.

By the way, I don't mean to claim that your conclusion is obviously wrong. I think someone favoring my type of view about ethics has a heavy burden of proof that you hint at, perhaps even one that has been underappreciated here. I just don't think your arguments here provide any support for your conclusion.

It seems to me that when you try to provide illustrative examples of how opposing views fail, you end up merely attacking straw men. Perhaps you'd do better if you tried to establish that any opposing views must have some property in common and that such a property dooms those views to failure. Or that opposing views must go one of two mutually exclusive and exhaustive routes in response to some central dilemma and both routes doom them to failure.

I really would like to see the most precise and cogent version of your argument here as I think it could prompt some important progress in filling in the gaps present in the sort of ethical view I favor.

Comment author: wuwei 17 April 2010 05:02:53AM *  7 points

Hi.

I've read nearly everything on Less Wrong, but except for a couple of months last summer, I generally don't comment because a) I feel I don't have time, b) my perfectionist standards make me anxious about meeting and maintaining the high standard of discussion here, and c) very often someone has either already said what I would have wanted to say or I anticipate from experience that someone soon will.

In response to That Magical Click
Comment author: wuwei 22 January 2010 11:58:04PM 2 points

There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.

I can find a number of blog posts from you clearly laying out the arguments in favor of each of those clicks except the consequentialism/utilitarianism one.

What do you mean by "consequentialism" and "utilitarianism" and why do you think they are not just right but obviously right?

Comment author: Yorick_Newsome 02 December 2009 11:06:32AM *  2 points

Big Edit: Jack formulated my ideas better, so see his comment.
This was the original: The fact that the universe hasn't been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenarios is most likely? Related question: If we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace's Demon?

Comment author: wuwei 03 December 2009 04:14:43AM 2 points

d) should be changed to the sparseness of intelligent aliens, together with limits on how fast even a superintelligence can extend its sphere of influence.

Comment author: Eliezer_Yudkowsky 21 November 2009 01:34:01AM 0 points

They all sound true to me.

Comment author: wuwei 21 November 2009 03:27:57AM 1 point

Interesting, what about either of the following:

A) If X should do A, then it is rational for X to do A.

B) If it is rational for X to do A, then X should do A.

Comment author: Eliezer_Yudkowsky 18 November 2009 02:24:24AM 5 points

Correct. I'm a moral cognitivist; "should" statements have truth-conditions. It's just that very few possible minds care whether should-statements are true or not; most possible minds care about whether alien statements (like "leads-to-maximum-paperclips") are true or not. They would agree with us on what should be done; they just wouldn't care, because they aren't built to do what they should. They would similarly agree with us that their morals are pointless, but would be concerned with whether their morals are justified-by-paperclip-production, not whether their morals are pointless. And under ordinary circumstances, of course, they would never formulate - let alone bother to compute - the function we name "should" (or the closely related functions "justifiable" or "arbitrary").

Comment author: wuwei 21 November 2009 01:29:15AM *  3 points

I'm a moral cognitivist too but I'm becoming quite puzzled as to what truth-conditions you think "should" statements have. Maybe it would help if you said which of these you think are true statements.

1) Eliezer Yudkowsky should not kill babies.

2) Babyeating aliens should not kill babies.

3) Sharks should not kill babies.

4) Volcanoes should not kill babies.

5) Should not kill babies. (sic)

The meaning of "should not" in 2 through 5 is intended to be the same as the common usage of the words in 1.

Comment author: Eliezer_Yudkowsky 18 November 2009 02:58:48AM 2 points

You do agree that you and Greene are actually saying the same thing, yes?

I don't think we anticipate different experimental results. We do, however, seem to think that people should do different things.

Comment author: wuwei 18 November 2009 04:20:42AM 0 points

I don't think we anticipate different experimental results.

I find that quite surprising to hear. Wouldn't disagreements about meaning generally cash out in some sort of difference in experimental results?

Comment author: Eliezer_Yudkowsky 18 November 2009 02:24:24AM 5 points

Correct. I'm a moral cognitivist; "should" statements have truth-conditions. It's just that very few possible minds care whether should-statements are true or not; most possible minds care about whether alien statements (like "leads-to-maximum-paperclips") are true or not. They would agree with us on what should be done; they just wouldn't care, because they aren't built to do what they should. They would similarly agree with us that their morals are pointless, but would be concerned with whether their morals are justified-by-paperclip-production, not whether their morals are pointless. And under ordinary circumstances, of course, they would never formulate - let alone bother to compute - the function we name "should" (or the closely related functions "justifiable" or "arbitrary").

Comment author: wuwei 18 November 2009 04:14:08AM *  1 point

On your analysis of should, paperclip maximizers should not maximize paperclips. Do you think this is a more useful characterization of 'should' than one in which we should be moral and rational, etc., and paperclip maximizers should maximize paperclips?
