wuwei
wuwei has not written any posts yet.

Matt Simpson was talking about people who have in fact reflected on their values a lot. Why did you switch to talking about people who think they have reflected a lot?
What "someone actually values" or what their "terminal values" are seems to be ambiguous in this discussion. On one reading, it just means what motivates someone the most. In that case, your claims are pretty plausible.
On the other reading, which seems more relevant in this thread and the original comment, it means the terminal values someone should act on, which we might approximate as what they would value at the end of reflection. Switching back to people who have reflected a lot...
I suppose I might count as someone who favors "organismal" preferences over confusing the metaphorical "preferences" of our genes with those of the individual. I think your argument against this is pretty weak.
You claim that favoring the "organismal" over the "evolutionary" fails to accurately identify our values in four cases, but I fail to see any problem with these cases.
Hi.
I've read nearly everything on Less Wrong, but except for a couple of months last summer I generally don't comment because a) I feel I don't have time, b) my perfectionist standards make me anxious about meeting and maintaining the high standards of discussion here, and c) very often someone has either already said what I would have wanted to say, or I anticipate from experience that someone will very soon.
There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.
I can find a number of blog posts from you clearly laying out the arguments in favor of each of those clicks except the consequentialism/utilitarianism one.
What do you mean by "consequentialism" and "utilitarianism" and why do you think they are not just right but obviously right?
d) should be changed to the sparseness of intelligent aliens and limits to how fast even a superintelligence can extend its sphere of influence.
Interesting. What about either of the following:
A) If X should do A, then it is rational for X to do A.
B) If it is rational for X to do A, then X should do A.
I'm a moral cognitivist too, but I'm becoming quite puzzled as to what truth-conditions you think "should" statements have. Maybe it would help if you said which of these you think are true statements.
1) Eliezer Yudkowsky should not kill babies.
2) Babyeating aliens should not kill babies.
3) Sharks should not kill babies.
4) Volcanoes should not kill babies.
5) Should not kill babies. (sic)
The meaning of "should not" in 2 through 5 is intended to be the same as the common usage of the words in 1.
I don't think we anticipate different experimental results.
I find that quite surprising to hear. Wouldn't disagreements about meaning generally cash out in some sort of difference in experimental results?
On your analysis of "should", paperclip maximizers should not maximize paperclips. Do you think this is a more useful characterization of "should" than one in which we should be moral and rational, etc., and paperclip maximizers should maximize paperclips?
CEV is not preference utilitarianism, or any other first-order ethical theory. Rather, preference utilitarianism is the sort of thing that might be CEV's output.