Esar comments on Rationality Quotes November 2012 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Is this a possible use of 'CEV'? So far as I understand CEV, it's not possible that it could change: our CEV is what we would want given all the correct moral arguments and all the information. Assuming that 'all the information' and 'all the correct moral arguments' are constants, how could the CEV of one society differ from that of another?
The only way I can think of is if the two societies are composed of fundamentally different kinds of beings. But the idea of moral progress you describe assumes that this is not the case.
Yes. Society's behaviors and their CEV can get closer together without the CEV changing at all. Also note that CEV<CultureX_2003> is a (very slightly) different thing from CEV<CultureX_2004>, even though neither of those "CEVs" changes at all.
A potential criticism of army's definition is that it allows for "cultural wireheading" and as such would be a lost purpose if "moral progress" were substituted in as an all-purpose goal or measure of achievement. (That said, I've never really thought of "moral progress" as that-which-should-be-optimised anyhow.)
Then why is "moral progress" a useful concept?
It describes how to compute that-which-should-be-optimized.
EDIT: Replied to wrong message. (Curse my Android!)
Shoes aren't that-which-should-be-optimized either, but that doesn't mean that the concept of shoe is not useful.
So we're not saying that the CEV of a culture changes (this is a constant), but that the culture's actual moral practices and reasoning can change in relation to its CEV. And change such that it is closer or further away. Do I have that right?
Presumably, we wouldn't want to optimize moral progress, but rather morality.
The CEV of a culture changes (a little bit) every day. CEV<CultureX_specific_time> is a constant. This is because humans (and groups of humans) aren't stable, consistent optimisers. From what I understand, the CEV of a culture is relatively stable, certainly more stable than the culture itself. Nevertheless it is not fixed. We, all things considered and collectively, want (very nearly tautologically) for our CEV to be stable, because that (approximately) maximises our current CEV. We just aren't that consistent.
That is one way in which the previously quoted proposition could be valid, yes.
I want to optimise whatever my preferences are. Morality seems to get a weight in there someplace.
I thought the whole point of CEV was to extrapolate forwards in time towards the ultimate reflectively-consistent set of values to formulate one single coherent utility function (with multiple parameters and variables, of course) that represents the optimal equilibrium of all that humans would want if they were exactly as they would want to be and would want exactly that which they would wish to want.
This reminds me more of CAV (Coherent Aggregated Volition) than CEV. CEV is, IIRC, intended as a bootstrap towards "Whatever humans would collectively find the best possible optimization after infinite re-evaluations", if any such meta-ethics exists.
The Coherent Extrapolated Volition of one group of humans is not the same thing as the Coherent Extrapolated Volition of another group of humans. Human populations change and even evolve over time due to forces that are not carefully constructed to move the population in the same direction as the CEV of their ancestors, and so later generations will not have the same CEV as previous ones.
Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.
It depends on how you define "humans", but considering how old some of the references to the Golden Rule are, at least some of our utility function is older than most civilizations. Do you have any proof that previous generations were fundamentally different from us, and not, like most (all?) humans today, confused about how to implement their utility function? (If we give the poor healthcare, they won't have an incentive to work!)
Well... IMO, not counting psychopaths as human amounts to a no-true-Scotsman fallacy.
I was referring to extinct species and subspecies of human. Of course psychopaths are human, but AFAIK they have always been a small minority.
The existence of blind people is not usually taken to disprove "human beings have sight".
The no-true-Scotsman fallacy applies to an argument when it excludes particular cases by rhetoric rather than for objective reasons. It does not apply to any particular drawing of category boundaries on its own.
I've always interpreted no-true-Scotsman as warning about the dangers of arguing by definition. At the very least, saying psychopaths are not human runs the risk of being argument by definition.
Well, I'd say it depends on the complexity of those objective reasons. “The way to carve reality at its joints, is to draw simple boundaries around concentrations of unusually high probability density in Thingspace. Otherwise you would just gerrymander Thingspace.”
(OTOH I think language should also depend on what you value: if your utility function is the number of inwardly-thrice-bent metal wires capable of nondestructively fastening several standard sheets of paper together at an edge in the universe, it's handy to have a single word for ‘inwardly-thrice-bent metal wire capable of nondestructively fastening several standard sheets of paper together at an edge’, whether that's a natural category or not. But you shouldn't pretend it's a natural category.)
It is trivially true that restricting the definition of 'human' can reduce the possible differences between the CEVs of subsets of humans. This is just a matter of shifting the workload into the 'human' definition. Unless you plan to restrict the definition of human to one individual, however, there are still going to be differences between the CEV of subsets (except by coincidence).
Having a weak-to-moderate norm in favour of doing things that you would consider helpful, or at least not harmful, to others in your social group does seem to be popular (not as consistent or as strong as norms against excreting waste products in public, but right up there!). That CEVs of various combinations of humans are similar isn't the point. Of course they will be. In fact, on average I'd expect them to be more similar than the groups of humans themselves are. But they are not identical (except by coincidence).
No!
That isn't a dichotomy. Clearly both past humans and current humans aren't effectively optimising toward their respective CEVs. But those CEVs are also going to be different because there isn't any magic (or focused expenditure of optimisation power) holding the CEV constant!
(I'm not sure what "fundamental" means exactly so I'll just note that I've never proposed any kind of difference beyond "not the same").
It would be great if you wrote up a short discussion level post to clear up what seems to be a common misconception. Please consider doing so.
I'm not sure how useful that would be, or rather whether I'm the right person to be doing it. I thought I said everything that needed to be said in this thread already but it wasn't necessarily successful at reaching the target audience. Perhaps someone more in tune with the idealism behind the disagreement could explain better.
I meant that, say, Neanderthals have a good chance of a serious CEV difference. However, your statement that all humans have different CEVs is unsupported by any evidence. For example:
Historically, dumping waste products was considered relatively harmless; sure it smells a little but hey, what doesn't? These people lacked the germ theory of disease, remember. No-one thought deliberately spreading disease was OK.
That is not a fully general counterargument against your lack of any evidence at all.
But there's no magic changing it! If you assume human morality evolved, why would our ethics have changed much more than, say, our diet?
Nobody said that they would have.
You are arguing against a straw man. Please read some of the message you replied to or the ones preceding it. Even, say, 1/3 of the sentences is likely to be sufficient---I've been repeating myself to make this clear.
It is, but my prior that two logically different things turn out to be exactly identical is pretty small. EDIT: OTOH, I think that almost all humans' CEVs would be so similar that a world with a FAI optimizing for CEV<Group A> would be very unlikely to feel like a dystopia to Group B, unless the membership criteria to Group A are deliberately gerrymandered to achieve that.
No, his argument is that the CEV of any (subset of) humans is a tiny cluster in value space.
He has, in fact, made that argument (as well). I repeat the claim:
Who knows? It's possible EY thinks it will be. There doesn't seem to be any authoritative answer to that.
Poll here
Thank you. I had slightly misunderstood what you were saying, but I also hadn't looked at all the variables and you pointed right at what I was missing.
Maybe I just need to read up on the theory a little more, because I'm still quite confused. Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to the set of things I want now?
I can see how the set of things I want now would change over time, but I'm having a hard time seeing why my CEV could ever change. Compare the CEPT, the Coherent Extrapolated Physical Theory, which is the theory of physics we would have if we had all the information and all the correct physics arguments. I can see how our present physical theories would change, but CEPT seems like it should be fixed.
But I suppose it's also true that CEPT supervenes on a set of basic, contingent physical facts. So does CEV also supervene on a set of basic, contingent wants? If so, I suppose a CEV can change depending on which basic wants I have. Is that right?
If so, does that mean I have to agree to disagree with an ancient Greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?
Yes. This needn't be the same for all agents: a rock would still not want anything no matter how many correct moral arguments and how much information you gave it, so CEV<rock> is indifferent to everything. Now you and Homer are much more similar than you and a rock, so your CEVs will be much more similar, but it's not obvious to me that they are necessarily exactly identical just because you're individuals of the same species.
Technically this is just EV (extrapolated volition); then CEV is just some way of compromising between your EV and everyone else's (possibly including Homer, but presumably not including rocks).
Thanks, I think I get it. Do you have any thoughts on my last two questions:
I'd say that would just mean that the two of you mean different things by the word good (see also TimS's comment), but for some reason I feel that would just amount to dodging the question, so I'm going to say "I don't know" instead.
I think you've got the right idea that CEV aims to find that fixed, ultimately-best-possible set of values.
If I understand correctly, CEV is mostly intended as a shortcut to arrive as close as possible to the same ethics we would have if all humans sat and thought and discussed and researched ethics for [insert arbitrarily large amount of time] until no more changes would occur in those ethics and the system would remain logically consistent and always the best choice for all circumstances and in all futures barring direct alteration of elementary human values.
There may be some conflation between CEV and particular implementations of it that were discussed previously, or with other CEV-like theories (e.g. Coherent Blended Volition). I may also be the one doing the conflating, though.
None of the people alive in Homer's times is alive today. Dunno about how “fundamentally” different we are -- I'd guess the difference between CEV<Homer> and CEV<Esar> is very small but not exactly zero.
Okay, I think I'm starting to get it. Is the idea that, both of us given all the correct moral arguments and all the information, an archaic Greek person and myself would still want different things?
Yes. For a more philosophical (and extreme) take on the issue, you can read Friedrich Nietzsche's On the Genealogy of Morals. Warning: Nietzsche is made of hyperbole, so it's often quite difficult to understand his substantive point.
In this case, the point is that the Greeks divided the world into good and bad, while we moderns divide the world into good and evil. What's the difference? It is possible to be bad at a sport, but, acting within the norms of the sport, it is impossible to be evil. Imagine how your moral perspective would be different if you only judged people based on whether they were "good at life" or "bad at life".
Indeed, I like Nietzsche's philosophy as I know it from second-hand accounts, but when I tried to read his own writings I had to force myself through the pages and gave up. (Maybe I used a bad translation or something.)
ISTM that many (most?) LWers also divide the world into good and bad, so, to the extent this is a fundamental disagreement between values rather than someone's confusion due to not knowing something/not thinking stuff through, CEV<LW> might be closer to CEV<Homer> than to CEV<Catholics in the late second millennium>!
BTW, I think I've also seen a two-dimensional model for that; I don't remember how the quadrant other than “good”, “bad” and “evil” (people who aren't terribly good at life, but at least try hard not to harm others as a result of their incompetence, even at a cost to themselves) was labelled -- wimps?
Sounds like two axes, one going from competent to incompetent, the other from well-intentioned to ill-intentioned.
Yes. (Not sure about the exact labels on the axes, but that was the spirit.) IIRC, “good” was the quadrant (competent, well-intentioned), “bad” was (incompetent, ill-intentioned), “evil” was (competent, ill-intentioned) and I don't remember the label on the remaining quadrant.
Yes. Apparently sam0345 (if that's what he means by “his moral ideal”) thinks the two of you would still want very different things; wedrifid and I think you would want slightly different things.
Okay, thanks for taking the time to explain. This has been very helpful.
a) The word "different" seems to be missing from the above.
b) I don't know how CEV is defined or what it is supposed to be. Old-fashioned metaethics from that "diseased discipline", philosophy, seems much clearer to me.
c) I have only ever been saying that, as so far stated, such questions are imponderable.
It's in the question; it seemed redundant to me to put it in the answers too.