Esar comments on Rationality Quotes November 2012 - Less Wrong

6 [deleted] 06 November 2012 10:38PM


Comment author: [deleted] 11 November 2012 11:08:13PM 0 points [-]

I would call “moral progress” the process whereby a society's behaviours and their CEV get closer to each other than they used to be.

Is this a possible use of 'CEV'? So far as I understand CEV, it's not possible that it could change: our CEV is what we would want given all the correct moral arguments and all the information. Assuming that 'all the information' and 'all the correct moral arguments' are constants, how could the CEV of one society differ from that of another?

The only way I can think of is if the two societies are composed of fundamentally different kinds of beings. But the idea of moral progress you describe assumes that this is not the case.

Comment author: wedrifid 12 November 2012 01:21:39AM *  6 points [-]

Is this a possible use of 'CEV'? So far as I understand CEV, it's not possible that it could change

Yes. Society's behaviors and their CEV can get closer together without the CEV changing at all. Also note that CEV<CultureX_2003> is a (very slightly) different thing to CEV<CultureX_2004>, even though neither of those "CEVs" changes at all.

A potential criticism of army's definition is that it allows for "cultural wireheading" and as such would be a lost purpose if "moral progress" were substituted in as an all-purpose goal or measure of achievement. (That said, I've never really thought of "moral progress" as that-which-should-be-optimised anyhow.)

Comment author: Eugine_Nier 13 November 2012 01:51:40AM 4 points [-]

(That said, I've never really thought of "moral progress" as that-which-should-be-optimised anyhow.)

Then why is "moral progress" a useful concept?

Comment author: Eliezer_Yudkowsky 13 November 2012 05:29:18AM 3 points [-]

It describes how to compute that-which-should-be-optimized.

Comment author: wedrifid 13 November 2012 08:11:23AM *  0 points [-]

EDIT: Replied to wrong message. (Curse my android!)

Comment author: [deleted] 13 November 2012 11:04:08PM 2 points [-]

Shoes aren't that-which-should-be-optimized either, but that doesn't mean that the concept of shoe is not useful.

Comment author: [deleted] 12 November 2012 01:26:45AM 1 point [-]

Yes. Society's behaviors and their CEV can get closer together without the CEV changing at all.

So we're not saying that the CEV of a culture changes (this is a constant), but that the culture's actual moral practices and reasoning can change in relation to its CEV. And change such that it is closer or further away. Do I have that right?

(That said, I've never really thought of "moral progress" as that-which-should-be-optimised anyhow.)

Presumably, we wouldn't want to optimize moral progress, but rather morality.

Comment author: wedrifid 12 November 2012 02:03:18AM 1 point [-]

So we're not saying that the CEV of a culture changes (this is a constant)

The CEV of a culture changes (a little bit) every day. CEV<CultureX_specific_time> is a constant. This is because humans (and groups of humans) aren't stable, consistent optimisers. From what I understand the CEV of a culture is relatively stable, certainly more stable than the culture itself. Nevertheless it is not fixed. We, all things considered and collectively, want (very nearly tautologically) our CEV to be stable because that (approximately) maximises our current CEV. We just aren't that consistent.

but that the culture's actual moral practices and reasoning can change in relation to its CEV. And change such that it is closer or further away. Do I have that right?

That is one way in which the previously quoted proposition could be valid, yes.

Presumably, we wouldn't want to optimize moral progress, but rather morality.

I want to optimise whatever my preferences are. Morality seems to get a weight in there someplace.

Comment author: DaFranker 12 November 2012 05:57:32PM *  2 points [-]

I thought the whole point of CEV was to extrapolate forwards in time towards the ultimate reflectively-consistent set of values to formulate one single coherent utility function (with multiple parameters and variables, of course) that represents the optimal equilibrium of all that humans would want if they were exactly as they would want to be and would want exactly that which they would wish to want.

The CEV of a culture changes (a little bit) every day. CEV<CultureX_specific_time> is a constant. This is because humans (and groups of humans) aren't stable, consistent optimisers. From what I understand the CEV of a culture is relatively stable, certainly more stable than the culture itself. Nevertheless it is not fixed. We, all things considered and collectively, want (very nearly tautologically) our CEV to be stable because that (approximately) maximises our current CEV. We just aren't that consistent.

This reminds me more of CAV (Coherent Aggregated Volition) than CEV. CEV is, IIRC, intended as a bootstrap towards "Whatever humans would collectively find the best possible optimization after infinite re-evaluations", if any such meta-ethics exists.

Comment author: wedrifid 14 November 2012 06:08:12AM 12 points [-]

I thought the whole point of CEV was to extrapolate forwards in time towards the ultimate reflectively-consistent set of values to formulate one single coherent utility function (with multiple parameters and variables, of course) that represents the optimal equilibrium of all that humans would want if they were exactly as they would want to be and would want exactly that which they would wish to want.

The Coherent Extrapolated Volition of one group of humans is not the same thing as the Coherent Extrapolated Volition of another group of humans. Human populations change and even evolve over time due to forces that are not carefully constructed to move the population in the same direction as the CEV of their ancestors, and so later generations will not have the same CEV as previous ones.

CEV is, IIRC, intended as a bootstrap towards "Whatever humans would collectively find the best possible optimization after infinite re-evaluations", if any such meta-ethics exists.

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.

Comment author: MugaSofer 14 November 2012 09:34:18AM -1 points [-]

It depends on how you define "humans", but considering how old some of the references to the Golden Rule are, at least some of our utility function is older than most civilizations. Do you have any proof that previous generations were fundamentally different to us, and not, like most (all?) humans today, confused about how to implement their utility function (if we give the poor healthcare, they won't have an incentive to work!)?

Comment author: [deleted] 14 November 2012 04:28:28PM 4 points [-]

It depends on how you define "humans"

Well... IMO, not counting psychopaths as human amounts to a no-true-Scotsman fallacy.

Comment author: MugaSofer 15 November 2012 09:40:46PM 1 point [-]

I was referring to extinct species and subspecies of human. Of course psychopaths are human, but AFAIK they have always been a small minority.

Comment author: Peterdjones 16 November 2012 12:20:10PM 0 points [-]

The existence of blind people is not usually taken to disprove "human beings have sight".

Comment author: thomblake 15 November 2012 10:08:02PM 0 points [-]

IMO, not counting psychopaths as human amounts to a no-true-Scotsman fallacy.

The no-true-Scotsman fallacy applies to an argument when it excludes particular cases by rhetoric rather than for objective reasons. It does not apply to any particular drawing of category boundaries on its own.

Comment author: TimS 15 November 2012 10:15:35PM 3 points [-]

I've always interpreted no-true-Scotsman as warning about the dangers of arguing by definition. At the very least, saying psychopaths are not human runs the risk of being argument by definition.

Comment author: [deleted] 16 November 2012 10:45:31AM *  1 point [-]

Well, I'd say it depends on the complexity of those objective reasons. “The way to carve reality at its joints, is to draw simple boundaries around concentrations of unusually high probability density in Thingspace.” Otherwise you would just gerrymander Thingspace.

(OTOH I think language should also depend on what you value: if your utility function is the number of inwardly-thrice-bent metal wires capable of nondestructively fastening several standard sheets of paper together at an edge in the universe, it's handy to have a single word for ‘inwardly-thrice-bent metal wire capable of nondestructively fastening several standard sheets of paper together at an edge’, whether that's a natural category or not. But you shouldn't pretend it's a natural category.)

Comment author: wedrifid 14 November 2012 10:36:01AM *  5 points [-]

It depends on how you define "humans"

It is trivially true that restricting the definition of 'human' can reduce the possible differences between the CEVs of subsets of humans. This is just a matter of shifting the workload into the 'human' definition. Unless you plan to restrict the definition of human to one individual, however, there are still going to be differences between the CEV of subsets (except by coincidence).

but considering how old some of the references to the Golden Rule are at least some of our utility function is older than most civilizations.

Having a weak-to-moderate norm in favour of doing things that you would consider helpful or at least not harmful to others in your social group does seem to be popular (not as consistent or as strong as norms against excreting waste products in public, but right up there!). That CEVs of various combinations of humans are similar isn't the point. Of course they will be. In fact, on average I'd expect them to be more similar than the groups of humans themselves are. But they are not identical (except by coincidence).

Do you have any proof

No!

that previous generations were fundamentally different to us, and not, like most (all?) humans today, confused about how to implement their utility function (if we give the poor healthcare, they wont have an incentive to work!)

That isn't a dichotomy. Clearly both past humans and current humans aren't effectively optimising toward their respective CEVs. But those CEVs are also going to be different because there isn't any magic (or focused expenditure of optimisation power) holding the CEV constant!

(I'm not sure what "fundamental" means exactly so I'll just note that I've never proposed any kind of difference beyond "not the same").

Comment author: [deleted] 15 November 2012 09:37:13AM *  7 points [-]

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.

...

Clearly both past humans and current humans aren't effectively optimising toward their respective CEVs. But those CEVs are also going to be different because there isn't any magic (or focused expenditure of optimisation power) holding the CEV constant!

It would be great if you wrote up a short discussion level post to clear up what seems to be a common misconception. Please consider doing so.

Comment author: wedrifid 15 November 2012 10:03:38AM 1 point [-]

It would be great if you wrote up a short discussion level post to clear up what seems to be a common misconception. Please consider doing so.

I'm not sure how useful that would be, or rather whether I'm the right person to be doing it. I thought I said everything that needed to be said in this thread already but it wasn't necessarily successful at reaching the target audience. Perhaps someone more in tune with the idealism behind the disagreement could explain better.

Comment author: MugaSofer 14 November 2012 10:49:51AM *  1 point [-]

Unless you plan to restrict the definition of human to one individual, however, there are still going to be differences between the CEV of subsets (except by coincidence).

I meant that, say, Neanderthals have a good chance of a serious CEV difference. However, your statement that all humans have different CEVs is unsupported by any evidence. For example:

norms against excreting waste products in public

Historically, dumping waste products was considered relatively harmless; sure it smells a little but hey, what doesn't? These people lacked the germ theory of disease, remember. No-one thought deliberately spreading disease was OK.

No!

That is not a fully general counterargument against your lack of any evidence at all.

there isn't any magic (or focused expenditure of optimisation power) holding the CEV constant!

But there's no magic changing it! If you assume human morality evolved, why would our ethics have changed much more than, say, our diet?

Comment author: wedrifid 14 November 2012 11:09:04AM 1 point [-]

why would our ethics have changed much more than, say, our diet?

Nobody said that they would have.

You are arguing against a straw man. Please read some of the message you replied to or the ones preceding it. Even, say, 1/3 of the sentences is likely to be sufficient---I've been repeating myself to make this clear.

Comment author: [deleted] 14 November 2012 04:30:58PM *  1 point [-]

However, your statement that all humans have different CEVs is unsupported by any evidence.

It is, but my prior that two logically different things turn out to be exactly identical is pretty small. EDIT: OTOH, I think that almost all humans' CEVs would be so similar that a world with a FAI optimizing for CEV<Group A> would be very unlikely to feel like a dystopia to Group B, unless the membership criteria to Group A are deliberately gerrymandered to achieve that.

Comment author: Eugine_Nier 15 November 2012 02:04:07AM 1 point [-]

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition".

No, his argument is that CEVs of any (subset of) humans is a tiny cluster in value space.

Comment author: wedrifid 15 November 2012 06:04:07AM *  0 points [-]

No, his argument is that CEVs of any (subset of) humans is a tiny cluster in value space.

He has, in fact, made that argument (as well). I repeat the claim:

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.

Comment author: Peterdjones 14 November 2012 02:39:03PM 1 point [-]

The Coherent Extrapolated Volition of one group of humans is not the same thing as the Coherent Extrapolated Volition of another group of humans.

Who knows? It's possible EY thinks it will be. There doesn't seem to be any authoritative answer to that.

Comment author: [deleted] 14 November 2012 04:25:19PM 0 points [-]
Comment author: DaFranker 14 November 2012 02:12:59PM *  1 point [-]

Thank you. I had slightly misunderstood what you were saying, but I also hadn't looked at all the variables and you pointed right at what I was missing.

Comment author: [deleted] 12 November 2012 04:31:20PM *  2 points [-]

Maybe I just need to read up on the theory a little more, because I'm still quite confused. Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to the set of things I want now?

I can see how the set of things I want now would change over time, but I'm having a hard time seeing why my CEV could ever change. Compare the CEPT, the Coherent Extrapolated Physical Theory, which is the theory of physics we would have if we had all the information and all the correct physics arguments. I can see how our present physical theories would change, but CEPT seems like it should be fixed.

But I suppose it's also true that CEPT supervenes on a set of basic, contingent physical facts. So does CEV also supervene on a set of basic, contingent wants? If so, I suppose a CEV can change depending on which basic wants I have. Is that right?

If so, does that mean I have to agree to disagree with an ancient Greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?

Comment author: [deleted] 12 November 2012 09:26:15PM 2 points [-]

Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to be the set of things I want now?

Yes. This needn't be the same for all agents: a rock would still not want anything no matter how many correct moral arguments and how much information you gave it, so CEV<rock> is indifferent to everything. Now you and Homer are much more similar than you and a rock, so your CEVs will be much more similar, but it's not obvious to me that they are necessarily exactly identical just because you're individuals of the same species.

Comment author: Kindly 12 November 2012 10:55:55PM 1 point [-]

Technically this is just EV (extrapolated volition); then CEV is just some way of compromising between your EV and everyone else's (possibly including Homer, but presumably not including rocks).

Comment author: [deleted] 12 November 2012 09:36:23PM 0 points [-]

Thanks, I think I get it. Do you have any thoughts on my last two questions:

If so, does that mean I have to agree to disagree with an ancient Greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?

Comment author: [deleted] 13 November 2012 08:46:00AM 0 points [-]

I'd say that would just mean that the two of you mean different things by the word good (see also TimS's comment), but for some reason I feel that would just amount to dodging the question, so I'm going to say "I don't know" instead.

Comment author: DaFranker 12 November 2012 06:02:58PM *  -1 points [-]

I think you've got the right idea that CEV aims to find that fixed, ultimately-best-possible set of values.

If I understand correctly, CEV is mostly intended as a shortcut to arrive as close as possible to the same ethics we would have if all humans sat and thought and discussed and researched ethics for [insert arbitrarily large amount of time] until no more changes would occur in those ethics and the system would remain logically consistent and always the best choice for all circumstances and in all futures barring direct alteration of elementary human values.

There may be some conflation between CEV and particular implementations of it that were discussed previously, or with other CEV-like theories (e.g. Coherent Blended Volition). I may also be the one doing the conflating, though.

Comment author: [deleted] 12 November 2012 11:13:03AM 0 points [-]

The only way I can think of is if the two societies are composed of fundamentally different kinds of beings.

None of the people alive in Homer's times is alive today. Dunno about how “fundamentally” different we are -- I'd guess the difference between CEV<Homer> and CEV<Esar> is very small but not exactly zero.

Comment author: [deleted] 12 November 2012 02:42:41PM 4 points [-]

Okay, I think I'm starting to get it. Is the idea that, both of us given all the correct moral arguments and all the information, an archaic Greek person and myself would still want different things?

Comment author: TimS 12 November 2012 03:23:34PM 5 points [-]

Yes. For a more philosophical (and extreme) take on the issue, you can read Friedrich Nietzsche's On the Genealogy of Morals. Warning: Nietzsche is made of hyperbole, so it's often quite difficult to understand his substantive point.

In this case, the point is that the Greeks divided the world into good and bad, while we moderns divide the world into good and evil. What's the difference? It is possible to be bad at a sport, but, acting within the norms of the sport, it is impossible to be evil. Imagine how your moral perspective would be different if you only judged people based on whether they were "good at life" or "bad at life".

Comment author: [deleted] 13 November 2012 12:48:39PM 0 points [-]

Warning: Nietzsche is made of hyperbole, so it's often quite difficult to understand his substantive point.

Indeed, I like Nietzsche's philosophy as I know it from second-hand accounts, but when I tried to read his own writings I had to force myself through the pages and gave up. (Maybe I used a bad translation or something.)

In this case, the point is that the Greeks divided the world into good and bad, while we moderns divide the world into good and evil. What's the difference? It is possible to be bad at a sport, but, acting within the norms of the sport, it is impossible to be evil. Imagine how your moral perspective would be different if you only judged people based on whether they were "good at life" or "bad at life".

ISTM that many (most?) LWers also divide the world into good and bad, so, to the extent this is a fundamental disagreement between values rather than someone's confusion due to not knowing something/not thinking stuff through, CEV<LW> might be closer to CEV<Homer> than to CEV<Catholics in the late second millennium>!

BTW, I think I've also seen a two-dimensional model for that; I don't remember how the quadrant other than “good”, “bad” and “evil” (people who aren't terribly good at life, but at least try hard not to harm others as a result of their incompetence, even at a cost to themselves) was labelled -- wimps?

Comment author: RichardKennaway 13 November 2012 01:01:38PM 1 point [-]

BTW, I think I've also seen a two-dimensional model for that; I don't remember how the quadrant other than “good”, “bad” and “evil” (people who aren't terribly good at life, but at least try hard not to harm others as a result of their incompetence, even at a cost to themselves) was labelled -- wimps?

Sounds like two axes, one going from competent to incompetent, the other from well-intentioned to ill-intentioned.

Comment author: [deleted] 13 November 2012 01:08:07PM *  0 points [-]

Yes. (Not sure about the exact labels on the axes, but that was the spirit.) IIRC, “good” was the quadrant (competent, well-intentioned), “bad” was (incompetent, ill-intentioned), “evil” was (competent, ill-intentioned) and I don't remember the label on the remaining quadrant.

Comment author: [deleted] 12 November 2012 09:18:18PM 2 points [-]

Yes. Apparently sam0345 (if that's what he means by “his moral ideal”) thinks the two of you would still want very different things; wedrifid and I think you would want slightly different things.

Comment author: [deleted] 12 November 2012 09:33:53PM 1 point [-]

Okay, thanks for taking the time to explain. This has been very helpful.

Comment author: [deleted] 12 November 2012 01:05:59PM *  2 points [-]

While we're speculating anyway...

How different do you guess CEV<humans alive in 700 BC> and CEV<humans alive in AD 2012> would be?

[Poll, with options ranging from “Not at all” to “A lot”]

Comment author: Peterdjones 14 November 2012 07:43:30PM 1 point [-]

a) The word "different" seems to be missing from the above.

b) I don't know how CEV is defined or what it is supposed to be. Old-fashioned metaethics from that "diseased discipline", philosophy, seems much clearer to me.

c) I have only ever been saying that, as so far stated, such questions are imponderable.

Comment author: [deleted] 15 November 2012 11:45:01AM 1 point [-]

a) The word "different" seems to be missing from the above.

It's in the question; it seemed redundant to me to put it in the answers too.