Esar comments on Rationality Quotes November 2012 - Less Wrong

6 [deleted] 06 November 2012 10:38PM


Comment author: [deleted] 12 November 2012 01:26:45AM 1 point [-]

Yes. Society's behaviors and their CEV can get closer together without the CEV changing at all.

So we're not saying that the CEV of a culture changes (this is a constant), but that the culture's actual moral practices and reasoning can change in relation to its CEV. And change such that it is closer or further away. Do I have that right?

(That said, I've never really thought of "moral progress" as that-which-should-be-optimised anyhow.)

Presumably, we wouldn't want to optimize moral progress, but rather morality.

Comment author: wedrifid 12 November 2012 02:03:18AM 1 point [-]

So we're not saying that the CEV of a culture changes (this is a constant)

The CEV of a culture changes (a little bit) every day. CEV&lt;CultureX_specific_time&gt; is a constant. This is because humans (and groups of humans) aren't stable, consistent optimisers. From what I understand, the CEV of a culture is relatively stable, certainly more stable than the culture itself. Nevertheless it is not fixed. We, all things considered and collectively, want (very nearly tautologically) for our CEV to be stable, because that (approximately) maximises our current CEV. We just aren't that consistent.
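The time-indexed distinction can be put in toy form (a purely illustrative sketch, not any actual CEV proposal; all names and numbers are made up): treat extrapolation as a fixed, deterministic idealization of a population's value-state, so the result for a given snapshot is a constant, while the culture's CEV drifts over time because the inputs drift.

```python
def extrapolate(values):
    """Hypothetical idealization step: here, just normalization, standing
    in for 'what the group would want on reflection'."""
    total = sum(values)
    return tuple(round(v / total, 3) for v in values)

# Made-up raw value profiles of one culture at two times.
culture_2012 = (4.0, 5.0, 1.0)
culture_2022 = (3.0, 5.0, 2.0)   # the population itself has drifted

cev_2012 = extrapolate(culture_2012)
cev_2022 = extrapolate(culture_2022)

# CEV<culture, t> is a constant: re-running extrapolation on the same
# snapshot always gives the same answer.
assert extrapolate(culture_2012) == cev_2012

# But "the culture's CEV" is not constant over time, because nothing
# holds the inputs fixed.
assert cev_2012 != cev_2022
```

The point of the sketch is only that constancy attaches to the (culture, time) pair, not to the culture as such.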

but that the culture's actual moral practices and reasoning can change in relation to its CEV. And change such that it is closer or further away. Do I have that right?

That is one way in which the previously quoted proposition could be valid, yes.

Presumably, we wouldn't want to optimize moral progress, but rather morality.

I want to optimise whatever my preferences are. Morality seems to get a weight in there someplace.

Comment author: DaFranker 12 November 2012 05:57:32PM *  2 points [-]

I thought the whole point of CEV was to extrapolate forwards in time towards the ultimate reflectively-consistent set of values to formulate one single coherent utility function (with multiple parameters and variables, of course) that represents the optimal equilibrium of all that humans would want if they were exactly as they would want to be and would want exactly that which they would wish to want.

The CEV of a culture changes (a little bit) every day. CEV&lt;CultureX_specific_time&gt; is a constant. This is because humans (and groups of humans) aren't stable, consistent optimisers. From what I understand, the CEV of a culture is relatively stable, certainly more stable than the culture itself. Nevertheless it is not fixed. We, all things considered and collectively, want (very nearly tautologically) for our CEV to be stable, because that (approximately) maximises our current CEV. We just aren't that consistent.

This reminds me more of CAV (Coherent Aggregated Volition) than CEV. CEV is, IIRC, intended as a bootstrap towards "Whatever humans would collectively find the best possible optimization after infinite re-evaluations", if any such meta-ethics exists.

Comment author: wedrifid 14 November 2012 06:08:12AM 12 points [-]

I thought the whole point of CEV was to extrapolate forwards in time towards the ultimate reflectively-consistent set of values to formulate one single coherent utility function (with multiple parameters and variables, of course) that represents the optimal equilibrium of all that humans would want if they were exactly as they would want to be and would want exactly that which they would wish to want.

The Coherent Extrapolated Volition of one group of humans is not the same thing as the Coherent Extrapolated Volition of another group of humans. Human populations change and even evolve over time due to forces that are not carefully constructed to move the population in the same direction as the CEV of their ancestors, and so later generations will not have the same CEV as previous ones.

CEV is, IIRC, intended as a bootstrap towards "Whatever humans would collectively find the best possible optimization after infinite re-evaluations", if any such meta-ethics exists.

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.

Comment author: MugaSofer 14 November 2012 09:34:18AM -1 points [-]

It depends on how you define "humans", but considering how old some of the references to the Golden Rule are, at least some of our utility function is older than most civilizations. Do you have any proof that previous generations were fundamentally different to us, and not, like most (all?) humans today, confused about how to implement their utility function (if we give the poor healthcare, they won't have an incentive to work!)?

Comment author: [deleted] 14 November 2012 04:28:28PM 4 points [-]

It depends on how you define "humans"

Well... IMO, not counting psychopaths as human amounts to a no-true-Scotsman fallacy.

Comment author: MugaSofer 15 November 2012 09:40:46PM 1 point [-]

I was referring to extinct species and subspecies of human. Of course psychopaths are human, but AFAIK they have always been a small minority.

Comment author: Peterdjones 16 November 2012 12:20:10PM 0 points [-]

The existence of blind people is not usually taken to disprove "human beings have sight".

Comment author: MugaSofer 16 November 2012 01:34:19PM 0 points [-]

Indeed. Imagine someone arguing that past civilizations saw colour differently to modern humans; it makes a pretty good analogy for this discussion.

Comment author: thomblake 15 November 2012 10:08:02PM 0 points [-]

IMO, not counting psychopaths as human amounts to a no-true-Scotsman fallacy.

The no-true-Scotsman fallacy applies to an argument when it excludes particular cases by rhetoric rather than for objective reasons. It does not apply to any particular drawing of category boundaries on its own.

Comment author: TimS 15 November 2012 10:15:35PM 3 points [-]

I've always interpreted no-true-Scotsman as warning about the dangers of arguing by definition. At the very least, saying psychopaths are not human runs the risk of being argument by definition.

Comment author: [deleted] 16 November 2012 10:45:31AM *  1 point [-]

Well, I'd say it depends on the complexity of those objective reasons. “The way to carve reality at its joints, is to draw simple boundaries around concentrations of unusually high probability density in Thingspace. Otherwise you would just gerrymander Thingspace.”

(OTOH I think language should also depend on what you value: if your utility function is the number of inwardly-thrice-bent metal wires capable of nondestructively fastening several standard sheets of paper together at an edge in the universe, it's handy to have a single word for ‘inwardly-thrice-bent metal wire capable of nondestructively fastening several standard sheets of paper together at an edge’, whether that's a natural category or not. But you shouldn't pretend it's a natural category.)

Comment author: thomblake 16 November 2012 03:34:17PM *  1 point [-]

"No true Scotsman":

A: No human thinks red shirts are better than blue shirts.
B: Lots of psychopaths think red shirts are better than blue shirts.
A: I meant true humans. Psychopaths aren't really humans, so don't count.
B: What about my friend Billy? He is not a psychopath but thinks red shirts are better than blue shirts.
A: True humans are non-psychopaths who are not your friend Billy.

Not "No true Scotsman":

A: No human thinks red shirts are better than blue shirts.
B: Lots of psychopaths think red shirts are better than blue shirts.
A: I meant true humans. Psychopaths aren't really humans, so don't count.
B: What about my friend Billy? He is not a psychopath but thinks red shirts are better than blue shirts.
A: Oh, I guess I was wrong - some humans think red shirts are better than blue shirts.

The second is just using a nonstandard definition, not redefining the word to fit the line of argument, so does not fall under the No True Scotsman fallacy. Even if you're gerrymandering reality ahead of time, it doesn't count as No True Scotsman (At the very least, that isn't even an argument yet, so can't be a fallacious argument!)

Comment author: Peterdjones 16 November 2012 03:45:14PM 1 point [-]

"Everybody likes to watch a beautiful sunset"

"Fred doesn't. Mind you, he's blind".

"Then he doesn't count"

True Scotsman or not?

Comment author: wedrifid 14 November 2012 10:36:01AM *  5 points [-]

It depends on how you define "humans"

It is trivially true that restricting the definition of 'human' can reduce the possible differences between the CEVs of subsets of humans. This is just a matter of shifting the workload into the 'human' definition. Unless you plan to restrict the definition of human to one individual, however, there are still going to be differences between the CEV of subsets (except by coincidence).

but considering how old some of the references to the Golden Rule are at least some of our utility function is older than most civilizations.

Having a weak-to-moderate norm in favour of doing things that you would consider helpful or at least not harmful to others in your social group does seem to be popular (not as consistent or as strong as norms against excreting waste products in public but right up there!). That CEVs of various combinations of humans are similar isn't the point. Of course they will be. In fact, on average I'd expected them to be more similar than the groups of humans themselves are. But they are not identical (except by coincidence).

Do you have any proof

No!

that previous generations were fundamentally different to us, and not, like most (all?) humans today, confused about how to implement their utility function (if we give the poor healthcare, they wont have an incentive to work!)

That isn't a dichotomy. Clearly both past humans and current humans aren't effectively optimising toward their respective CEVs. But those CEVs are also going to be different because there isn't any magic (or focused expenditure of optimisation power) holding the CEV constant!

(I'm not sure what "fundamental" means exactly so I'll just note that I've never proposed any kind of difference beyond "not the same").

Comment author: [deleted] 15 November 2012 09:37:13AM *  7 points [-]

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.

...

Clearly both past humans and current humans aren't effectively optimising toward their respective CEVs. But those CEVs are also going to be different because there isn't any magic (or focused expenditure of optimisation power) holding the CEV constant!

It would be great if you wrote up a short discussion level post to clear up what seems to be a common misconception. Please consider doing so.

Comment author: wedrifid 15 November 2012 10:03:38AM 1 point [-]

It would be great if you wrote up a short discussion level post to clear up what seems to be a common misconception. Please consider doing so.

I'm not sure how useful that would be, or rather whether I'm the right person to be doing it. I thought I said everything that needed to be said in this thread already but it wasn't necessarily successful at reaching the target audience. Perhaps someone more in tune with the idealism behind the disagreement could explain better.

Comment author: MugaSofer 14 November 2012 10:49:51AM *  1 point [-]

Unless you plan to restrict the definition of human to one individual, however, there are still going to be differences between the CEV of subsets (except by coincidence).

I meant that, say, Neanderthals have a good chance of a serious CEV difference. However, your statement that all humans have different CEVs is unsupported by any evidence. For example:

norms against excreting waste products in public

Historically, dumping waste products was considered relatively harmless; sure it smells a little but hey, what doesn't? These people lacked the germ theory of disease, remember. No-one thought deliberately spreading disease was OK.

No!

That is not a fully general counterargument against your lack of any evidence at all.

there isn't any magic (or focused expenditure of optimisation power) holding the CEV constant!

But there's no magic changing it! If you assume human morality evolved, why would our ethics have changed much more than, say, our diet?

Comment author: wedrifid 14 November 2012 11:09:04AM 1 point [-]

why would our ethics have changed much more than, say, our diet?

Nobody said that they would have.

You are arguing against a straw man. Please read some of the message you replied to or the ones preceding it. Even, say, 1/3 of the sentences is likely to be sufficient---I've been repeating myself to make this clear.

Comment author: MugaSofer 14 November 2012 11:30:05AM -2 points [-]

You are claiming that the CEV of any group of humans - including all humanity - changes over time, yes? You seem to think this is a self-evident truth, but I have yet to see any examples of such a change. You removed the first half of that sentence - as I pointed out, if human morality evolved (which I assume you believe) then there is no reason to think that it would change any more than human dietary preferences - a child may discover sweets taste better than cabbage, and henceforth refuse cabbage in favor of sweets, but this is true for all children. What you are suggesting is the same as if I claimed our taste buds had rearranged themselves, and that is why the Romans ate roast dormouse and we don't.

Comment author: Peterdjones 14 November 2012 01:02:36PM 3 points [-]

Both sides of this debate are hamstrung by failing to distinguish between basic values and extrapolated volition. There have been major shifts in ethics within living memory, regarding race, gender, the environment and sexuality. Whether they are shifts in basic values or in the way basic values are extrapolated is not obvious.

Comment author: [deleted] 14 November 2012 04:30:58PM *  1 point [-]

However, your statement that all humans have different CEVs is unsupported by any evidence.

It is, but my prior that two logically different things turn out to be exactly identical is pretty small. EDIT: OTOH, I think that almost all humans' CEVs would be so similar that a world with a FAI optimizing for CEV<Group A> would be very unlikely to feel like a dystopia to Group B, unless the membership criteria to Group A are deliberately gerrymandered to achieve that.

Comment author: MugaSofer 15 November 2012 09:38:24PM 1 point [-]

Of course there will be some variation between individuals, yes. But, as you say, probably not enough to matter; unless you're actively filtering it should average out the same for most large groups.

Comment author: Eugine_Nier 15 November 2012 02:04:07AM 1 point [-]

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition".

No, his argument is that the CEV of any (subset of) humans is a tiny cluster in value space.

Comment author: wedrifid 15 November 2012 06:04:07AM *  0 points [-]

No, his argument is that the CEV of any (subset of) humans is a tiny cluster in value space.

He has, in fact, made that argument (as well). I repeat the claim:

Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.

Comment author: Peterdjones 14 November 2012 02:39:03PM 1 point [-]

The Coherent Extrapolated Volition of one group of humans is not the same thing as the Coherent Extrapolated Volition of another group of humans.

Who knows? It's possible EY thinks it will be. There doesn't seem to be any authoritative answer to that.

Comment author: [deleted] 14 November 2012 04:25:19PM 0 points [-]

Comment author: DaFranker 14 November 2012 02:12:59PM *  1 point [-]

Thank you. I had slightly misunderstood what you were saying, but I also hadn't looked at all the variables and you pointed right at what I was missing.

Comment author: [deleted] 12 November 2012 04:31:20PM *  2 points [-]

Maybe I just need to read up on the theory a little more, because I'm still quite confused. Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to be the set of things I want now?

I can see how the set of things I want now would change over time, but I'm having a hard time seeing why my CEV could ever change. Compare the CEPT, the Coherent Extrapolated Physical Theory, which is the theory of physics we would have if we had all the information and all the correct physics arguments. I can see how our present physical theories would change, but CEPT seems like it should be fixed.

But I suppose it's also true that CEPT supervenes on a set of basic, contingent physical facts. So does CEV also supervene on a set of basic, contingent wants? If so, I suppose a CEV can change depending on which basic wants I have. Is that right?

If so, does that mean I have to agree to disagree with an ancient greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?

Comment author: [deleted] 12 November 2012 09:26:15PM 2 points [-]

Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to be the set of things I want now?

Yes. This needn't be the same for all agents: a rock would still not want anything no matter how many correct moral arguments and how much information you gave it, so CEV<rock> is indifferent to everything. Now you and Homer are much more similar than you and a rock, so your CEVs will be much more similar, but it's not obvious to me that they are necessarily exactly identical just because you're individuals of the same species.

Comment author: Kindly 12 November 2012 10:55:55PM 1 point [-]

Technically this is just EV (extrapolated volition); then CEV is just some way of compromising between your EV and everyone else's (possibly including Homer, but presumably not including rocks).
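Kindly's distinction can also be put in toy form (again a purely illustrative sketch under made-up numbers, not a real proposal): EV is computed per agent, and CEV is then some compromise over the group members' EVs, here crudely modelled as a component-wise average.

```python
def ev(agent_values):
    """Stand-in for idealizing one agent's values on reflection."""
    total = sum(agent_values) or 1  # an agent with no values stays indifferent
    return tuple(v / total for v in agent_values)

def cev(group):
    """Stand-in compromise: component-wise mean of the members' EVs."""
    evs = [ev(a) for a in group]
    n = len(evs)
    return tuple(sum(e[i] for e in evs) / n for i in range(len(evs[0])))

you   = (6.0, 2.0, 2.0)
homer = (2.0, 6.0, 2.0)
rock  = (0.0, 0.0, 0.0)   # no values at all: its EV is indifference

print(ev(rock))           # (0.0, 0.0, 0.0) -- indifferent to everything
print(cev([you, homer]))  # a compromise between the two members' EVs
```

The averaging rule is of course an arbitrary stand-in; the only point carried over from the thread is the shape of the pipeline: extrapolate each individual first, then reconcile.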

Comment author: [deleted] 12 November 2012 09:36:23PM 0 points [-]

Thanks, I think I get it. Do you have any thoughts on my last two questions:

If so, does that mean I have to agree to disagree with an ancient greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?

Comment author: [deleted] 13 November 2012 08:46:00AM 0 points [-]

I'd say that would just mean that the two of you mean different things by the word good (see also TimS's comment), but for some reason I feel that would just amount to dodging the question, so I'm going to say "I don't know" instead.

Comment author: DaFranker 12 November 2012 06:02:58PM *  -1 points [-]

I think you've got the right idea that CEV aims to find that fixed, ultimately-best-possible set of values.

If I understand correctly, CEV is mostly intended as a shortcut to arrive as close as possible to the same ethics we would have if all humans sat and thought and discussed and researched ethics for [insert arbitrarily large amount of time] until no more changes would occur in those ethics and the system would remain logically consistent and always the best choice for all circumstances and in all futures barring direct alteration of elementary human values.

There may be some conflation between CEV and particular implementations of it that were discussed previously, or with other CEV-like theories (e.g. Coherent Blended Volition). I may also be the one doing the conflating, though.