Wrongnesslessness comments on Rationality Quotes November 2012 - Less Wrong
David Deutsch, The Beginning of Infinity
And yet they couldn't even defeat the Spartans or keep Savonarola from taking power.
To be fair, with a general like Napoleon, how could the Spartans lose?
Fixed typo.
What is this "moral progress" you speak of Earth monkey?
It usually means we now have a greater understanding of what we value.
[edited for clarity.]
Eh, no we don't.
See how easy it is to simply assert something? Let's now try actual arguments. First read the comments I linked to in the daughter comment; also feel free to provide links and references to material you think I should read. Then, once we've closed the inferential gap and can restate each other's case without strawmanning it, we can really start talking.
You asked. I answered. Whether it is correct is one thing, but you asked what it meant.
Ok. In the context of the question I assumed you believed in moral progress. Do you?
I believe we better understand our utility function and how to implement it than most civilizations have historically.
That is probably true, but in my experience this is not what everyone else calls "moral progress". Usually people use "moral progress" to mean "a changing of morals to be closer to my current morals".
That's what I was arguing against, that the appearance of moral progress (as I defined it) is merely an illusion caused by the CEV of our civilization changing over time.
Note: I responded to my best guess as to your meaning, but could you taboo "morals" for me to be on the safe side?
So why is this downvoted? If anyone is confused by this challenge to a moral progress they consider perfectly obvious, they should read Against Moral Progress and a related text dump, then argue their case there.
If this is about the tone, well we disagree on taste. I think applause lights need to be challenged on their emotional affect as well as their implied argument.
No comma between “of” and “Earth”? ;-) (Just kidding. I didn't downvote.)
More seriously, it looks like someone's downvoting the entire thread. (I'm upvoting most of it back to 0.)
I suggest the downvoting was due to quibbling about the word "moral" when:
The usage was peripheral, the more active phrase there was "technological progress." But "moral progress" does have a referent, morality is merely as perceived, it's subjective, that's all. Konkvistador, "moral progress" is something made up by Earth monkeys, and only applies to Earth monkeys dealing with Earth monkeys.
It may have no meaning for one who is not an Earth monkey.
The conclusion, the point of the quote, was ignored. That conclusion is, at least, interesting (to this Earth monkey!).
Yes. Even the point of the quote is subjective. "Should" could imply morality, i.e., we "should" take it personally, maybe we are "bad" if we don't.
However, it can be interpreted to mean something objective. I.e., we "should take it personally" means that we have been affected personally.
We have a choice, in reading: to derive value and meaning, or to criticize and find fault. We may also see both, i.e., value and error, and both of these depend on the interpretive choices we make. The statement is just a pile of letters without the meanings Earth monkeys supply to their arrangement.
It is highly likely that every down-voter was an Earth monkey.
I would call “moral progress” the process whereby a society's behaviours and their CEV get closer to each other than they used to be. And this looks pretty much like it, to me.
What? So would you call an incorruptibly evil society highly "morally progressed"? What about the Babyeaters, since they both believe they should eat babies and do in fact eat them?
Babyeater babies don't want to be eaten, or particularly want to eat their peers, and those who will never develop a desire to eat babies constitute a majority of the sapient population at any given time, so "eat babies" isn't the 'coherent' part of the babyeater CEV.
Using the observation that being dead precludes wanting to eat (and endorsing the eating of) babies as an adult as a technical reason that "those who will never develop a desire to eat babies constitute a majority of the sapient population at any given time" is highly misleading. I'd go as far as to call it bullshit.
We could just as accurately say "those who will never develop into non-paperclipping adults constitute a majority of the sapient population at any given time" (therefore CEV<Babyeaters> means paperclipping!)
Any attempted implementation of CEV<Babyeaters> that does not result in the eating of babies sounds like a catastrophic failure. It would seem to result in the sneaking in of non-babyeater values at every excuse.
Insert abortion debate: Right to choose is morally coherent, and right to life is morally coherent. It is debatable which of these would constitute moral progress.
However, what is not morally coherent is that women have sole power over reproductive decisions while men have an obligation to support those choices, whatever they may be; that husbands don't have a say; and that unmarried men can be forced to support babies but women cannot.
This is not moral progress, but an anti-white-male democratic coalition.
One could coherently argue that right to choose, but no right to child support, is moral progress.
One could coherently argue that right to life, plus right to child support, is moral progress.
One cannot argue that right to choose plus right to child support is moral progress. It is morally right that he who pays the piper calls the tune, and that she who calls the tune gets stuck with the piper's bill.
I agree that the current system is inconsistent. If women are allowed to abort babies because babies are an expensive burden, men should either have an equal say in that decision or men shouldn't be obligated to support those children. Either one would make sense.
On second thoughts, let's not!
That's actually the best definition of "moral progress" that I've seen. A big step up from "more like the values that I currently wish to signal having", the default definition.
People on Less Wrong saying their CEV includes X or leads to Y are not using the term technically, but as a poetic way of saying "X and Y look about right to me, and I'm confident I'm not wrong." Substituting that into the definition makes it much less impressive for human use. And if you check the writing on CEV, you see the definition is nearly circular for technical use in FAI design.
Just not true. The writing on CEV does a lot to constrain technical thinking about FAI design. It isn't a complete solution, nor is it presented as one, but it certainly does rule out a lot of (indeed most) proposals for how an FAI should be designed and created. It simply doesn't fit the definition "circular".
(I had previously ignored this comment but upvotes as of now suggest that it may be successful in being actively misleading! As such, rejecting it seems more important.)
We seem to have a misunderstanding. Lots of writing on CEV refers to something called moral development or moral progress. I was criticizing the usefulness of the quoted definition of this something to CEV development, not work on CEV in general. I've edited the sentence somewhat to clarify that.
Perhaps. If so we can curse our shared language. I was replying to the below quote but I notice "its" isn't a unique reference!
I don't think violence has declined. State violence has increased. Further, since we are imprisoning a lot more people, looks like private violence has increased, supposing, as seems likely, most of them are being reasonably imprisoned.
Genghis Khan and the African slave trade cannot remotely match the crimes of communism.
And if it has declined, Xenophon would interpret this as us becoming pussies and cowards. Was Xenophon more violent and cruel than any similarly respectable modern man? Obviously. But he was nonetheless deservedly respectable. We rightly call the ten thousand brave, not criminal.
Social acceptance of brave, honorable, and manly violence has greatly diminished, and so brave, honorable and manly violence has greatly diminished. But vicious, horrifying, evil and depraved violence, for example petty crime and the various communist mass murders, has enormously increased.
This doesn't follow, unless by 'violence has increased' you mean that there are more incidents of violence. But this would be consistent with violence being extremely rare. So are we imprisoning people for violent crimes and at a higher rate?
I have the same questions about your claims of increased state violence. Has the rate of state violence gone up, or just the number of incidents? It's the former we're interested in.
I have read Pinker's arguments in detail in his book. I don't think Homer would have agreed. I bet this is not approaching Homer's CEV; this is self-domestication of humans. In any case, mind sharing how you implemented CEV checking on a mere human brain?
I meant, our behaviour being closer to our CEV than Homer's behaviour was to his CEV, if that makes sense. (Are you thinking of anything in particular about Homer or was it an arbitrary example?)
I wouldn't, but I can roughly guess what the result would be. (Likewise, I couldn't implement Solomonoff induction on any brain, but I still guess general relativity has less complexity than MOND.) If I had no way of guessing whether a given action is more likely to be good or to be bad, how should I ever decide what to do?
"It is entirely seemly for a young man killed in battle to lie mangled by the bronze spear. In his death all things appear fair. But when dogs shame the gray head and gray chin and nakedness of an old man killed, it is the most piteous thing that happens among wretched mortals."
in the context of Pinker's observation.
I don't think that makes sense. Also, I am pretty sure that Xenophon's behavior (massacre and pillage the bad guys and abduct their women) was a lot closer to his moral ideal than our behavior is to Xenophon's moral ideal.
Further, the behavior Xenophon describes others of the ten thousand performing is astonishingly close to his moral ideal, in that astonishing acts of heroism were routine, while the behavior I observe around me exhibits a major disconnect from our purported moral ideals, for example the John Derbyshire incident, though, of course, Xenophon was doubtless selective in what incidents he thought worthy to record.
Hmm... Well, the definition of a CEV is something like a point attractor in the space of a society's moral attitudes. So it's not too surprising if there is convergence of the society towards that CEV, i.e. "moral progress" as you define it is to be expected. Though whether there is a point attractor, as opposed to an attractor cycle (or chaotic attractor), seems to be an open question of course.
However, I'm struck by the thought that a Spartan society becoming "more perfectly Spartan" or a Taliban society becoming "more purely Islamic" over time would count as moral progress by the same token. So that the more thoroughly the slaves, women etc. are indoctrinated to accept the prevailing Spartan or Taliban norms, the "better" the society becomes. Does that also match your concept of moral progress?
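The point-attractor / attractor-cycle / chaotic-attractor distinction above can be illustrated with a standard toy dynamical system, the logistic map (purely illustrative; nothing about CEV depends on this particular map, and the function name is mine). The same update rule settles to a fixed point, a cycle, or chaos depending on a single parameter:

```python
def long_run(r, x0=0.4, burn_in=1000, keep=4):
    """Iterate the logistic map x -> r*x*(1-x), discard a burn-in,
    and return the next `keep` values, which reveal the attractor."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print(long_run(2.5))  # point attractor: every value is 0.6 (= 1 - 1/r)
print(long_run(3.2))  # attractor cycle: two values alternating
print(long_run(3.9))  # chaotic attractor: no repeating pattern
```

Which regime a society's "moral attitude dynamics" would fall into is exactly the open question the grandparent raises.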
No, CEV is the goals societies' "moral attitudes" (e.g. slavery) are trying to satisfy, applied to the FAI's best guess of the actual state of reality (e.g. black people weren't created by God with the purpose of acting as slaves to whites) and averaged out.
Well, is the indoctrination reversible? I.e., could those people reject Spartan or Taliban norms if they heard the right arguments, as happened to Lukeprog? (Which suggests to me a heuristic to tell which of two memeplexes is closer to the CEV of humanity: is the fraction of adult A-ists who convert to B-ism per year larger or smaller than the fraction of adult B-ists who convert to A-ism?)
If you're curious, you could try doing this heuristic on the Pew American religion survey which includes rich conversion data: http://religions.pewforum.org/pdf/report-religious-landscape-study-full.pdf (The results may surprise you!)
Thank you, I'll take a look at that.
Perhaps a superintelligent mind could create an argument that would convince any human of any belief. Why should such an ability have moral implications?
Well, such an ability would just as easily persuade you that the sky is green, so I'm guessing no.
That's my point.
I know, I was agreeing with you. Persuasiveness is not the same as accuracy.
That seems like a good heuristic for telling who has the best missionaries, writes the cleverest arguments, or best engineers society to reward their believers. I'm not sure it's a good heuristic for actually extrapolating volition.
Yes, it probably is reversible. It seems quite plausible to me that for most pairs of human ideological systems A, B, there is some combination of arguments, evidences, life-experiences etc. that would cause a randomly-selected adherent of A to switch to B. (The random selection would tend to avoid the most fanatical and committed adherents, but I'd guess even most of them could probably be "deprogrammed" by the right combination of stimuli.)
However, if you want to count the actual numbers of conversions happening right now, the statistics are messy: apparently just about every religious group (including the group of non-religious) claims they are the "fastest growing", all with some empirical justification. I found this Wikipedia article highly amusing in that context.
But what's the point here? If we are talking about humanity as a whole, then this may just show that there is no single CEV for all human societies everywhere. Instead, there are a huge number of attractor points in the moral attitudes space, and any given society tends to converge to the nearest attractor point... unless and until a major shock throws it out again (or breaks up the society).
Global humanity as a whole may perhaps now constitute a single society, and be moving towards the "liberal democracy" attractor point, which therefore defines a local CEV... but only because it's already in that basin of attraction. And even that's empirically more dubious than it was twenty years ago (I don't see China, Russia, or most of the Islamic world still moving that way, and a lot of Western countries have themselves become distinctly less liberal / democratic in recent years.)
Note that if at the beginning of the year A had one billion adult adherents and B had one hundred, and since then 160 of the former have converted to B and 60 of the latter have converted to A, my heuristic would still point towards B being wronger than A even though B has doubled in size and A has stayed pretty much the same. (And anyway, I was thinking more of memeplexes who have existed for at least a couple of generations -- with new ones it would be much more noisy.)
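Spelled out with the hypothetical numbers above (variable names are mine; this is just the stated heuristic made explicit):

```python
# Hypothetical numbers from the comment above.
a_start, b_start = 1_000_000_000, 100   # adult adherents at start of year
a_to_b, b_to_a = 160, 60                # conversions during the year

# The heuristic compares per-capita defection rates, not absolute flows.
frac_a_leaving = a_to_b / a_start       # 1.6e-07: a tiny sliver of A defects
frac_b_leaving = b_to_a / b_start       # 0.6: most of B defects

b_end = b_start + a_to_b - b_to_a       # 200: B still doubled in absolute size
print(frac_a_leaving < frac_b_leaving)  # True: heuristic points to B as "wronger"
```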
Is this a possible use of 'CEV?' So far as I understand CEV, it's not possible that it could change: our CEV is what we would want given all the correct moral arguments and all the information. Assuming that 'all the information' and 'all the correct moral arguments' are constants, how could the CEV of one society differ from that of another?
The only way I can think of is if the two societies are composed of fundamentally different kinds of beings. But the idea of moral progress you describe assumes that this is not the case.
Yes. A society's behaviors and its CEV can get closer together without the CEV changing at all. Also note that CEV<CultureX_2003> is a (very slightly) different thing from CEV<CultureX_2004>, even though neither of those "CEVs" changes at all.
A potential criticism of army's definition is that it allows for "cultural wireheading" and as such would be a lost purpose if "moral progress" were substituted in as an all-purpose goal or measure of achievement. (That said, I've never really thought of "moral progress" as that-which-should-be-optimised anyhow.)
Then why is "moral progress" a useful concept?
It describes how to compute that-which-should-be-optimized.
EDIT: Replied to wrong message. (Curse my android!)
Shoes aren't that-which-should-be-optimized either, but that doesn't mean that the concept of shoe is not useful.
So we're not saying that the CEV of a culture changes (this is a constant), but that the culture's actual moral practices and reasoning can change in relation to its CEV. And change such that it is closer or further away. Do I have that right?
Presumably, we wouldn't want to optimize moral progress, but rather morality.
The CEV of a culture changes (a little bit) every day; CEV<CultureX_specific_time> is a constant. This is because humans (and groups of humans) aren't stable, consistent optimisers. From what I understand, the CEV of a culture is relatively stable, certainly more stable than the culture itself. Nevertheless it is not fixed. We, all things considered and collectively, want (very nearly tautologically) our CEV to be stable, because that (approximately) maximises our current CEV. We just aren't that consistent.
That is one way in which the previously quoted proposition could be valid, yes.
I want to optimise whatever my preferences are. Morality seems to get a weight in there someplace.
I thought the whole point of CEV was to extrapolate forwards in time towards the ultimate reflectively-consistent set of values to formulate one single coherent utility function (with multiple parameters and variables, of course) that represents the optimal equilibrium of all that humans would want if they were exactly as they would want to be and would want exactly that which they would wish to want.
This reminds me more of CAV (Coherent Aggregated Volition) than CEV. CEV is, IIRC, intended as a bootstrap towards "Whatever humans would collectively find the best possible optimization after infinite re-evaluations", if any such meta-ethics exists.
The Coherent Extrapolated Volition of one group of humans is not the same thing as the Coherent Extrapolated Volition of another group of humans. Human populations change and even evolve over time due to forces that are not carefully constructed to move the population in the same direction as the CEV of their ancestors, and so later generations will not have the same CEV as previous ones.
Eliezer has a lot to answer for when it comes to encouraging magical thinking along the lines of "all (subsets of) humans have the same Coherent Extrapolated Volition". He may not be confused himself but his document certainly encourages it.
Maybe I just need to read up on the theory a little more, because I'm still quite confused. Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to be the set of things I want now?
I can see how the set of things I want now would change over time, but I'm having a hard time seeing why my CEV could ever change. Compare the CEPT, the Coherent Extrapolated Physical Theory, which is the theory of physics we would have if we had all the information and all the correct physics arguments. I can see how our present physical theories would change, but CEPT seems like it should be fixed.
But I suppose it's also true that CEPT supervenes on a set of basic, contingent physical facts. So does CEV also supervene on a set of basic, contingent wants? If so, I suppose a CEV can change depending on which basic wants I have. Is that right?
If so, does that mean I have to agree to disagree with an ancient Greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?
Yes. This needn't be the same for all agents: a rock would still not want anything no matter how many correct moral arguments and how much information you gave it, so CEV<rock> is indifferent to everything. Now you and Homer are much more similar than you and a rock, so your CEVs will be much more similar, but it's not obvious to me that they are necessarily exactly identical just because you're individuals of the same species.
I think you've got the right idea that CEV aims to find that fixed, ultimately-best-possible set of values.
If I understand correctly, CEV is mostly intended as a shortcut to arrive as close as possible to the same ethics we would have if all humans sat and thought and discussed and researched ethics for [insert arbitrarily large amount of time], until no more changes would occur in those ethics and the system would remain logically consistent and always the best choice in all circumstances and all futures, barring direct alteration of elementary human values.
There may be some conflation between CEV and particular implementations of it that were discussed previously, or with other CEV-like theories (e.g. Coherent Blended Volition). I may also be the one doing the conflating, though.
None of the people alive in Homer's times is alive today. Dunno about how “fundamentally” different we are -- I'd guess the difference between CEV<Homer> and CEV<Esar> is very small but not exactly zero.
Okay, I think I'm starting to get it. Is the idea that, both of us given all the correct moral arguments and all the information, an archaic Greek person and myself would still want different things?
Yes. For a more philosophical (and extreme) take on the issue, you can read Friedrich Nietzsche's On the Genealogy of Morals. Warning: Nietzsche is made of hyperbole, so it's often quite difficult to understand his substantive point.
In this case, the point is that the Greeks divided the world into good and bad, while we moderns divide the world into good and evil. What's the difference? It is possible to be bad at a sport, but, acting within the norms of the sport, it is impossible to be evil. Imagine how your moral perspective would be different if you only judged people based on whether they were "good at life" or "bad at life".
Indeed, I like Nietzsche's philosophy as I know it from second-hand accounts, but when I tried to read his own writings I had to force myself through the pages and gave up. (Maybe I used a bad translation or something.)
ISTM that many (most?) LWers also divide the world into good and bad, so, to the extent this is a fundamental disagreement between values rather than someone's confusion due to not knowing something/not thinking stuff through, CEV<LW> might be closer to CEV<Homer> than to CEV<Catholics in the late second millennium>!
BTW, I think I've also seen a two-dimensional model for that; I don't remember how the quadrant other than “good”, “bad” and “evil” (people who aren't terribly good at life, but at least try hard not to harm others as a result of their incompetence, even at a cost to themselves) was labelled -- wimps?
Sounds like two axes, one going from competent to incompetent, the other from well-intentioned to ill-intentioned.
Yes. Apparently sam0345 (if that's what he means by “his moral ideal”) thinks the two of you would still want very different things; wedrifid and I think you would want slightly different things.
Okay, thanks for taking the time to explain. This has been very helpful.
a) The word "different" seems to be missing from the above.
b) I don't know how CEV is defined or what it is supposed to be. Old-fashioned metaethics from that "diseased discipline", philosophy, seems much clearer to me.
c) I have only ever been saying that, as so far stated, such questions are imponderable.
It's in the question; it seemed redundant to me to put it in the answers too.
I downvoted because it implies moral progress is meaningless, rather than nonexistent.
Of course, I would probably downvote a bald claim that moral progress doesn't happen anyway.
One form of moral progress that doesn't look the same as "random walk plus retrospective teleology", is the case where some inconsistency of moral views is resolved.
For example, some dudes say that it's self-evident that all men are created equal. Then somebody notices that this doesn't really jibe with the whole slavery thing. So at least some of what gets called moral progress is just people learning to live up to their own stated principles.
Another way of looking at it is that there is little difference between the terminal values of me vs a mediaeval lord, but the lord is very confused about what instrumental values best achieve his terminal values (he wants the best for the peasants, but thinks that that is achieved by the application of strong discipline to prevent idleness, so values severe discipline).
By this reasoning, abolishing slavery was moral progress, but declaring that all men are equal was moral regress.
If the fallacy is slavery, then moral progress. What if the fallacy is that all men are created equal?
By your measure, hypocritical values dissonance, we morally regressed when some dudes said it was self-evident that all men are created equal, and have indeed been morally regressing ever since, since affirmative action and so forth are accompanied by ever greater levels of hypocrisy and pretense. While the abolition of slavery reduced one form of hypocritical values dissonance, other forms of hypocritical values dissonance have been increasing.
Example: Female emancipation, high accreditation rates for females. Most successful long lived marriages are quietly eighteenth century in private, and most people, whether out of sexism or realism, quietly act as if female credentials are less meaningful than equivalent male credentials. Your criterion is neutral as to whether we do this out of sexism or realism. Either way, by your criterion, it is an equally bad thing, and there is a mighty lot of it going on.
Not really. It was previous notions of class/race/nationality granting moral worth that were morally incoherent.
Because he's not a monkey, he's an ape.
I think he vastly overestimates the effect of optimism on technological development, versus, say, population size, disease levels and food supply.
Or, more accurately, you and I would be non-existent and some other group of beings would be quasi-immortal.