Lumifer comments on Welcome to Less Wrong! (7th thread, December 2014) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (635)
--
"The more you believe you can create heaven on earth the more likely you are to set up guillotines in the public square to hasten the process." -- James Lileks
--
That thing:
Besides, we're talking about "more likely", not "inevitably".
--
There is historical precedent for groups advocating equality, altruism, and other humanitarian causes to do a lot of damage and start guillotining people. You would probably be horrified and step off the train before it got to that point. But it's important to understand the failure modes of egalitarian, altruistic movements.
The French Revolution and the Russian Revolution / Soviet Union both ran into these failure modes and ended up killing lots of people. After slavery was abolished in the US, around one quarter of the freed slaves died.
These events were all horrible disasters from a humanitarian perspective. Yet I doubt that the original French Revolutionaries planned from the start to execute the aristocracy, and then execute many of their own factions for supposedly being counter-revolutionaries. I don't think Marx ever intended for the Russian Revolution and Soviet Union to have a high death toll. I don't think the original abolitionists ever expected the bloody Civil War followed by 25% of the former slaves dying.
Perhaps, once a movement for egalitarianism and altruism got started, an ideological death spiral caused so much polarization that it was impossible to stop people from going overboard and extending the movement's mandate in a violent direction. Perhaps at first, they tried to persuade their opponents to help them towards the better new world. When persuasion failed, they tried suppression. And when suppression failed, someone proposed violence, and nobody could stop them in such a polarized environment.
Somehow, altruism can turn pathological, and well-intentioned interventions have historically resulted in disastrous side-effects or externalities. That's why some people are cynical about altruistic political attitudes.
--
You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn't mean that they started it, or that they supported it every step of the way. But they were part of it.
The French Revolution and guillotines is indeed a rarer event. But if pathological altruism can result in such large disasters, then it's quite likely that it can also backfire in less spectacular ways that are still problematic.
As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.
"So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions."
That is exactly why I want to study social science. I want to do lots of experiments and research and reading and talking and thinking before I dare try any world-changing. That's why I think social science is important and valuable, why we should try very hard to be rational and careful when we do it, and why we should then listen to the conclusions. Interventions should be well thought through, evidence-based, and tried and observed on a small scale before being implemented on a large scale. Thinking through your ideas about laws/policies/interventions and gathering evidence on whether they might work - that's the kind of social science I think is important and the kind I want to do.
You're ignoring the rather large pachyderm in the room which goes by the name of Values.
Differences in politics and policies are largely driven not by disagreements over the right way to reach a goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price. Most changes in the world have both costs and benefits; you need to balance them to decide whether a change is worth it, and the balancing necessarily involves deciding what is more important and what is less important.
For example, imagine a trade-off: you can decrease the economic inequality in your society by X% by paying the price of slowing down the economic growth by Y%. Science won't tell you whether that price is acceptable -- you need to ask your values about it.
Failure often comes with worse consequences than just an unchanged status quo.
My model is that these revolutions created a power vacuum that got filled up. Whenever a revolution creates a power vacuum, you're kinda rolling the dice on the quality of the institutions that grow up in that power vacuum. The United States had a revolution, but it got lucky in that the institutions resulting from that revolution turned out to be pretty good, good enough that they put the US on the path to being the world's dominant power two centuries later. The US could have gotten unlucky if local military hero George Washington had declared himself king.
Insofar as leftist revolutions create worse outcomes, I think it's because the leftist creed is so anti-power that leftists don't carefully think through the incentives institutions need in order to manage that power. So the stable equilibrium they tend to drift towards is a sociopathic leader who can talk the talk about egalitarianism while viciously oppressing anyone who contests their power (think Mao or Stalin). Anyone intelligent can see that the sociopathic leader is pushing cartoon egalitarianism, and that's why these leaders are so quick to go for the throats of society's intellectuals. Pervasive propaganda takes care of the rest of the population.
Leftism might work for a different species such as bonobos, but human avarice needs to be managed through carefully designed incentive structures. Sticking your head in the sand and pretending avarice doesn't exist doesn't work. Eliminating it doesn't work either, because avaricious humans gain control of the elimination process. (Or, to put it another way, almost everyone who likes an idea like "let's kill all the avaricious humans" is themselves avaricious at some level. And by trying to put this plan into action, they're creating a new "defect/defect" equilibrium where people compete for power through violence, and the winners in this situation tend not to be the sort of people you want in power.)
Ask them; I'm not an altruist. But I've heard it may have something to do with the concept of compassion.
Historically, it correlates quite well. You want to help the "good" people and in order to do this you need to kill the "bad" people. The issue, of course, is that definitions of "good" and "bad" in this context... can vary, and rather dramatically too.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don't like, will necessarily cause suffering.
The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.
--
Don't mind Lumifer. He's one of our resident Anti-Spirals.
But, here's a question: if you're angry at the Bad, why? Where's your hope for the Good?
Of course, that's something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
And yet he's consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation... his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can't help but think that he is driving valuable people away; having difficult people dominate the conversation can't be a good thing.
(To clarify, I'm not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don't do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
Really? Their "perspective" appears to consist in attempting to tear down any hopes, beliefs, or accomplishments someone might have, to the point of occasionally just making a dumb comment out of failure to understand substantive material.
Of course, I stated that a little too disparagingly, but see below...
Not just civility and niceness, but affirmative statements. That is, if you're trying to achieve group epistemic rationality, it is important to come out and say what one actually believes. Statistical learning from a training-set of entirely positive or entirely negative examples is known to be extraordinarily difficult, in fact, nigh impossible (modulo "blah blah Solomonoff") to do in efficient time.
I think a good group norm is, "Even if you believe something controversial, come out and say it, because only by stating hypotheses and examining evidence can we ever update." Fully General Critique actually induces a uniform distribution across everything, which means one knows precisely nothing.
Besides which, nobody actually has a uniform distribution built into their real expectations in everyday life. They just adopt that stance when it comes time to talk about Big Issues, because they've heard of how Overconfidence Is Bad without having gotten to the part where Systematic Underconfidence Makes Reasoning Nigh-Impossible.
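The "uniform distribution means knowing precisely nothing" point can be made concrete with a short sketch (the numbers and hypothesis counts here are illustrative, not from the thread): a uniform belief over n hypotheses has the maximum possible Shannon entropy, log2(n) bits, while any actual stated belief carries less entropy and therefore more information.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 8
# "Fully General Critique": every hypothesis equally suspect.
uniform = [1 / n] * n
# An actual stated belief: one hypothesis favoured over the rest.
confident = [0.65] + [0.05] * 7

print(entropy(uniform))    # 3.0 bits -- the maximum for 8 hypotheses
print(entropy(confident))  # about 1.92 bits -- committing to a belief conveys information
```

Systematic underconfidence, on this view, is just drifting toward the 3.0-bit end of the scale: safe from being wrong, and useless for the same reason.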
I think that anger at the Bad and hope for the Good are kind of flip sides of the same coin. I have a vague idea of how the world should be, and when the world does not conform to that idea, it irritates me. I would like a world full of highly rational and happy people cooperating to improve one another's lives, and I would like to see the subsequent improvements taking effect. I would like to see bright people and funding being channeled into important stuff like FAI and medicine and science, everyone working for the common good of humanity, and a lot of human effort going towards the endeavour of making everyone happy. I would like to see a human species which is virtuous enough that poverty is solved by everyone just sharing what they need, and war is solved because nobody wants to start violence. I want people to work together and be rational, basically, and I've already seen that work on a small scale so I have a lot of hope that we can upgrade it to a societal scale. I also have a lot of hope for things like cryonics/Alcor bringing people back to life eventually, MIRI succeeding in creating FAI, and effective altruism continuing to gain new members until we start solving problems from sheer force of numbers and funding.
But I try not to be too confident about exactly what a Good world looks like; a) I don't have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people's preferences and I'm not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
If you can simply improve the odds of people cooperating in such a manner, then I think that you will bring the world you envision closer. And the better you can improve those odds, the better the world will be.
And these are all very virtuous things to say, but you're a human, not a computer. You really ought to at least lock your mind on some positive section of the nearby-possible and try to draw motivation from that (by trying to make it happen).
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
Maybe :-) The reason you've met a certain... lack of enthusiasm about your anger for good causes is because you're not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let's just say, it does not always lead to good outcomes.
--
If you stick around long enough, we shall see :-)