Lumifer comments on Welcome to Less Wrong! (7th thread, December 2014) - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (635)
"The more you believe you can create heaven on earth the more likely you are to set up guillotines in the public square to hasten the process." -- James Lileks
--
That thing:
Besides, we're talking about "more likely", not "inevitably".
--
There is historical precedent for groups advocating equality, altruism, and other humanitarian causes to do a lot of damage and start guillotining people. You would probably be horrified and step off the train before it got to that point. But it's important to understand the failure modes of egalitarian, altruistic movements.
The French Revolution and the Russian Revolution / Soviet Union both ran into these failure modes and started killing lots of people. After slavery was abolished in the US, around one quarter of the freed slaves died.
These events were all horrible disasters from a humanitarian perspective. Yet I doubt that the original French Revolutionaries planned from the start to execute the aristocracy, and then execute many of their own factions for supposedly being counter-revolutionaries. I don't think Marx ever intended for the Russian Revolution and Soviet Union to have a high death toll. I don't think the original abolitionists ever expected the bloody Civil War followed by 25% of the former slaves dying.
Perhaps, once a movement for egalitarianism and altruism got started, an ideological death spiral caused so much polarization that it was impossible to stop people from going overboard and extending the movement's mandate in a violent direction. Perhaps at first, they tried to persuade their opponents to help them towards the better new world. When persuasion failed, they tried suppression. And when suppression failed, someone proposed violence, and nobody could stop them in such a polarized environment.
Somehow, altruism can turn pathological, and well-intentioned interventions have historically resulted in disastrous side-effects or externalities. That's why some people are cynical about altruistic political attitudes.
--
You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn't mean that they started it, or that they supported it every step of the way. But they were part of it.
The French Revolution, with its guillotines, is indeed a rare kind of event. But if pathological altruism can result in such large disasters, then it's quite likely that it can also backfire in less spectacular ways that are still problematic.
As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.
"So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions."
That is exactly why I want to study social science. I want to do lots of experiments and research and reading and talking and thinking before I dare try and do any world-changing. That's why I think social science is important and valuable, and we should try very hard to be rational and careful when we do social science, and then listen to the conclusions. I think interventions should be well-thought-through, evidence-based, and tried and observed on a small scale before implemented on a large scale. Thinking through your ideas about laws/policies/interventions and gathering evidence on whether they might work or not - that's the kind of social science that I think is important and the kind I want to do.
You're ignoring the rather large pachyderm in the room which goes by the name of Values.
Differences in politics and policies are largely driven not by disagreements over the right way to reach the goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price. Most changes in the world have both costs and benefits; you need to balance them to decide whether the change is worth it, and the balancing necessarily involves deciding what is more important and what is less important.
For example, imagine a trade-off: you can decrease the economic inequality in your society by X% by paying the price of slowing down the economic growth by Y%. Science won't tell you whether that price is acceptable -- you need to ask your values about it.
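A minimal sketch of the point above, with made-up numbers: science can estimate X and Y, but the weights attached to equality and growth are a moral choice, and the same facts yield opposite verdicts under different weights. The function name and all figures here are hypothetical.

```python
def policy_score(inequality_reduction_pct, growth_slowdown_pct,
                 w_equality, w_growth):
    """Linear value function; the weights encode values, not facts."""
    return (w_equality * inequality_reduction_pct
            - w_growth * growth_slowdown_pct)

# Same empirical facts (X = 5, Y = 2), different values, opposite verdicts:
egalitarian = policy_score(5, 2, w_equality=2.0, w_growth=1.0)   # 8.0 -> worth it
growth_first = policy_score(5, 2, w_equality=0.5, w_growth=3.0)  # -3.5 -> not worth it
print(egalitarian, growth_first)
```

No amount of better data on X and Y changes the sign flip between the two evaluations; only the weights do.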
Disagreements including this one? It sounds as though you are saying in a conversation such as this one, you are more focused on working to achieve your values than trying to figure out what's true about the world... like, say, Arthur Chu. Am I reading you correctly in supporting something akin to Arthur Chu's position, or do I misunderstand?
Given how irrational people can be about politics, I'd guess that in many cases apparent "value" differences boil down to people being mindkilled in different ways. As rationalists, our goal is to have a calm, thoughtful, evidence-based discussion and figure out what's true. Building a map and unmindkilling one another is a collaborative project.
There are times when there is a fundamental value difference, but my feeling is that this is the possibility to be explored last. And if you do want to explore it, you should ask clarifying values questions (like "do you give the harms from a European woman who is raped and a Muslim woman who is raped equal weight?") in order to suss out the precise nature of the value difference.
Anyway, if you do agree with Arthur Chu that the best approach is to charge ahead imposing your values, why are you on Less Wrong? There's an entire internet out there of people having Arthur Chu style debates you could join. Less Wrong is a tiny region of the internet where we have Scott Alexander style debates, and we'd like to keep it that way.
In my (admittedly limited, I'm young) experience, people don't disagree on whether that tradeoff is worth it. People disagree on whether the tradeoff exists. I've never seen people arguing about "the tradeoff is worth it" followed by "no it isn't". I've seen a lot of arguments about "We should decrease inequality with policy X!" followed by "But that will slow economic growth!" followed by "No it won't! Inequality slows down economic growth!" followed by "Inequality is necessary for economic growth!" followed by "No it isn't!" Like with Obamacare - I didn't hear any Republicans saying "the tradeoff of raising my taxes in return for providing poor people with healthcare is an unacceptable tradeoff" (though I am sometimes uncharitable and think that some people are just selfish and want their taxes to stay low at any cost), I heard a lot of them saying "this policy won't increase health and long life and happiness the way you think it will".
"Is this tradeoff worth it?" is, indeed, a values question and not a scientific question. But scientific questions (or at least, factual questions that you could predict the answer to and be right/wrong about) could include: Will this policy actually definitely cause the X% decrease in inequality? Will this policy actually definitely cause the Y% slowdown in economic growth? Approximately how large is X? Approximately how much will a Y% slowdown affect the average household income? How high is inflation likely to be in the next few years? Taking that expected rate of inflation into account, what kind of things would the average family no longer be able to afford / not become able to afford, presuming the estimated decrease in average household income happens? What relation does income have to happiness anyway? How much unhappiness does inequality cause, and how much unhappiness do economic recessions cause? Does a third option (beyond implement this policy / don't implement it) exist, like implementing the policy but also implementing another policy that helps speed economic growth, or implementing some other radical new idea? Is this third option feasible? Can we think up any better policies which we predict might decrease inequality without slowing economic growth? If we set a benchmark that would satisfy our values, like the percentage of households able to afford valuable-and-life-improving item Z, then which policy is likely to better satisfy that benchmark - economic growth so that more people on average can afford Z, or inequality reduction so that more poor people become average enough to afford Z?
But, of course, this is a factual question. We could resolve this by doing an experiment, maybe a survey of some kind. We could take a number of left-wing policies, and a number of right-wing policies, and survey members of the "other tribe" on "why do you disagree with this policy?" and give them options to choose between like "I think reducing inequality is more important than economic growth" and "I don't think reducing inequality will decrease economic growth, I think it will speed it up". I think there are a lot of issues where people disagree on facts.
Like prisons - you have people saying "prisons should be really nasty and horrid to deter people from offending", and you have people saying "prisons should be quite nice and full of education and stuff so that prisoners are rehabilitated and become productive members of society and don't reoffend", and both of those people want to bring the crime rate down, but what is actually best at bringing crime rates down - nasty prisons or nice prisons? Isn't that a factual question, and couldn't we do some science (compare a nice prison, nasty prison, and average-kinda-prison control group, compare reoffending rates for ex-inmates of those prisons, maybe try an intervention where kids are deterred from committing crime by visiting nasty prison and seeing what it's like versus kids who visit the nicer prison versus a control group who don't visit a prison and then 10 years later see what percentage of each group ended up going to prison) to see who is right? And wouldn't doing the science be way better than ideological arguments about "prisoners are evil people and deserve to suffer!" versus "making people suffer is really mean!" since what we actually all want and agree on is that we would like the crime rate to come down?
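Once that experiment has been run, deciding "who is right" is a routine statistical comparison. A sketch of the analysis, with entirely hypothetical reoffending counts (the comment proposes no actual data), using a standard two-proportion z-test built from the standard library:

```python
import math

def two_proportion_z(reoffend_a, n_a, reoffend_b, n_b):
    """Z statistic for the difference between two reoffending rates."""
    p_a, p_b = reoffend_a / n_a, reoffend_b / n_b
    pooled = (reoffend_a + reoffend_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: 120 of 400 ex-inmates of the "nice" prison reoffend,
# versus 180 of 400 from the "nasty" prison.
z = two_proportion_z(120, 400, 180, 400)
print(round(z, 2))  # |z| > 1.96 -> difference unlikely to be chance at the 5% level
```

With made-up numbers like these the test comes out decisively, but the point is just that the disagreement resolves into an observable quantity rather than an ideological one.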
So we should ask the scientific question: "Which policies are most likely to lead to the biggest reductions in inequality and crime and the most economic growth, keep the most members of our population in good health for the longest, and provide the most cost-efficient and high-quality public services?" If we find the answer, and some of those policies seem to conflict, then we can consult our values to see what tradeoff we should make. But if we don't do the science first, how do we even know what tradeoff we're making? Are we sure the tradeoff is real / necessary / what we think it is?
In other words, a question of "do we try an intervention that costs £10,000 and is 100% effective, or do we do the 80% effective intervention that costs £80,000 and spend the money we saved on something else?" is a values question. But "given £10,000, what's the most effective intervention we could try that will do the most good?" is a scientific question and one that I'd like to have good, evidence-based answers to. "Which intervention gives most improvement unit per money unit?" is a scientific question and you could argue that we should just ask that question and then do the optimal intervention.
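The "improvement unit per money unit" criterion from the paragraph above can be stated in a few lines. All intervention names, costs, and effectiveness figures here are invented for illustration:

```python
# Hypothetical interventions: (name, cost in pounds, effectiveness 0-1).
interventions = [
    ("A", 10_000, 1.00),
    ("B", 80_000, 0.80),
    ("C", 25_000, 0.90),
]

def improvement_per_pound(cost, effectiveness):
    # "Improvement unit per money unit" from the comment above.
    return effectiveness / cost

best = max(interventions, key=lambda i: improvement_per_pound(i[1], i[2]))
print(best[0])  # "A": 1.00 / 10,000 beats the alternatives
```

Note that this only answers the scientific question; whether to spend the savings elsewhere, as in the £10,000 vs. £80,000 example, remains a values question.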
Failure often comes with worse consequences than just an unchanged status quo.
My model is that these revolutions created a power vacuum that got filled up. Whenever a revolution creates a power vacuum, you're kinda rolling the dice on the quality of the institutions that grow up in that power vacuum. The United States had a revolution, but it got lucky in that the institutions resulting from that revolution turned out to be pretty good, good enough that they put the US on the path to being the world's dominant power a few centuries later. The US could have gotten unlucky if local military hero George Washington had declared himself king.
Insofar as leftist revolutions create worse outcomes, I think it's because the leftist creed is so anti-power that leftists don't carefully think through the incentives for institutions to manage that power. So the stable equilibrium they tend to drift towards is a sociopathic leader who can talk the talk about egalitarianism while viciously oppressing anyone who contests their power (think Mao or Stalin). Anyone intelligent can see that the sociopathic leader is pushing cartoon egalitarianism, and that's why these leaders are so quick to go for the throats of society's intellectuals. Pervasive propaganda takes care of the rest of the population.
Leftism might work for a different species such as bonobos, but human avarice needs to be managed through carefully designed incentive structures. Sticking your head in the sand and pretending avarice doesn't exist doesn't work. Eliminating it doesn't work because avaricious humans gain control of the elimination process. (Or, to put it another way, almost everyone who likes an idea like "let's kill all the avaricious humans" is themselves avaricious at some level. And by trying to put this plan in to action, they're creating a new "defect/defect" equilibrium where people compete for power through violence, and the winners in this situation tend not to be the sort of people you want in power.)
Ask them, I'm not an altruist. But I heard it may have something to do with the concept of compassion.
Historically, it correlates quite well. You want to help the "good" people and in order to do this you need to kill the "bad" people. The issue, of course, is that definitions of "good" and "bad" in this context... can vary, and rather dramatically too.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don't like, will necessarily cause suffering.
The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.
--
Don't mind Lumifer. He's one of our resident Anti-Spirals.
But, here's a question: if you're angry at the Bad, why? Where's your hope for the Good?
Of course, that's something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
And yet he's consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation... his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can't help but think that he is driving valuable people away; having difficult people dominate the conversation can't be a good thing.
(To clarify, I'm not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don't do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
Really? Their "perspective" appears to consist in attempting to tear down any hopes, beliefs, or accomplishments someone might have, to the point of occasionally just making a dumb comment out of failure to understand substantive material.
Of course, I stated that a little too disparagingly, but see below...
Not just civility and niceness, but affirmative statements. That is, if you're trying to achieve group epistemic rationality, it is important to come out and say what one actually believes. Statistical learning from a training-set of entirely positive or entirely negative examples is known to be extraordinarily difficult, in fact, nigh impossible (modulo "blah blah Solomonoff") to do in efficient time.
I think a good group norm is, "Even if you believe something controversial, come out and say it, because only by stating hypotheses and examining evidence can we ever update." Fully General Critique actually induces a uniform distribution across everything, which means one knows precisely nothing.
Besides which, nobody actually has a uniform distribution built into their real expectations in everyday life. They just adopt that stance when it comes time to talk about Big Issues, because they've heard of how Overconfidence Is Bad without having gotten to the part where Systematic Underconfidence Makes Reasoning Nigh-Impossible.
I think that anger at the Bad and hope for the Good are kind of flip sides of the same coin. I have a vague idea of how the world should be, and when the world does not conform to that idea, it irritates me. I would like a world full of highly rational and happy people cooperating to improve one another's lives, and I would like to see the subsequent improvements taking effect. I would like to see bright people and funding being channeled into important stuff like FAI and medicine and science, everyone working for the common good of humanity, and a lot of human effort going towards the endeavour of making everyone happy. I would like to see a human species which is virtuous enough that poverty is solved by everyone just sharing what they need, and war is solved because nobody wants to start violence. I want people to work together and be rational, basically, and I've already seen that work on a small scale so I have a lot of hope that we can upgrade it to a societal scale. I also have a lot of hope for things like cryonics/Alcor bringing people back to life eventually, MIRI succeeding in creating FAI, and effective altruism continuing to gain new members until we start solving problems from sheer force of numbers and funding.
But I try not to be too confident about exactly what a Good world looks like; a) I don't have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people's preferences and I'm not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
If you can simply improve the odds of people cooperating in such a manner, then I think that you will bring the world you envision closer. And the better you can improve those odds, the better the world will be.
--
And these are all very virtuous things to say, but you're a human, not a computer. You really ought to at least lock your mind on some positive section of the nearby-possible and try to draw motivation from that (by trying to make it happen).
--
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
Maybe :-) The reason you've met a certain... lack of enthusiasm about your anger for good causes is because you're not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let's just say, it does not always lead to good outcomes.
--
If you stick around long enough, we shall see :-)