Lumifer comments on Welcome to Less Wrong! (7th thread, December 2014) - Less Wrong
Ask them, I'm not an altruist. But I heard it may have something to do with the concept of compassion.
Historically, it correlates quite well. You want to help the "good" people and in order to do this you need to kill the "bad" people. The issue, of course, is that definitions of "good" and "bad" in this context... can vary, and rather dramatically too.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don't like, will necessarily cause suffering.
The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.
--
Don't mind Lumifer. He's one of our resident Anti-Spirals.
But, here's a question: if you're angry at the Bad, why? Where's your hope for the Good?
Of course, that's something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
And yet he's consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation... his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can't help but think that he is driving valuable people away; having difficult people dominate the conversation can't be a good thing.
(To clarify, I'm not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don't do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
Really? Their "perspective" appears to consist in attempting to tear down any hopes, beliefs, or accomplishments someone might have, to the point of occasionally just making a dumb comment out of failure to understand substantive material.
Of course, I stated that a little too disparagingly, but see below...
Not just civility and niceness, but affirmative statements. That is, if you're trying to achieve group epistemic rationality, it is important to come out and say what one actually believes. Statistical learning from a training set of entirely positive or entirely negative examples is known to be extraordinarily difficult, in fact nigh impossible to do in efficient time (modulo "blah blah Solomonoff").
I think a good group norm is, "Even if you believe something controversial, come out and say it, because only by stating hypotheses and examining evidence can we ever update." Fully General Critique actually induces a uniform distribution across everything, which means one knows precisely nothing.
Besides which, nobody actually has a uniform distribution built into their real expectations in everyday life. They just adopt that stance when it comes time to talk about Big Issues, because they've heard of how Overconfidence Is Bad without having gotten to the part where Systematic Underconfidence Makes Reasoning Nigh-Impossible.
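The one-class-learning point above can be seen in a toy sketch (hypothetical data and labels, invented purely for illustration): a learner shown nothing but negative examples, the analogue of criticism with no affirmative claims, degenerates into a constant predictor and conveys no discriminating information at all.

```python
# Toy sketch (hypothetical data): a "learner" trained on one-class
# data degenerates into a constant predictor.

def train(examples):
    """Return the most common label in the training data."""
    labels = [label for _, label in examples]
    return max(set(labels), key=labels.count)

def predict(model, x):
    """A majority-label 'model' ignores its input entirely."""
    return model

# Training set: only negative examples (pure criticism).
train_set = [("idea_1", "bad"), ("idea_2", "bad"), ("idea_3", "bad")]
model = train(train_set)

# Every new idea gets the same verdict, whatever its merits.
print([predict(model, x) for x in ["idea_4", "idea_5"]])  # ['bad', 'bad']
```

The verdict carries zero bits about any particular idea, which is the sense in which Fully General Critique leaves one knowing precisely nothing.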
I think that anger at the Bad and hope for the Good are kind of flip sides of the same coin. I have a vague idea of how the world should be, and when the world does not conform to that idea, it irritates me. I would like a world full of highly rational and happy people cooperating to improve one another's lives, and I would like to see the subsequent improvements taking effect. I would like to see bright people and funding being channeled into important stuff like FAI and medicine and science, everyone working for the common good of humanity, and a lot of human effort going towards the endeavour of making everyone happy. I would like to see a human species which is virtuous enough that poverty is solved by everyone just sharing what they need, and war is solved because nobody wants to start violence. I want people to work together and be rational, basically, and I've already seen that work on a small scale so I have a lot of hope that we can upgrade it to a societal scale. I also have a lot of hope for things like cryonics/Alcor bringing people back to life eventually, MIRI succeeding in creating FAI, and effective altruism continuing to gain new members until we start solving problems from sheer force of numbers and funding.
But I try not to be too confident about exactly what a Good world looks like; a) I don't have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people's preferences and I'm not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
If you can simply improve the odds of people cooperating in such a manner, then I think that you will bring the world you envision closer. And the better you can improve those odds, the better the world will be.
--
Let us consider them, one by one.
This means that the goals of the people and groups will be more effectively realised. It is world-improving if and only if the goals towards which the group works are world-improving.
A group can be expected, on the whole, to work towards goals which appear to be of benefit to the group. The best way to ensure that the goals are world-improving, then, might be to (a) ensure that the "group" in question consists of all intelligent life (and not merely, say, Brazilians) and (b) ensure that the group's goals are carefully considered and inspected for flaws by a significant number of people.
(b) is probably best accomplished by encouraging voluntary cooperation, as opposed to unquestioning obedience to orders. (a) simply requires ensuring that it is well known that bigger groups are more likely to be successful, and punishing the unfair exploitation of outside groups.
On the whole, I think this is most likely a world-improving goal.
Altruism certainly sounds like a world-improving goal. Historically, there have been a few missteps in this field - mainly when one person proposes a way to get people to be more altruistic, but then someone else implements it in a way that ensures that he reaps the benefit of everyone else's largesse.
So, likely to be world-improving, but keep an eye on the people trying to implement your research. (Be careful if you implement it yourself - have someone else keep a close eye on you in that circumstance).
Critical thinking is good. However, again, take care in the implementation; simply teaching students what to write in the exam is likely to do much less good than actually teaching critical thinking. Probably the most important thing to teach students is to ask questions and to think about the answers - and the traditional exam format makes it far too easy to simply teach students to try to guess the teacher's password.
If implemented properly, likely to be world-improving.
...those are my thoughts on those goals. Other people will likely have different thoughts.
And these are all very virtuous things to say, but you're a human, not a computer. You really ought to at least lock your mind on some positive section of the nearby-possible and try to draw motivation from that (by trying to make it happen).
--
I think there's an implicit premise or two that you may have mentally included but failed to express, running along the lines of:
The all-controlling state is run by completely benevolent beings who are devoted to their duty and never make errors.
Sans such a premise, one lazy bureaucrat cribbing his cubicle neighbor's allocations, or a sloppy one switching the numbers on two careers, can cause a hell of a lot of pain by assigning people an inappropriate set of tasks. Zero say and the death penalty for disobedience then make that pain practically irremediable. Much of the reason governments are kept weak and ineffective is to mitigate and limit their ability to do terribly, terribly wicked things: governments are often highly skilled at doing such things, are in a unique position to do so, and can do so by minor accident. You seem to have ignored the possibility of anything going wrong when following your intuition.
Moreover, there's a second possible implicit premise:
These angels hold exactly and only the values shared by all mankind, and correct knowledge about everything.
Imagine someone with different values or beliefs in charge of that all-controlling state with the death penalty. For instance, I have previously observed that Boko Haram has a sliver of a valid point in their criticism of Western education when noting that it appears to have been a major driver in causing Western fertility rates to drop below replacement and show no sign of recovery. Obviously you can't have a wonderful future full of happy people if humans have gone extinct, therefore the Boko Haram state bans Western education on pain of death. For those already poisoned by it, such as you, you will spend your next ten years remedially bearing and rearing children and you are henceforth forbidden access to any and all reading material beyond instructions on diaper packaging. Boko Haram is confident that this is the optimal career for you and that they're maximizing the integral of human happiness over time, despite how much you may scream in the short term at the idea.
With such premises spelled out, I predict people wouldn't object to your ideal world so much as they'd object to the grossly unrealistic prospect. But without such, you're proposing a totalitarian dictatorship and triggering a hell of a lot of warning signs and heuristics and pattern-matching to slavery, tyranny, the Soviet Union, and various other terrible bad things where one party holds absolute power to tell other people how to live their life.
"But it's a benevolent dictatorship", I imagine you saying. Pull the other one, it has bells on. The neoreactionaries at least have a proposed incentive structure to encourage the dictator to be benevolent in their proposal to bring back monarchy. (TL;DR taxes go into the king's purse giving the king a long planning horizon) What have you got? Remember, you are one in seven billion people, you will almost certainly not be in charge of this all-powerful state if it's ever implemented, and when you do your safety design you should imagine it being in the hands of randoms at the least, and of enemies if you want to display caution.
--
There are reasons to suspect the tests would not work. "It would be nice to think that you can trust powerful people who are aware that power corrupts. But this turns out not to be the case." (Content Note: killing, mild racism.)
If you are "procrastinate-y", you would not be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult, and it is unlikely that you could do it, so you would soon be dead in this state yourself.
I don't know you well enough to say, but it's quite easy to pretend that one has no ideology. For clear thinking it's very useful to understand one's own ideological positions.
There is also a difference between doing science and scientism, which is about banner-wearing.
Oh, I definitely have some kind of inbuilt ideology - it's just that right now, I'm consciously trying to suppress/ignore it. It doesn't seem to converge with what most other humans want. I'd rather treat it as a bias, and try and compensate for it, in order to serve my higher level goals of satisfying people's preferences and increasing happiness and decreasing suffering and doing correct true science.
Ignoring something and working around a bias are two different things.
"Greetings, Comrade Acty. Today the Collective has decreed that you..." Do these words make your heart skip a beat in joyous anticipation, no matter how they continue?
Have you read "Brave New World"? "1984"? "With Folded Hands"? Do those depict societies you find attractive?
Exinanition is an attractive fantasy for some, but personal fantasies are not a foundation to build a society on.
You are clearly intelligent, but do you think? You have described the rich intellectual life at your school, but how much of that activity is of the sort that can solve a problem in the real world, rather than a facility at making complex patterns out of ideas? The visions that you have laid out here merely imagine problems solved. People will not do as you would want? Then they will be made to. How? "On pain of death." How can the executioners be trusted? They will be tested to ensure they use the power well.
How will they be tested? Who tests them? How does this system ever come into existence? I'm sure your imagination can come up with answers to all these questions, that you can slot into a larger and larger story. But it would be an exercise in creative fiction, an exercise in invisible dragonology.
And all springing from "My intuitions say that specialism increases output."
Exterminate all life, then. That will stop the suffering.
I'm sure you're really smart, and will go far. I'm concerned about the direction, though. Right now, I'm looking at an Unfriendly Natural Intelligence.
--
Wait a minute. You don't want them, or you do want them but shouldn't rely on what you want?
And I'm not just nitpicking here. This is why people are having bad reactions. On one level, you don't want those things, and on another you do. Seriously mixed messages.
Also, if you are physically there with your foot on someone's toe, that triggers your emotional instincts that say that you shouldn't cause pain. If you are doing things which cause some person to get hurt in some faraway place where you can't see it, that doesn't. I'm sure that many of the people who decided to use terrorism as an excuse for NSA surveillance won't step on people's toes or hurt any cats. If anything, their desire not to hurt people makes it worse. "We have to do these things for everyone's own good, that way nobody gets hurt!"
--
Of course, while most people would not want to live in BNW, most characters in BNW would not want to live in our society.
Why do you call inhabitants of such a state "citizens"? They are slaves.
Interesting. So you would like to be a slave.
...and do you understand why?
--
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
Maybe :-) The reason you've met a certain... lack of enthusiasm about your anger for good causes is because you're not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let's just say, it does not always lead to good outcomes.
--
If you stick around long enough, we shall see :-)