Noticing the 5-second mindkill
I've been reading news and a headline popped out at me:
Some Conservative backbenchers stirred up controversy in the House of Commons Tuesday when they accused their own party of preventing them from speaking out in Parliament.
(For the US audience, the Canadian (and British) House of Commons is like the House of Representatives, only with less democracy.)
My first thought: how dare the Prime Minister muzzle democratically elected MPs!
(For the US audience, the Prime Minister in a majority government has the power of the President and the majority leaders in both chambers combined, and much much more. "MP" (Member of Parliament) is the equivalent of a "Rep." Backbenchers are the reps who don't get a portfolio in the administration. Indeed, there is basically no separation of legislative power from the executive. As I said, less democracy. Blame the Brits.)
Then I kept reading:
Warawa [the MP in question] did not specify the topic, but it’s widely believed that he wanted to bring up his motion calling on parliamentarians to condemn sex-selective abortion.
My next thought: oh, good on the Prime Minister to prevent that crazy lunatic from pushing his pro-life agenda!
And finally, my third thought: WTF just happened? I instantly changed my mind 180 degrees because I disagreed with the person's opinion, even though the original issue didn't go away. Mindkill in action. Had he been trying to promote, say, legalization of marijuana instead, I would have been twice as indignant about the evil PM.
Now, I do notice this sometimes (often when reading something on LW), but probably not every time it happens to me. I want to notice it more often.
So, I'm asking people to give their own (hopefully non-political) examples of noticing their instant about-face, and hopefully to share some experience in recognizing it more reliably.
Choose To Be Happy
Related to: I'm Scared; Purchase utilons and fuzzies separately
Expanded from this comment.
You have awakened as a rationalist, discarded your false beliefs, and updated on new evidence. You understand the dangers of UFAI, you do not look away from death or justify it. You realize your own weakness, and the Vast space of possible failures.
And understanding all this, you feel bad about it. Very bad, in fact. You are afraid of the dangers of the future, and you are horrified by the huge amounts of suffering. You have shut up and calculated, and the calculation output that you should feel 3^^^3 times as bad as you would over a stubbed toe. And a stubbed toe can be pretty bad.
But this reaction of yours is not rational. You should consider the options of choosing not to feel bad about bad things happening, and choosing to feel good no matter what.
If it were morally correct to kill everyone on earth, would you do it?
First consider the following question to make sure we're on the same page in terms of moral reasoning: social consequences aside, is it morally correct to kill one person to create a million people who would not have otherwise existed? Let's suppose these people are whisked into existence on a spaceship travelling away from earth at light speed, and they live healthy, happy lives, but eventually die.
I'd argue that anyone who adheres to "shut up and multiply" (i.e. total utilitarianism) has to say yes. Is it better to create one such person than to donate 200 dollars to Oxfam? Is one life worth more than a 200 million dollar donation to Oxfam? Seems pretty clear that the answers are "yes" and "no".
Now, suppose we have a newly created superintelligent FAI that's planning out how to fill the universe with human value. Should it first record everyone's brain, thus saving them, or should it do whatever it takes to explode as quickly as possible? It's hard to estimate how much it would slow things down to get everyone's brain recorded, but it's certainly some sort of constraint. Depending on the power of the FAI, my guess is somewhere between a second and a few hours. If the FAI is going to be filling the universe with computronium simulating happy, fulfilled humans at extremely high speeds, that's a big deal! A second's delay across the future light-cone of earth could easily add up to more than the value of every currently living human's life. It may sound bad to kill everyone on earth just to save a second (or maybe scan only a few thousand people for "research"), but that's only because of scope insensitivity. If only we understood just how good saving that second would be, maybe we would all agree that it is not only right but downright heroic to do so!
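For what it's worth, the back-of-envelope arithmetic behind "a second's delay could outweigh every currently living human's life" can be sketched. Every number below is an illustrative placeholder of mine, not a figure from the post:

```python
# Hedged back-of-envelope sketch: does one second of delayed expansion
# forfeit more value (in life-years) than everyone alive today has left?
# All inputs are made-up illustrative numbers.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

current_lives = 7e9                       # people alive today
remaining_years_per_life = 50             # assumed average years remaining
value_of_everyone = current_lives * remaining_years_per_life  # life-years

# Suppose the mature FAI eventually supports N simulated happy humans.
simulated_humans = 1e40                   # wild placeholder

# One second of delay forfeits one second of all those lives:
loss_per_second = simulated_humans / SECONDS_PER_YEAR  # life-years lost

print(loss_per_second > value_of_everyone)  # True under these assumptions
```

The conclusion is driven entirely by the assumed number of simulated humans; shrink it by thirty orders of magnitude and the comparison flips.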
A related scenario: a FAI that we are very, very sure correctly implements CEV sets up a universe in which everyone gets 20 years to live, starting from an adult transhuman state. It turns out that there are diminishing returns in terms of value to longer and longer life spans, and this is the best way to use the computational power. The transhumans have been modified not to have any anxiety or fear about death, and agree this is the best way to do things. Their human ancestors' desire for immortality is viewed as deeply wrong, even barbaric. In short, all signs point to this really being the coherent extrapolated volition of humanity.
Opinions on whether either of these scenarios is plausible aside, I'd like to hear reactions to them as thought experiments. Is this a problem for total utilitarianism or for CEV? Is this an argument for "grabbing the banana" as a species and, if necessary, knowingly making an AI that does something other than the morally correct thing? Anyone care to bite the bullet?
Sayeth the Girl
Disclaimer: If you are prone to dismissing women's complaints of gender-related problems as the women being whiny, emotionally unstable girls who see sexism where there is none, this post is unlikely to interest you.
For your convenience, links to followup posts: Roko says; orthonormal says; Eliezer says; Yvain says; Wei_Dai says
As far as I can tell, I am the most active female poster on Less Wrong. (AnnaSalamon has higher karma than I, but she hasn't commented on anything for two months now.) There are not many of us. This is usually immaterial. Heck, sometimes people don't even notice in spite of my girly username, my self-introduction, and the fact that I'm now apparently the feminism police of Less Wrong.
My life is not about being a girl. In fact, I'm less preoccupied with feminism and women's special interest issues than most of the women I know, and some of the men. It's not my pet topic. I do not focus on feminist philosophy in school. I took an "Early Modern Women Philosophers" course because I needed the history credit, had room for a suitable class in a semester when one was offered, and heard the teacher was nice, and I was pretty bored. I wound up doing my midterm paper on Malebranche in that class because we'd covered him to give context to Mary Astell, and he was more interesting than she was. I didn't vote for Hillary Clinton in the primary. Given the choice, I have lots of things I'd rather be doing than ferreting out hidden or less-than-hidden sexism on one of my favorite websites.
Unfortunately, nobody else seems to want to do it either, and I'm not content to leave it undone. I suppose I could abandon the site and leave it even more masculine so the guys could all talk in their own language, unimpeded by stupid chicks being stupidly offended by completely unproblematic things like objectification and just plain jerkitude. I would almost certainly have vacated the site already if feminism were my pet issue, or if I were more easily offended. (In general, I'm very hard to offend. The fact that people here have succeeded in doing so anyway without even, apparently, going out of their way to do it should be a great big red flag that something's up.) If you're wondering why half of the potential audience of the site seems to be conspicuously not here, this may have something to do with it.
Morality is Awesome
(This is a semi-serious introduction to the metaethics sequence. You may find it useful, but don't take it too seriously.)
Meditate on this: A wizard has turned you into a whale. Is this awesome?

"Maybe? I guess it would be pretty cool to be a whale for a day. But only if I can turn back, and if I stay human inside and so on. Also, that's not a whale.
"Actually, a whale seems kind of specific, and I'd be surprised if that was the best thing the wizard could do. Can I have something else? Eternal happiness maybe?"
Meditate on this: A wizard has turned you into orgasmium, doomed to spend the rest of eternity experiencing pure happiness. Is this awesome?
...
"Kind of... That's pretty lame, actually. On second thought I'd rather be the whale; at least that way I could explore the ocean for a while.
"Let's try again. Wizard: maximize awesomeness."
Meditate on this: A wizard has turned himself into a superintelligent god, and is squeezing as much awesomeness out of the universe as it could possibly support. This may include whales and starships and parties and jupiter brains and friendship, but only if they are awesome enough. Is this awesome?
...
"Well, yes, that is awesome."
What we just did there is called Applied Ethics. Applied ethics is about what is awesome and what is not. Parties with all your friends inside superintelligent starship-whales are awesome. ~666 children dying of hunger every hour is not.
(There is also normative ethics, which is about how to decide if something is awesome, and metaethics, which is about something or other that I can't quite figure out. I'll tell you right now that those terms are not on the exam.)
"Wait a minute!" you cry, "What is this awesomeness stuff? I thought ethics was about what is good and right."
I'm glad you asked. I think "awesomeness" is what we should be talking about when we talk about morality. Why do I think this?
-
"Awesome" is not a philosophical landmine. If someone encounters the word "right", all sorts of bad philosophy and connotations send them spinning off into the void. "Awesome", on the other hand, has no philosophical respectability, hence no philosophical baggage.
-
"Awesome" is vague enough to capture all your moral intuition by the well-known mechanisms behind fake utility functions, and meaningless enough that this is no problem. If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.
-
If you do manage to actually implement "awesomeness" as a maximization criterion, the results will be actually good. That is, "awesome" already refers to the same things "good" is supposed to refer to.
-
"Awesome" does not refer to anything else. You think you can just redefine words, but you can't, and this causes all sorts of trouble for people who overload "happiness", "utility", etc.
-
You already know that you know how to compute "Awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover. Instead it brings to mind concrete things like starship-whale math-parties and not-starving children, which is what we want anyway. You are already enabled to take joy in the merely awesome.
-
"Awesome" is implicitly consequentialist. "Is this awesome?" engages you to think of the value of a possible world, as opposed to "Is this right?" which engages you to think of virtues and rules. (Those things can be awesome sometimes, though.)
I find that the above is true about me, and is nearly all I need to know about morality. It handily inoculates against the usual confusions, and sets me in the right direction to make my life and the world more awesome. It may work for you too.
I would append the additional facts that if you wrote it out, the dynamic procedure to compute awesomeness would be hellishly complex, and that right now, it is only implicitly encoded in human brains, and nowhere else. Also, if the great procedure to compute awesomeness is not preserved, the future will not be awesome. Period.
Also, it's important to note that what you think of as awesome can be changed by considering things from different angles and being exposed to different arguments. That is, the procedure to compute awesomeness is dynamic and created already in motion.
If we still insist on being confused, or if we're just curious, or if we need to actually build a wizard to turn the universe into an awesome place (though we can leave that to the experts), then we can see the metaethics sequence for the full argument, details, and finer points. I think the best post (and the one to read if only one) is joy in the merely good.
META: Deletion policy
http://wiki.lesswrong.com/wiki/Deletion_policy
This is my attempt to codify the informal rules I've been working by.
I'll leave this post up for a bit, but strongly suspect that it will have to be deleted not too long thereafter. I haven't been particularly encouraged to try responding to comments, either. Nonetheless, if there's something I missed, let me know.
Singularity the hard way
So far, we only have one known example of the development of intelligent life; and that example is us. Humanity. That means that we have only one mechanism that is known to be able to produce intelligent life; and that is evolution. But by far the majority of life that is produced by evolution is not intelligent. (In fact, by far the majority of life produced by evolution appears to be bacteria, as far as I can tell. There are also a lot of beetles.)
Why did evolution produce such a steep climb in human intelligence, while not so much in the case of other creatures? That, I suspect, is at least partially because as humans we are not competing against other creatures anymore. We are competing against each other.
Also, once we managed to start writing things down and sharing knowledge, we shifted off the slow, evolutionary timescale and onto the faster, technological timescale. As technology improves, we find ourselves being more right, less wrong; our ability to affect the environment continually increases. Our intellectual development, as a species, speeds up dramatically.
And I believe that there is a hack that can be applied to this process; a mechanism by which the total intelligence of humanity as a whole can be rather dramatically increased. (It will take time). The process is simple enough in concept.
These thoughts were triggered by an article on some Ethiopian children who were given tablets by OLPC. They were chosen specifically on the basis of illiteracy (through the whole village) and were given no teaching (aside from the teaching apps on the tablets; some instruction on how to use the solar chargers was also given to the adults) and in fairly short order, they taught themselves basic literacy. (And had modified the operating system to customise it, and re-enable the camera).
My first thought was that this gives an upper bound on the cost of world literacy: at most one tablet per child (plus a bit for transportation).
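To put a very rough number on that bound, here's a sketch; both the tablet price and the count of illiterate children are assumptions of mine, not figures from the OLPC story:

```python
# Rough upper bound on the cost of world literacy via one tablet per
# illiterate child. All three inputs are assumed illustrative figures.
tablet_cost = 200            # USD per OLPC-class tablet, assumed
illiterate_children = 250e6  # assumed worldwide count of illiterate children
transport_overhead = 1.10    # +10% for transportation, assumed

upper_bound = tablet_cost * illiterate_children * transport_overhead
print(f"${upper_bound / 1e9:.0f} billion")  # $55 billion under these assumptions
```

Even if the true inputs are off by a factor of two in either direction, the point stands that the bound is finite and of the order of a large country's annual education budget, not an impossible sum.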
In short, we need world literacy. World literacy will allow anyone and everyone to read up on that which interests them. It will allow a vastly larger number of people to start thinking about certain hard problems (such as any hard problem you care to name). It will allow more eyes to look at science; more experiments to be done and published; more armour-piercing questions which no-one has yet thought to ask because there simply are not enough scientists to ask them.
World literacy would improve the technological progress of humanity; and probably, after enough generations, result in a humanity who we would, by today's standards, consider superhumanly intelligent. (This may or may not necessitate direct brain-computer interfaces.)
The aim, therefore, is to allow humanity, and not some human-made AI, to go *foom*. It will take some significant amount of time - following this plan means that our generation will do no more than continue a process that began some millions of years ago - but it does have this advantage; if it is humanity that goes *foom*, then the resulting superintelligences are practically guaranteed to be human-Friendly since they will be human. (For the moment, I discard the possibility of a suicidal superintelligence).
It also has this advantage; the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.
The main disadvantage is the time taken; this will take centuries at the least, perhaps millennia. It is likely that, along the way, a more traditional AI will be created.
[Link] "An OKCupid Profile of a Rationalist"
The rationalist in question, of course, is our very own EY.
Quotes giving a reasonable sample of the spectrum of reactions:
Epic Fail on the e-harmony profile. He’s over-signalling intelligence. There’s a good paper about how much to optimally signal, like when you have a PhD to put it on your business card or not. This guy is going around giving out business cards that read Prof. Dr. John Doe, PhD, MA, BA. He won’t be getting laid any time soon.
His profile is probably very effective for aspergery girls who like reading the kinds of things that appear on LessWrong. Yudkowsky is basically a celebrity within a small niche of hyper-nerdy rationalists, so I doubt he has much trouble getting laid by girls in that community.
You make it sound like a cult leader or something....And reading the profile again with that lens, it actually makes a lot of sense.
I was about to agree [that the profile is oversharing], but then come to think of it, I realize I have an orgasm denial fetish, too. It’s an aroused preference that never escaped to my non-aroused self-consciousness.
Why is this important to consider?
LessWrong as a community is dedicated to trying to "raise the sanity waterline," and its most respected members in particular put a lot of resources into outreach, via CFAR, HPMoR, and maintaining this site. But a big factor in how people perceive our brand of rationality is image. If we're serious about raising the sanity waterline, that means image management - or at least avoiding active image malpractice - is something we should enthusiastically embrace as a way to achieve our goals. [1]
This is also a valuable exercise in considering the outside view. Marginal Revolution is already a fairly WEIRD site, focused on abstract economic issues. If any major blog is likely to be sympathetic to our cultural quirks, this would be it. Yet a plurality of commenters reacted negatively.
To the extent that we didn't notice anything strange about LW's figurehead having this OKCupid profile, LW either failed at calibrating mainstream reaction, or failed at consequentialism and realizing the drag this would have on our other recruitment efforts. In our last discussion, there were only a few commenters raising concerns, and the consensus of the thread was that it was harmless and had no PR consequences worth noting.
As one commenter cogently put it,
I’m not saying that he’s trying to make a statement with this, I’m saying that he is making a statement about this whether he’s trying to or not. Ideas have consequences for how we live our lives, and that Eliezer has a public, identifiable profile up where he talks about his sexual fetishes is not some sort of randomly occurring event with no relationship to his other ideas.
I'd argue the same reasoning applies to the community at large, not just EY specifically.
[1] From Anna's excellent article: 5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")
Thwarting a Catholic conversion?
I recently learned that a friend of mine, and a long-time atheist (and atheist blogger), is planning to convert to Catholicism. It seems the impetus for her conversion was increasing frustration that she had no good naturalistic account for objective morality in the form of virtue ethics; that upon reflection, she decided she felt like morality "loved" her; that this feeling implied God; and that she had sufficient "if God, then Catholicism" priors to point toward Catholicism, even though she's bisexual (!) and purports to still feel uncertain about the Church's views on sexuality. (Side note: all of this information is material she's blogged about herself, so it's not as if I'm sharing personal details she would prefer to be kept private.)
First, I want to state the rationality lesson I learned from this episode: atheists who spend a great deal of their time analyzing and even critiquing the views of a particular religion are at-risk atheists. Eliezer's spoken about this sort of issue before ("Someone who spends all day thinking about whether the Trinity does or does not exist, rather than Allah or Thor or the Flying Spaghetti Monster, is more than halfway to Christianity."), but I guess it took a personal experience to really drive the point home. When I first read my friend's post, I had a major "I notice that I am confused" moment, because it just seemed so implausible that someone who understood actual atheist arguments (as opposed to dead little sister Hollywood Atheism) could convert to religion, and Catholicism of all things. I seriously considered (and investigated) the possibility that her post was some kind of prank or experiment or otherwise not sincere, or that her account had been hijacked by a very good impersonator (both of these seem quite unlikely at this point).
But then I remembered how I had been frustrated in the past by her tolerance for what seemed like rank religious bigotry and how often I thought she was taking seriously theological positions that seemed about as likely as the 9/11 attacks being genuinely inspired and ordained by Allah. I remembered how I thought she had a confused conception of meta-ethics and that she often seemed skeptical of reductionism, which in retrospect should have been a major red flag for purported atheists. So yeah, spending all your time arguing about Catholic doctrine really is a warning sign, no matter how strongly you seem to champion the "atheist" side of the debate. Seriously.
But second, and more immediately, I wonder if anybody has advice on how to handle this, or if they've had similar experiences with their friends. I do care about this person, and I was devastated to hear this news, so if there's something I can do to help her, I want to. Of course, I would prefer most that she stop worrying about religion entirely and just grok the math that makes religious hypotheses so unlikely as to not be worth your time. But in the short term I'd settle for her not becoming a Catholic, and not immersing herself further in Dark Side Epistemology or surrounding herself with people trying to convince her that she needs to "repent" of her sexuality.
I think I have a pretty good understanding of the theoretical concepts at stake here, but I'm not sure where to start or what style of argument is likely to have the best effect at this point. My tentative plan is to express my concern, try to get more information about what she's thinking, and get a dialogue going (I expect she'll be open to this), but I wanted to see if you all had more specific suggestions, especially if you've been through similar experiences yourself. Thanks!
No need for gravity?
I don’t know where else to go with this idea. I’m not a physicist and it could be obviously wrong for some reason I’m missing, but it seems to me that there is a small chance that I’ve figured out how to remove one of the fundamental forces from our models of the universe: gravity, to be specific.
So we’ve all heard of dark energy, the force driving the accelerating expansion of the universe. Presumably it comes from somewhere, perhaps from every piece of matter in the universe, perhaps only stars, perhaps only black holes, but as long as it’s not all coming from a single source, it’s probably coming from something that is relatively common, and primarily found within galaxies. And if I magically came to KNOW that it does all come from one source, I would do very little beyond deleting a few arrows in my diagrams to change this post.
And as the Hubble deep field scans showed, there are A LOT of galaxies in any direction you look (at least from Earth). So, countless galaxies in all directions are emitting dark energy, with tiny rays from each one hitting our galaxy, as well as galaxies in every other direction.
Of course our galaxy doesn’t have a plastic shell around it for dark energy to hit. It’s specific things in the galaxy that get hit by specific emissions of dark energy, just like it’s specific objects that emit those emissions.
Here are some galaxies. Beyond the ones shown, there are more and more in all directions for as far as anyone knows so far. Please note that nothing in any of these diagrams is drawn to scale.

Any one of them emits dark energy pretty uniformly in all directions, as it does with light.

Given the sheer number of other galaxies off in all directions, the total dark energy hitting a galaxy would look something like this. The magnitude of the dark energy forces coming in from the rest of the universe ought to be a lot more than what our one little galaxy puts out.

Now let’s look inside this galaxy, at a single solar system. Dark energy converges from all directions, as at the perimeter of the galaxy, since the galaxy is mostly empty space, and (I’m presuming) a more or less negligible amount is added from other objects within the galaxy.

Now let’s consider a single planet within the solar system

The sun casts a shadow in the dark energy field; some of the dark energy headed in the direction of our planet strikes the sun along the way and never makes it to the planet. To a lesser extent, the planet shields the sun as well. As the planet revolves around the sun, there is always a void in the otherwise all-pervasive dark energy field in the direction of the sun. As rudimentary as these diagrams are, you might as well just rotate your monitor if you really need a visual (keeping the screen on the same plane).
Each dark energy vector has an equal opposite cancelling it out, except in the shadow. Any other imbalance would be the same for the planet as it is for the star, and therefore would not alter their positions relative to each other. Even if it were all coming from one direction.

The planet always has one region that is only hit with the sun’s own dark energy emissions (if it has any), whereas all other sides are being hit with dark energy from all of the rest of the universe (in that direction), hence the illusion of gravity.
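Incidentally, this shadow mechanism is essentially the old "push gravity" idea usually credited to Le Sage, and its central geometric property can be checked numerically: the net push toward the occluding body scales with the solid angle of its shadow, which falls off as an inverse square. The sketch below is entirely my own illustration, in arbitrary units:

```python
import math

# Minimal sketch of the shadow idea (a Le Sage-style model): if space is
# filled with a uniform, isotropic flux of pushing particles, the net force
# on a planet equals the flux missing from the solid angle blocked by the
# sun. Radii, distances, and flux are in arbitrary units.

def occluded_solid_angle(sun_radius, distance):
    """Solid angle (steradians) of the sun's disk as seen from the planet."""
    return 2 * math.pi * (1 - math.sqrt(1 - (sun_radius / distance) ** 2))

def net_push(sun_radius, distance, flux=1.0):
    """Net force toward the sun, proportional to the shadowed solid angle."""
    return flux * occluded_solid_angle(sun_radius, distance)

# For distances large compared to the sun's radius, the shadow shrinks like
# 1/d^2, so the model reproduces an inverse-square attraction:
f1 = net_push(sun_radius=1.0, distance=100.0)
f2 = net_push(sun_radius=1.0, distance=200.0)
print(f2 / f1)  # close to 0.25: doubling the distance quarters the push
```

This inverse-square behavior is why shadow models look superficially like Newtonian gravity; the standard objections to them (drag on orbiting bodies, heating) lie elsewhere.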
Similarly, a supermassive black hole at the center of a galaxy would cast a dark energy shadow on everything else in the galaxy, so the net push of all the dark energy vectors hitting a given star is toward the center of the galaxy, keeping it in orbit. This would be true at any given moment; the direction of the greatest push rotates around, but it is always toward the black hole.

The planets around the star are shielded by the black hole roughly the same amount as the star is, but they are much more strongly affected by the shielding from the star than the black hole is.
Now, consider 2 galaxies:

Each emits a little bit of dark energy of its own, and is mostly empty space so that much of the dark energy from other galaxies beyond it passes right through. I'm thinking no one object within a galaxy is emitting more dark energy than it shields its neighbors from. There is some galaxy-to-galaxy shielding, but it is very weak due to the amount of dark energy that can pass right through without hitting anything. This would be consistent with the galaxies spreading out from each other without being ripped apart by the force causing them to spread. So, between galaxies, there is a repulsive effect, while within galaxies, there is primarily an effect of shielding from the all-pervasive repulsion from dark energy.