Why Many-Worlds Is Not The Rationally Favored Interpretation
Eliezer recently posted an essay on "the fallacy of privileging the hypothesis". What it's really about is the fallacy of privileging an arbitrary hypothesis. In the fictional example, a detective proposes that the investigation of an unsolved murder should begin by investigating whether a particular, randomly chosen citizen was in fact the murderer. Towards the end, this is likened to the presumption that one particular religion, rather than any of the other existing or even merely possible religions, is especially worth investigating.
However, in between the fictional and the supernatural illustrations of the fallacy, we have something more empirical: quantum mechanics. Eliezer writes, as he has before, that the many-worlds interpretation is the one - the rationally favored interpretation, the picture of reality which should be adopted given the empirical success of quantum theory. I have argued against this before, back when this site was just part of a blog. This site is about rationality, not physics, and the quantum case is not essential to the exposition of the fallacy. But given the regularity with which many-worlds metaphysics shows up in discussion here, perhaps it is worth presenting a case for the opposition.
Mathematical simplicity bias and exponential functions
One of the biases that is extremely prevalent in science, but rarely talked about anywhere, is the bias toward models that are mathematically simple and easy to operate on. Nature doesn't care much for mathematical simplicity. In particular, I'd say that as a good first approximation, if you think something fits an exponential function of either growth or decay, you're wrong. We have become so used to exponential functions, and to how convenient they are to work with, that we have forgotten that nature doesn't work that way.
But what about nuclear decay, you might ask - that's as close to real exponential decay as you get... and even that is nowhere close enough. Here's a log-log plot of the Chernobyl radiation release against a theoretical exponential decay curve.

Well, that doesn't look all that exponential. The thing is, even if you have a perfectly exponential decay process, as with the decay of a single nuclide, the exponential character is lost once you mix a heterogeneous group of such processes. Early on, the faster-decaying components dominate; then, gradually, those that decay more slowly; somewhere along the way you may have to deal with the products of decay (pure depleted uranium at first gets more radioactive with time, not less, as it decays into short half-life nuclides), and perhaps even with processes you didn't have to consider initially (like the creation of fresh radioactive nuclides by cosmic radiation).
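The mixing effect is easy to see numerically. Here is a minimal sketch - the half-lives and amounts are made-up illustration values, not Chernobyl data - showing that the total activity of a mixture of exponentially decaying nuclides does not itself decay exponentially: the fractional decay rate slows down over time as the fast components burn off.

```python
import math

# Hypothetical mixture of four nuclides, fast to slow decayers.
# Half-lives in years, initial amounts in arbitrary units.
half_lives = [0.02, 0.2, 2.0, 30.0]
amounts    = [1.0, 1.0, 1.0, 1.0]

def activity(t):
    """Total decay rate of the mixture at time t (sum of exponentials)."""
    total = 0.0
    for n0, hl in zip(amounts, half_lives):
        lam = math.log(2) / hl          # decay constant of this nuclide
        total += n0 * lam * math.exp(-lam * t)
    return total

# For a single exponential, activity(t+1)/activity(t) would be constant.
# For the mixture it rises toward 1: the decay flattens out.
for t in [0.0, 1.0, 2.0, 10.0]:
    print(t, activity(t))
```

Fit an exponential to the early points and you badly overestimate how fast the tail disappears - which is the "orders of magnitude wrong" problem described below.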
And that's the ideal case - counting how much radiation a sample produces, where the underlying process is exponential by the basic laws of physics - and it still gets us orders of magnitude wrong. When you're measuring something much vaguer, with far more complicated underlying mechanisms - changes in population, economy, or processing power - the situation is worse still.
According to the IMF, the world economy in 2008 was worth 69 trillion dollars PPP. Assuming 2% annual growth and a naive growth model, the entire world economy produced 12 cents PPP worth of value over the entire first century. Assuming a fairly stable population, an average person in 3150 will produce more than the entire world does now. With enough time, the dollar value of one hydrogen atom will exceed the current dollar value of everything on Earth. And of course, with proper exponential time-discounting of utility, the life of one person now is worth more than half of humanity a millennium into the future - exponential growth and exponential decay are both equally wrong.
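The back-extrapolation is a one-liner to check. This sketch takes the text's figures ($69 trillion PPP in 2008, 2% growth) and a rough 2008 population of 6.7 billion (my assumption, not from the text):

```python
# Naive constant-growth model, extrapolated absurdly far in both directions.
gdp_2008 = 69e12     # world GDP, 2008, dollars PPP (from the text)
growth = 1.02        # 2% annual growth

def gdp(year):
    """World GDP in a given year under the naive model."""
    return gdp_2008 * growth ** (year - 2008)

# Total world output summed over the entire first century (years 1..100):
first_century = sum(gdp(y) for y in range(1, 101))
print(first_century)        # ~0.12 dollars, i.e. about 12 cents

# Forward: per-capita output in 3150, assuming a stable ~6.7 billion people -
# comparable to the entire 2008 world economy by itself.
per_capita_3150 = gdp(3150) / 6.7e9
print(per_capita_3150)
```

The model literally says that all of humanity, for a hundred years, produced about a dime's worth of value - which is the artifact, not the history.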
To me these all look like clear artifacts of our growth models, but there are people so used to the models that they take such predictions seriously.
In case you're wondering, here are some estimates of past world GDP.
Sayeth the Girl
Disclaimer: If you are prone to dismissing women's complaints of gender-related problems as the women being whiny, emotionally unstable girls who see sexism where there is none, this post is unlikely to interest you.
For your convenience, links to followup posts: Roko says; orthonormal says; Eliezer says; Yvain says; Wei_Dai says
As far as I can tell, I am the most active female poster on Less Wrong. (AnnaSalamon has higher karma than I, but she hasn't commented on anything for two months now.) There are not many of us. This is usually immaterial. Heck, sometimes people don't even notice in spite of my girly username, my self-introduction, and the fact that I'm now apparently the feminism police of Less Wrong.
My life is not about being a girl. In fact, I'm less preoccupied with feminism and women's special interest issues than most of the women I know, and some of the men. It's not my pet topic. I do not focus on feminist philosophy in school. I took an "Early Modern Women Philosophers" course because I needed the history credit, had room for a suitable class in a semester when one was offered, and heard the teacher was nice, and I was pretty bored. I wound up doing my midterm paper on Malebranche in that class because we'd covered him to give context to Mary Astell, and he was more interesting than she was. I didn't vote for Hilary Clinton in the primary. Given the choice, I have lots of things I'd rather be doing than ferreting out hidden or less-than-hidden sexism on one of my favorite websites.
Unfortunately, nobody else seems to want to do it either, and I'm not content to leave it undone. I suppose I could abandon the site and leave it even more masculine so the guys could all talk in their own language, unimpeded by stupid chicks being stupidly offended by completely unproblematic things like objectification and just plain jerkitude. I would almost certainly have vacated the site already if feminism were my pet issue, or if I were more easily offended. (In general, I'm very hard to offend. The fact that people here have succeeded in doing so anyway without even, apparently, going out of their way to do it should be a great big red flag that something's up.) If you're wondering why half of the potential audience of the site seems to be conspicuously not here, this may have something to do with it.
Link: The Case for Working With Your Hands
The NY Times recently published a long semi-autobiographical article by Michael Crawford, a University of Chicago PhD graduate who is currently employed as a motorcycle mechanic. The article is partly a fairly standard lament about the alienation and drudgery of modern corporate work. But it is also very much about rationality. Here's an excerpt:
As it happened, in the spring I landed a job as executive director of a policy organization in Washington. This felt like a coup. But certain perversities became apparent as I settled into the job. It sometimes required me to reason backward, from desired conclusion to suitable premise. The organization had taken certain positions, and there were some facts it was more fond of than others. As its figurehead, I was making arguments I didn’t fully buy myself. Further, my boss seemed intent on retraining me according to a certain cognitive style — that of the corporate world, from which he had recently come. This style demanded that I project an image of rationality but not indulge too much in actual reasoning. As I sat in my K Street office, Fred’s life as an independent tradesman gave me an image that I kept coming back to: someone who really knows what he is doing, losing himself in work that is genuinely useful and has a certain integrity to it. He also seemed to be having a lot of fun.
I think this article will strike a chord with programmers. A large part of the satisfaction of motorcycle work that Crawford describes comes from the fact that such work requires one to confront reality, however harsh it may be. Reality cannot be placated by hand-waving, Powerpoint slides, excuses, or sweet talk. But the very harshness of the challenge means that when reality yields to the finesse of a craftsman, the reward is much greater. Programming has a similar aspect: a piece of software is basically either correct or incorrect. And programming, like mechanical work, allows one to interrogate and engage the system of interest through a very high-bandwidth channel: you write a test, run it, tweak it, re-run, etc.
Catchy Fallacy Name Fallacy (and Supporting Disagreement)
Related: The Pascal's Wager Fallacy Fallacy, The Fallacy Fallacy
We need a catchy name for the fallacy of being over-eager to accuse people of fallacies that you have catchy names for.
When you read an argument you don't like, but don't know how to attack on its merits, there is a trick you can turn to. Just say it commits1 some fallacy, preferably one with a clever name. Others will side with you, not wanting to associate themselves with a fallacy. Don't bother to explain how the fallacy applies, just provide a link to an article about it, and let stand the implication that people should be able to figure it out from the link. It's not like anyone would want to expose their ignorance by asking for an actual explanation.
What a horrible state of affairs I described in the last paragraph. It seems that, if we follow that advice, every fallacy we so much as know the name of makes us stupider. So I present a fallacy name that I hope will exactly counterbalance the effects I described. If you are worried that you might defend an argument that has been accused of committing some fallacy, you should be equally worried that you might support an accusation that commits the Catchy Fallacy Name Fallacy. Now that you have that problem either way, you might as well try to figure out whether the argument did indeed commit the fallacy, by examining the actual details of the fallacy and whether they actually describe the argument.
But, what is the essence of this Catchy Fallacy Name Fallacy? The problem is not the accusation of committing a fallacy itself, but that the accusation is vague. The essence is "Don't bother to explain". The way to avoid this problem is to entangle your counterargument, whether it makes a fallacy accusation or not, with the argument you intend to refute. Your counterargument should distinguish good arguments from bad arguments, in that it specifies criteria that systematically apply to a class of bad arguments but not to good arguments. And those criteria should be matched up with details of the allegedly bad argument.
The wrong way:
It seems that you've committed the Confirmation Bias.
The right way:
The Confirmation Bias is when you find only confirming evidence because you only look for confirming evidence. You looked only for confirming evidence by asking people for stories of their success with Technique X.
Notice how the right way would seem very out of place when applied against an argument it does not fit. This is what I mean when I say the counterargument should distinguish the allegedly bad argument from good arguments.
And, if someone commits the Catchy Fallacy Name Fallacy in trying to refute your arguments, or even someone else's, call them on it. But don't just link here, you wouldn't want to commit the Catchy Fallacy Name Fallacy Fallacy. Ask them how their counterargument distinguishes the allegedly bad argument from arguments that don't have the problem.
1 Of course, when I say that an argument commits a fallacy, I really mean that the person who made that argument, in doing so, committed the fallacy.
A Parable On Obsolete Ideologies
Followup to: Yudkowsky and Frank on Religious Experience, Yudkowsky and Frank on Religious Experience Pt 2
With sincere apologies to: Mike Godwin
You are General Eisenhower. It is 1945. The Allies have just triumphantly liberated Berlin. As the remaining leaders of the old regime are being tried and executed, it begins to become apparent just how vile and despicable the Third Reich truly was.
In the midst of the chaos, a group of German leaders come to you with a proposal. Nazism, they admit, was completely wrong. Its racist ideology was false and its consequences were horrific. However, in the bleak poverty of post-war Germany, people need to keep united somehow. They need something to believe in. And a whole generation of them have been raised on Nazi ideology and symbolism. Why not take advantage of the national unity Nazism provides while discarding all the racist baggage? "Make it so," you say.
The swastikas hanging from every boulevard stay up, but now they represent "traditional values" and even "peace". Big pictures of Hitler still hang in every government office, not because Hitler was right about racial purity, but because he represents the desire for spiritual purity inside all of us, and the desire to create a better society by any means necessary. It's still acceptable to shout "KILL ALL THE JEWS AND GYPSIES AND HOMOSEXUALS!" in public places, but only because everyone realizes that Hitler meant "Jews" as a metaphor for "greed", "gypsies" as a metaphor for "superstition", and "homosexuals" as a metaphor for "lust", and so what he really meant is that you need to kill the greed, lust, and superstition in your own heart. Good Nazis love real, physical Jews! Some Jews even choose to join the Party, inspired by their principled stand against spiritual evil.
The Hitler Youth remains, but it's become more or less a German version of the Boy Scouts. The Party infrastructure remains, but only as a group of spiritual advisors helping people fight the untermenschen in their own soul. They suggest that, during times of trouble, people look to Mein Kampf for inspiration. If they open to a sentence like "The Aryan race shall conquer all in its path", then they can interpret "the Aryan race" to mean "righteous people", and the sentence is really just saying that good people can do anything if they set their minds to it. Isn't that lovely?
Soon, "Nazi" comes to just be a synonym for "good person". If anyone's not a member of the Nazi Party, everyone immediately becomes suspicious. Why is she against exterminating greed, lust, and superstition from her soul? Does she really not believe good people can do anything if they set their minds to it? Why does he oppose caring for your aging parents? We definitely can't trust him with high political office.
Hardened Problems Make Brittle Models
Consider a simple decision problem: you arrange a date with someone, you arrive on time, your partner isn't there. How long do you wait before giving up?
Humans naturally respond to this problem by acting outside the box. Wait a little then send a text message. If that option is unavailable, pluck a reasonable waiting time from cultural context, e.g. 15 minutes. If that option is unavailable...
Wait, what?
The toy problem was initially supposed to help us improve ourselves - to serve as a reasonable model of something in the real world. The natural human solution seemed too messy and unformalizable, so we progressively removed nuances to make the model more extreme. We introduce Omegas, billions of lives at stake, total informational isolation, perfect predictors, finally arriving at some sadistic contraption that any normal human would run away from. But did the model stay useful and instructive? Or did we lose important detail along the way?
Many physical models, like gravity, have the nice property of stably approximating reality. Perturbing the positions of planets by one millimeter doesn't explode the Solar System the next second. Unfortunately, many of the models we're discussing here don't have this property. The worst offender yet seems to be Eliezer's "True PD", which requires the whole package of hostile psychopathic AIs, nuclear-scale payoffs, and informational isolation; any natural out-of-the-box solution, like giving the damn thing some paperclips or bargaining with it, would ruin the game. The same pattern has recurred in discussions of Newcomb's Problem, where people have stated that any minuscule amount of introspection into Omega makes the problem "no longer Newcomb's". That naturally led to ever more ridiculous uses of superpowers, like Alicorn's bead jar game, where (AFAIU) the mention of Omega is only required to enforce a certain assumption about its thought mechanism that's wildly unrealistic for a human.
Artificially hardened logic problems make brittle models of reality.
So I'm making a modest proposal. If you invent an interesting decision problem, please, first model it as a parlor game between normal people with stakes of around ten dollars. If the attempt fails, you have acquired a bit of information about your concoction; don't ignore it outright.
How to come up with verbal probabilities
Unfortunately, we are kludged together, and we can't just look up our probability estimates in a register somewhere when someone asks us "How sure are you?".
The usual heuristic for putting a number on the strength of beliefs is to ask "When you're this sure about something, what fraction of the time do you expect to be right in the long run?". This is surely better than just "making up" numbers with no feel for what they mean, but it still has its faults. The big one is that unless you've done your calibration, you may not have a good idea of how often you'd expect to be right.
I can think of a few different heuristics to use when coming up with probabilities to assign.
1) Pretend you have to bet on it. Imagine someone says "I'll give you ____ odds - which side do you want?", and figure out what the odds would have to be to make you indifferent to which side you bet on. Consider the question as though you were actually going to put money on it. If the question is covered on a prediction market, your answer is given to you.
2) Ask yourself how much evidence someone would have to give you before you're back to 50%. Since we're trying to update according to Bayes' law, knowing how much evidence it takes to bring you to 50% tells you the probability you're implicitly assigning.
For example, suppose someone says "I can guess people's names by their looks". If he guesses the first name right, and it's a common name, you'll probably write it off as a fluke. The second time, you'll probably suspect he knew the people or is somehow fooling you - but conditional on neither being the case, you'd still say he just got lucky. By Bayes' law, this suggests that you put the prior probability of him pulling this stunt at 0.1%<p<3%, and less than a 0.1% prior probability on him having his claimed skill. If it takes 4 correct calls to bring you to equal uncertainty either way, that's about 0.03^4 if they're common names, or one in a million1...
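The arithmetic behind heuristic 2 can be sketched as follows. This is a simplified model, assuming each correct guess of a common name has likelihood about 0.03 under "he's just lucky" and about 1 under "he has the claimed skill"; at 50/50, posterior odds are 1:1, so the prior odds must be the inverse of the accumulated likelihood ratio.

```python
# Back out the implicit prior from how much evidence brings you to 50%.
p_lucky_per_guess = 0.03   # assumed chance of guessing a common name by luck
n_guesses = 4              # correct calls needed before you're at 50/50

# Likelihood ratio accumulated over n_guesses correct calls:
likelihood_ratio = (1.0 / p_lucky_per_guess) ** n_guesses

# Prior that makes posterior odds exactly 1:1 after that much evidence:
implied_prior = 1.0 / (1.0 + likelihood_ratio)

print(likelihood_ratio)    # ~1.2 million: the evidence strength of 4 calls
print(implied_prior)       # ~8e-7, i.e. the "one in a million" from the text
```

Running the numbers recovers the estimate in the text: 0.03^4 ≈ 8×10^-7, about one in a million.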
This Didn't Have To Happen
My girlfriend/SO's grandfather died last night, running on a treadmill when his heart gave out.
He wasn't signed up for cryonics, of course. She tried to convince him, and I tried myself a little the one time I met her grandparents.
"This didn't have to happen. Fucking religion."
That's what my girlfriend said.
I asked her if I could share that with you, and she said yes.
Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you.
Instrumental Rationality is a Chimera
Eliezer observes, “Among all self-identified "rationalist" communities that I know of, and Less Wrong in particular, there is an obvious gender imbalance - a male/female ratio tilted strongly toward males,” and provides us with a selection of hypotheses that attempt to explain this notable fact, ranging over the usual cultural and biological explanations for male/female imbalances in any community. One important point is missing, however - a point raised by Yvain last week under the title Extreme Rationality: It's Not That Great. The fact is that we have not done anything yet. Eliezer writes under the assumption that women ought to want to study our writings, but since we have so far failed to produce a single practical application of our rationalist techniques, I really cannot blame women for staying away. They may be being more rational than we are.