What Do We Mean By "Rationality"?
We mean:
- Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.
- Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".
If that seems like a perfectly good definition, you can stop reading here; otherwise continue.
Let them eat cake: Interpersonal Problems vs Tasks
When I read Alicorn's post on problems vs tasks, I immediately realized that the proposed terminology helped express one of my pet peeves: the resistance in society to applying rationality to socializing and dating.
In a thread long, long ago, SilasBarta described his experience with dating advice:
I notice all advice on finding a girlfriend glosses over the actual nuts-and-bolts of it.
In Alicorn's terms, he would be saying that the advice he has encountered treats problems as if they were tasks. Alicorn defines these terms a particular way:
It is a critical faculty to distinguish tasks from problems. A task is something you do because you predict it will get you from one state of affairs to another state of affairs that you prefer. A problem is an unacceptable/displeasing state of affairs, now or in the likely future. So a task is something you do, or can do, while a problem is something that is, or may be.
Yet as she observes in her post, treating genuine problems as if they were defined tasks is a mistake:
Because treating problems like tasks will slow you down in solving them. You can't just become immortal any more than you can just make a peanut butter sandwich without any bread.
Similarly, many straight guys or queer women can't just find a girlfriend, and many straight women or queer men can't just find a boyfriend, any more than they can "just become immortal."
Dying Outside
A man goes in to see his doctor, and after some tests, the doctor says, "I'm sorry, but you have a fatal disease."
Man: "That's terrible! How long have I got?"
Doctor: "Ten."
Man: "Ten? What kind of answer is that? Ten months? Ten years? Ten what?"
The doctor looks at his watch. "Nine."
Recently I received some bad medical news (although not as bad as in the joke). Unfortunately I have been diagnosed with a fatal disease, Amyotrophic Lateral Sclerosis or ALS, sometimes called Lou Gehrig's disease. ALS causes nerve damage, progressive muscle weakness and paralysis, and ultimately death. Patients lose the ability to talk, walk, move, eventually even to breathe, which is usually the end of life. This process generally takes about 2 to 5 years.
There are however two bright spots in this picture. The first is that ALS normally does not affect higher brain functions. I will retain my abilities to think and reason as usual. Even as my body is dying outside, I will remain alive inside.
The second relates to survival. Although ALS is generally described as a fatal disease, this is not quite true. It is only mostly fatal. When breathing begins to fail, ALS patients must make a choice. They have the option to either go onto invasive mechanical respiration, which involves a tracheotomy and breathing machine, or they can die in comfort. I was very surprised to learn that over 90% of ALS patients choose to die. And even among those who choose life, for the great majority this is an emergency decision made in the hospital during a medical respiratory crisis. In a few cases the patient will have made his wishes known in advance, but most of the time the procedure is done as part of the medical management of the situation, and then the ALS patient either lives with it or asks to have the machine disconnected so he can die. Probably fewer than 1% of ALS patients arrange to go onto ventilation when they are still in relatively good health, even though this provides the best odds for a successful transition.
When Willpower Attacks
Less Wrong has held many discussions of willpower. All of them have focused on the cases where willpower fails, and its failure causes harm, such as procrastination, overeating and addiction. Collectively, we call these behaviors akrasia. Akrasia is any behavior that we believe is harmful, but do anyway due to a lack of willpower. Akrasia, however, represents only a small subset of the cases in which willpower fails, and focusing on it too much creates an availability bias that skews our perception of what willpower is, how it works and how much of it is desirable. To counter this bias, I present here some common special cases where strong willpower is harmful or even fatal.
The Lifespan Dilemma
One of our most controversial posts ever was "Torture vs. Dust Specks". Though I can't seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said "I'm a utilitarian consequentialist", and the professor said "No you're not" and told them about SPECKS vs. TORTURE, and then the student - to the professor's surprise - chose TORTURE. (Yay student!)
In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student - at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.
Let Them Debate College Students
(EDIT: Woozle has an even better idea, which would apply to many debates in general if the true goal were seeking resolution and truth.)
Friends, Romans, non-Romans, lend me your ears. I have for you a modest proposal, in this question of whether we should publicly debate creationists, or freeze them out as unworthy of debate.
My fellow humans, I have two misgivings about this notion that there should not be a debate. My first misgiving is that - even though on this particular occasion scientific society is absolutely positively not wrong to dismiss creationism - this business of not having debates sounds like dangerous business to me. Science is sometimes wrong, you know, even if it is not wrong this time, and debating is part of the recovery process.
And my second misgiving is that, like it or not, the creationists are on the radio, in the town halls, and of course on the Web, and they are already talking to large audiences; and the idea that there is not going to be a debate about this, may be slightly naive.
"But," you cry, "when prestigious scientists lower themselves so far as to debate creationists, afterward the creationists smugly advertise that prestigious scientists are debating them!"
Ah, but who says that prestigious scientists are required to debate creationists?
Fake Justification
Many Christians who've stopped really believing now insist that they revere the Bible as a source of ethical advice. The standard atheist reply is given by Sam Harris: "You and I both know that it would take us five minutes to produce a book that offers a more coherent and compassionate morality than the Bible does." Similarly, one may try to insist that the Bible is valuable as a literary work. Then why not revere Lord of the Rings, a vastly superior literary work? And despite the standard criticisms of Tolkien's morality, Lord of the Rings is at least superior to the Bible as a source of ethics. So why don't people wear little rings around their neck, instead of crosses? Even Harry Potter is superior to the Bible, both as a work of literary art and as moral philosophy. If I really wanted to be cruel, I would compare the Bible to Jacqueline Carey's Kushiel series.
"How can you justify buying a $1 million gem-studded laptop," you ask your friend, "when so many people have no laptops at all?" And your friend says, "But think of the employment that this will provide—to the laptop maker, the laptop maker's advertising agency—and then they'll buy meals and haircuts—it will stimulate the economy and eventually many people will get their own laptops." But it would be even more efficient to buy 5,000 OLPC laptops, thus providing employment to the OLPC manufacturers and giving out laptops directly.
I've touched before on the failure to look for third alternatives. But this is not really motivated stopping. Calling it "motivated stopping" would imply that there was a search carried out in the first place.
Evolved Bayesians will be biased
I have a small theory which strongly implies that getting less biased is likely to make "winning" more difficult.
Imagine some sort of evolving agents that follow vaguely Bayesianish logic. They don't have infinite resources, so they rely on a lot of heuristics rather than applying Bayes' rule directly with priors based on Kolmogorov complexity. Still, they employ a procedure A to estimate what the world is like based on the available data, and a procedure D to make decisions based on those estimates, both of a vaguely Bayesian kind.
Let's be kind to our agents and grant that for every possible data and every possible decision they might have encountered in their ancestral environment, they make exactly the same decision as an ideal Bayesian agent would. A and D have been fine-tuned to work perfectly together.
That doesn't mean that either A or D are perfect even within this limited domain. Evolution wouldn't care about that at all. Perhaps different biases within A cancel each other. For example an agent might overestimate snakes' dangerousness and also overestimate his snake-dodging skills - resulting in exactly the right amount of fear of snakes.
Or perhaps a bias in A cancels another bias in D. For example an agent might overestimate his chance of success at influencing tribal policy, which neatly cancels his unreasonably high threshold for trying to do so.
And then our agents left their ancestral environment, and found out that for some of the new situations their decisions aren't that great. They thought about it a lot, noticed how biased they are, and started a website on which they teach each other how to make their A more like perfect Bayesian's A. They even got quite good at it.
Unfortunately they have no way of changing their D. So biases in their decisions which used to neatly counteract biases in their estimation of the world now make them commit a lot of mistakes even in situations where naive agents do perfectly well.
The problem is that for virtually every A and D pair that could have possibly evolved, no matter how good the pair is together, neither A nor D would be perfect in isolation. In all likelihood both A and D are ridiculously wrong, just in a special way that never hurts. Improving one without improving the other, or improving just part of either A or D, will lead to much worse decisions, even if your idea of what the world is like gets better.
I think humans might be a lot like that. As an artifact of evolution we make incorrect guesses about the world, and choices that would be incorrect given our guesses - just in a way that worked really well in the ancestral environment, and works well enough most of the time even now. Depressive realism is a special case of this effect, but the problem is much more general.
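The A/D cancellation argument above can be sketched as a toy simulation. All the numbers here (the flight threshold, the factor-of-2 biases) are invented for illustration; the point is only the structure: two biases that cancel inside one domain, and the failure that appears when you debias A alone while D stays evolved.

```python
# Toy sketch of the A/D cancellation argument (illustrative numbers only).
# A estimates how dangerous a situation is; D turns that estimate into a
# flee/stay decision. Each procedure is biased, but the biases cancel.

THRESHOLD = 0.10  # assumed: flee when felt risk exceeds this


def A_ideal(true_danger):
    """An unbiased estimator: reports the true danger."""
    return true_danger


def A_biased(true_danger):
    """Evolved estimator: overestimates danger by a factor of 2."""
    return 2 * true_danger


def D_ideal(estimated_danger):
    """An unbiased decider: acts on the estimate as given."""
    return "flee" if estimated_danger > THRESHOLD else "stay"


def D_biased(estimated_danger):
    """Evolved decider: overconfidence in dodging halves the felt risk."""
    return "flee" if estimated_danger / 2 > THRESHOLD else "stay"


dangers = [0.01, 0.05, 0.11, 0.30, 0.90]

# Inside the "ancestral environment", the biased A+D pair makes exactly
# the same decisions as the ideal pair: the x2 and /2 cancel.
for d in dangers:
    assert D_biased(A_biased(d)) == D_ideal(A_ideal(d))

# Now "debias" A alone while keeping the evolved D. Decisions get worse:
# at true danger 0.11 the ideal agent flees, the half-fixed agent stays.
half_fixed = [D_biased(A_ideal(d)) for d in dangers]
ideal = [D_ideal(A_ideal(d)) for d in dangers]
print(half_fixed == ideal)
```

Running this prints `False`: improving A without improving D breaks the cancellation and produces mistakes the fully biased agent never made.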
Scott Aaronson's "On Self-Delusion and Bounded Rationality"
Poignant short story about truth-seeking that I just found. Quote:
"No," interjected an internal voice. "You need to prove that your dad will appear by a direct argument from the length of your nails, one that does not invoke your subsisting in a dream state as an intermediate step."
"Nonsense," retorted another voice. "That we find ourselves in a dream state was never assumed; rather, it follows so straightforwardly from the long-nail counterfactual that the derivation could be done, I think, even in an extremely weak system of inference."
The full thing reads like a flash tour of OB/LW, except it was written in 2001.
Rationality Quotes - August 2009
A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.
- Please post all quotes separately (so that they can be voted up/down separately) unless they are strongly related/ordered.
- Do not quote yourself.
- Do not quote comments/posts on LW/OB - if we do this, there should be a separate thread for it.
- No more than 5 quotes per person per monthly thread, please.