Less Wrong is a community blog devoted to refining the art of human rationality.
Let us say you are a doctor, and you are dealing with a malaria epidemic in your village. You are faced with two problems. First, you have no access to the drugs needed for treatment. Second, you are one of two doctors in the village, and the two of you cannot agree on the nature of the disease itself. You, having carefully tested many patients, being a highly skilled, well-educated diagnostician, have proven to yourself that the disease in question is malaria. Of this you are >99% certain. Yet your colleague, the blinkered fool, insists that you are dealing with an outbreak of bird flu, and to this he assigns >99% certainty.
Well, it need hardly be said that someone here is failing at rationality. Rational agents do not have common knowledge of disagreements etc. But... what can we say? We're human, and it happens.
So, let's say that one day, Dr. House calls you both into his office and tells you that he knows, with certainty, which disease is afflicting the villagers. As confident as you both are in your own diagnoses, you are even more confident in House's abilities. House, however, will not tell you his diagnosis until you've played a game with him. He's going to put you in one room and your colleague in another. He's going to offer you a choice between 5,000 units of malaria medication, and 10,000 units of bird-flu medication. At the same time, he's going to offer your colleague a choice between 5,000 units of bird-flu meds, and 10,000 units of malaria meds.
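The structure of House's game can be made concrete with a quick sketch. The payoff framing here - counting only medication that matches the true disease as useful - is my own assumption for illustration, not something House states:

```python
# Each doctor's menu of offers, in units of medication (from the setup above).
you = {"malaria": 5000, "bird_flu": 10000}
colleague = {"bird_flu": 5000, "malaria": 10000}

def useful_units(your_pick: str, their_pick: str, true_disease: str) -> int:
    """Total units of medication that actually treat the true disease,
    assuming meds for the wrong disease are worthless."""
    total = 0
    if your_pick == true_disease:
        total += you[your_pick]
    if their_pick == true_disease:
        total += colleague[their_pick]
    return total

# If the disease really is malaria, the best joint outcome is for you to take
# your 5,000 malaria units and your colleague to take the 10,000 malaria units:
print(useful_units("malaria", "malaria", "malaria"))  # 15000
```

Note the tension: under either diagnosis, the doctor who is *wrong* holds the larger stock of the medication the village actually needs.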
Correlation does not imply causation. Sometimes corr(X,Y) means X=>Y; sometimes it means Y=>X; sometimes it means W=>X, W=>Y. And sometimes it's an artifact of people's beliefs about corr(X, Y). With intelligent agents, perceived causation causes correlation.
Volvos are believed by many people to be safe. Volvo has an excellent record of being concerned with safety; they introduced 3-point seat belts, crumple zones, laminated windshields, and safety cages, among other things. But how would you evaluate the claim that Volvos are safer than other cars?
Presumably, you'd look at the accident rate for Volvos compared to the accident rate for similar cars driven by a similar demographic, as reflected, for instance, in insurance rates. (My google-fu did not find accident rates posted on the internet, but insurance rates don't come out especially pro-Volvo.) But suppose the results showed that Volvos had only 3/4 as many accidents as similar cars driven by similar people. Would that prove Volvos are safer?
I suspect there's a Pons Asinorum of probability between the bettor who thinks that you make money on horse races by betting on the horse you think will win, and the bettor who realizes that you can only make money on horse races if you find horses whose odds seem poorly calibrated relative to superior probabilistic guesses.
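The second bettor's insight can be put in expected-value terms. A minimal sketch, with made-up odds and probabilities for illustration:

```python
def expected_profit(stake: float, decimal_odds: float, p_win: float) -> float:
    """Expected profit of a bet: a win pays stake * (odds - 1), a loss costs the stake."""
    return p_win * stake * (decimal_odds - 1) - (1 - p_win) * stake

# Betting on the likely winner still loses when the odds are well calibrated:
# a 60%-likely horse at decimal odds of 1.5 has EV of 0.6*0.5 - 0.4 = -0.10.
print(expected_profit(1.0, 1.5, 0.60))   # about -0.10

# You profit only when your probability estimate beats what the odds imply:
print(expected_profit(1.0, 1.5, 0.75))   # 0.75*0.5 - 0.25 = +0.125
```

The horse most likely to win and the horse with positive expected value are different questions; only miscalibrated odds make the second one answerable in your favor.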
There is, I think, a second Pons Asinorum associated with more advanced finance, and it is the concept that markets are an anti-inductive environment.
Let's say you see me flipping a coin. It is not necessarily a fair coin. It's a biased coin, and you don't know the bias. I flip the coin nine times, and the coin comes up "heads" each time. I flip the coin a tenth time. What is the probability that it comes up heads?
If you answered "ten-elevenths, by Laplace's Rule of Succession", you are a fine scientist in ordinary environments, but you will lose money in finance.
In finance the correct reply is, "Well... if everyone else also saw the coin coming up heads... then by now the odds are probably back to fifty-fifty."
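Laplace's Rule of Succession gives (successes + 1) / (trials + 2) as the estimate for the next trial; a quick check of the ten-elevenths figure:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's estimate of the probability that the next trial succeeds."""
    return Fraction(successes + 1, trials + 2)

print(rule_of_succession(9, 9))  # 10/11 -- the "ordinary environment" answer
```

The point of the parable is that this calculation is fine for coins and wrong for markets: if everyone can see the nine heads, the price has already moved.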
Recently on Hacker News I saw a commenter insisting that stock prices had nowhere to go but down, because the economy was in such awful shape. If stock prices have nowhere to go but down, and everyone knows it, then trades won't clear - remember, for every seller there must be a buyer - until prices have gone down far enough that there is once again a possibility of prices going up.
So you can see the bizarreness of someone saying, "Real estate prices have gone up by 10% a year for the last N years, and we've never seen a drop." This treats the market like it was the mass of an electron or something. Markets are anti-inductive. If, historically, real estate prices have always gone up, they will keep rising until they can go down.
Related to: Infinite Certainty
Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?
Mine would be significantly less than 999,999,999 in a billion.
When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in "But that still leaves a one in a billion chance, right?". The majority of the probability is in "That argument is flawed". Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.
More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.
So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model's internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
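One crude way to formalize the distinction: mix the model's internal probability with a fallback prior, weighted by your trust in the model. The specific numbers and the fifty-fifty fallback are illustrative assumptions of mine, not anything FiveThirtyEight publishes:

```python
def external_confidence(internal_p: float,
                        p_model_sound: float,
                        p_if_flawed: float = 0.5) -> float:
    """Blend the model's internal probability with a fallback prior,
    weighted by the probability that the model itself is sound."""
    return p_model_sound * internal_p + (1 - p_model_sound) * p_if_flawed

# Even trusting the model at 99.9%, the one-in-a-billion claim collapses
# to roughly 0.9995: the "model is flawed" term dominates the answer.
print(external_confidence(0.999999999, 0.999))
```

Once the internal confidence exceeds the trust you place in the argument itself, tightening the model's output no longer moves your external confidence.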
Good online communities die primarily by refusing to defend themselves.
Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...
I am old enough to remember the USENET that is forgotten, though I was very young. Unlike the first Internet that died so long ago in the Eternal September, in these days there is always some way to delete unwanted content. We can thank spam for that—so egregious that no one defends it, so prolific that no one can just ignore it, there must be a banhammer somewhere.
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
After all—anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors' grading, and heaven forbid the janitors should speak up in the middle of a colloquium.
As Tom slips on the ice puddle, his arm automatically pulls back to slap the ground. He’s been taking Jiu-Jitsu for only a month, but, already, he’s practiced falling hundreds of times. Tom’s training keeps him from getting hurt.
By contrast, Sandra is in her second year of university mathematics. She got an “A” in calculus and in several more advanced courses, and she can easily recite that “derivatives” are “rates of change”. But when she goes on her afternoon walk and stares at the local businesses, she doesn’t see derivatives.
For many of us, rationality is more like Sandra’s calculus than Tom’s martial arts. You may think “overconfidence” when you hear an explicit probability (“It’s 99% likely I’ll make it to Boston on Tuesday”). But when no probability is mentioned -- or, worse, when you act on a belief without noticing that belief at all -- your training has little impact.
Learn error patterns ahead of time
If you want to notice errors while you’re making them, think ahead of time about what your errors might look like. List the circumstances in which to watch out and the alternative action to try then.
Here's an example of what your lists might look like. A bunch of visiting fellows generated this list at one of our rationality trainings last summer; I’m including their list here (with some edits) because I found the specific suggestions useful, and because you may be able to use it as a model for your own lists.
There was strong interest in the first two posts in my sequence, and I apologize for the long delay. The reason for it is that I've accumulated hundreds of pages of relevant material in draft form, and have struggled with how to organize such a large body of material. I still don't know what's best, but since people have been asking, I decided to continue posting on the subject, even if I don't have my thoughts as organized as I'd like. I'd greatly welcome and appreciate any comments, but I won't have time to respond to them individually, because I already have my hands full with putting my hundreds of pages of writing in public form.
I've often heard LWers describe themselves as having autism or Asperger's Syndrome (which is no longer considered a valid construct, and was removed from the Diagnostic and Statistical Manual of Mental Disorders two years ago). This is given as an explanation for various forms of social dysfunction. The suggestion is that such people have a genetic disorder.
I've come to think that the issues are seldom genetic in origin. There's a simpler explanation. LWers are often intellectually gifted. This is conducive to early isolation. In The Outsiders Grady Towers writes:
The single greatest adjustment problem faced by the intellectually gifted, however, is their tendency to become isolated from the rest of humanity. Hollingworth points out that the exceptionally gifted do not deliberately choose isolation, but are forced into it against their wills. These children are not unfriendly or ungregarious by nature. Typically they strive to play with others but their efforts are defeated by the difficulties of the case... Other children do not share their interests, their vocabulary, or their desire to organize activities. [...] Forms of solitary play develop, and these, becoming fixed as habits, may explain the fact that many highly intellectual adults are shy, ungregarious, and unmindful of human relationships, or even misanthropic and uncomfortable in ordinary social intercourse.
Most people pick up a huge amount of tacit social knowledge as children and adolescents, through very frequent interaction with many peers. This is often not true of intellectually gifted people, who usually grew up in relative isolation on account of lack of peers who shared their interests.
They often have the chance to meet others similar to themselves later on in life. One might think that this would resolve the issue. But in many cases intellectually gifted people simply never learn how beneficial it can be to interact with others. For example, the great mathematician Robert Langlands wrote:
Bochner pointed out my existence to Selberg and he invited me over to speak with him at the Institute. I have known Selberg for more than 40 years. We are on cordial terms and our offices have been essentially adjacent for more than 20 years. This is nevertheless the only mathematical conversation I ever had with him. It was a revelation.
At first blush, this seems very strange: much of Langlands' work involves generalizations of Selberg's trace formula. It seems obvious that it would be fruitful for Langlands to have spoken with Selberg about math more than once, especially given that the one conversation that he had was very fruitful! But if one thinks about what their early life experiences must have been like, as a couple of the most brilliant people in the world, it sort of makes sense: they plausibly had essentially nobody to talk to about their interests for many years, and if you go for many years without having substantive conversations with people, you might never get into the habit.
When intellectually gifted people do interact, one often sees cultural clashes, because such people created their own cultures as a substitute for the usual cultural acclimation, and share no common background culture. From the inside, one sees other intellectually gifted people, recognizes that they're very odd by mainstream standards, and thinks "these people are freaks!" But the people whom one sees as freaks see one in the same light, and one is often blind to how unusual one's own behavior is, just in different ways. Thus one gets trainwreck scenarios, as when I inadvertently offended dozens of people by making strong criticisms of MIRI and Eliezer back in 2010, just after I joined the LW community.
Grady Towers concludes the essay by writing:
The tragedy is that none of the super high IQ societies created thus far have been able to meet those needs, and the reason for this is simple. None of these groups is willing to acknowledge or come to terms with the fact that much of their membership belong to the psychological walking wounded. This alone is enough to explain the constant schisms that develop, the frequent vendettas, and the mediocre level of their publications. But those are not immutable facts; they can be changed. And the first step in doing so is to see ourselves as we are.
Past and Present
Ten years ago teenager me was hopeful. And stupid.
The world neglected aging as a disease, and Aubrey had barely started spreading his memes - to the point that it was worth it for him to let me work remotely to help the Methuselah Foundation. They had not yet received that initial $1,000,000 donation from an anonymous donor. The Methuselah Prize was running at less than $400,000, if I remember correctly. Still, I was a believer.
Now we live in the age of Larry Page's Calico, with $100,000,000 trying to tackle the problem, besides many other amazing initiatives: from the research paid for by the Life Extension Foundation and Bill Faloon, to scholars at top universities like Steve Garan and Kenneth Hayworth, fixing things from our models of aging to plastination techniques. Yet I am much more skeptical now.
I am skeptical because I could not find a single individual who has already used a simple technique that could save you many years of healthy life. I could not even find a single individual who looked into it and decided it wasn't worth it, or was too pricey, or something of that sort.
That technique is freezing some of your cells now.
Freezing cells is not a far-future hope; it is something that already exists, and has been possible for decades. The reason you would want to freeze them, in case you haven't thought of it, is that they are getting older every day, so the ones you have now are the youngest ones you'll ever be able to use.
Using these cells to create new organs is not something that might help you only if medicine and technology keep progressing according to the law of accelerating returns for another 10 or 30 years. We already know how to make organs out of your cells. Right now. Some organs last longer, some shorter, but it can be done - it has been done for bladders, for instance - and is being done.
Hope versus Reason
Now, you'd think that if there were an almost non-invasive technique, already shown to work in humans, that can preserve many years of your life and involves only a few trivial inconveniences - compared to changing diet or exercising, for instance - the whole longevist/immortalist crowd would be lining up for it and keeping backup tissue samples all over the place.
Well, I've asked them. I've asked some of the adamant researchers, and I've asked the superwealthy; I've asked the cryonicists and supplement gorgers; I've asked those who work on this 8 hours a day, every day, and I've asked those who pay others to do so. I asked mostly for selfish reasons. I saw the TED talks by Juan Enriquez and Anthony Atala and thought: hey, look, a clearly beneficial expected increase in life length - yay! Let me call someone who found this out before me - anyone; I'm probably the last one, silly me - and fix this.
I've asked them all, and I have nothing to show for it.
My takeaway lesson is: whatever it is that other people are doing to solve their own impending death, they are far from doing it rationally, and maybe most of the money and psychology involved in this whole business is about buying hope, not about staring into the void and finding out the best ways of dodging it. Maybe people are not in fact going to go all-in if the opportunity comes.
How to fix this?
Let me disclose first that I have no idea how to fix this problem. I don't mean the problem of getting all longevists to freeze their cells; I mean the problem of getting them to take information from the world of science and biomedicine and apply it to themselves. To become users of the technology they boast about. To behave rationally in a CFAR sense, or even in a homo economicus sense.
I was hoping for a grandiose idea in this last paragraph, but it didn't come. I'll go with a quote from the emotional song we sang during last year's Secular Solstice celebration.