On at least two occasions - one only a year past - my life was at serious risk because I was not thinking clearly.  Both times, I was lucky (and once, the car even survived!).  As a gambler I don't like counting on luck, and I'd much rather be rational enough to avoid serious mistakes.  So when I checked the top-ranked posts here and saw Robin's Rational Me or We? arguing against rationality as a martial art, I was dumbfounded.  To me, individual rationality is a matter of life and death[1].

In poker, much attention is given to the sexy art of reading your opponent, but the true veteran knows that far more important is the art of reading and controlling yourself.  It is very rare that a situation comes up where a "tell" matters, and each of my opponents is only in an occasional hand.  I and my irrationalities, however, are in every decision in every hand.  This is why self-knowledge and self-discipline are first-order concerns in poker, while opponent reading is second or perhaps even third.

And this is why Robin's post is so wrong[2].  Our minds and their irrationalities are part of every second of our lives, every moment we experience, and every decision we make.  And contrary to Robin's security metaphor, few of our decisions can be outsourced.  My two bad decisions regarding motor vehicles, for example, could not have easily been outsourced to a group rationality mechanism[3].  Only a tiny percentage of the choices I make every day can be punted to experts.

We have long since left the Hobbesian world where physical security depends on individual skills, but when it comes to rationality, we are all "isolated survivalist Einsteins".  We are in a world where our individual mental skills are constantly put to the test.  And even when we can rely on experts, it is our individual choices (influenced by the quality of our minds) that determine our success in life.  (How long would a professor's reputation last if he never did any original work?)

So while I respect and admire Robin's interest in improving institutions, I believe that his characterization of the relative merits of individual and collective mechanisms is horridly wrong.  Building more and better rational collective institutions is a speculative, long-term goal with limited scope (albeit covering some very important areas).  Learning the martial art of rationality is something that all of us can do now to improve the quality of our decisions and thus positively influence every part of our lives.  By making us more effective as individuals (hell, just by keeping us from stupidly getting ourselves killed), it will help us work on all of our goals - like getting society to accept ambitious new social institutions.

In the modern world, karate is unlikely to save your life.  But rationality can.  For example, if one believes that cryonics is a good gamble at immortality, and people don't do it because of irrationality, then improved individual rationality can give people a shot at immortality instead of certain death.  And that's only one of the myriad decisions we each face in optimizing our life!

Which is why, while I spend my days working on better institutions, I also practice my rationality katas, so that I will survive to reach the new future our institutions will bring.

[1] I have a post about the more recent incident that's been written in my mind for months, and just hasn't fallen out onto the screen yet.

[2] Or at least, this is related - I freely admit to liking poker metaphors enough that I'm willing to stretch to make them!

[3] Yes, I'm sure a clever person can come up with markets to keep young men from doing stupid things with cars.  That's not the point.  Markets have significant overhead, and it takes high public interest to make a market worth opening, funding, trading in, and running.  They may have great value for large decisions, but they are never going to replace the majority of decisions in our day-to-day lives.

roland:

My two bad decisions regarding motor vehicles, for example, could not have easily been outsourced to a group rationality mechanism[3].

Cars kill a LOT of people every month. One rational thing to do would be to simply restrict their use as much as possible and instead implement an efficient mass transit system (buses, trains, etc.). You seem to advocate the other route of making drivers more rational, but I think this approach is inherently flawed and limited. Consider probability: a million car drivers on the streets are going to have many more accidents than a correspondingly smaller number of buses and trains.

There are other concerns as well, such as individual freedom. If you randomly chose half the population and stuck them in padded rooms, you'd also reduce the number of car accidents. There's value in allowing people to make stupid decisions. What the OP is advocating is how to prevent yourself from making stupid decisions in situations where you're allowed to.

Then again, maybe that's what this debate is about... whether we should help people individually be rational, or give incentives at a group level for being rational. But it seems to me that restricting the use of cars doesn't make people rational, it just takes away the freedom to make stupid choices.

Consider that in the West, life expectancy is very high, and people are very wealthy in historical perspective. This is the default position - to end up prematurely dead or poor (in an absolute, not relative, sense) you need to either take a lot of risk or be otherwise very unlucky. Sure, life could be better. But most (Western) folks have it OK as it is - yet they're not rational by OB standards.

LW readers seek a great deal of rationality, which is above and beyond what is required for an OK life in a human society. But remember that LW's prophets have extraordinary goals (Eliezer put a temporary moratorium on the discussion of his, but Robin has futarchy, as far as I understand). If your goal is simply to live well, you can allow yourself to be average. If your goal is to live better than average, you need some thinking tricks, but not much. If you want to tackle an Adult Problem (TM), then you have to start the journey. (Also if you're curious or want to be strong for strength's sake. But your life definitely will not depend on it!)

Cryonics seems to be an exception, but in most cases we'll do best by listening to the collective advice of domain experts. And we shouldn't believe that we can magically do better.

It is not economically feasible to outsmart or even match everyone. And even in an Adult Problem (TM), you can't hope to do it all by yourself. The lone hero who single-handedly defeats the monster, saves the world and gets the girl is a myth of movies and video games. In reality, he needs allies, supplies, transportation, weapon know-how, etc.

If you want to contribute, your best bet is to focus on a specific field. And you'll be much more productive if your background (which includes a lot of institutions) provides better support, evidence- and theory-wise. If we strive to improve institutions in general, that's a net gain for all of us, no matter what field we pursue. That's Robin's point, as I understand it.

[anonymous]:

And we shouldn't believe that we can magically do better.

Agreed! We should believe that we can non-magically do better.

Cryonics... and whether to spend your money at the margins on healthcare... and...

And?... (Well, Everett's QM interpretation comes to mind.)

There may be many dissenting choices (with cryonics being the only important one, I think), but there is a huge number of conforming choices. Are we better (than experts, not laymen) at predicting the weather? Building cars? Flying to the moon? Running countries? Studying beetles?

And, ironically enough, I picked most of the interesting dissenting opinions from OB. In this sense, isn't OB an institution of general clear thinking, to which people defer? To take that thought to the extreme - if our beloved Omega takes up a job as an oracle for humanity, and we can just ask him any question at any time and be confident in his answer, what should happen to our pursuit of rationality?

if our beloved Omega takes up a job as an oracle for humanity, and we can just ask him any question at any time and be confident in his answer, what should happen to our pursuit of rationality?

dunno, ask Omega

(Well, Everett's QM interpretation comes to mind.)

Most of the QM guys I know personally believe in this (although they specialise in quantum computing, which makes NO SENSE if you use the Copenhagen interpretation). I also know a philosopher who likes the Bohmian mechanics viewpoint, but that certainly puts him in a minority.

Robin's post seemed to be about the marginal value of rationality. Being completely irrational is a one-way ticket to death or ruin, I agree. But there are fewer ways to die if you decline to go from being an ordinary high-IQ, university-educated person to one who has read and applied the Overcoming Bias techniques. They're still there, but they're not quite as obvious. Most of the ones I can think of involve medicine, and Robin probably disagrees and doesn't think those matter so much.

Good point about the marginal value of rationality. But my experience with myself and with almost all of the smart, graduate-degree-holding people I know is that there is significant irrationality left, and significant gains to be had from self-improvement. You may believe differently.

It is hard to evaluate how essential your martial-art-style rationality was in your life, relative to possible institutional substitutes, without knowing more about it. "Two bad decisions about cars" just doesn't say enough. Poker is designed exactly to be a martial-art-style rationality competition, so of course such skills would be more useful there.

Perhaps I am prejudiced by poker (and games in general), but I see life as a constant series of decisions. The quality of those decisions, combined with luck, gives an outcome. Life is a game of chance and skill, in other words.

Martial-art-style rationality makes for better-quality decisions, and thus for better outcomes. When there are institutional substitutes, I agree they can also make for better outcomes, but there are no institutional substitutes for the vast majority of the constant stream of decisions we encounter in life. I predict that if you went through your day, noticed every decision you made (hundreds?), and scored each on whether the decision could plausibly be made entirely via an institutional substitute, removing your own need to be rational, you would find almost none qualify. Those that do would be among the most important (medical decisions, how to invest your money), but some important decisions would remain (acting in an emergency situation).

One would also notice that almost never did one consciously use rationality techniques. Consider that we are already highly evolved to survive, and we are all descendants of survivalist winners. We have some baseline rationality hard-wired in us. It is this wiring that guides most of our actions, and it is there even if we don't have a single year of schooling.

but I see life as a constant series of decisions.

If you have to make all those decisions yourself, sooner or later you are going to make a mistake (by the conjunction rule, what is the probability of getting it right every time?). The idea is to take the burden of as many decisions as possible (at least the important ones) off the individual.

In the case of cars for example, it's much safer to just take a bus and sit down and relax.

The idea is to take the burden of as many decisions as possible (at least the important ones) off the individual.

Luckily, we have built-in mechanisms for this. By behaving rationally, we can develop good habits that will help us automatically make the right decision in the future. Aristotle called these sorts of habits 'virtues'.

Are buses safer than cars? For one thing, they don't have seat belts.

This is a whole new discussion, but I'll still give some pointers.

If you consider a city as a whole, it would probably be much safer to take all cars off the street and put buses in their place. Fewer vehicles + trained drivers + less drunk driving => fewer accidents.

But even considering the normal city with lots of cars, I consider buses safer because:

  • they usually drive more slowly
  • they are big and heavy, so even if a bus collides with a car, you will probably be safer in the bus (a collision with another bus is another question)
  • by the way, some buses do have seat belts

Road safety is a bad example. That cause is advanced tremendously by "group rationality". The global auto industry spends billions on making safer vehicles. Enforcement of speeding laws is, in practice, precisely the sort of market that you describe in your third footnote. (Edit: Auto insurance premiums are a better example than speeding tickets, actually)

This is also one area in which group irrationality is costing a tremendous number of lives. According to some friends from CMU's robotics lab, autonomous vehicle technology is already good enough that autonomous cars could be far safer on the road than human drivers. Yet, getting them adopted is, politically, almost inconceivable. If you want to give an example of an irrational meme that causes tragedy, I think aversion to autonomous vehicles is a much better example than aversion to cryonics.

There isn't enough data to say that autonomous vehicles are safer than human drivers. With on the order of 10,000-20,000 fatal accidents a year out of, I don't know, maybe 1,000,000,000 trips per year, you would need about ten million trips by autonomous vehicles before you have enough data to say anything. I also note that nobody AFAIK takes autonomous vehicles out at night or in the rain.

That said, I agree with your general point. A similar, but better, example is automated air traffic control and autopilots. We already rely on software to present all the data to air traffic controllers and to pilots that they rely on not to crash into each other; software errors or power failures can already lead to deaths.

No need to use made-up numbers when we have real ones. In the US in 2007 there were 37,248 fatal crashes and 3.030 trillion vehicle-miles driven. (Source). That's one fatal accident per 81.35 million miles. So, modeling fatal accidents as a Poisson process, we can solve for the number of accident-free autonomous miles at which the probability of seeing zero accidents, under the human fatality rate, falls to 0.05:

λ^k · e^(-λ) / k! = 0.05, with k = 0

e^(-λ) = 0.05

λ = ln(20) ≈ 2.996

2.996 × 81.35 million ≈ 243.7 million miles required for statistical significance.

This, however, is only frequentist reasoning. I would actually be inclined to trust autonomous vehicles after considerably less testing, because I consider P(H) to be a priori quite high.
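For anyone who wants to check the arithmetic, here is a minimal sketch in Python; it assumes only the 2007 figures quoted above, and the variable names are illustrative:

```python
import math

# 2007 US figures quoted above
fatal_crashes = 37_248
vehicle_miles = 3.030e12
miles_per_fatal = vehicle_miles / fatal_crashes   # ~81.35 million miles

# Suppose we observe k = 0 fatal accidents over m autonomous miles. Under the
# null hypothesis (autonomous cars no safer than human drivers), the accident
# count is Poisson with rate lam = m / miles_per_fatal, so P(k = 0) = exp(-lam).
# Require that probability to fall to alpha = 0.05.
alpha = 0.05
lam = -math.log(alpha)                 # ln(20) ~ 2.996
miles_required = lam * miles_per_fatal

print(f"miles per fatal accident: {miles_per_fatal / 1e6:.2f} million")
print(f"accident-free miles required: {miles_required / 1e6:.1f} million")
# prints ~243.7 million miles, matching the figure above
```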

I can't agree. AI - yes, even mundane old domain-specific AI - has all sorts of potential weird failure modes. (Not an original observation, just conveying the majority opinion of the field.)

Yes, but humans also have all sorts of weird failure modes. We're not looking for perfection here, just better than humans.

In this instance "weird failure mode" means "incident causing many deaths at once, probable enough to be a significant risk factor but rare enough that it takes a lot more autonomous miles in much more realistic circumstances to measure who the safer driver is".

Yup, humans have weird failure modes, but they don't occur all over the country simultaneously at 3:27pm on Wednesday.

Automobile fatalities are only a small fraction of all fatalities, and smart cars for all would be more expensive than cryopreservation for only the people who actually died that year.

And when I've heard Sebastian Thrun talk about the altruistic case for autonomous vehicles, he doesn't say, "We're ready now," he says, "We need to develop this as quickly as possible." Though that's mixing autonomous vehicles with human-driven ones, I suppose, not autonomous-only roads.

With that said you certainly have a strong point!

I think autonomous vehicles are a better example not because I think the EV is higher than that of cryonics, but because there are fewer ways to dispute it. There are a number of arguments, most of them well-known here, as to why cryopreservation is unlikely to work. It seems like a virtual certainty, on the other hand, that autonomous vehicles, if deployed, would save a large number of lives.

Edit: Also, you have your dimensions wrong on the financial calculation. The cost of autonomous vehicles should be amortized over their MTBF, not over one year.

Also, for it to be an unbiased comparison, the two statements - "smart cars for all" and "cryopreservation for only the people who actually died that year" - should be limited to the same domain.

If you compare different sets, one substantially larger than the other, then of course cryo is going to be cheaper!

A more balanced statement would be: "buying smart cars to save the lives of only the people who would have otherwise died by car accident in any given year would probably cost less than cryo-surance for the same set of people."

Plus you don't die. Which, for me, is preferable.
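For concreteness, here is a toy version of the dimension-corrected comparison suggested above (amortizing car costs over the vehicle's service life rather than one year). Every number below except the 2007 death toll is a hypothetical placeholder, not a sourced estimate:

```python
# Toy comparison: "smart cars for everyone" vs. "cryopreservation for the
# people who die in crashes each year". ALL inputs except crash_deaths_per_year
# are hypothetical placeholders for illustration only.

drivers_covered = 250_000_000       # hypothetical number of cars/drivers
autonomy_cost_per_car = 5_000       # hypothetical incremental cost ($)
service_life_years = 15             # hypothetical amortization window
crash_deaths_per_year = 37_248      # the 2007 figure quoted earlier
cryo_cost_per_person = 30_000       # hypothetical cryopreservation cost ($)

# Dimension-correct comparison: annualize the car cost over its service life
# instead of charging it all to a single year.
smart_cars_annual = drivers_covered * autonomy_cost_per_car / service_life_years
cryo_annual = crash_deaths_per_year * cryo_cost_per_person

print(f"smart cars:       ${smart_cars_annual / 1e9:.1f}B per year")
print(f"cryopreservation: ${cryo_annual / 1e9:.2f}B per year")
```

Even with the amortization fix, these placeholder inputs leave smart cars far more expensive - but the conclusion is entirely driven by the inputs, which is why the same-population framing matters.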

The institutional rationality vs. individual rationality debate doesn't seem resolvable at this point. Neither side is saying the other is worthless; it's a debate over marginal benefits. Without some decent data, I have a hard time figuring out how much to emphasize each strategy.

I'm not entirely sure there is even a disagreement here. Sure, Eliezer talks about individual rationality a lot while Robin seems to be more interested in techniques for group rationality such as prediction markets, but I've yet to see what each thinks is the optimal strategy for increasing rationality. The differences may be purely in terms of style, or comparative advantage. So how about it? Can you guys pin down your actual differences here? (Patri included, of course.)

I strongly suspect that with a rational policy on driving - e.g. no taxi medallion system, insurance-generated internalization of expected driving externalities (with appropriate monitoring, e.g. of speed by satellite and of reflexes by simple in-car electronic tests), and adequate wrongful-death penalties - we would end up with driving as a lower-middle-class profession, less traffic, faster traffic flow, and very few auto fatalities.

I also don't think that it makes sense to even talk about the possibility of much better institutions without much better elite individual rationality. Good institutions don't evolve, fit ones do. Good institutions in a peaceful and fairly unified and hegemonic world have to be intelligently designed.

It seems that car driving is just about the best example where some good group rationality is much more important than individual rationality. Imagine a world where everyone can drive a car just as well as Michael Schumacher, but they can't agree to drive on the same side of the road and stop at traffic lights.

Sure, individual rationality can be useful, but it's much more effective inside the right institutions.

Robin's post was about balance. The presentation on OB and Less Wrong is unbalanced, because it emphasizes the individual view over the collective view; yet collective rationality IMHO has a much bigger effect on our lives. EDIT: The expected change in your lifespan from having a car accident due to irrational behavior is, I'm sure, much smaller than the expected change from living in a society without any of the following: antibiotics, the germ theory, clean drinking water, sterile surgical technique, garbage collection, limited liability, and widespread literacy.

Sorry, but I voted you down - largely for calling Robin's post "so wrong". It is a matter of balance, so the word "wrong" is inappropriate.

Sure, balance is important. But if you look at Robin's closing paragraph, it is not calling for balance:

"Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage. But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design and field institutions which give each person better incentives to update a common consensus."

What I get from the metaphor is that practicing martial-arts style rationality is as useless to our lives as practicing physical martial arts. And that is horridly wrong.

Thanks for explaining your downvote, but don't apologize for it!

I certainly didn't mean to give the impression of "useless"; balance was more the idea.

It was the characterization of martial-art-style rationality as only making sense for isolated survivalist Einsteins that gave me the impression - do you now agree that martial-arts-style rationality is actually useful for everyone?

Sure, even combat martial arts is useful for everyone to some degree; the issue is the size of that degree.

The sizes here are so wildly different I don't see them as really comparable. I have never in my life had to defend myself physically against serious harm. Yet I make decisions with my flawed monkey brain every minute of every day of my life! The benefit to me from improving the quality of my decisions (whatever you want to call that - martial-arts-style rationality works for me, but perhaps the term means something else to others) is orders of magnitude greater than the benefit of improving my ability to defend myself physically.

I mean, I seriously find it hard to understand how you can compare a skill that I have never used to a skill that I use every minute of my life?!  I agree with you that one must posit implausible scenarios for personal physical defense to be useful, but I think one must posit even less plausible scenarios for personal mental acuity to not be useful. Anyone can get mugged, but who never needs to make a tough decision affected by standard biases?

As you said, he wrote:

it makes more sense ...

Hence, it is about balance.

EDIT: I'm taking some inferential steps here.

  • When reasonable people say A is more valuable than B, they don't usually mean that you should buy N of A and 0 of B.

  • Robin is a reasonable person.

In the modern world, karate is unlikely to save your life. But rationality can.

The term "bayesian black-belt" has been thrown around a number of times on OB and LW... this, in my mind, seems misleading. As far as I can tell there are two ways in which bayesian reasoning can be applied directly: introspection and academia. Within those domains, sure, the metaphor makes sense... in meatspace life-and-death situations? Not so much.

"Being rational" doesn't develop your quick-twitch muscle fibers or give you a sixth sense.

Perhaps, where you live, you are never in danger of being physically accosted. If so, you are in the minority. Rationality may help you avoid such situations, but never with a 100% success rate. When you do find yourself in such a situation, you may find yourself wishing you'd studied up on a little Tae Kwon Do.

On at least two occasions - one only a year past - my life was at serious risk because I was not thinking clearly. ... As a gambler I don't like counting on luck, and I'd much rather be rational enough to avoid serious mistakes.

Can you give an example of how being "more rational" could have avoided the accidents?

Of course, properly applying rational techniques will bleed over into all areas of your life. Having a more accurate map of the territory means that you will make better decisions. The vast majority of these decisions, however, can be written off as common sense. Just because I drink coffee when I drive at night to stay alert doesn't make me a master of the "martial art of rationality".

By rationality I am not referring to Bayesian reasoning. I simply mean making correct decisions even when (especially when) one's hardwired instincts give the wrong answer.

In the first case, I should not have driven. In the second case, I should have told the driver to be more careful. In both cases, I made serious mistakes in life-or-death situations. I call that irrational, and I seek to not replicate such mistakes in the future.

You are welcome to call it "common sense" if you prefer. "Common sense" is rather a misnomer, in my opinion, considering how uncommon a quality it is. But I really don't care what it is called. I simply mean, making better decisions, screwing up less, being less of a monkey and more of a human. I find it baffling that people don't find it blindingly obvious that this is one of the most important skills to develop in life.

I find it baffling that people don't find it blindingly obvious that this is one of the most important skills to develop in life.

I think a big chunk of the explanation is that many people wouldn't see it obviously as a 'skill'.

I agree with the idea that individual rationality is useful. I'm here primarily because I think it'll do me good to learn those tricks and methods.

But I think you missed the point of Robin's post. And when I see people talk about the martial art of rationality, I think it's fine insofar as it may help them put more energy into learning it - making it more fun and perhaps more "cool-sounding," putting some healthy feeling and drive into it. But it's just supposed to be a metaphor, and a loose comparison at best. We won't carbon-copy it, right? I'd guess everyone here knows that, but sometimes I have the faintest flicker of doubt...

I get the feeling that some people are genuinely thinking about copying the patterns of martial-arts schools as-is, copying some of the cached sentences and ideas they think they read in preceding posts about martial arts, and going into a sort of affective death spiral about how cool that sounds.

Dunno, it's more of an impression, so I'll leave it at that. But I thought it was worth noticing.

As to the point of Robin's post, it's not so dissimilar to Eliezer's posts about how we could learn or develop efficient, rational ways of working as a group. Not just individual, 1337 martial-art rationality, but also group rationality.

You can be especially good at something alone, but suck at working in a group. And, so far, for human beings, the average efficiency of a group member is higher than the average efficiency of a loner, for most tasks.

Rationality can be life and death, but that applies to collective and institutional decisions just as much as to our individual ones. Arguably more so: the decisions made by governments, cultures and large institutions have far larger effects than any decision I'll ever make. Investment in improving my individual rationality is more valuable purely due to self-interest - we may invest more in providing a 1% improvement to our own lives than we do in reducing collective decision-making mistakes that cost thousands of lives a year. But survival isn't the only goal we have! Even if it were, there are good reasons to put more emphasis on collective rational decision-making - the decisions of others can also affect us.

Arguably more so: the decisions made by governments, cultures and large institutions have far larger effects than any decision I'll ever make.

And you have far less impact on them. None, in most cases. When it comes to converting effort into impact on your own life, developing individual skills has vastly more effect - by orders of magnitude, I would say.

Yes. It seems we should specialize in knowledge - which we can do, in some cases, with prediction markets - but all individuals ought to be more skilled at spotting and adjusting for their biases.