
Extreme Rationality: It's Not That Great

Post author: Yvain 09 April 2009 02:44AM

Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these "benefits" of "x-rationality"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact as far as I know the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

One factor we have to once again come back to is akrasia2. I find akrasia in myself and others to be the most important limiting factor to our success. Think of that phrase "limiting factor" formally, the way you'd think of the limiting reagent in chemistry. When there's a limiting reagent, it doesn't matter how much more of the other reagents you add, the reaction's not going to make any more product. Rational decisions are practically useless without the willpower to carry them out. If our limiting reagent is willpower and not rationality, throwing truckloads of rationality into our brains isn't going to increase success very much.

This is a very large part of the story, but not the whole story. If I were rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

And - when I was young, I used to watch The Journey of Allen Strange on Nickelodeon. It was a children's show about an alien who came to Earth and lived with these kids. I remember one scene where Allen the Alien was watching the kids play pool. "That's amazing," Allen told them. "I could never calculate differential equations in my head that quickly." The kids had to convince him that "it's in the arm, not the head" - that even though the movement of the balls is governed by differential equations, humans don't actually calculate the equations each time they play. They just move their arm in a way that feels right. If Allen had been smarter, he could have explained that the kids were doing some very impressive mathematics on a subconscious level that produced their arm's perception of "feeling right". But the kids' point still stands; even though in theory explicit mathematics will produce better results than eyeballing it, in practice you can't become a good pool player just by studying calculus.

A lot of human rationality follows the same pattern. Isaac Newton is frequently named as a guy who knew no formal theories of science or rationality, who was hopelessly irrational in his philosophical beliefs and his personal life, but who is still widely and justifiably considered the greatest scientist who ever lived. Would Newton have gone even further if he'd known Bayes' theorem? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

Yes, yes, beisutsukai should be able to develop quantum gravity in a month and so on. But until someone on Less Wrong actually goes and does it, that story sounds a lot like when Alfred Korzybski claimed that World War Two could have been prevented if everyone had just used more General Semantics.

And then there's just plain noise. Your success in the world depends on things ranging from your hairstyle to your height to your social skills to your IQ score to cognitive constructs psychologists don't even have names for yet. X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with "A good x-rationalist will know if fashion is important, and study fashion." A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook. And having a general rational outlook, as I mentioned before, is limited in its effectiveness by poor application and akrasia.

I no longer believe mastering all these Overcoming Bias and Less Wrong techniques will turn me into Anasûrimbor Kellhus or John Galt. I no longer even believe mastering all these Overcoming Bias techniques will turn me into Eliezer Yudkowsky (who, as his writings from 2001 indicate, had developed his characteristic level of awesomeness before he became interested in x-rationality at all)3. I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1. Maybe 0.2 in some businesses like finance, but people in finance tend to know this and use specially developed x-rationalist techniques on the job already without making it a lifestyle commitment. I think it was primarily a Happy Death Spiral around how wonderfully super-awesome x-rationality was that made me once think otherwise.

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

...

...

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

For the next time period - a week, a month, whatever - take special note of every decision you make. By "decision", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought. Make a tick mark, literal or mental, so you can count how many of these there are.

Then note whether you make that decision rationally. If yes, also record whether you made that decision x-rationally. I don't just mean you spent a brief second thinking about whether any biases might have affected your choice. I mean one where you think there's a serious (let's arbitrarily say 33%) chance that using x-rationality instead of normal rationality actually changed the result of your decision.

Finally, note whether, once you came to the rational conclusion, you actually followed it. This is not a trivial matter. For example, before writing this blog post I wondered briefly whether I should use the time studying instead, used normal (but not x-) rationality to determine that yes, I should, and then proceeded to write this anyway. And if you get that far, note whether your x-rational decisions tend to turn out particularly well.
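If you'd rather keep your tick marks literally than mentally, the tally is simple enough to keep in a short script. Here's a minimal sketch in Python (the class and field names are my own invention, just one way to record the counts the experiment asks for):

```python
from dataclasses import dataclass

@dataclass
class DecisionLog:
    """Tally sheet for the self-experiment described above."""
    decisions: int = 0    # conscious decisions (a few seconds' serious thought)
    rational: int = 0     # of those, made using ordinary rationality
    x_rational: int = 0   # of those, a serious (~33%) chance x-rationality changed the result
    followed: int = 0     # of the rational ones, actually acted on

    def record(self, was_rational=False, was_x_rational=False, was_followed=False):
        self.decisions += 1
        if was_rational:
            self.rational += 1
            if was_x_rational:
                self.x_rational += 1
            if was_followed:
                self.followed += 1

    def summary(self):
        return {"decisions": self.decisions, "rational": self.rational,
                "x_rational": self.x_rational, "followed": self.followed}

log = DecisionLog()
log.record(was_rational=True)   # e.g. deciding rationally to study, then not following through
log.record()                    # an unreflective decision
print(log.summary())            # {'decisions': 2, 'rational': 1, 'x_rational': 0, 'followed': 0}
```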

This experiment seems easy to rig4; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. Not pretty at all. Not only do I make fewer conscious decisions than I thought, but the ones I do make I rarely apply even the slightest modicum of rationality to, and the ones I apply rationality to it's practically never x-rationality, and when I do apply everything I've got I don't seem to follow those decisions too consistently.

I'm not so great a rationalist anyway, and I may be especially bad at this. So I'm interested in hearing how different your results are. Just don't rig it. If you find yourself using x-rationality twenty times more often than you were when you weren't performing the experiment, you're rigging it, consciously or otherwise5.

Eliezer writes:

The novice goes astray and says, "The Art failed me."
The master goes astray and says, "I failed my Art."

Yet one way to fail your Art is to expect more of it than it can deliver. No matter how good a swimmer you are, you will not be able to cross the Pacific. This is not to say crossing the Pacific is impossible. It just means it will require a different sort of thinking than the one you've been using thus far. Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.

Footnotes:

1: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

2: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

3: This is actually an important point. I think there are probably quite a few smart, successful people who develop an interest in x-rationality, but I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

4: This is a terribly controlled experiment, and the only way its data can be meaningfully interpreted at all is through what one of my professors called the "ocular trauma test" - when the data hits you between the eyes. If people claim they always follow their rational decisions, I think I will be more likely to interpret it as lack of enough cognitive self-consciousness to notice when they're doing something irrational than an honest lack of irrationality.

5: In which case it will have ceased to be an experiment and become a technique instead. I've noticed this happening a lot over the past few days, and I may continue doing it.

Comments (270)

Comment author: AnnaSalamon 09 April 2009 12:40:38PM, 47 points

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

Seems to me that most of us make predictably dumb decisions in quite a variety of contexts, and that by becoming extra bonus sane (more sane/rational than your average “intelligent science-literate person without formal rationalist training”), we really should be able to do better.

Some examples of the “predictably dumb decisions” that an art of rationality should let us improve on:

  • Dale Carnegie says (correctly, AFAIK) that most of us try to persuade others by explaining the benefits from our point of view (“I want you to play basketball with me because I don’t have enough people to play basketball with”), even though it works better to explain the benefits from their points of view. Matches my experiences, and matches also many/most of the local craigslist ads. The gains if we notice and change this one would be significant.
  • Lots of people decide to take a job “to make more money”, but don’t bother to actually research the odds of getting that job, and the average payoff from that job (the latter, at least, is easy to look up on the internet) before spending literally years training for the job. Even in cases like med school. Again, significant payoff here, and in this case fairly minimal willpower requirements.
  • Lots of us tend to mostly stick to our own opinions in conversations, even in cases where our impressions are no better data than our interlocutor’s impressions, and where the correct opinion can actually impact the goodness of our lives (e.g., which course to take on a work project whose outcome matters; which driving route is faster; which carwash to try) (these latter decisions are small, but small decisions add up).
  • Similarly, lots of us decide we’re “good at X and bad at Y”, or that we’re “the sort of people who do A in such-and-such a specific manner”, and quit learning in a particular domain, quit updating our skill-sets, keep suboptimal beliefs or practices glued to our identities instead of looking around to see how others do things and what methods might achieve greater success. Lots of us spend far more of our thinking time noting all the reasons why we’re best off doing what we’re doing than we do looking for new ways to do things, even when such looking has tended to give us useful improvements.
  • Lots of people run more risk of death by car than they would upon consideration choose, e.g. by driving too close to the car in front of them (the half-second earlier that you get home isn’t worth it) or by driving while tired. At the same time, lots of people refrain from enjoyable activities such as walking around at night or swimming off the coast of Florida despite the occasional sharks, in cases where the activities in fact pose nearly negligible danger, but the dangers in question are vivid and easy to over-estimate.
Comment author: John_Maxwell_IV 09 April 2009 10:23:41PM, 16 points

I don't think you need the art of rationality much for that stuff. I think just being reminded is almost as good, if not better. Who do you think would do better on them: someone who read all of LW/OB except this post, or someone who read this post only? Now consider that reading all of LW/OB would take at least 256 times longer.

Comment author: loqi 10 April 2009 04:01:32AM, 4 points

That was only a sample. Should we really prefer keeping them all in mind over learning the pattern behind them?

Comment author: John_Maxwell_IV 10 April 2009 08:26:38PM, 7 points

Learning about rationality won't necessarily help you realize where you're being irrational. If you've got a general method for doing that, I'd be interested, but I don't think it's been discussed much on this blog.

Comment author: [deleted] 28 January 2015 05:16:19PM, 2 points

Dale Carnegie says (correctly, AFAIK) that most of us try to persuade others by explaining the benefits from our point of view (“I want you to play basketball with me because I don’t have enough people to play basketball with”), even though it works better to explain the benefits from their points of view. Matches my experiences, and matches also many/most of the local craigslist ads. The gains if we notice and change this one would be significant.

Interesting. But from a bit of searching, it seems this applies to business. It looks good in a job interview. Don't try this on a date! (no lukeprog allowed)

Thanks for the advice! For completeness, I assume this is what you meant: http://www.dalecarnegie.com/communication_effectiveness_-_present_to_persuade/ (or at least it gives the idea a deeper point).

Comment author: Nornagest 28 January 2015 05:32:16PM, 4 points

Don't try this on a date! (no lukeprog allowed)

Why not? Lukeprog's mistake, assuming you're talking about what I think you're talking about, seems to have been quite the opposite of trying to explain the benefits of an option from the other person's point of view:

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

I imagine he'd have had better luck, or at least not become the butt of quite so many relationship jokes on LW, if he'd gone with something like "you deserve someone who appreciates you better". Notice that from Alice's perspective, this describes exactly the same situation -- but in terms of what it means to her.

Comment author: Lumifer 28 January 2015 05:39:24PM, 0 points

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

ROFL... An hour-long primer to explain "You should have gotten a boob job" X-D

Comment author: [deleted] 28 January 2015 05:44:21PM, 1 point

Nah. I just meant that, considering his posts on relationships, he might try that; therefore, no lukeprog allowed.

In truth I was just trying to use reverse psychology to get him to do it and hopefully post some results.

And this is where this silliness ends before I get more downvotes.

Comment author: lessdazed 07 August 2011 12:39:53PM, 30 points

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

I think the truth is asymmetrical: rationalism is the art of not failing, of not being stupid. I agree with you that "rationalists should win big" is not true in the sense Eliezer claims. However, rationalists should be generally above average by virtue of never failing big, never losing too much - e.g., not buying every vitamin at the health food store, not joining cults, not bemoaning ancient relationships, etc.

Comment author: NicoleTedesco 15 January 2012 03:52:58PM, 1 point

Very good point!

Comment author: HughRistik 10 April 2009 05:38:26AM, 20 points

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

In my life, I've used rationality to tackle some pretty tough practical problems. The type of rationality I have been successful with hasn't been the debiasing program of Overcoming Bias, yet I have been employing scientific thinking, induction, and heuristic to certain problems in ways that are atypical for the category of people you are calling normal rationalists. I don't know whether to call this "x-rationality" or not, partly because I'm not sure the boundaries between rationality and x-rationality are always obvious, but it's certainly more advanced rationality than what people usually apply in the domains below.

On a general level, I've been studying how to get good (or at least, dramatically better) at things. Here are some areas where I've been successful using rationality:

  • Recovering from social anxiety disorder and depression
  • Social skills
  • Fashion sense
  • Popularity / social status in peer group
  • Dating

I'm not using success necessarily to mean mastery, but around 1-2 standard deviations of improvement from where I started.

I do find it interesting that many people are not achieving practical benefits from their studies of more advanced rationalities. I agree with you that akrasia is a large factor in why they do not get significant practical benefits out of rationality. I am going to hypothesize an additional factor:

The practical benefits of x-rationality are constrained because students of x-rationality (such as the Overcoming Bias / Less Wrong schools of thought) focus on critical rationality, yet critical rationality is only good for solving certain types of problems.

In my post on heuristic, I drew a distinction between what I'm calling "critical rationality" (consisting of logic, skepticism, and bias-reduction) and "creative rationality" (consisting of heuristic and inference). Critical rationality concerns itself with idea validation, while creative rationality concerns itself with idea creation (specifically, of ideas that map onto the territory).

Critical rationality is necessary to avoid many mistakes in life (e.g. spending all your money on lottery tickets, high-interest credit card debt, Scientology), yet perhaps it runs into diminishing returns for success in most people's lives. For developing new ideas and skills that would lead people to success above a mundane level, critical rationality is necessary but not sufficient, and creative rationality is also required.

Comment author: AnnaSalamon 10 April 2009 05:59:39AM, 5 points

It sounds as though you have data and experiences that our community should chew on. Please do share specific stories, anecdotes, strategies or habits for thinking strategically about practical domains, techniques you've found useful within "creative rationality", etc. Perhaps in a top-level post?

Comment author: HughRistik 10 April 2009 06:24:18AM, 2 points

Thanks, Anna. Getting more specific is definitely on my list.

Comment author: MBlume 10 April 2009 06:03:56AM, 4 points

I would absolutely love to see the development of a rational art of dating. If you've more to say on this I'll definitely look forward to reading it.

Comment author: mattnewport 10 April 2009 06:13:53AM, 4 points

This is largely the basis of the whole online sub-community of 'Game' and the 'Seduction Community'. It may well fall under what Eliezer refers to as 'the dark arts' but many participants are fairly explicit about applying a rational/scientific approach to success with women.

Comment author: HughRistik 10 April 2009 06:36:39AM, 13 points

I am highly familiar with the seduction community, and I've learned a lot from it. It's like extra-systemized folk psychology. It has certain elements of a scientific community, yet it is vulnerable to ideologies developing out of:

(a) bastardized versions of evolutionary psychology being thrown around like the proven truth, often leading to cynical and overgeneralized views of female behavior and preferences and/or overly narrow views of what works,

(b) financial biases,

(c) lack of rigor, because controlled experiments are not yet possible in this field (though I would never suggest that people wait until science catches up and gives us rigorous empirical knowledge before trying to improve their dating lives... who knows how long we will have to wait).

Yet there is promise for the community, because it's beholden to real-world results. Its descriptions and prescriptions seem to have been improving, and it has gone through a couple of paradigm shifts since the mid-'80s.

Comment author: mattnewport 10 April 2009 06:49:49AM, 5 points

I've also learned some useful things from my more limited familiarity with the community. I'd tend to agree with your criticisms but I think the emphasis on rigorous 'field testing' and on 'doing what works' in much of the community shows some common ground with general efforts at rationality. As you say, this is an area (like many areas of day to day life) that is not easily amenable to controlled scientific experiment for a number of reasons but one of the lessons of Bayesian thinking/'x-rationality' that I've found useful is the emphasis on being comfortable with uncertainty, fuzzy evidence and making the best decisions given limited information.

It's treacherous terrain for anyone seeking truth since, like investment or financial advice or healthcare, there is a lot of noise along with the signal. It's certainly an interesting area with many cross-currents to those interested in applying rationality though.

Comment author: AnnaSalamon 10 April 2009 06:37:53AM, 2 points

Do you think it would benefit from knowing some of the OB/LW rationality techniques?

Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?

Comment author: mattnewport 10 April 2009 07:20:10AM, 11 points

One common theme is recognizing when your theories aren't working and updating in light of new evidence. Many people are so sure that their beliefs about what 'should' work when it comes to dating are correct that they will keep trying and failing without ever considering that maybe their underlying theory is wrong. A common exercise used in the community to break out of these incorrect beliefs is to force yourself to go out and try things that 'can't possibly work' 10 times in a day, and then every day for a week or a month, until the false belief is banished.

I actually think the LW crowd could learn something from this approach - sometimes all the argument in the world is not as convincing as repeated confrontations with real world results. When it comes to changing behaviour (a key aspect of allowing rationality to improve results in our lives), rational argument is not usually the most effective technique. Rational argument may establish the need for change and the pattern for new behaviour but the most effective way to change behavioural habits is to just start consciously doing the new behaviour until it becomes a habit.

Comment author: pjeby 10 April 2009 03:11:26PM, 12 points

Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?

One problem with interfacing formal/mathematical rationality with any "art that works", whether it's self-help or dating, is that when people are involved, there are feed-forward and feed-back effects, similar to Newcomb's problem, in a sense. What you predict will happen makes a difference to the outcome.

One of the recent paradigm shifts that's been happening in the last few years in the "seduction community" is the realization that using routines and patterns leads to state-dependence: that is, to a guy's self-esteem depending on the reactions of the women he's talked to on a given night. This has led to the rise of the "natural" movement: copying the beliefs and mindsets of guys who are naturally good with women, rather than the external behaviors of guys who are good with women.

Now, I'm not actually involved in the community; I'm quite happily married. However, I pay attention to developments in that field because it has huge overlap with the self-help field, and I've gotten many insights about how status perception can influence your behavior -- even when there's nobody else in the room but yourself.

I wandered off point a little there, so let me try and bring it back. The OB/LW approach to rationality -- at least as I've seen it -- is extremely "outside view"-oriented when it comes to people. There's lots of writing about how people do this or that, rather than looking at what happens with one individual person, on the inside.

Whereas the "arts that work" are extremely focused on an inside view, and actually learning them requires a dedication to action over theory, and taking that action whether you "believe" in the theory or not. In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The "truth" of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.

When I read self-help books in the past, I used to ignore things if I didn't agree with their theories or saw holes in them. Now, I simply TRY what they say to do, and stick with it until I get a result. Only then do I evaluate. Anything else is idiotic, if your goal is to learn... and win.

Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math -- not real-world performance or self-experimentation.

Comment author: AnnaSalamon 10 April 2009 05:39:01PM 3 points [-]

In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The "truth" of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.... Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math -- not real-world performance or self-experimentation.

Experimenting, implementing, tracking results, etc. is totally compatible with the OB/LW picture. We haven't built cultural supports for this all that much, as a community, but we really should, and, since it resonates pretty well with a rationalist culture and there're obvious reasons to expect it to work, we probably will.

Claiming that a particular general model of the mind is true, just because you expect that claim to yield good results (and not because you have the kind of evidence that would warrant claiming it as "true in general"), is maybe not so compatible. As a culture, we LW-ers are pretty darn careful about what general claims we let into our minds with the label "true" attached. But is it really so important that your models be labeled "true"? Maybe you could share your models as thinking gimmicks: "I tend to think of the mind in such-and-such a way, and it gives me useful results, and this same model seems to give my clients useful results", and share the evidence about how a given visualization or self-model produces internal or external observables? I expect LW will be more receptive to your ideas if you: (a) stick really carefully to what you've actually seen, and share data (introspective data counts); (b) label your "believe this and it'll work" models as candidate "believe this and it'll work" models, without claiming the model as the real, fully demonstrated as true, nuts and bolts of the mind/brain.

In other words: (1) hug the data, and share the data with us (we love data); and (2) be alert to a particular sort of cultural collision, where we'll tend to take any claims made without explicit "this is meant as a pragmatically useful working self-model" tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models. If you actually tag your models with their intended use ("I'm not saying these are the ultimate atoms the mind is made of, but I have reasonably compelling evidence that thinking in these terms can be helpful"), there'll be less miscommunication, I think.

Comment author: pjeby 10 April 2009 06:36:16PM 1 point [-]

we'll tend to take any claims made without explicit "this is meant as a pragmatically useful working self-model" tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models.

Yeah, I've noticed that, which is why my comment history contains so many posts pointing out that I'm an instrumental rationalist, rather than an epistemic one. ;-)

Comment author: AnnaSalamon 10 April 2009 07:44:14PM 9 points [-]

I'm not sure it's about being an epistemic vs. an instrumental rationalist, vs. about tagging your words so we follow what you mean.

Both people interested in deep truths, and people interested in immediate practical mileage, can make use of both "true models" and "models that are pragmatically useful but that probably aren't fully true".

You know how a map of north America gives you good guidance for inferences about where cities are, and yet you shouldn't interpret its color scheme as implying that the land mass of Canada is uniformly purple? Different kinds of models/maps are built to allow different kinds of conclusions to be drawn. Models come with implicit or explicit use-guidelines. And the use-guidelines of “scientific generalizations that have been established for all humans” are different than the use-guidelines of “pragmatically useful self-models, whose theoretical components haven’t been carefully and separately tested”. Mistake the latter for the former, and you’ll end up concluding that Canada is purple.

When you try to share techniques with LW, and LW balks... part of the problem is that most of us LW-ers aren’t as practiced in contact-with-the-world trouble-shooting, and so "is meant as a working model" isn't at the top of our list of plausible interpretations. We misunderstand, and falsely think you’re calling Canada purple. But another part of the problem is it isn’t clear that you’re successfully distinguishing between the two sorts of models, and that you have separated out the parts of your model that you really do know and really can form useful inferences from (the distances between cities) from the parts of your model that are there to hold the rest in place, or to provide useful metaphorical traction, but that probably aren’t literally true. (Okay, I’m simplifying with the “two kinds of models” thing. There’s really a huge space of kinds of models, and of use-guidelines matched to different kinds of models, and maybe none of them should just be called “true”, without qualification as to the kinds of use-cases in which the models will and won’t yield true conclusions. But you get the idea.)

Comment author: Vladimir_Nesov 10 April 2009 08:58:02PM 1 point [-]

In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The "truth" of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.... Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math -- not real-world performance or self-experimentation.

Trying to interpret this charitably, I'll suggest a restatement: what you call a "theory" is actually an algorithm that describes the actions that are known to achieve the required results. In the normal use of the words, a theory is an epistemic tool, leading you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals. Terminologically mixing an opaque heuristic with reason and knowledge is a bad idea; in the quotation above, the word "reason", for example, connotes rationalization more than anything else.

Comment author: pjeby 11 April 2009 01:57:56AM 2 points [-]

what you call a "theory" is actually an algorithm that describes the actions that are known to achieve the required results.

No, I'm using the term "theory" in the sense of "explanation" and "as opposed to practice". The theory of a self-help school is the explanation(s) it provides that motivate people to carry out whatever procedures that school uses, by providing a model that helps them make sense of what their problems are, and what the appropriate methods for fixing them would be.

In the normal use of the words, a theory is an epistemic tool, leading you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals.

I don't see any incompatibility between those concepts; per de Bono (Six Thinking Hats, lateral thinking, etc.), a theory is a "proto-truth" rather than an "absolute truth": something that we treat as if it were true, until something better is found.

Ideally, a school of self-help should update its theories as evidence changes. Generally, when I adopt a technique, I provisionally adopt whatever theory was given by the person who created the technique, unless I already have evidence that the theory is false, or have a simpler explanation based on my existing knowledge.

Then, as I get more experience with a technique, I usually find evidence that makes me update my theory for why/how that technique works. (For example, I found that I could discard the "parts" metaphor of Core Transformation and still get it to work, ergo falsifying a portion of its original theoretical model.)

Also, I sometimes read about a study that shows a mechanism of mind that could plausibly explain some aspect of a technique. Recently, for example, I read some papers about "affective asynchrony", and saw that they not only experimentally validated some of what I've been doing, but also provided a clearer theoretical model for certain parts of it. (Clearer in the sense of providing a more motivating rationale, and not just because I can point to the papers and say, "see, science!")

Similar thing for "reconsolidation" -- it provides a clear explanation for something that I knew was required for certain techniques to work (experiential access to a relevant concrete memory), but had no "theoretical" justification for. (I just taught this requirement without any explanation except "that's how these techniques work".)

There seems to be a background attitude on LW though, that this sort of gradual approximation is somehow wrong, because I didn't wait for a "true" theory in a peer-reviewed article before doing anything.

In practice, however, if I waited for the theory to be true instead of useful, I would never have been able to gather enough experience to make good theories in the first place.

Comment author: MBlume 10 April 2009 06:40:01AM 8 points [-]

In any rational art of dating in which I would be interested, "winning" would be defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued. I don't know enough about these sub-communities to say whether they share that concern -- what is the impression you've gotten?

Comment author: mattnewport 10 April 2009 07:12:04AM 5 points [-]

Many but by no means all in the community share that concern. I'm finding it interesting to note my own reluctance to link to some of the material since even among those who do share that concern there is discussion of some techniques that might be considered objectionable. One of the cornerstones of much of the material is that people are so conditioned by conventional beliefs about what 'should' work that they are liable to find what actually does work highly counter-intuitive at first. Reactions to the challenging of strongly held beliefs can be equally strong and I've often observed this in comment threads on the material.

The most mainstream introduction to the community is probably "The Game" by Neil Strauss. I'm not sure it's the best starting point from the point of view of connections to rationality but it's an entertaining read if nothing else.

I certainly believe it's possible to benefit from some of the ideas while maintaining your definition of 'winning' but equally there are some parts of the community which are less appealing.

Comment author: roland 10 April 2009 07:46:44AM *  -2 points [-]

I have extensive knowledge of this matter and would say that the techniques are value-neutral. To make an analogy, think of Cialdini's science of influence and persuasion (http://en.wikipedia.org/wiki/Robert_Cialdini).

What evolutionary psychology, Cialdini and others have shown is that we humans can be quite primitive and react in predetermined ways to certain stimuli. The dating community has investigated the right stimuli for women and figured out how to "get" her. You have to push the right buttons in the right order, and we males are no different (although the buttons themselves are different).

In other words, what you learn in the dating community will teach you how to win the hearts of women. It's up to you how to use this skillset (yes, it's a skillset) IF you manage to acquire it, which, btw, is not easy at all. It's just a technique; you can use it for good or bad, although admittedly it lends itself more to selfish purposes, IMHO.

Btw, women are also very selfish creatures, so don't make the mistake of holding yourself to too high a moral standard.

I also think that you might be misguided in that you start with the wrong assumption about what dating is all about. Evolutionarily speaking, dating -- alias mating -- is not about making the other person better off. On the contrary, having kids is mostly a disadvantage for the parents, but most people do it anyway because we have this desire to have kids. Rationally speaking, we would all probably be better off without them. Of course, if you factor in emotions it becomes more complicated.

Also, there is a fundamental difference between males and females. Males don't get pregnant; they want to have as much sex (pleasure) with as many partners as possible. Women get pregnant (at least before birth control was invented), and so their emotional circuitry is designed to be extremely selective about which males they will have sex with. Also, they want their males to stick around as long as possible (to help them take care of the offspring). So you have to be aware that there is a fundamental difference in the objectives of the two, which will make it extremely difficult or impossible to make BOTH happy at the same time. In practice, usually one will suffer and/or have to concede some ground, and it's usually the "weaker" one. Weak in this context means the one with fewer options in dating. Usually women are stronger in this respect, so the dating community is essentially a way to empower males.

This is getting long, I could write more, if you guys are interested I could start a post on this topic.

Comment author: HughRistik 10 April 2009 05:44:23PM 6 points [-]

In general, I would agree that the teachings are value-neutral. Yet some of these tools are more conducive towards negative uses, while others are more conducive towards positive uses.

I also think that you might be misguided in that you start with the wrong assumption about what dating is all about. Evolutionarily speaking, dating -- alias mating -- is not about making the other person better off.

It's true that people are not adapted to necessarily make each other optimally happy. Yet in spite of this, our skills give us the capability to find solutions that make both people at least somewhat happy.

So in my case, winning is "defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued," as MBlume puts it.

Also, there is a fundamental difference between males and females.

Yes, but the description in your post is contaminated by the oversimplified presumptions about evolutionary psychology that circulate in the community. I think you would get a lot out of reading the real evolutionary psychologists, rather than just popularizations, or what the community says evolutionary psychologists are saying. I can find some cites when I'm at home.

Males don't get pregnant; they want to have as much sex (pleasure) with as many partners as possible.

Typically, males are more oriented towards seeking multiple partners than women are, yet that doesn't mean that they want "as many partners as possible." Some males are wired for short-term mating strategies, and other males are more wired for long-term mating strategies.

Women get pregnant (at least before birth control was invented), and so their emotional circuitry is designed to be extremely selective about which males they will have sex with.

Yes, and this is well-demonstrated experimentally. I don't have the citations on hand because I'm not at home, but a guy named Fisman has done some interesting work in this area.

Also, they want their males to stick around as long as possible (to help them take care of the offspring).

Yet this is again oversimplified, because some present day females follow short-term mating strategies and do not necessarily want males to stick around.

So you have to be aware that there is a fundamental difference in the objectives of the two, which will make it extremely difficult or impossible to make BOTH happy at the same time.

True, though pretty good compromises exist. In a lot of cases, dating is like a Prisoner's Dilemma (though many other payoff matrices are possible). Personally, what I like the most about the community is that it gives me the tools to play C while simultaneously raising the chance that the other person will play C.
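The Prisoner's Dilemma framing can be made concrete with a toy sketch. The payoff numbers below are my own illustrative choices, not anything claimed in this thread; the dilemma only requires the standard ordering T > R > P > S:

```python
# Illustrative payoffs; only the ordering T > R > P > S matters.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

# (my move, their move) -> my payoff
payoff = {("C", "C"): R, ("C", "D"): S,
          ("D", "C"): T, ("D", "D"): P}

def expected_payoff(my_move, p_their_c):
    """My expected payoff if the other player cooperates with probability p_their_c."""
    return (p_their_c * payoff[(my_move, "C")]
            + (1 - p_their_c) * payoff[(my_move, "D")])

# In a one-shot game, D dominates -- that's the dilemma:
print(expected_payoff("D", 0.5) > expected_payoff("C", 0.5))  # True
# But the value of playing C rises with the other player's chance of
# playing C, and mutual cooperation beats mutual defection (R > P):
print(expected_payoff("C", 0.9) > expected_payoff("C", 0.3))  # True
```

This is the sense in which tools that let you play C while raising the chance the other person also plays C improve the outcome for both players.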

Even when happiness for both people can't be achieved, it's at least possible for both people to treat each other with respect, even if someone can't give the other person what they would want.

This is getting long, I could write more, if you guys are interested I could start a post on this topic.

Sure, I would find it interesting.

Comment author: ciphergoth 10 April 2009 10:16:53AM *  3 points [-]

A top-level post would be very welcome, I don't want to take this one too far off track. I've slept (and continue to sleep) with a lot of people, and my experience very much contradicts what you say here.

Comment author: pjeby 10 April 2009 03:17:01PM *  6 points [-]

roland:

So you have to be aware that there is a fundamental difference in the objectives of the two, which will make it extremely difficult or impossible to make BOTH happy at the same time.

ciphergoth:

my experience very much contradicts what you say here.

That's because it's a great example of theory being used to persuade people to take a certain set of "actions that work". There are other theories that contradict those, used to get other people to take action... even though the specific actions taken may be quite similar!

People self-select their schools of dating and self-help based on what theories appeal to them, not on the actual actions those schools recommend taking. ;-)

In this case, the theory roland is talking about isn't theory at all: it's a sales pitch that attracts people who feel that dating is an unfair situation. They like what they hear, and they want to hear more. So they read more and maybe buy a product. The writer or speaker then gradually moves from this ev-psych "hook" to other theories that guide the reader to take the actions the author recommends.

That people confuse these sales pitches with actual theory is a well-understood concept within the Marketing Conspiracy. ;-) Of course, the gurus don't always know themselves what parts of their theories are hook vs. "real"... I just found out recently that a bunch of stuff I thought was "real" was actually "hook", and had to go through some soul-searching before deciding to leave it in the book I'm writing.

Why? Because if I change the hook, I won't be able to reach people who have the same wrong beliefs that I did. Better to hook people with wrong things they already believe, and then get them to take the actions that will get them to the place where they can throw off those beliefs. (And of course, believing those things didn't stop me from making progress.) But I've restricted it to being only in chapter 1, and the revelation of the deeper model will happen by chapter 5.

Anyway. Actually helping people change their actions and beliefs -- as opposed to merely telling them what they should do or think -- is the very Darkest of the Dark arts.

Perhaps we should call it "The Coaching Conspiracy". ;-)

Comment author: roland 10 April 2009 10:47:27AM 1 point [-]

What exactly would you like to know? The subject is very broad, it would be easier if you made me a list of questions that are relevant to LW. There are already TONS of sites about this topic so please don't ask me to write another post about seduction in general.

Comment author: ciphergoth 10 April 2009 11:33:29AM 2 points [-]

I think a post tailored to the particular interests and language of LW/OB readers would be fairly different from the ones already out there, but if you have a pointer that you think would be particularly appealing to us lot I'm interested.

Comment author: moshez 14 February 2012 06:27:29PM 1 point [-]

I'm not really sure how you can claim "techniques are value-neutral" without assuming what values are. For example, if my values contain a term for someone else's self-esteem, a technique that lowers their self-esteem is not value-neutral. If my values contain a term for "respecting someone else's requests", techniques for overcoming LMR (last-minute resistance) are not value-neutral. Since I have only limited knowledge of the seduction techniques advanced by the community, I did not offer more -- after seeing some of the techniques, I decided that they are decidedly not value-neutral, and therefore chose not to engage in them.

Comment author: AnnaSalamon 10 April 2009 06:28:37AM 3 points [-]

I would personally love to see more cross-fertilization between that sub-community and LW, "dark arts" or no. (At least, I think I would; I don't know the community well and might be mistaken.) We need to make contact between abstract techniques for thinking through difficult issues, and on-the-ground practical strategicness. Importing people who've developed skilled strategicness in any domain that involves actual actions and observable success/failure, including dating (or sales, or start-ups, or ... ?), would be a good way to do this. If you could link to specific articles, or could create discussion threads that both communities might want to participate in, mattnewport, that would be good.

Comment author: Hans 13 April 2009 10:14:30AM *  5 points [-]

I second that. Here in the LW/OB/sci-fi/atheism/cryonics/AI... community, many of us fit quite a few stereotypes. I'll summarize them in one word that everybody understands: we're all nerds*. This means our lives and personalities introduce many biases into our way of thinking, and these often preclude discussions about acting rationally in interpersonal situations such as sales, dating etc. because we don't have much experience in these fields. Anything that bridges this gap would be extremely useful.

*this is not a value judgment. And not everybody conforms to this stereotype. I know, I know, but this is not the point. I'm talking averages here.

Comment author: PhilosophyTutor 24 January 2012 04:04:23AM *  0 points [-]

I would say that it is largely the ostensible basis of the seduction community.

As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable. If their theories are unsupported it doesn't matter, because they can disclaim the theories as just being a psychological trick to get you to take "correct" actions. However they've got no rigorous evidence that their "correct" actions actually lead to any more mating success than spending an equivalent amount of time on personal grooming and talking to women without using any seduction-community rituals. They also have such a wide variety of conflicting doctrines and gurus that they can dismiss almost any critique as being based on ignorance, because they can always point to something written somewhere which will contradict any attempt to characterise the seduction community - not that this ever stops them making claims about the community themselves.

They'll claim that they develop such evidence by going out and picking up women, but since they don't do any controlled tests this cannot even in theory produce evidence that the techniques they advocate change their success rate, and even if they did conduct controlled studies their sample sizes are tiny given the claimed success rates. I believe one "guru" claims to obtain sex in one out of thirty-three approaches. I do not believe that anyone's intuitive grasp of statistics is so refined that they can spot variations in such an infrequent outcome and determine whether a given technique increases or decreases that success rate. To do science on such a phenomenon would take a very big sample size. Ergo anyone claiming to have scientific evidence without having done a study with a very big sample size is a fool or a knave.
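To put rough numbers on that intuition, here is a back-of-envelope sketch using the standard normal-approximation sample-size formula for comparing two proportions. The 1-in-33 baseline is the figure quoted above; detecting a doubling of it is my own illustrative benchmark:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group to distinguish success rates p1
    and p2 at two-sided alpha = 0.05 with 80% power (normal approximation)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting even a doubling of a 1-in-33 success rate:
print(n_per_group(1 / 33, 2 / 33))  # roughly 740 approaches per condition
```

So even for an effect as large as a doubling of the success rate, each condition needs on the order of seven hundred approaches before a difference is statistically detectable, which is far beyond what anyone's informal field bookkeeping tracks.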

The mythology of the seduction community is highly splintered and constantly changes over time, which increases the subjective likelihood that we are looking at folklore and scams rather than any kind of semi-scientific process homing in on the truth.

It's also easy to see how it could be very appealing to lonely nerds to think that they could download a walkthrough for getting women into bed the way they can download a walkthrough for Mass Effect or Skyrim. It's an empowering fantasy, to be sure.

If that's what it takes to get them to groom themselves and go talk to women it might even work in an indirect, placebo-like way. So if you prioritise getting laid over knowing the scientific truth about the universe it might be rational to be selectively irrational about seduction folklore. However if you want to know the truth about the universe there's not much to be gained from the seduction community. If they are doing better than chance it's because a stopped clock is right twice a day.

My own view is that the entire project is utterly misguided. Instead of hunting for probably-imaginary increases in their per-random-stranger success at getting sex they should focus on effectively searching the space of potential mates for those who are compatible with them and would be interested in them.

Comment author: wedrifid 24 January 2012 04:42:07AM 0 points [-]

As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable.

This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing".

Your depiction of the seduction community is a ridiculous straw man and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you they probably wouldn't bother doing so: The usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.

Comment author: PhilosophyTutor 24 January 2012 05:15:13AM 1 point [-]

This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing".

If they conducted tests of X versus Y with large sample sizes and with blinded observers scoring the tests then they might have a basis to say "I know that if I do X I can expect to on average achieve a better outcome with women than if I do Y". They don't do such tests though.

They especially don't do such tests where X is browsing seduction community sites and trying the techniques they recommend and Y is putting an equal amount of time and effort into personal grooming and socialising with women without using seduction community techniques.

Scientific methodology isn't just a good idea, it's the law. If you don't set up your tests correctly you have weak or meaningless evidence.

Your depiction of the seduction community is a ridiculous straw man and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you they probably wouldn't bother doing so: The usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.

Or as the Bible says, "But if any place refuses to welcome you or listen to you, shake its dust from your feet as you leave to show that you have abandoned those people to their fate". It's good advice for door-to-door salespersons, Jehovah's Witnesses and similar people in the business of selling. If you run into a tough customer, don't waste your time trying to convince them; just walk away and look for an easier mark.

However in science that's not how you do things. In science if someone disputes your claim you show them the evidence that led you to fix your claim in the first place.

Are you sure you meant to describe my post as a "shaming attempt"? As pejoratives go this seems like an ill-chosen one, since my critique was strictly epistemological. It seems at least possible that you are posting a standard talking point which is deployed by seduction community members to dismiss ethical critiques, but which makes no sense in response to an epistemological critique.

(There are certainly concerns to be raised about the ethics of the seduction community, but that would be a different post).

Comment author: Lethalmud 23 May 2014 01:54:06PM 1 point [-]

I'm curious, how did you use rationality to develop fashion sense?

Comment author: mattnewport 09 April 2009 10:24:58PM 13 points [-]

An understanding of 'x-rationality' has helped me find the world a little less depressing and a little less frustrating. Previously, when observing world events, politics, and some behaviours in social interactions that seemed incomprehensible without assuming depressing levels of stupidity, incompetence or malice, I despaired at the state of humanity. An appreciation of human biases and evolutionary psychology (an interest in both of which goes back well before I ever started reading OB) gives me a framework for understanding events in the world that I find a lot more productive and optimistic.

An example from politics: it is hard to make any rational sense of drug prohibition when looking at the evidence of the costs and benefits. This would tend to lead to the conclusion that politicians and the voting public are either irredeemably stupid or actively seeking negative outcomes. Understanding how institutional incentives to maintain the status quo, confirmation bias, and signaling effects (politicians and voters needing to be 'seen to care' and/or 'seen to disapprove') can lead basically intelligent and well-meaning people to maintain catastrophically wrong beliefs. At worst, this allows one to accept the status quo without assuming the worst about one's fellow man; at best, it maps out plausible paths for achieving political change by recognizing the true nature of the obstacles.

An example from social interactions: I suffered a fair amount of emotional stress reconciling what I had been led to believe 'ought' to work when interacting with others with the apparently much less pleasant realities of what actually seemed to succeed. The only conclusion I could draw was that everyone deliberately lied about the way human interactions worked, for their own mysterious and possibly malicious reasons. Coming to an understanding of evolutionary psychology and signaling explanations for many common patterns of human behaviour allows me to reconcile 'doing what works' with a belief that most people are not consciously misleading or malicious most of the time. Many people don't appear to be aware of the contradictions inherent in social interactions, but as someone who saw them yet could not explain them without assuming the worst, I found that discovering explanations which did not require imputing conscious malice to others allowed for a much more positive outlook on the world.

I could give a number of examples of how 'regular' rationality, rigorously applied to areas of life where it is often absent, has also directly helped me, but they seem slightly off topic for this thread.

Comment author: PhilGoetz 10 April 2009 03:43:21AM 11 points [-]

Sometimes, people do worse when they try to be rational because they have a poor model of rationality.

One error I commonly see is the belief that rationality means using logic, and that logic means not believing things unless they are proven. So someone tries to be "rational" by demanding proof of X before changing their behavior, even in a case where neither priors nor utilities favor not-X. The untrained person may be doing something as naive as argument-counting (how many arguments favor X vs. not-X), and is still likely to come out ahead of the person who requires proof.

A related error is using Boolean models where they are inappropriate. The most common error of this type is believing that a phenomenon, or a class of phenomena, can have only one explanation.

Comment author: AnnaSalamon 09 April 2009 07:46:28AM *  11 points [-]

I’m partly echoing badger here, but it’s worth distinguishing between three possible claims:
(1) An “art of rationality” that we do not yet have, but that we could plausibly develop with experimentation, measurements, community, etc., can help people.
(2) The “art of rationality” that one can obtain by reading OB/LW and trying to really apply its contents to one’s life, can help people.
(3) The “art of rationality” that one is likely to accidentally obtain by reading articles about it, e.g. on OB/LW, and seeing what happens to rub off, can help people.

There are also different notions of “help people” that are worth distinguishing. I’ll share my anticipations for each separately. Yvain or others, tell me where your anticipations match or differ.

Regarding claim (3):
My impression is that even the art of rationality one obtains by reading articles about it for entertainment does have some positive effects on the accuracy of people’s beliefs. A couple of people reported leaving their religions. Many of us have probably discarded random political or other opinions that we had due to social signaling or happenstance. Yvain and others report “clarity-of-mind benefits”. I’d give reasonable odds that there’s somewhat more benefit than this -- some unreliable improvement in people’s occasional, major, practical decisions, e.g. about which career track to pursue, and some unreliable improvement in people’s ability to see past their own rationalizations in interpersonal conflicts -- but (at least with hindsight bias?) probably no improvements in practical skills large enough to show up on Vladimir Nesov’s poll. Do anyone’s anticipations differ, here?

Regarding claim (2):
I’d a priori expect better effects from attempts to really practice rationality, and to integrate its thinking skills into one’s bones, than from enjoying chatting about rationality from time to time. A community that reads articles about skateboarding, and discusses skateboarding, will probably still fall over when they try to skateboard twenty feet unless they’ve also actually spent time on skateboards.

As to the empirical data: who here has in fact practiced (2) (e.g., has tried to integrate x-rationality into their actual practical decision-making, as in Yvain’s experiment/technique, or has used x-rationality to make major life decisions, or has spent time listing out their strengths and weaknesses as a rationalist with specific thinking habits that they really work to integrate in different weeks, or etc.)? This is a real question; I’d love data. Eliezer is an obvious example; Yvain cites the impressiveness of Eliezer’s 2001 writings as counter-evidence (and it is some counter-evidence), but: (1) Eliezer, in 2001, had already spent a lot of time learning rationality (though without the heuristics and biases literature); and (2) Eliezer was at that time busy with a course of action that, as he now understands things, would have tended to destroy the world rather than to save it. Due to insufficient rationality, apparently.

I’ve practiced a fair amount of (2), but much less than I could imagine some practicing; and, as I noted in the comment Yvain cited, it seems to have done me some good. Broadly similar results for the handful of others I know who try to get rationality into their bones. Less impressive than I’d like, but I tend to interpret this as a sign we should spend more time on skateboards, and I anticipate that we’ll see more real improvement as we do.

The most important actual helps involve that topic we’re not supposed to discuss here until May, but I’d say we were able to choose a much higher-impact way to help the world than people without x-rationality standardly choose, and that we’re able to actually think usefully about a subject where most conversations degenerate into storytelling, availability heuristics, attaching overmuch weight to specific conjunctions, etc. Which, if there’s any non-negligible chance we’re right, is immensely practical. But we’re also somewhat better at strategicness about actually exercising, about using social interaction patterns that work better than the ones we were accidentally using previously (though far from as well as the ones the best people use), about choosing college or career tracks that have better expected results, etc.

Folks with more data here (positive or negative), please share.

Regarding claim (1):
I guess I wouldn’t be surprised by anything from “massive practical help, at least from particular skilled/lucky dojos that get on good tracks” to “not much help at all”. But if we do get “not much help at all”, I’ll feel like there was a thing we could have done, and we didn’t manage to do it. There are loads of ridiculously stupid kinds of decision-making that most people do, and it would be strange if there were no way we could get visible practical benefit from improving on that. Details in later comments.

Comment author: Yvain 09 April 2009 12:27:38PM *  5 points [-]

I agree with almost everything here, with the following caveats:

I. The practical benefits we get from (3) are (I think I'm agreeing with you here) likely to be so small as to be difficult to measure informally; i.e. anyone who claims to have noticed a specific improvement is as likely to be imagining it as really improving. Probably some effects that could be measured in a formal experiment with a very large sample size, but this is not what we have been doing.

II. (2) shows promise but is not something I see discussed very often on Overcoming Bias or Less Wrong. Using the Boyle metaphor, this would be the technology of rationality, as opposed to the science of it. I've seen a few suggestions for "techniques", but they seem sort of ad hoc (I will admit, in retrospect, that many of the times I proposed 'techniques', it was more an attempt to sound like I was thinking pragmatically than something soundly based on good experimental evidence). I've tried to apply specific methods to specific decisions, but never gone so far as to set aside a half hour each day for "rationality practice", nor would I really know what to do with that half hour if I did. I'd like to know more about what you do and what you think has helped.

III. You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn't impress me. Many of the transhumanists here were transhumanists before they were rationalists, and only came to Overcoming Bias out of interest in reading what transhumanist leaders Eliezer and Robin had to say. I think my "conversion" to transhumanism came about mostly because I started meeting so many extremely intelligent transhumanists that it no longer seemed like a fringe crazy-person belief and my mind felt free to judge it with the algorithms it uses for normal scientific theories rather than the algorithms it uses for random Internet crackpottery. Many other OB readers came to transhumanism just because EY and RH explicitly argued for it and did a good job. Still others probably felt pressure to "convert" as an in-group identification thing. And finally, I think transhumanists and x-rationalists are part of that big atheist/libertarian/sci-fi/et cetera personspace cluster Eliezer's been talking about: we all had a natural vulnerability to that meme before ever arriving here. AFAIK Kahneman and Tversky are not transhumanists, Aumann certainly isn't, and I would be surprised if x-rationalists not associated with EY and RH and our group come to transhumanism in numbers greater than their personspace cluster membership predicts.

IV. Given fifty years to improve the Art, I also wouldn't be surprised with anything from "massive practical help" to "not much help at all". I don't know exactly what you mean by "ridiculously stupid decision-making that most people do", but are you sure it's something that should be solved with x-rationality as opposed to normal rationality?

Comment author: AnnaSalamon 09 April 2009 12:52:51PM *  2 points [-]

I don't know exactly what you mean by "ridiculously stupid decision-making that most people do", but are you sure it's something that should be solved with x-rationality as opposed to normal rationality?

I'm sure it's something that could be helped with techniques like The Bottom Line, which most intelligent, science-literate, trying-to-be-“rational” people mostly don't do nearly enough of. Also something that could be helped by paying attention to which thinking techniques lead to what kinds of results, and learning the better ones. Dojos could totally teach these practices, and help their students actually incorporate them into their day-to-day, reflexive decision-making (at least more than most "intelligent, science-literate" people do now; most people hardly try at all). As to heuristics and biases, and probability theory... I do find those helpful. Essential for thinking usefully about existential risk; helpful but non-essential for day-to-day inference, according to my mental but not written observations (I’ve been keeping a written record lately, but not for long enough, and not systematically enough). The probability theory in particular may be hard to teach to people who don’t easily think about math, though not impossible. But I don’t think building an art of rationality needs to be solely about the heuristics and biases literature. Certainly much of the rationality improvement I’ve gotten from OB/LW isn’t that.

Comment author: AnnaSalamon 09 April 2009 02:13:52PM *  1 point [-]

You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn't impress me.

The benefit I’m trying to list isn’t “greater appreciation of transhumanism” so much as “directing one’s efforts to ‘make the world a better place’ in directions that actually do efficiently make the world a better place”.

As to the evidence and its significance:

Even if we skip transhumanism, and look fully outside the Eliezer/Robin/Vassar orbit, folks like Holden Karnofsky of Givewell are impressive, both in terms of ability to actually analyze the world, and in terms of positive impact. You might say it’s just traditional rationality Holden is using -- certainly he didn’t get it from Eliezer -- but it’s beyond the level common among “intelligent, science-literate people” (who mostly donate their money in much less effective ways).

Within transhumanism... I agree that the existing correlation between transhumanism and rationality-emphasis will tend to create future correlation, whether or not rationality helps one see merits in transhumanism. And that’s an important point. But it’s also bizarrely statistically significant that when people show up and say they want to spend their lives reducing AI risks, they’re often people who spent unusual effort successfully becoming better thinkers before they ever heard of Eliezer or Robin, or met anyone else working on this stuff. It’s true that maybe we’re just recognizing “oh, someone who cares about actually getting things right, that means I can relax and believe them” (or, worse, “oh, someone with my brand of tennis shoes, let me join the in-group”). But...

  1. Recognizing that someone else has good epistemic standards and can be believed is rationality working, even without independently deriving the same conclusions (though under the tennis shoe interpretation, not so much);
  2. Many of us (independently, before reading or being in contact with anyone in this orbit) said we were looking for the most efficient use of some time/money, and it’s probably not an accident that trying to become a good thinker, and asking what use of time/money will actually help the world, tend to correlate, and tend to lead to modes of action that actually do help the world.
Comment author: Douglas_Knight 10 April 2009 05:05:45PM 10 points [-]

Michael Vassar:

nerds, scientists, skeptics and the like who like to describe their membership in terms of rationality are [not] noticeably better than average at behavioral rationality, as opposed to epistemic rationality where they are obviously better than average but still just hideously bad.

Simply applying "ordinary rationality" to behavior is extreme. People don't use reason to decide if fashion is important, they just copy. Eliezer's Secret Identities post seems to make a very similar point, which seemed to largely match this post. One point was to get rationality advice from people who actually found it useful, rather than ordinary nerds who fetishize it.

Comment author: mathemajician 11 April 2009 11:02:51AM 37 points [-]

Imagine a world where the only way to become really rich is to win the lottery (and everybody is either risk averse or at least risk neutral). With an expected return of less than $1 per $1 spent on tickets, rational people don't buy lottery tickets. Only irrational people do that. As a result, all the really rich people in this world must be irrational.

In other words, it is possible to have situations where being rational increases your expected performance but at the same time reduces your chances of being a super achiever. Thus, the claim that "rationalists should win" is not necessarily true, even in theory, if "winning" is taken to mean being among the top performers. A more accurate statement would be, "In a world with both rational and irrational agents, the rational agents should perform better on average than the population average."
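A minimal simulation of this lottery world (all parameters invented for illustration) shows both halves of the claim: rational abstainers have the higher average wealth, while the very richest agents are irrational ticket-buyers.

```python
import random

random.seed(0)  # deterministic, purely for illustration

TICKET_PRICE = 1.0
JACKPOT = 500_000.0
WIN_PROB = 1e-6            # expected return per $1 ticket: $0.50 < $1
TICKETS_PER_LIFE = 10_000
BASE_WEALTH = 50_000.0
N_AGENTS = 100_000

# Chance of winning at least once over a lifetime of tickets (~1%).
P_ANY_WIN = 1 - (1 - WIN_PROB) ** TICKETS_PER_LIFE

def lifetime_wealth(buys_tickets):
    wealth = BASE_WEALTH
    if buys_tickets:
        wealth -= TICKET_PRICE * TICKETS_PER_LIFE
        if random.random() < P_ANY_WIN:  # ignore the tiny chance of 2+ wins
            wealth += JACKPOT
    return wealth

rational = [lifetime_wealth(False) for _ in range(N_AGENTS)]
irrational = [lifetime_wealth(True) for _ in range(N_AGENTS)]

mean = lambda xs: sum(xs) / len(xs)
assert mean(rational) > mean(irrational)  # rationalists win on average...
assert max(irrational) > max(rational)    # ...but the richest agent is irrational
```

With these numbers the ticket-buyers give up $5,000 of expected value per lifetime, yet roughly one in a hundred of them ends up with about half a million dollars, so the top of the wealth ranking is dominated by the irrational.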

Comment author: ciphergoth 11 April 2009 12:28:43PM 9 points [-]

There's an extent to which we live in such a world. Many people believe you can achieve your wildest dreams if you only try hard enough, because by golly, all those people on the TV did it!

Comment author: Hans 13 April 2009 09:55:51AM 8 points [-]

But many poor/middle-class people also believe that they can never become rich (except for the lottery) because the only ways to become rich are crime, fraud, or inheritance. And this leads them to underestimate the value of hard work, education, and risk-taking.

The median rationalist will perform better than these cynics, and his average wealth will also be higher, assuming he accurately assesses his chances of becoming successful.

Comment author: NicoleTedesco 15 January 2012 03:57:33PM 1 point [-]

It can be rational to accept the responsibility of high risk/high reward behavior, on specific occasions and under specific circumstances. The trick is recognizing those occasions and circumstances and also recognizing when your mind is fooling you into believing "THIS TIME IS DIFFERENT".

A rational agent is Warren Buffett. An irrational agent is Ralph Kramden. Both accept high risk/high reward situations. One is rational about that responsibility. The other is not.

Also, in a world of both rational and irrational agents, in a world where the rational agent must depend upon the irrational, it is sometimes rational to think irrationally!

Comment author: AspiringKnitter 17 January 2012 07:41:50AM 8 points [-]

By "decision", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought.

Consider yourself lucky if that doesn't describe getting up in the morning for you.

Anyway, not that this counts at all (availability bias), but I made a rational decision a couple of days ago to get some sleep instead of working later into the night on homework. I did exactly that.

In fact, I just made a rational decision-- just now-- to quit reading the article I was reading, work on homework for a few minutes and then go to bed. I haven't gotten to bed yet. Otherwise, that's going well.

Comment author: [deleted] 18 January 2012 04:36:54AM 2 points [-]

Consider yourself lucky if that doesn't describe getting up in the morning for you.

Can you rig your mornings so that staying in bed just doesn't work? I use two alarm clocks, one set for two minutes after the other; the one that goes off two minutes later is out of arm's reach, so I have to either get out of bed, or sleep through it.

Comment author: AspiringKnitter 18 January 2012 07:32:46AM *  0 points [-]

Not really worth it, but thanks. :) My current strategy is just to wait a few minutes, which essentially always does the trick unless I'm totally exhausted and need more sleep. I appreciate the thought, though.

Comment author: Desrtopa 23 January 2011 09:55:39PM 8 points [-]

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

I'm surprised you expected most of your readers to disagree. I think it's pretty clear that the techniques we work on here aren't making us much more successful than most people.

Humans aren't naturally well equipped to be extreme rationalists. The techniques themselves may be correct, but that doesn't mean we can realistically expect many people to apply them. To use the rationality-as-martial-art metaphor, if you taught Shaolin kung fu to a population of fifty-year-old couch potatoes, they would not be able to perform most of the techniques correctly, and you should not expect to hear many true accounts of them winning fights with their skills.

Perhaps with enough work we could refine the art of human instrumental rationality into something much better than what we've got, maybe achieving a .3 correlation with success rather than a .1; but while a fighting style developed explicitly for fifty-year-old couch potatoes might give your class better results than other styles, you can only expect so much out of it.

Comment author: tjohnson314 08 May 2015 12:04:43AM 7 points [-]

Here's one example of a change I've made recently, which I think qualifies as x-rationality. When I need to make a decision that depends on a particular piece of data, I now commit to a decision threshold before I look at the data. (I feel like I took this strategy from a LW article, but I don't remember where now.)

For example, I recently had to decide whether it would be worth the potential savings in time and money to commute by motorcycle instead of by car. I set a threshold for what I considered an appropriate level of risk beforehand, and then looked up the accident statistics. The actual risk turned out to be several times larger than that.

Had I looked at the data first, I would have been tempted to find an excuse to go with my gut anyway, which simply says that motorcycles are cool. (I'm a 23-year-old guy, after all.) A high percentage of motorcyclists experience a serious or even fatal accident, so there's a decent chance that x-rationality saved me from that.
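The pattern can be sketched as a tiny precommitment helper (the function name and every figure below are hypothetical illustrations, not the commenter's actual numbers): the rule is fixed before the data is seen, so the data can't be rationalized away afterward.

```python
def precommitted_decision(max_risk_ratio):
    """Fix an acceptance threshold *before* looking at the data, and
    return a function that later applies that rule to the observed value."""
    def decide(observed_risk_ratio):
        return observed_risk_ratio <= max_risk_ratio
    return decide

# Step 1: commit. Accept the motorcycle only if its fatality risk is at
# most 2x that of the car (a hypothetical personal threshold).
decide_motorcycle = precommitted_decision(2.0)

# Step 2: only now look up the statistics. Suppose they show the risk is
# roughly 25x that of a car (an illustrative figure, not a real lookup).
assert decide_motorcycle(25.0) is False  # the precommitted rule says no
assert decide_motorcycle(1.5) is True    # below threshold it would say yes
```

The crucial feature is the ordering: once the threshold is set, the later observation can only be compared against it, not argued with.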

Comment author: Fossegrimen 08 May 2015 07:28:23AM *  5 points [-]

Huh.

I did the same thing and came to the exact opposite conclusion and have been commuting by two-wheeler for 15 years now.

What swayed me was:

A huge proportion of the accidents involved really excessive speed.

A similarly huge proportion happened to untrained motorcyclists.

So: if I don't speed (much) and take the time to practice regularly on a track, preferably with an instructor, I have eliminated just about all the serious accidents. In actuality I have had zero accidents outside the track, and the "accidents" on the track have been deliberate tests of the limits of myself and the bike (and on a bike designed to take slides without permanent damage).

The cash savings are higher in Europe due to taxes on fuel and vehicles, and the size of the bike is more appreciated in cities that were designed in the Middle Ages, so the upside is larger too; but it seems that we don't have anything like the same risk tolerance.

edit: also, is it possible that motorcycling is a lot safer in Europe than in the US? Assuming you are from the US, ofc.

Comment author: tjohnson314 14 May 2015 05:42:48PM 1 point [-]

I'm from California, where it's legal to split lanes. Most places don't allow that.

I could just decide not to, but the ability to skip traffic that way is probably the single largest benefit of having a motorcycle.

Comment author: Fossegrimen 28 May 2015 07:51:38AM 0 points [-]

Most states don't allow that, but in Europe it's standard practice. I probably wouldn't bother with the bike if I couldn't.

Comment author: AlexU 10 April 2009 01:52:57PM *  7 points [-]

I have yet to hear what anyone even means by "rationalism" or "rationalist," let alone "x-rationality." People often refer to the "techniques" or "Art of rationality" (a particularly irksome phrase), though as best I can tell, these consist of Bayes' theorem and a half-dozen or so logical fallacies that have likely been known since the time of Aristotle. Now, I've had an intuitive handle on Bayes' theorem since learning of it in high school pre-calc, and spotting a logical fallacy isn't particularly tough for anyone accustomed to close reading of philosophy or doing science (or who's studied for the LSAT). So apart from simply calling oneself a "rationalist" and feeling really good about being part of some "rationalist community" (much like Dennett's tone-deaf coining of the term "brights" to describe atheists), is there actually anything to this?

Comment author: pjeby 09 April 2009 05:29:39AM 34 points [-]

I'm not sure if it was your intent to point this out by contrast, but I would like to point out that a reasonable art of "kicking" would not rely on you making conscious decisions, let alone explicitly rational ones. Rather, it would rely on you ensuring that your subconscious has been freed from sources of bias ahead of time, and is therefore able to safely leap to conclusions in its usual fashion. An art that requires you to think at the time things are actually happening is not much of an art.

Case in point: when reading "Stuck In The Middle With Bruce", I became aware of a subconsciously self-sabotaging behavior I'd done recently. So I "kicked" it out by crosslinking the behavior with its goal-satisfaction state. It would be crazy to wait until the next occasion for that behavior to strike, and then try to reason my way around it, when I can just fix the bloody thing in the first place. (Interestingly, I mentioned the story to my wife, and described how it related to my own behavior... and she thought of a different sort of self-sabotage she was doing, and applied the same mindhack. So, as of now, I'd say that story was one of the top 5 most valuable things I've gotten from LW.)

Now, in the case of extinguishing a behavior, there's no way you can absolutely prove you've fixed something permanently; the best you can do is show that the thought process that used to produce an autonomous response before you applied a technique no longer produces the same response afterward. Also, sometimes you catch a break: you find yourself in a situation, expecting yourself to do the same old stupid thing you've done before, and then you find you don't need to, or notice a few seconds later that you already did something completely different, and a much better choice.

Truth is, our brains really aren't that bad at making decisions, once you take out the "priority overrides" that mess things up.

Anyway, I'm rambling a bit now. The point is, "kicking" is generally not something you do at the time -- you do it in advance of the next time....

Because your brain is faster than you are.

Comment author: [deleted] 09 April 2009 05:20:10PM 22 points [-]

I voted this up, but I'm replying because I think it's a critical point.

Our brains are NOT designed to make conscious decisions about everything that crosses our path. Trying to do that is like trying to walk everywhere instead of driving: it's technically possible, but it will take you forever and will be exhausting.

Our brains seem to work more like this: the conscious mind processes whatever it is we're doing at the time, then feeds that processed data into the subconscious for use later. Sure, it jumps in every once in a while for something important, but generally it sits back and lets your subconscious do the driving.

Rationality should be about putting the best processed information down into your subconscious, so it works the way you'd like it to. Trying to do everything consciously is a poor use of your brain, as it 1) ignores the way your brain is designed to function and 2) forgoes the use of the powerful subconscious circuitry that makes up an enormous part of it.

Comment author: Jonathan_Graehl 09 April 2009 07:37:35PM 9 points [-]

What does "crosslinking the behavior with its goal-satisfaction state" mean? Specifically, I'm unable to guess what you mean by "crosslinking" and "the goal-satisfaction state" (of a behavior).

Comment author: pjeby 09 April 2009 07:43:39PM 1 point [-]

What does "crosslinking the behavior with its goal-satisfaction state" mean? Specifically, I'm unable to guess what you mean by "crosslinking" and "the goal-satisfaction state" (of a behavior).

More details can be found in this comment.

Comment author: roland 10 April 2009 12:21:26AM 2 points [-]

I had the same question as Jonathan and I've read the comment you mentioned. Where can we read/learn more about this technique?

Comment author: pjeby 10 April 2009 01:50:06AM *  8 points [-]

I had the same question as Jonathan and I've read the comment you mentioned. Where can we read/learn more about this technique?

It's based on a technique called "Core Transformation", developed by Connirae Andreas and Tamara Andreas, and it's discussed in a book of the same name. (I linked to it once before when someone asked about this a few weeks ago, and was severely downmodded for some reason, so you'll have to find it yourself.)

My own version of the technique is a streamlined and stripped-down variation that removes a certain amount of superstition and ritual. (Among other things, I drop the "parts" metaphor, which some schools of NLP now consider to have been a bad idea in the first place.)

The technique works by using imagination to elicit the reward states associated with a behavior, going to higher and higher levels of abstraction to reach the top (or root?) of a person's reward tree -- usually a quasi-mystical state like inner peace, oneness, compassion, or something like that. (These "core states" are a good candidate for the "god-shaped hole" in humans, btw.)

Anyway, once you have access to such a state, it can be used as a reinforcer for alternative behaviors, as it's stronger than the diluted intermediate versions found at other levels of the person's goal tree. (More precisely, it can be used to extinguish the conditioned appetite that drives the problem behavior.)

I teach this method and use it in coaching; my wife and I also use it personally. I'd link to my own workshops and recordings on the subject as well, but since I was downmodded for referring to a site where you could buy someone else's book, I shudder to imagine what would happen if I linked to a site where you could buy my products or services. ;-)

Comment author: roland 10 April 2009 02:12:44AM 3 points [-]

I teach this method and use it in coaching; my wife and I also use it personally. I'd link to my own workshops and recordings on the subject as well, but since I was downmodded for referring to a site where you could buy someone else's book, I shudder to imagine what would happen if I linked to a site where you could buy my products or services. ;-)

Please post the link. And why should you be afraid of downmodding? I have been downmodded for saying things that are true (at least IMHO). Don't give that much importance to the mods!

Comment author: pjeby 10 April 2009 02:58:10AM 2 points [-]

And why should you be afraid of downmodding?

I'm not. I'm simply attempting to respect the wishes of others regarding what should or should not be posted here.

Please post the link

Googling "Core Transformation" and "Gateway of Desire" (as phrases in quotes) will get you the links. Don't be confused by something else called "Quantum Touch - Core Transformation"; it's something unrelated (thank goodness).

Comment author: MBlume 10 April 2009 03:03:01AM 5 points [-]

People are trying to eliminate spam. Spammers tend to include links to outside services which cost money. Thus, your providing such a link gives you the superficial appearance of a spammer, and you got downmodded accordingly. You are not a spammer, you have participated in good faith in this community, at great personal effort, and contributed many useful insights as a result. I think by now, most people are aware of this, and you should not need to worry about giving the appearance of spamming.

Comment author: ciphergoth 10 April 2009 09:32:58AM *  2 points [-]

http://coretransformation.org/ appears to be the main website. This Google search finds related materials. All I could find on Wikipedia was this article on Steve Andreas.

Comment author: gjm 10 April 2009 07:58:45PM 8 points [-]

The fact that everything I can find on the web carefully avoids giving details and instead takes the form "We have these fantastic techniques that can solve most of your problems; sign up for our seminars and we'll teach them to you" is ... not promising.

Promising the world, giving few details, and insisting on being paid before saying anything more, seems to me to be strongly correlated with dishonesty and cultishness. Since pjeby seems like a valuable member of this community, I hope this case happens to be different; but I'd like to see some evidence.

Comment author: roland 10 April 2009 03:31:35AM 1 point [-]

I'm not. I'm simply attempting to respect the wishes of others regarding what should or should not be posted here.

Well, you didn't grant my wish for a simple link; I have to google now. How sad. As for the wishes of others, would you rather not post a truth than be downvoted by the majority?

Comment author: Emile 10 April 2009 08:20:26AM 1 point [-]
Comment author: MendelSchmiedekamp 09 April 2009 03:51:30PM *  2 points [-]

Absolutely, learning to work with your subconscious is a necessity. After all it does far more computation than your conscious mind does.

Of course, you ought to explore the techniques that let you take positive advantage of it too.

Comment author: roland 09 April 2009 10:26:29PM 6 points [-]

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

Rationality is not enough to pick the right stocks. You also need the willpower to read the vast amount of material that would enable you to make those picks.

Comment author: AnnaSalamon 09 April 2009 11:08:13AM *  6 points [-]

Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

Remember your post on haunted rationalists, and Eliezer’s reply about how it’s possible to successfully work to accept rational beliefs even with the not-so-conscious, not-so-verbal parts of oneself that might continue to believe in ghosts after one rationally understands the arguments against?

It sounds like maybe you mean “rationality” (or “x-rationality”) to include only “conscious processes that one employs to route around natural biases, with one’s verbal centers, on purpose”, while Eliezer is using “rationality” to mean “extra bonus sanity” or “trying to get one’s whole mind, impression-making-systems, decision-making-systems, etc., in good contact with all the evidence and with one’s own real concerns” (e.g., in the manner RichardKennaway describes changing his decision-making). It’s this latter art that I’d like to improve in, at least.

Comment author: simpleton 09 April 2009 04:22:51AM *  17 points [-]

If in 1660 you'd asked the first members of the Royal Society to list the ways in which natural philosophy had tangibly improved their lives, you probably wouldn't have gotten a very impressive list.

Looking over history, you would not have found any tendency for successful people to have made a formal study of natural philosophy.

Comment author: Yvain 09 April 2009 04:48:08AM *  21 points [-]

It would be overconfident for me to say rationality could never become useful. My point is just that we are acting like it's practically useful right now, without very much evidence for this beyond our hopes and dreams. Thus my last sentence - that "crossing the Pacific" isn't impossible, but it's going to take a different level of effort.

If in 1660, Robert Boyle had gone around saying that, now that we knew Boyle's Law of gas behavior, we should be able to predict the weather, and that that was the only point of discovering Boyle's Law and that furthermore we should never trust a so-called chemist or physicist except insofar as he successfully predicted the weather - then I think the Royal Society would be making the same mistake we are.

Boyle's Law is sort of helpful in understanding the weather, sort of. But it's step one of ten million steps, used alone it doesn't work nearly as well as just eyeballing the weather and looking for patterns, and any attempt to judge applicants to the Royal Society on their weather prediction abilities would have excluded some excellent scientists. Any attempt to restrict gas physics itself to things that were directly helpful in predicting the weather would have destroyed the science, ironically including the discoveries two hundred years down the road that were helpful in weather prediction.

Summed up: With luck, (some) science can result in good practical technology. But demanding the technology too soon, or restricting science to only the science with technology to back it up, hurts both science and technology.

(There is a difference between verification and technology: Boyle was able to empirically test his gas law, but not practically apply it. This distinction may be fuzzier in rationality.)

Comment author: badger 09 April 2009 06:00:19AM *  10 points [-]

I'm confused about this article. I agree with most of what you've said, but I'm not sure exactly what the point is. I thought the entire premise of this community was that more is possible, but that we're only "less wrong" at the moment. I didn't think there was any promise of results from the current state of the art. Is this post a warning, or am I overlooking some trend?

I agree we shouldn't see x-rationality as practically useful now. You don't rule out rationality becoming the superpower Eliezer portrays in his fiction, but that is certainly a long way off. Boyle's Law and weather prediction is an apt analogy. Just trying harder to apply our current knowledge won't go very far, but there should be some productive avenues.

I think I'd understand your purpose better if you could answer these questions: In your mind, how likely is it that x-rationality could be practically useful in, say, 50 years? What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?

Just trying to understand.

Comment author: Eliezer_Yudkowsky 09 April 2009 12:10:10PM 18 points [-]

The above would be component 1 of my own reply.

Component 2 would be (to say it again) that I developed the particular techniques that are to be found in my essays, in the course of solving my problem. And if you were to try to attack that or a similar problem you would suddenly find many more OB posts to be of immensely greater use and indeed necessity. The Eliezer of 2000 and earlier was not remotely capable of getting his job done.

What you're seeing here is the backwash of techniques that seem like they ought to have some general applicability (e.g. Crisis of Faith) but which are not really a whole developed rationalist art, nor made for the purpose of optimizing everyday life.

Someone faced with the epic Challenge Of Changing Their Mind may use the full-fledged Crisis of Faith technique once that year. How much benefit is this really? That's the question, but I'm not sure the cynical answer is the right one.

What I am hoping to see here is others, having been given a piece of the art, taking that art and extending it to cover their own problems, then coming back and describing what they've learned in a sufficiently general sense (informed by relevant science) that I can actually absorb it. For that which has been developed to address e.g. akrasia outside the rationalist line, I have found myself unable to absorb.

Comment author: thomblake 09 April 2009 12:53:31PM 9 points [-]

The Eliezer of 2000 and earlier was not remotely capable of getting his job done.

Are you more or less capable of that now? Do you have evidence that you are? Is the job tangibly closer to being completed?

Comment author: Annoyance 09 April 2009 05:16:35PM 0 points [-]

I wouldn't bother with those questions if I were you, thomblake. They've never been answered here, and are unlikely ever to be answered, here or elsewhere.

The goal here is to talk about being rational, not actually being so; to talk about building AIs, not show progress in doing so or even to define what that would be.

It's about talking, not doing.

Comment author: gjm 10 April 2009 08:27:16PM 2 points [-]

There are many different people here. I think talking about "the goal" is nonsense.

Comment author: Yvain 09 April 2009 01:55:35PM *  13 points [-]

But you're not a good test case to see whether rationality is useful in everyday life. Your job description is to fully understand and then create a rational and moral agent. This is the exceptional case where the fuzzy philosophical benefits of rationality suddenly become practical.

One of the fundamental lessons of Overcoming Bias was "All this stuff philosophers have been debating fruitlessly for centuries actually becomes a whole lot clearer when we consider it in terms of actually designing a mind." This isn't surprising; you're the first person who's really gotten to use Near Mode thought on a problem previously considered only in Far Mode. So you've been thinking "Here's this nice practical stuff about thinking that's completely applicable to my goal of building a thinking machine", and we've been thinking, "Oh, wow, this helps solve all of these complicated philosophical issues we've been worrying about for so long."

But in other fields, the rationality is domain-specific and already exists, albeit without the same thunderbolt of enlightenment and awesomeness. Doctors, for example, have a tremendous literature on evidence and decision-making as they relate to medicine (which is one reason I get so annoyed with Robin sometimes). An x-rationalist who becomes a doctor would not, I think, necessarily be a significantly better doctor than the rest of the medical world, because the rest of the medical world already has an overabundance of great rationality techniques and methods of improving care that the majority of doctors just don't use, and because medicine takes so many skills besides rationality that any minor benefits from the x-rationalist's clearer thinking would get lost in the noise. To make this more concrete: I don't think good doctors are more likely to be atheists than bad doctors, though I do think good AI scientists are more likely to be atheists than bad AI scientists. I think this paragraph about doctors also applies to businessmen, scientists, counselors, et cetera.

When I said that we had a non-trivial difference of opinion on your secret identity post, this was what I meant: that a great x-rationalist might be a mediocre doctor; that maybe if you'd gone into medicine instead of AI you would have been a mediocre doctor and then I wouldn't be "allowed" to respect you for your x-rationality work.

Comment author: Eliezer_Yudkowsky 09 April 2009 02:18:26PM 23 points [-]

An x-rationalist who becomes a doctor would not, I think, necessarily be a significantly better doctor than the rest of the medical world, because the rest of the medical world already has an overabundance of great rationality techniques and methods of improving care that the majority of doctors just don't use

Evidence-based medicine was developed by x-rationalists. And to this day, many doctors ignore it because they are not x-rationalists.

Comment author: Yvain 09 April 2009 09:00:01PM 10 points [-]

...huh. That comment was probably more helpful than you expected it to be. I'm pretty sure I've identified part of my problem as having too high a standard for what makes an x-rationalist. If you let the doctors who developed evidence-based medicine in...yes, that clears a few things up.

Comment author: Eliezer_Yudkowsky 09 April 2009 09:10:52PM 13 points [-]

One thinks particularly of Robyn Dawes - I don't know him from "evidence-based medicine" per se, but I know he was fighting the battle to get doctors to acknowledge that their "clinical experience" wasn't better than simple linear models, and he was on the front lines against psychotherapy shown to perform no better than talking to any bright person.

If you read "Rational Choice in an Uncertain World" you will see that Dawes is pretty definitely on the level of "integrate Bayes into everyday life", not just Traditional Rationality. I don't know about the historical origins of evidence-based medicine, so it's possible that a bunch of Traditional Rationalists invented it; but one does get the impression that probability theorists trying to get people to listen to the research about the limits of their own minds, were involved.

Comment author: Yvain 10 April 2009 02:59:04AM 23 points [-]

After thinking on this for a while, here are my thoughts. This should probably be a new post but I don't want to start another whole chain of discussions on this issue.

  1. I had the belief that many people on Less Wrong believed that our currently existing Art of Rationality was sufficient or close to sufficient to guarantee practical success or even to transform its practitioner into an ubermensch like John Galt. I'm no longer sure anyone believes this. If they do, they are wrong. If anyone right now claims they participate in Less Wrong solely out of a calculated program to maximize practical benefits and not because they like rationality, I think they are deluded.

  2. Where x-rationality is defined as "formal, math-based rationality", there are many cases of x-rationality being used for good practical effect. I missed these because they look more like three percent annual gains in productivity than like Brennan discovering quantum gravity or Napoleon conquering Europe. For example, doctors can use evidence-based medicine to increase their cure rate.

  3. The doctors who invented evidence-based medicine deserve our praise. Eliezer is willing to consider them x-rationalists. But there is no evidence that they took a particularly philosophical view towards rationality, as opposed to just thinking "Hey, if we apply these tests, it will improve medicine a bit." Depending on your view of socialism, the information that one of these inventors ran for parliament on a socialist platform may be an interesting data point.

  4. These doctors probably had mastery of statistics, good understanding of the power of the experimental method, and a belief that formalizing things could do better than normal human expertise. All of these are rationalist virtues. Any new doctor who starts their career with these virtues will be in a better position to profit from and maybe expand upon evidence-based medicine than a less virtuous doctor, and will reap great benefits from their virtues. Insofar as Less Wrong's goal is to teach people to become such doctors, this is great...

  5. ...except that epidemiology and statistics classes teach the same thing with a lot less fuss. Less Wrong's goal seems to be much higher. Less Wrong wants a doctor who can do that, and understand their mental processes in great detail, and who will be able to think rationally about politics and religion and turn the whole thing into a unified rationalist outlook.

  6. Or maybe it doesn't. Eliezer has already explained that a lot of his OB writing was just stuff that he came across trying to solve AI problems. Maybe this has turned us into a community of people who like talking about philosophy, and that really doesn't matter much and shouldn't be taught at rationality dojos. Maybe a rationality dojo should be an extra-well-taught applied statistics class and some discussion of important cognitive biases and how to avoid them. It seems to me that a statistics class plus some discussion of cognitive biases would be enough to transform an average doctor into the kind of doctor who could invent or at least use evidence-based medicine and whatever other x-rationality techniques might be useful in medicine. With a few modifications, the same goes for business, science, and any other practical field.

  7. I predict the marginal utility of this sort of rationality will decline quickly. The first year of training will probably do wonders. The second year will be less impressive. I doubt a doctor who studies this rationality for ten years will be noticeably better off than one who studies it for five, although this may be my pessimism speaking. Probably the doctor would be better off spending those second five years studying some other area of medicine. In the end, I predict these kinds of classes could improve performance in some fields 10-20% for people who really understood them.

  8. This would be a useful service, but it wouldn't have the same kind of awesomeness as Overcoming Bias did. There seems to be a second movement afoot here, one to use rationality to radically transform our lives and thought processes, moving so far beyond mere domain-specific reasoning ability that even in areas like religion, politics, morality, and philosophy we hold only rational beliefs and are completely inhospitable to any irrational thoughts. This is a very different sort of task.

  9. This new level of rationality has benefits, but they are less practical. There are mental clarity benefits, and benefits to society when we stop encouraging harmful political and social movements, and benefits to the world when we give charity more efficiently. Once people finish the course mentioned in (6) and start on the course mentioned in (8), it seems less honest to keep telling them about the vast practical benefits they will attain.

  10. This might have certain social benefits, but you would have to be pretty impressive for conscious-level social reasoning to get better than the dedicated unconscious modules we already use for that task.

  11. I have a hard time judging opinion here, but it does seem like some people think sufficient study of the rationality described in (8) can turn someone into an ubermensch. But its practical benefits beyond those offered by the rationality in (6) seem low. I really like the rationality in (8), but only because I think it's philosophically interesting and can improve society, not because I think it can help me personally.

  12. In the original post, I was using x-rationality in a confused way, but I think to some degree I was thinking of (8) rather than (6).

Comment author: pjeby 09 April 2009 08:35:58PM 1 point [-]

For that which has been developed to address e.g. akrasia outside the rationalist line, I have found myself unable to absorb.

Why do you suppose that is?

Comment author: Yvain 09 April 2009 12:55:41PM 1 point [-]

I'll admit I might be attacking a straw man, but if you read the posts linked to on the very top, I think there are at least a few people out there who believe it, or who don't consciously believe it but act as if it's true.

How likely is it that x-rationality could be practically useful in, say, 50 years?

It depends on how you reduce "practically useful". Reduce it to "a person randomly assigned to take rationality classes two hours a week plus homework for a year will make on average ten percent more money than a similar person who doesn't", and my wild, completely unsubstantiated guess is that it's 50% likely. But I'd give similar numbers to other types of self-improvement classes, like Carnegie seminars and that sort of thing.

What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?

If by "useful practice of rationality" you mean the way Eliezer imagines it, I think there should be more focus on applying the rationality we have rather than delving deeper and deeper into the theory, but if I could say more than that, I'd be rich and you'd be paying me outrageous hourly fees to talk about it :)

I do think non-godlike levels of rationality have far more potential to help us in politics than in daily life, but that's a minefield. In terms of easy profits we should focus the movement there, but in terms of remaining cohesive and credible it's not really an option.

Comment author: TheOtherDave 05 December 2010 08:58:55PM 4 points [-]

Yes, yes, yes, yes, yes. And also yes.

I had a similar reaction to the fictional rationalist initiation ceremony.

That said, on further consideration, I'm not sure the "Bayesian Conspiracy" has a choice, given its goals.

It's possible that, even though these sorts of policies do turn away perfectly competent rationalists, they are the only alternative to ending up with a comfortable community of one-or-two-sigmas-above-the-mean rationalists rather than an ultra-elite x-rationality club that can bootstrap itself into the sort of excellence that we enjoy labelling in Japanese.

This has nothing to do with rationality, extreme or otherwise. For any property P, if you want to maintain a minimum standard of P within a group, you need some way to test for P. If you have a highly reliable test for P that has negligible false positives and few false negatives, great, use that! But lacking one, a test with negligible false positives and lots of false negatives might be good enough, if you can afford to exclude a lot of potential. (Or even a series of mostly independent tests might work well enough, even when no individual test is highly reliable, as long as you exclude anyone who fails any of the tests... which also raises your false-negative rate.)
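The arithmetic behind that parenthetical can be sketched numerically. This is a toy model with invented per-test error rates (the 0.2 and 0.1 figures are illustrative assumptions, not anything from the comment): chaining k independent tests and admitting only those who pass all of them drives the false-positive rate down geometrically, while the false-negative rate climbs.

```python
# Toy model: require a pass on all k independent screening tests,
# each with the same per-test error rates (rates invented for illustration).

def combined_rates(fpr, fnr, k):
    """Error rates after requiring a pass on all k independent tests.

    fpr: per-test false-positive rate (a non-P candidate passes one test)
    fnr: per-test false-negative rate (a P candidate fails one test)
    """
    combined_fpr = fpr ** k            # a non-P candidate must slip past every test
    combined_fnr = 1 - (1 - fnr) ** k  # a P candidate is excluded if any test rejects them
    return combined_fpr, combined_fnr

for k in (1, 2, 4):
    fp, fn = combined_rates(fpr=0.2, fnr=0.1, k=k)
    print(f"{k} test(s): false positives {fp:.4f}, false negatives {fn:.4f}")
```

With these made-up numbers, four chained tests shrink false positives from 20% to 0.16%, but false negatives grow from 10% to about 34%, which is exactly the "exclude a lot of potential" cost described above.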

So, hey, if the ultra-elite rationality club is meeting somewhere and plotting optimized utility, great. More power to them. The most sensible thing for them to do is not even let me waste their time by knowing they exist. LessWrong certainly isn't that club; it's at best a self-selecting group of people who think that club would be a good thing if it existed, plus some others who think they ought to be in it.

Which is OK with me.

Unrelatedly, a specific quibble:

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

Well, and enough to actually do all the necessary research to obtain the data from which you could rationally conclude that a given stock will go up. And to continue attending to that data (and researching additionally relevant data) so that you can make rational decisions about whether to sell or hold those stocks tomorrow, next week, next month, etc.

Comment author: xamdam 30 June 2010 09:01:16PM 4 points [-]

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

The availability of investing does NOT disprove that akrasia is the complete explanation. Successful investing is rationality + financial education + a lot of work (Buffett is rumored to read an incredible number of accounting statements), and is hence subject to akrasia.

Comment author: Z_M_Davis 24 May 2009 06:46:09AM 4 points [-]

I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

I'm working on this.

Comment author: MrShaggy 24 April 2009 05:07:54AM 4 points [-]

"Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts."

I don't understand the implications of seeing it as part of the same art or a different art altogether.

Comment author: AnnaSalamon 09 April 2009 01:10:20PM *  8 points [-]

This experiment seems easy to rig; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. .... [O]ne way to fail your Art is to expect more of it than it can deliver.... Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.

To make a somewhat uncharitable paraphrase: you read many articles on rationality, did not actually use them to change the way you make decisions, and found that rationality hasn’t changed the results of your decisions much. You conclude not that you aren’t practicing rationality, but that rationality can’t deliver practical goods at all, at least not as taught.

I agree we need practices for better incorporating OB/LW/new techniques of rationality into our actual practice of inference and decision-making. But it seems like the “and I’m not actually using this stuff much” result of your experiment should prevent “it hasn’t made my life much better” from telling you all that much about whether the OB/LW inference or decision-making techniques, if one does practice them, could make one’s life better.

Comment author: Yvain 09 April 2009 08:56:26PM *  38 points [-]

I accept that to some degree my results say more negative things about me than about rationality, but insofar as I may be typical we need to take them into account when considering how we're going to benefit from rationality.

...my inability to communicate clearly continues to be the bane of my existence. Let me try a strained metaphor.

Christianity demands its adherents "love thy enemy", "turn the other cheek", "judge not lest ye be judged", "give everything to the poor", and follow many other pieces of excellent moral advice. Any society that actually followed them all would be a very nice place to live.

Yet real-world Christian societies are not such nice places to live. And Christians say this is not because there is anything wrong with Christianity, but because Christians don't follow their religion enough. As the old saying goes, "Christianity has not been tried and found wanting, it has been found difficult and left untried." There's some truth to this.

But it doesn't excuse Christianity's failure to make people especially moral. If Christianity as it really exists can't translate its ideals into action, then it's gone wrong somewhere. At some point you have to go from "Christianity is perfect but most people can't apply it" to "Christianity is flawed because most people can't apply it."

The Christians' problem isn't that there aren't enough Christians. And it's not that Christians aren't devout and zealous enough. And it's not even that Christians don't understand what their faith expects of them. Their problem is that the impulse to love thy neighbor gets lost somewhere between theory and action. My urge as an outsider is to blame it on a combination of akrasia, lack of sufficient self-consciousness, and people who accept Christianity 100% on the conscious level but don't "feel it in their bones".

If I were a theologian, I would be recommending to my fellow Christians one of two things:

First, that they spend a whole lot less time in Bible study than they do right now, because they already know a whole lot more Bible than they actually use, and teaching them more Bible isn't going to solve that problem. Instead they need to be spending that time thinking of ways to solve their problem with applying the Bible to real life.

Or second, that they stop talking about how moral Christianity makes them and how a Christian society will always be a moral society and so on and so it's beneficial that everyone learn Christianity, and just admit that Christians probably aren't that much more moral than atheists and that they're in it because they like religion. In that case they could go on talking about the Bible to their hearts' content.

Now, to some degree, we can blame individual Christians for the failure of Christianity to transform morality for the better. But we also have to wonder if maybe it's not even addressing the real problem, which is less of a lack of moral ideals than a basic human inability to translate moral ideals into action.

Right now I find myself in the same situation as a devout Christian who really wants to be good, but has noticed that studying lots of Bible verses doesn't help him. Less Wrong has thus far seemed to me like a Bible study group where we all get together and talk about how with all this Bible studying we'll all be frickin saints soon. Eliezer's community-building posts seem like Catholics and Episcopalians arguing on the best way to structure the clergy. It's all very interesting, but...

...but I feel like there is insufficient appreciation that the Art of Knowing the Bible and the Art of Becoming a Saint are two very different arts, that we haven't really begun developing the second, and that religion has a bad track record of generating saints anyway.

Your objection sounds too much like saying that since I'm not a saint yet, I must simply not be applying my Bible study right. Which is in one sense true, but centuries of Christians telling each other that hasn't created any more saints. So people need to either create an Art of Becoming A Saint worthy of the name, or stop insisting that we will soon be able to create saints on demand.

Comment author: roland 09 April 2009 10:52:16PM 6 points [-]

but because Christians don't follow their religion enough.

Well, as a former christian (now atheist, thanks to OB/Yudkowsky) I have to disagree. Christianity doesn't work regardless of whether you live by it or not. I don't claim that I lived 100% as expected, but I implemented some things quite literally, like "turn the other cheek" (btw, taking this literally is a misinterpretation of the real meaning: http://en.wikipedia.org/wiki/Turn_the_other_cheek#Figurative_interpretation). I can say: it's nonsense, it doesn't work, it only makes other people take advantage of you, and yes, I'm talking from experience.

Comment author: [deleted] 18 January 2012 04:30:35AM 0 points [-]

"Turn the other cheek" is a phrase with a natural figurative meaning—"expose yourself to further aggression". Are you saying that this figurative meaning should itself be taken figuratively, or just that "turn the other cheek" should not be interpreted literally literally?

Comment author: roland 18 January 2012 06:04:35PM 2 points [-]

Here is the whole:

Matthew 5:39

But I tell you, Do not resist an evil person. If someone strikes you on the right cheek, turn to him the other also.

"Turn the other cheek" can only be understood if you know the cultural context of the time which goes as follows:

The left hand was considered unclean, so people used the right hand, and for a person to strike your right cheek with his right hand implies that he is giving you a backhand slap. This was understood as a humiliating gesture that a higher-ranking person would dish out to someone lower in status, e.g. a master to his servant. Now, if you received such a slap and proceeded to offer the other cheek, you would put the higher-ranking person in a conundrum. He can no longer reach your right cheek with a backhand slap; the only option he has left is attacking you on the left cheek. But attacking on the left didn't have the same social connotation; it would probably just be interpreted as de facto aggressive behavior, implying that the higher-ranking person is acknowledging you as a social equal and also giving you the right to fight back.

The same logic is also present in "walking another mile" and "leaving the undergarment" (which are part of the same biblical passage).

So we can see that offering the other cheek puts the other person in check and has nothing to do with "exposing oneself to further aggression" or being meek and humble. It is in fact a gesture of defiance, and a very clever one.

Comment author: lavalamp 23 January 2012 08:06:42PM 6 points [-]

Former christian here. Every once in a while, I catch myself about to--or worse, in the middle of--recounting an explanation like the one you just gave for which I have no evidence other than some pastor's word. On more than one of those occasions, the recalled explanation was just wrong. I haven't googled your explanation here, so it's possible that there's lots of evidence for it, but my prior for that is fairly low (it seems like a really specific piece of cultural information, and it pattern matches against "story that reinterprets well known biblical passage in a way that makes the inconvenient and obvious interpretation incorrect").

I'm incredibly pessimistic about the abilities of the average christian pastor at weighing the evidence for multiple competing historical hypotheses and coming up with the most correct answer (it's basically their job to be bad at this). I know that reversed stupidity is not intelligence, but as a rule I no longer repeat things I "learned" in a church setting unless I've independently verified it.

(Oh, and: my apologies if you came by that story via a more rigorous process.)

Comment author: Caspian 24 January 2012 01:35:05AM 3 points [-]

I was interested enough to google, and found some relevant links.

http://en.wikipedia.org/wiki/Turning_the_other_cheek has (unlinked, presumably offline) references for an explanation like that.

http://www.ekklesia.co.uk/node/9385 has more of the argument and says "resist not evil" is a biased or incorrect translation invented by King James' bible translators.

From the above page (by Walter Wink): "Jesus did not tell his oppressed hearers not to resist evil. His entire ministry is at odds with such a preposterous idea." - I had noticed that a lot of his behaviour described in the bible was inconsistent with this doctrine. He makes more sense without it.

Comment author: JoshuaZ 03 February 2012 05:02:33AM 0 points [-]

http://www.ekklesia.co.uk/node/9385 has more of the argument and says "resist not evil" is a biased or incorrect translation invented by King James' bible translators.

This seems strange. I don't know Greek, so I can't look at the text closest to the original, but I can read some Latin. So I looked at the Vulgate, which is both a) Catholic and b) predates the KJV by many centuries. It uses the phrase "non resistere malo", which means something like "don't resist the bad" but might be closer to "don't fight bad things".

Comment author: lavalamp 24 January 2012 03:31:27AM 0 points [-]

Alright, wikipedia has better evidence than I expected, although I'm also not going to read the referenced book.

Wink's piece is coherent and well-put, but doesn't seem like great evidence-- I cannot tell if he mentally wrote his conclusion before or after making those arguments, and I can't tell which elements are actual features of ANE culture identified by historians and which are things that just sounded reasonable to him.

Comment author: wedrifid 24 January 2012 02:42:05AM *  0 points [-]

I'm incredibly pessimistic about the abilities of the average christian pastor at weighing the evidence for multiple competing historical hypotheses and coming up with the most correct answer (it's basically their job to be bad at this).

There are specific things that pastors are required to be wrong about, yet when it comes to adding mere details for the sake of little more than curiosity there is little reason to believe they would be worse than average. For the most part, of course, they will simply be teaching what they were taught at theological college - the evidence weighing is done by others. This is how most people operate.

Comment author: lavalamp 24 January 2012 03:02:04AM 4 points [-]

What you say is true for competent pastors. I've probably been exposed to more than my fair share of the incompetent ones.

...I noticed a long time before I deconverted that when pastors said something about a subject I knew something about, they were totally wrong some ridiculously high percentage of the time. Should have tipped me off.

Comment author: wedrifid 24 January 2012 04:12:00AM 0 points [-]

What you say is true for competent pastors. I've probably been exposed to more than my fair share of the incompetent ones.

I've been fortunate in as much as several of my pastors and most of my lay preachers had science degrees. Mind you I suspect I've selected out most of the bad ones since I do recall I used to spend time with my family absolutely bagging the crap out of those preachers who said silly things.

Comment author: roland 23 January 2012 11:36:52PM 0 points [-]

I didn't learn that in a church setting; I read it on the internet on a page that claimed this to be the result of some scholar's work. What I liked most about the explanation is that it makes sense of the weird examples: cheek slapping (usually men use their fists if they mean to be aggressive) and forcing someone to walk a mile (makes sense if you assume the Roman occupation context). So it is the best explanation I've heard to date, sigh.

Comment author: Bugmaster 18 January 2012 06:13:50PM *  3 points [-]

This explanation is neat, but it sounds quite contrived to me, especially since the previous sentence clearly says, "do not resist an evil person". Is there any reason to believe that your interpretation is the one that the writers of the Bible originally intended ?

Comment author: roland 18 January 2012 08:55:19PM 1 point [-]

Writers of the bible? Who wrote the bible? It is a collection of folklore that at first was transmitted orally, until some day some people started writing it all down. The people who wrote it down were not necessarily the originators or even first witnesses of the stories. As always, different people will try to extract different teachings from the same stories. Maybe there was originally the parable of the cheek, and later someone added "do not resist an evil person", trying to make a general teaching out of it while disregarding or not knowing the original context.

To really find out you would have to go back to the origin of the whole and understand what cultural context was present there at that time. That there is a lot of confusion nowadays is an indicator that a lot of the context got lost.

Did anyone ever force you to go a mile with them? Isn't it weird that such a thing is in the bible? It is, until you understand that there was a Roman occupation and that soldiers had the right to demand you carry their pack for a mile (but not more; a soldier could be punished if he forced you to go further, hence the second-mile thing).

Comment author: Bugmaster 20 January 2012 12:19:29AM 1 point [-]

Writers of the bible? Who wrote the bible? It is a collection of folklore that at first was transmitted orally, until some day some people started writing it all down.

Sure, that's true, but:

To really find out you would have to go back to the origin of the whole and understand what cultural context was present there at that time.

I agree with you there. I kind of assumed that you have already accomplished this task, though, since you are pretty confident about your interpretation of the "other cheek" concept. All I was asking for is some evidence that your interpretation is the more correct one. I agree that it sounds neat, but that's not enough; you also need to show that this was the passage's original, intended meaning. Same thing goes for miles and undergarments.

Comment author: MBlume 09 April 2009 09:05:29PM *  1 point [-]

Christianity has not been tried and found wanting, it has been found difficult and left untried.

HT G.K. Chesterton

(I was sure it would be Lewis, so I'm glad I decided to Google anyway)

Comment author: [deleted] 21 January 2012 06:47:20PM 0 points [-]

On the other hand, I once read that certain influences of religion are found across societies even among non-explicitly-religious people, e.g. people from historically-predominantly-Catholic regions are usually more likely to turn a blind eye to minor rule violations, or people from historically-predominantly-Calvinist regions are usually more likely to actively seek economic success (whether they self-identify as Catholic/Calvinist or not). And my experience (of having lived almost all my life in Italy, but having studied one year in Ireland among lots of foreigners) doesn't disconfirm this.

Comment deleted 09 April 2009 06:49:58AM [-]
Comment author: badger 09 April 2009 10:45:13PM 1 point [-]

I'm of the same opinion.

Comment author: gaffa 09 April 2009 01:51:36PM *  13 points [-]

Am I the only one who isn't entirely positive towards the heavy use of language identifying the LW community as "rationalists", including terms like "rationalist training" etc.? (Though he is by far the heaviest user of this kind of language, I'm not really talking about Eliezer here; his language use is a whole topic on its own - I'm restricting this particular concern to other people, to the general LW non-Eliezer jargon). Is strongly self-identifying as a "rationalist" really such a good thing? Does it really help you solve problems? (I second the questions raised by Yvain). Though perhaps small, isn't there still a risk that the focus becomes too much on "being a rationalist" instead of on actually solving problems?

Of course, this is a blog about rationality and not about specific problems, so this kind of language is not surprising and sometimes might even be necessary. I'm just a bit hesitant towards it when the community hasn't actually shown that it's better at solving problems than people who don't self-identify as rationalists and haven't had "rationalist training", or shown that the techniques fostered here have such a high cross-domain applicability as seems to be assumed. Maybe after it has been shown that "rationalists" do better than other people, people who just solve problems, I would feel better about this kind of jargon.

Comment author: DanielLC 14 January 2012 08:45:14PM 3 points [-]

I define "rationalist" to be "someone who tries to become more rational". I'm fine with calling this a community of rationalists. I don't like it when people use "rationalist" to refer exclusively to members of this community.

Comment author: CarlShulman 09 April 2009 03:51:34PM 8 points [-]

I find it much more tolerable when 'aspiring' is added.

Comment author: HughRistik 10 April 2009 05:41:58AM *  3 points [-]

X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with "A good x-rationalist will know if fashion is important, and study fashion." A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook.

Yet many highly intelligent people with normal rationality have terrible fashion sense, particularly males, at least in my anecdotal experience. Ditto for social skills, dating skills, etc... (fashion is really a subset of social skills, combined with aesthetics). (a) Are these people not really rationalists, because they haven't figured out how to improve themselves in those areas, or (b) do ordinary rationalists have trouble figuring out that they would benefit from improvement in those areas, and how to do it? Or perhaps (c), they recognize the benefits of greater social abilities, but they do not believe that the effort is worth it?

In principle, normal intelligent rationalists could figure out how to improve their fashion skills and social skills deliberately and systematically. But if indeed so few people in that category do so, I would take it as evidence that a systematic approach to developing interpersonal skills and style actually requires a higher level of rationality than what normal rationalists possess (perhaps x-rationality, depending on what we mean by that).

Comment author: moshez 14 February 2012 06:42:20PM 4 points [-]

"Yet many highly intelligent people with normal rationality have terrible fashion sense"

Hrm, I'm not sure what evidence there is that highly intelligent people have worse fashion sense than equivalent people [let's stick to the category of males, with which I'm most familiar]. It seems to me like "fashion" for males comes down to a few simple rules, which a monkey (or, for that matter, any programmer or mathematician) can master. The problem seems to be that (1) one does need to master these rules, and (2) sometimes it means one does not dress comfortably.

I would like to offer a competing hypothesis: nerds have just as much "innate" fashion sense as non-nerds, but they feel that fashion is beneath them, that dressing comfortably is more important than following fashion, or that they would prefer to dress to impress nerds (with T-shirts that say "P(H|E) = P(E|H)*P(H)/P(E)" for example) than to impress non-nerds. In other words, the much simpler hypothesis "dress is usually worn to self-identify as a member of a tribe" is enough to explain nerds' perceived lack of fashion sense.

[For the record, here is how a nerd male can "simulate" a reasonable facsimile of fashion sense: for semi-formal occasions, get a couple of nice suits and wear them. If nobody else would wear a tie, wear a suit without the tie (if your ability to predict whether people will wear a tie is that bad, improve it with explicit Bayesian approximation). For all other occasions, wear dark colored slacks and a button down shirt with a compatible color (ask a person you trust about which colors go with which, and keep a table glued to the inside of your closet). Any "nerd" has mastered skills tremendously more complicated than that (hell, correctly writing HTML is more complicated). One can only assume it is lack of motivation, not of ability.]
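The "explicit Bayesian approximation" suggested above can be made concrete: keep counts of past events and update a probability estimate from them. A minimal sketch, where the event categories and observation counts are invented purely for illustration:

```python
# Estimate P(ties worn | event type) from past observations, with a
# Laplace (add-one) prior so unseen event types don't yield 0 or 1.
from collections import defaultdict

counts = defaultdict(lambda: [1, 1])  # event_type -> [ties_seen, no_ties_seen]

def observe(event_type, wore_ties):
    counts[event_type][0 if wore_ties else 1] += 1

def p_ties(event_type):
    ties, no_ties = counts[event_type]
    return ties / (ties + no_ties)

# Hypothetical past observations: nine tieless startup demos, one with ties.
for _ in range(9):
    observe("startup demo", wore_ties=False)
observe("startup demo", wore_ties=True)

print(p_ties("startup demo"))  # 2/12, about 0.17 -> skip the tie
print(p_ties("banquet"))       # no data: the prior gives 0.5
```

The add-one prior keeps the estimate sane with little data, which is the regime most of these social predictions live in.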

For myself as an example of nerd, I can definitely say the reason I dress "with a horrible fashion sense" is as a tribal identification scheme. In situations where my utility function would actually suffer because of that, I do the rational thing, and wear the disguise of a different tribe... (For example, when going on sales pitches to customers, I let the sales rep in charge of the sale to tell me what to dress down to the socks, on my wedding I let my wife pick out my clothes, etc.)

Comment author: Bugmaster 14 February 2012 07:51:20PM 0 points [-]

Personally, I've been able to get away with just dark slacks and a dark formal shirt. That said, I usually dress quite "horribly" by fashion standards, because there's no one in my day-to-day life who'd be impressed by my mad fashion skills, so I might as well dress comfortably at no penalty.

Comment author: AlexU 10 April 2009 02:10:34PM 3 points [-]

I've talked before in this same vein about the limits of rationality. One can be a perfect rationalist and always know what to do in a given situation, yet still be unable to do it for whatever reason. This suggests pretty strongly that good "rationalists" would be wise to invest their time into other areas as well, since rationalism alone won't turn you into the ubermensch. It won't make you healthy and fit, it won't enable you to talk to girls any better or make friends any easier. (And I object to any conception of "rationalism" so sweepingly broad that it manages to subsume every possible endeavor you'd set out on, e.g., the old "a good rationalist would realize the importance of these things and figure out meta-techniques for developing these skills.")

Comment author: Nick_Tarleton 10 April 2009 02:02:58PM *  3 points [-]

Three other suggestions:

(d) they've let "bad at fashion", "bad social skills", and the like become part of their identities, rationalized by the belief that those things are shallow, non-intellectual, whatever;

(e) they didn't practice those skills at a young enough age (because they were too young to realize the importance, they were socially excluded, ...) to deeply learn them, also reinforcing both (d) and a (destructive, hard to break) sense of being low-status;

(f) high intelligence + interest/aptitude in rationality correlates with mild autism-spectrum traits (not necessarily sufficient to be diagnosed, but enough to cause social problems, particularly in childhood).

Comment author: HughRistik 10 April 2009 11:50:09PM *  5 points [-]

I think all of those are highly plausible factors (all of which applied to me, btw).

(d) they've let "bad at fashion", "bad social skills", and the like become part of their identities, rationalized by the belief that those things are shallow, non-intellectual, whatever;

Additionally, they may have internalized the stereotype that rational people should act like Spock. And conversely, they may associate those skills with people they dislike: "those are the shallow kinds of things the popular people do, whereas I am deep."

(e) they didn't practice those skills at a young enough age (because they were too young to realize the importance, they were socially excluded, ...) to deeply learn them, also reinforcing both (d) and a (destructive, hard to break) sense of being low-status;

I like the interactionist perspective between nature and nurture you are taking here. It's not necessarily destiny that these people grow up with social deficits, it's just a common outcome of the interaction of their individual characteristics with a negative formative social environment.

(f) high intelligence + interest/aptitude in rationality correlates with mild autism-spectrum traits (not necessarily sufficient to be diagnosed, but enough to cause social problems, particularly in childhood).

This is a can of worms that I was thinking about opening up. Our normal intelligent rationalists would also tend to be high on "systemizing" rather than "empathizing" in Simon Baron-Cohen's theory, and more interested in "things" on the "people vs things" dimension.

The result is that the kind of neurotypical cognition required for social skills and fashion sense may seem non-intuitive or even alien to the category of people we are talking about. For instance, fashion and social skills often involve doing things simply because other people are doing them, which may defy one's sense of individualism, and belief that behaviors should have objective purpose.

Furthermore, this type of individual may feel that people should be accorded status based on "objective merit," which means being good at the things that matter to our intelligent rationalists. They may find it nauseating that status often depends on things like clothing, body language and voice tonality, who you hang out with, etc... rather than on actual intelligence or competence.

90% of social communication will seem meaningless to them, because it is based on emoting, status ploys, or pointing out things that are obvious, in contrast to the type of communication that is "really" meaningful, such as exchanging of ideas, factual information, or practical processes.

For this type of intelligent rationalist to build social skills from the ground up is an impressive feat, because they have to get over their own biases and past a bunch of developmental barriers (whether biological or social). A higher level of rationality may be a necessary, though not sufficient, condition for accomplishing this feat. (Yet of course, a higher level of rationality may be linked to even more social deficits, semi-autistic "thing-oriented" personality traits, etc... Perhaps this is why the world is not ruled by an over-caste of charismatic, fashionable people with 150+ IQ.)

Comment author: mattnewport 10 April 2009 06:08:26AM 3 points [-]

I agree that there's some level missed by the distinction between 'normal' rationality and 'x-rationality' and it's in that middle ground that I feel I've derived the most practical benefits from rationality. The examples you give are good ones. Other examples I could give from my own experience are personal finance and weight loss.

Using personal finance as an example: I consider myself to have always possessed an above average level of intelligence and 'normal' rationality. I have a scientific education and make my living as a computer programmer. Until fairly recently though I let my emotional dislike of form filling get in the way of organizing my personal finances effectively. A general desire to more rigorously apply 'normal' rationality in my life to improve my outcomes led me to recognize that I was irrationally allowing my negative reaction to paperwork to have a significant financial impact. By comparing the marginal utility of a few hours of unpleasant labour optimizing my tax situation to a few hours of tedious paid employment I realized I was making an irrational choice and recognizing that was an aid in overcoming the obstacle. Recognizing the logical flaws in the kinds of rationalizations I'd used to justify my previous lack of organization was also helpful. Often I would use clever-sounding arguments to justify avoiding a task which was simply unpleasant.

Comment author: AnnaSalamon 10 April 2009 05:54:41AM 1 point [-]

I would take it as evidence that a systematic approach to developing interpersonal skills and style actually requires a higher level of rationality than what normal rationalists possess.

HughRistik, this is only evidence if people with a higher level of rationality do better at improving their fashion skills, social skills, etc. My impression is that we do do somewhat better, but it's not obvious, and more data would be good.

Comment author: HCE 09 April 2009 06:19:21PM *  3 points [-]

as robin has pointed out on numerous occasions, in many situations it is in our best interest to believe, or profess to believe, things that are false. because we cannot deceive others very well, and because we are penalized for lying about our beliefs, it is often in our best interest to not know how to believe things more likely to be true. refusing to believe popular lies forces you to either lie continually or to constantly risk your relative status within a potentially useful affiliative network by professing contrarian beliefs or, almost as bad, no beliefs at all. you're better off if you only apply ''epistemic rationality techniques'' within domains where true beliefs are more frequently or largely rewarded, i.e., where they lead to winning strategies.

trying to suppress or correct your unconscious judgments (often) requires willpower. indiscriminately applying ''epistemic rationality techniques'' may have the unintended consequence of draining your willpower more quickly (and needlessly).

Comment author: nazgulnarsil 09 April 2009 03:26:17PM 3 points [-]

winning takes time. few of us have gotten rich yet.

Comment author: Eliezer_Yudkowsky 09 April 2009 11:54:06AM 7 points [-]

Would Newton have gone even further if he'd known Bayes theory? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

An interesting choice of example, given that Bayesian probability theory as we know it (inverse inference) was more or less invented by Laplace and used to address specific astronomical controversies surrounding the introduction of Newton's Laws, having to do with combining multiple uncertain observations.

Comment author: Yvain 10 April 2009 03:16:53AM 5 points [-]

In the spirit of concrete reductions, I have a question for everyone here:

Let's say we took a random but very large sample of students from prestigious colleges, split them into two groups, and made Group A take a year-long class based on Overcoming Bias, in which students read the posts and then (intelligent, engaging) professors explained anything the students didn't understand. Wherever a specific technique was mentioned, students were asked to try that technique as homework.

Group B took a placebo statistics class similar to every other college statistics class, also with intelligent and engaging professors.

Twenty-five years later, how would you expect the salaries of students in Group A to compare to the salaries of students in Group B? The same? 1.1 times greater? Twice as great? What about self-reported happiness? Amount of money donated to charity per year?

Comment author: AnnaSalamon 10 April 2009 03:22:55AM *  4 points [-]

Does the course use CBT-like techniques, where e.g. when "Leave a line of retreat" is taught, participants specifically list out all the possibilities where fear might be preventing them from thinking carefully, and build themselves lines of retreat for those possibilities? And learn cached heuristics for noticing, through the rest of their lives, when leaving a line of retreat would be a good idea, together with habits for actually doing so? Also, does the course have a community spirit, with peers asking one another how things went, and pushing one another to experiment and implement?

If so, I'd give 50% odds (for each separate proposition, not the conjunction) that the group A salaries are higher variance than the group B's, and that the 98th percentile wealthiest / most famous / most impactful of group A is significantly wealthier / more famous / more successful at improving their chosen fields than the 98th percentile of group B. Significantly, like... times five, say (though I'd expect a larger multiplier from the "changing their chosen fields to work well" than from the "making more money"; strategicness is more rarely applied to the former, and there's lower hanging fruit). (I would not expect such a gap between the two groups' medians.)

Comment author: prase 10 April 2009 12:37:45PM 3 points [-]

I would expect very little correlation with salaries. And about self-reported happiness - I often think that knowing about all biases, memory imperfections and all that stuff, and about how difficult it is to decide correctly, makes me substantially less happy.

Comment author: AnnaSalamon 11 April 2009 03:32:56AM *  1 point [-]

prase, is happiness much of a goal for you? If so, have you tried to apply rationality toward it, e.g. by reading the academic research on happiness (Jonathan Haidt's "The Happiness Hypothesis" is a nice summary) and thinking through what might work for you?

Comment author: mathemajician 09 April 2009 01:28:57PM *  4 points [-]

The most effective way for you to internally understand the world and make good decisions is to be super rational. However, the most effective way to get other people to aid you on your quest for success is to practice the dark arts. The degree to which the latter matters is determined by the mean rationality of the people you need to draw support from, and how important this support is for your particular ambitions.

Comment author: SoullessAutomaton 09 April 2009 10:35:13AM 4 points [-]

I strongly suspect that it is unreasonable to expect people to actively apply x-rationality on a frequent, conscious basis--to do so would be to fight against human cognitive architecture, and that won't end well.

Most of our decisions are subconscious. We won't be changing this. The place of x-rationality is not to make on-the-spot decisions, it's to provide a sanity check on those decisions and, as necessary, retrain the subconscious decision making processes to better approximate rationality.

Comment author: jimrandomh 09 April 2009 03:27:38AM *  4 points [-]

Extreme rationality is for important decisions, not for choosing your breakfast cereal. Really important decisions - by which I mean those that you'd sleep on, and allocate more than ten minutes of thought - typically coincide with changes in habits and routine, which don't happen more often than once in several months. For more common decisions, we only have time and energy for ordinary rationality.

Comment author: Yvain 09 April 2009 03:46:59AM 4 points [-]

I agree with this, but I also think that our big important decisions probably determine a lot less of our success than we like to think. A very large part of success probably comes from either the sum of our smaller decisions, or from decisions that didn't seem too important at the time but ended up making a very large difference in retrospect. The experiment I mentioned has raised my awareness of this.

I also think the big decisions are the ones it's hardest to apply extreme rationality to, both because the emotional stakes are so high and because by the time we make them we've already made a pile of smaller decisions that have tipped us in one or the other direction. See http://www.overcomingbias.com/2007/10/we-change-our-m.html . I predict not-significantly-different statistics for people who have trained in extreme rationality, though without a very high degree of confidence.

Comment author: Eliezer_Yudkowsky 09 April 2009 12:20:09PM 4 points [-]

I also think the big decisions are the ones it's hardest to apply extreme rationality to, both because the emotional stakes are so high and because by the time we make them we've already made a pile of smaller decisions that have tipped us in one or the other direction.

I spend a fair amount of time taking aim directly at this phenomenon, y'know. Summarized in Crisis of Faith.

I predict not-significantly-different statistics for people who have trained in extreme rationality, though without a very high degree of confidence.

Because the technique as described is too hard for mortals to use, or because the technique as described is inadequate?

Comment author: RichardKennaway 09 April 2009 07:05:06AM 9 points [-]

Practice creates facility. Facility lowers the bar to practice. Repeat. There is no time at which rationality may not be applied, and without practice at small things, how will you apply it to big things?

But besides, isn't it altogether just more fun to think clearly? When I notice myself not doing so, it is as painful as watching a beautiful machine labouring with leaking pipes and rust.

I don't keep fit just to catch trains or eke out a few more years from the meat.

Comment author: AnnaSalamon 09 April 2009 07:11:49AM 1 point [-]

Can you give examples of what your practice looks like?

Comment author: RichardKennaway 09 April 2009 07:35:18AM 3 points [-]

It begins with noticing, and continues by doing. Just from systematically noticing what you are doing, in any sphere, what you do changes even without making a special effort to change. Yvain mentioned this happening for him in footnote 5.

Once you see, clearly, that there is a choice in front of you, and what it is, it is no more possible to choose what you think is wrong than believe what you think is false.

Comment author: AnnaSalamon 09 April 2009 07:37:58AM 6 points [-]

This comment is helpful, but if you could include some examples that use concrete nouns, it would be more helpful.

Comment author: RichardKennaway 09 April 2009 10:08:41AM *  21 points [-]

Thank you for pressing me for concrete details.

Some of what follows goes way back before OB, which is one of various things I have studied or done -- a major one, but there are others -- on the matter of how to think better. The first, for example, I describe as inside vs. outside view, because that is what it is. The practice goes back longer; OB gave it a name.

I. Getting out of bed in the morning. That may seem a trifle, but there is no time at which rationality does not matter, and an hour a day is more than a trifle. The inside view whispers seductively to just laze on half-awake, or drift off to sleep again. The outside view reminds me that it has been my invariable experience that lazing on does not wake me up, that the only thing that does is getting up and moving around, and that within twenty minutes of getting up (my typical boot time for both mind and body) I will be more satisfied with myself, the sooner the better.

The more clearly I can contemplate the outside view, the easier it becomes to make a move. I can't claim expert proficiency in this. I still get up much faster when I have a specific three-alarm-clock reason, the moment the wristwatch pinger goes off.

II. I began taking much better care of my money after I instituted the simple exercise of recording every transaction on a spreadsheet, and estimating all of my expenses month by month out to a year ahead. And this without having to make any particular resolutions to limit my spending on this or that, or to save some fixed amount. I just have to look at my savings account, and other stores of money not to be casually drawn on, to see the difference. Sometimes noticing is all it takes, and the doing takes care of itself.
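The record-and-project exercise described above fits on a spreadsheet, but it is simple enough to sketch in code too. A minimal version, where the categories, amounts, and the repeat-the-average projection rule are hypothetical choices for illustration:

```python
# Log each transaction, then project a balance forward by assuming each
# future month repeats the average net flow of the months recorded so far.
from collections import defaultdict

ledger = []  # (month, category, amount); negative amounts are spending

def record(month, category, amount):
    ledger.append((month, category, amount))

def monthly_average():
    totals = defaultdict(float)
    for month, _, amount in ledger:
        totals[month] += amount
    return sum(totals.values()) / len(totals)

def project(months_ahead, balance):
    avg = monthly_average()
    return [round(balance + avg * m, 2) for m in range(1, months_ahead + 1)]

record("2009-01", "salary", 2500.0)
record("2009-01", "rent", -800.0)
record("2009-02", "salary", 2500.0)
record("2009-02", "rent", -800.0)
record("2009-02", "travel", -300.0)

print(project(3, balance=1000.0))  # [2550.0, 4100.0, 5650.0]
```

As with the original exercise, the value is less in the projection rule than in the noticing: every transaction gets written down and looked at.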

You cannot fix errors that you do not know you are making. That includes errors that you are looking straight at, without realising that they are errors. (Our chief weapon is noticing. Noticing, and discernment. Our two weapons...)

III. I have learned that whatever the person in front of me is saying, it makes sense to them, no matter how confused or wrong it may seem to me. Even if they are lying, there is still a reason. Therefore, I look for the greatest possible sense and address that, whether I'm dealing with a student, a colleague, someone being wrong on the Internet, or anyone else. Or as it was put, "you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse." The application is wider than fighting.

As I said, my experiences of a lot of this go back way before reading OB. Most of what is said on OB can also be found elsewhere -- a significant part of it is links to elsewhere. But that is only because the truth is constant and discoverable by anyone, so it is unsurprising when it has been. A lot of what is valuable in OB/LW is to draw its material together in a coherent body.

(Edited to defeat the software's too-clever handling of the originally Arabic-numbered paragraphs.)

Comment author: arundelo 09 April 2009 06:34:46PM *  4 points [-]

You can backslash the period to defeat automatic list formatting:

2\. Two
Foo
1\. One

looks like:

2. Two

Foo

1. One

More details here.

Edited to add: Excellent comment, by the way.

Comment author: MatthewBaker 30 November 2011 02:37:01AM *  1 point [-]

Thank you for the link.

Comment author: MugaSofer 11 December 2012 10:47:45AM 2 points [-]

2: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

Winning.

Comment author: Hul-Gil 10 April 2012 05:51:06AM *  2 points [-]

I think one reason might be that the vast majority of the decisions we make are not going to make a significant difference as to our overall success by themselves; or rather, not as significant a difference as chance or other factors (e.g., native talent) could. For example, take the example about not buying into a snake-oil health product lessdazed uses above: you've benefited from your rationality, but it's still small potatoes compared to the amount of benefit you could get from being in the right place at the right time and becoming a pop star... or getting lucky with the stock market... or starting a software company at just the right time. These people, who have to have varying degrees of something besides luck and a small amount of rationality to capitalize on it, are much more visible; even if their decisions were less-than-optimal, the other factors make up for it. Nothing's stopping them from poisoning themselves with a scam miracle-elixir, though.

This ties in with the point lessdazed was making, that the rational person most likely loses less rather than wins big - that is, makes a large number of small decisions well, rather than a single important one extremely well. That's not to be despised; I wonder what the results would be if we asked about overall well-being and happiness rather than fame and fortune.

Comment author: dlthomas 18 January 2012 10:07:15PM 2 points [-]

[W]e should generally expect more people to claim benefits than to actually experience them.

I don't think this claim is supported. There are reasons (some presented) why we should expect this. There are also reasons (a few listed below) why we should expect the opposite. I don't see at all why we should expect either set to dominate.

Reasons I might not post a benefit I've accrued:

1) I'm too busy out enjoying my improved life.

2) The self-congratulatory thread smells too much of an affective death spiral.

3) I am unsure how much of the benefit was actually from x-rationality and how much was from other sources.

3.1) 3, plus overcompensation for cognitive dissonance and similar biases.

4) It feels like bragging - and in fact, it seems to sometimes be interpreted this way; look at some of the reaction Luke has got for some of his posts.

5) I'm busy focusing on improving further; posting a comment listing benefits I've derived so far might not be an effective means to this goal.

Comment author: Douglas_Knight 10 April 2009 05:11:52PM 2 points [-]

study evolutionary psychology in some depth, which has been useful in social situations

Could you elaborate on this?

I doubt that it directly told you anything useful, but it was more likely helpful in telling you to pay attention and not to interpret things through your usual beliefs.

Comment author: cousin_it 09 April 2009 09:28:56PM *  2 points [-]

Your post is a great improvement on mine. Thanks, esp. for the "limiting factor" riff.

Am I alone in thinking the word "akrasia" doesn't quite describe our problem? Isn't it more like "apathy"? Some people wish to be able to do the things they want; lucky them! Me, I just wish to want to do the things I'm able to do.

Comment author: infotropism 09 April 2009 09:54:46AM 2 points [-]

I will list the only example that comes to my mind: better x-rationality techniques actually helped me get my university diploma. More than a few times they got me out of a difficult situation, where I used what I knew of heuristics, biases, and the limits and usual mistakes of normal rationality -- including how one can sound rational regardless of whether one really is -- to give off, at little cost, that impressive aura of someone who knows what he's doing. To sound rational when facing an audience.

In my defense, I actually faked the cues and tells of my rationality, skill, etc. to match what I estimated to be my real level of rationality and skill, which I had also decided wouldn't be signaled correctly if I didn't actively do the job.

Now, that'd be the only time I used that knowledge in real life. But I'm not really an x-rationalist either. Even the best of us is still but a student in x-rationality. Personally, I'd just consider myself a normal rationalist with some x-rationality ideas, someone in transition, getting there. I mean, I can't even do the math, after all. It's all very intuitive so far, to me, not really formalized. But what I have so far, intuitively, tells me that I should seek that conversion, so as to eventually be able to use the math, and formalize that rationality. I'm not even saying that once (if ever) I'm there, I'll only use formalized x-rationality. But I'll have a new, powerful tool, maybe the most powerful, to help me wherever it may (should that be everywhere? Perhaps not; for as long as we're meatware humans, it'll still be easier to hit the ball when it feels right than when you've solved its differential equations.)

And so, in our present day, I wonder just how much of our art could really be used formally so far, as opposed to all that is still only present and usable on an intuitive level. And what part of it is really a part of x-rationality, rather than something borrowed from someplace else.

Comment author: Eliezer_Yudkowsky 09 April 2009 12:22:37PM 3 points [-]

better x-rationality techniques have actually helped me get my university diploma

I didn't quite understand your description of what happened here, but it sounds interesting and possibly ominous. Please rephrase?

Comment author: infotropism 09 April 2009 03:57:55PM 6 points [-]

Ok, giving it another go. Let's say you had to perform a set of experiments. You didn't know much about, nor had studied much of, the background science. The results, and the data that can be extrapolated from them, are weak, and it's in part your fault. How would you keep going, without failing either your overall experimental work (which should be the only important matter at hand) or your co-workers' trust in your capabilities?

The first most important thing would be to put things back into context. Being intellectually honest, with a genuine will towards truth, about just how much your work so far is worth, what you are capable of, how motivated you are, and what you can expect to achieve next. Putting confidence bounds on things like "this experiment set will be done by next week", "I'll falsify this hypothesis with this experiment", "this theory seems to apply here", "I'll assume this explanation to be the right one", etc. Planning your future work based on that.

Mostly in a bad case, it amounts to admitting "I don't know". Having admitted to that, you can start working towards better results, improving yourself at your own pace, and eventually accomplish your work.

Now I don't usually trust people to accept that I'll work at my own pace. In quite a few cases it seems like there's a gap I can't cross in explaining to them how working that way will be optimal (on a case by case basis, and for me). I especially don't expect it when I am working well under what I know is my normal work output, or even below what is the average, expected work output for anyone who'd be in my shoes.

The next step - which was quite automatic most of the time - would be where I'd, for instance, explain my work or present results - the PowerPoint presentations given to the team, or informal meetings with the lab director - and where I'd include the meta-information about how I rationally evaluated my work and planned the next steps. But only selectively so: enough to show that I was intellectually honest, but not so much as to shoot my own foot in the process. Casually throwing in, here and there, information which, while correct, would still draw on affect heuristics, halo effects, anchoring, and probably others I don't even remember, to make it sound even better than it would have otherwise. Is that similar to what is called "becoming a more sophisticated arguer"?

Some of the comments I'd receive then were like "ok, you need to work more on that, but you seem to understand the problem well", "your presentation was very good, very easy to understand, it put everything back in place", etc., when my own estimate told me that not only was my work not all that good, but that what was being praised wasn't the right thing, and missed the point. I never mentioned those doubts, though.

I can't tell how much of my final "success" was deserved. I don't know how much of my final marks were due to the value of the science done, how much to the intellectual honesty, and how much to how I played on those to make it all seem better than it was. I personally think my work wasn't worth that much, and I know I underperformed. That I had good reasons to underperform at that time doesn't change the fact that I was graded better than I would have expected, or would have graded myself, even with the benefit of hindsight.

As a caveat, I maybe shouldn't have said "x-rationality" in that first comment. A small part of what I used was x-rationality. Most of the rest was normal rationality. But I learned about both at the same time. About the latter, I could throw in more examples. For instance, I only understood what science and the scientific method were really about in my last year, not as a result of my courses, but as a result of reading the sl4 mailing list as well as some of your other writings. This helped me succeed too, a lot.

Comment author: conchis 09 April 2009 12:50:40PM *  5 points [-]

Better decisions are clearly one possible positive outcome of rationality training. But another significant positive outcome is reaching the same decision faster. In my work, there are a number of rationality techniques that I have learned that have not necessarily changed the end result I have come to, but that have contributed to me spending less time confused, and getting to the right result more quickly than I otherwise would have.

Anything that frees up time in this way has real, positive, and measurable effects on my life. (Also, confusion and things-not-working are frustrating and stressful, so the less time I spend confused, the better.)

Comment author: AnnaSalamon 09 April 2009 02:36:11PM 7 points [-]

In my work, there are a number of rationality techniques that I have learned that have ... contributed to me spending less time confused, and getting to the right result more quickly than I otherwise would have.

Could you please tell us the specific techniques and/or situations? (I'm sorry to keep asking this of everyone, but the answers are really interesting/useful. We need to figure out what different peoples' practice actually looks like, and what mileage people do and don't get from it. In detail.)

Comment author: conchis 13 April 2009 10:52:31AM *  3 points [-]

[Sorry for the slow response. Have been away for the weekend.]

No need to apologize, it's an excellent question. And to be honest, because my work involves a lot of data analysis, and using such analysis to inform decision-making, I may be cheating somewhat here. There are times when remembering that "probability is in the mind" has stopped me getting confused and helped me reach the right answer more quickly, but they're probably not particularly generalizable. ;)

Here's a quick list of some techniques that have helped that might be more generally applicable. They're not necessarily techniques that I always manage to apply consistently, but I'm working on it, and when I do, they seem to make a difference.

(Listing them like this actually makes them seem pretty trivial; I'll leave others to decide whether they really warrant the imprimatur of "rationality techniques".)

(1) Avoiding confirmation bias in program testing: I'm not a great programmer by any stretch of the imagination, but it is something I have to do a fair amount of. Almost every time I write a moderately complicated program, I have to fight the urge to believe that this time I've got it basically right on the first go, to throw a few basic tests at it, and get on with using it as soon as possible, without really testing it properly. The times I haven't managed to fight this urge have almost always resulted in much more time wasted down the line than taking a little more time at the outset to test properly.

(2) Leaving a line of retreat. Getting myself too attached to particular hypotheses has also wasted a fair amount of my time. In particular, there's always a temptation, when data happens not to fit your preconceived ideas, to keep trying slightly different analyses to see whether they'll give you the answer you expected. This can sometimes be reasonable, but if you're not careful, can lead to wasting an enormous amount of time chasing something that's ultimately a dead end. I think that forcing myself to reassess hypotheses sooner rather than later has helped to cut down on that sort of dead end analysis.

(3) Realizing that some decisions don't matter (aka not being Buridan's ass): I'm something of a perfectionist, and have a tendency to want every decision to be optimal. In any sort of analysis, you have to make numerous, more or less arbitrary choices about exactly how to proceed. Some of these choices appear difficult because the alternatives are finely balanced; so you keep searching for some factor that could make the crucial difference between them. But sweating every decision like this (as I used to do) can kill a lot of time for very little reward (especially, though not only when the stakes are small.)

But to be honest, the biggest time-saver I've encountered is taking the outside view to avoid the planning fallacy. Over the years, I've taken on a number of projects that I would not have taken on, had I realized at the outset how much time they would actually take. Usually, these have both taken up time that could better have been spent elsewhere, and created a great deal of unnecessary stress. The temptation to take the inside view, and to be overly optimistic in time estimates, is something I always have to consciously fight (and that, per Hofstadter's law, I've never managed to fully overcome), but it is something I've become much better at.

Z_M_Davis' recent post on the sunk cost fallacy, reminded me that being willing to give up unproductive projects can also be a time-saver, although the issues here are somewhat more complicated for reasons some have mentioned in the comments (e.g. the reverse sunk cost fallacy, and reputational costs involved in abandoning projects).
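Point (1) above is concrete enough to sketch. A toy illustration (my own code, not conchis's): the confirmatory test checks an input you already believe works, while the disconfirming test deliberately probes the edge case most likely to falsify that belief.

```python
# Hypothetical example: a mean() with a latent edge-case bug.
def mean(values):
    return sum(values) / len(values)  # fails on the empty list

# Confirmatory testing: try an input you expect to work. It passes,
# which feels reassuring but provides little new information.
assert mean([2, 4, 6]) == 4

# Disconfirming testing: deliberately probe the input most likely to
# break the code. This is where the bug surfaces.
try:
    mean([])
    print("no bug found")
except ZeroDivisionError:
    print("edge case exposed the bug")  # prints: edge case exposed the bug
```

The asymmetry is the point: a passing confirmatory test barely updates your belief that the code works, while a well-chosen disconfirming test can refute it outright.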

Comment author: knb 09 April 2009 04:32:01PM 3 points [-]

In the case of Hubbard, preaching irrationality and being irrational are two different things. Hubbard went genuinely crazy in his later years, but he knew what he was doing when he invented Scientology. He even said in an interview once: "I'm tired of writing for a penny a page. If a man really wanted to make a million dollars, he would invent a religion."

Comment author: Annoyance 09 April 2009 05:21:59PM 6 points [-]

If you're going to craft memetic weapons, you'd better make damn sure you've developed a resistance to your own products before you begin peddling them.

Hubbard ended up spending lots of his time around people who had been infected with his viral religious propaganda... and inevitably, he became infected himself.

People with high Int and Cha tend to believe their own propaganda. They're also the ones who tend to have unrealistically positive beliefs about their own intellectual competence, and little concern about going through the tedious and uncomfortable process of examining their own beliefs and practices.

Comment author: Technologos 09 April 2009 01:53:36PM 3 points [-]

It actually just occurred to me that the intelligence professions might benefit greatly from some x-rationality. We may not have to derive gravity from an apple, but the closer we come to that ideal, the less likely failures of intelligence become.

Intelligence professionals are constantly engaged in a very Bayesian activity, incorporating new data into estimates of probabilities and patterns. An ideal Bayesian would be a fantastic analyst.

Comment author: Eliezer_Yudkowsky 09 April 2009 02:33:41PM 3 points [-]

Ja, in particular modern intelligence professionals seem to have problems with separating out the information they get from others and the information they're trying to pass on themselves, reporting only their final combined judgment instead of their likelihood-message, which any student of Bayes nets knows is Wrong.
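The distinction can be made concrete with a toy calculation (my own numbers, purely illustrative): when each analyst folds the shared prior into a reported posterior, a downstream combiner that multiplies those reports double-counts the prior. Passing likelihood ratios ("likelihood messages") instead lets the prior enter exactly once.

```python
# Two analysts observe conditionally independent evidence about H.
prior_odds = 0.1 / 0.9  # P(H) = 0.1, shared by everyone

lr1 = 0.8 / 0.2  # likelihood ratio of analyst 1's evidence
lr2 = 0.8 / 0.2  # likelihood ratio of analyst 2's evidence

def to_prob(odds):
    return odds / (1 + odds)

# Right: each analyst passes on only the likelihood ratio; the prior
# enters exactly once, at combination time.
correct = to_prob(prior_odds * lr1 * lr2)

# Wrong: each analyst reports a combined posterior, and the combiner
# multiplies those posterior odds -- counting the prior twice.
wrong = to_prob((prior_odds * lr1) * (prior_odds * lr2))

print(round(correct, 3), round(wrong, 3))  # prints: 0.64 0.165
```

With a low prior, the double-counting drags the combined estimate sharply downward; with a high prior it would inflate it instead, which is why Bayes-net message passing separates the evidence message from the final belief.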

Comment author: ciphergoth 09 April 2009 09:23:39AM 3 points [-]

Reading OB/LW forced me to look hard at my contradictory beliefs about politics, and admit that I no longer believed certain things I used to believe, particularly about the market. If I don't get anything else out of it, that alone would be a large bonus.

Comment author: Eliezer_Yudkowsky 09 April 2009 02:36:56PM 3 points [-]

May I humbly suggest changing the title to "Extreme Rationality: It's Not That Great"? (This will not break any links!)

Comment author: smoofra 09 April 2009 03:40:48AM *  3 points [-]

I think you are right that x-rationality doesn't help an individual win much on a day to day basis. But there are some very important challenges that humanity as a whole is failing for lack of x-rationality.

The current depression. The fact that we aren't adequately protecting the earth from asteroids. DDT being banned. Nobody's getting frozen. Religion. First-past-the-post elections. Most wars.

Comment author: ciphergoth 09 April 2009 09:13:24AM *  4 points [-]

DDT isn't banned, never has been. I'm with you on most everything else.

At some stage we're going to have to work out how to talk about politics here. I've wondered about a top-level post to find out what we practically all agree on - I suspect for example that few of us think the drug war is a good idea.

Comment author: Tyrrell_McAllister 09 April 2009 05:03:22PM 3 points [-]

DDT isn't banned, never has been. I'm with you on most everything else.

From a 1972 Environmental Protection Agency press release entitled "DDT Ban Takes Effect":

The general use of the pesticide DDT will no longer be legal in the United States after today, ending nearly three decades of application during which time the once-popular chemical was used to control insect pests on crop and forest lands, around homes and gardens, and for industrial and commercial purposes.

Comment author: gjm 09 April 2009 10:49:26AM 1 point [-]

Religion, FPTP elections and wars are irrational even according to non-x rationality. (With all sorts of caveats, which apply just as much to x-rationality.) The DDT ban thing is a myth, as ciphergoth points out. Asteroids and cryonics, maybe, in so far as making the right decisions there probably involves a large element of Shut Up And Multiply; but actually we are making some effort to spot asteroids early enough, and the probabilities governing whether one should sign up for cryonics are highly debatable.

Perhaps more x-rationality would help humanity as a whole to address those issues, but mostly they come about because so many people aren't even rational, never mind x-rational.

Comment author: Technologos 09 April 2009 01:48:09PM 2 points [-]

Perhaps--but many a logician has believed in God. Take somebody like Thomas Aquinas--he was for a long time the paradigm of rationality. I'd suggest it takes x-rationality to truly shatter your pre-existing losing framework and re-examine your priors.

Comment author: gjm 09 April 2009 03:35:46PM 5 points [-]

Do you have evidence that it was lack of x-rationality that enabled Aquinas to believe in God, rather than (1) different evidence from what we have now (e.g., no long track record of outstandingly successful materialistic science; no evolutionary biology to provide an alternative explanation for the adaptation of living things; no geological investigations to show that the earth is very much older than Aquinas's religious beliefs said it was) and (2) being embedded in a culture that pushed him much harder towards belief in God than ours does to us?

Robert Aumann, to take an example Eliezer's used a few times, is pretty expert in at least some aspects of the art of x-rationality, and is also Orthodox Jewish.

Comment author: AnnaSalamon 09 April 2009 03:54:42AM 2 points [-]

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments.

Your suggested experiment is good. But in this particular case, let's also try to employ the power of positivist thinking on your thesis as a whole. That is, let's break it up into a bunch of specific anticipations, and see what parts there is and isn't disagreement on, before we try to resolve those disagreements. I'll take my own stab at this with a number of short comments in a moment.

Comment author: MichaelBishop 09 April 2009 03:49:12AM 2 points [-]

If people typically found great personal benefits from reading OB/LW type material, then we would not be such a minority.

We hope that rationality is increasing, and it could be, but I don't have much confidence that 30 years from now people, even people in positions of power, will be much more rational than they are now.

Comment author: MugaSofer 02 February 2014 12:08:38AM *  1 point [-]

1: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

Oooh! I realize this is an old post, but I'm desperately curious for some concrete examples of this.

Comment author: JulianMorrison 09 April 2009 07:12:27AM -2 points [-]

A guy takes some golf lessons. Convinced he's got the mechanics of the swing down, he takes on a pro at a golf course, and has his ass handed to him. "Those golf lessons did me no good", he says. "Do golf lessons even correlate with being good at the sport?".

Comment author: gjm 09 April 2009 10:58:23AM 2 points [-]

Whether that's a good analogy depends on whether the reasoning challenges we face from day to day are more like playing golf against a seasoned pro, or playing golf against casual amateurs. (If someone takes golf lessons and, after a reasonable time, he isn't doing any better against other people at roughly his own level, then I think he is entitled to ask whether the lessons are helping.)

Do you have reasons to think that we're in the former rather than the latter situation? If so, what are they?

Comment author: cousin_it 09 April 2009 12:01:47PM *  1 point [-]

His words are justified if most pros never took any lessons of this particular kind.

Comment author: [deleted] 19 January 2012 01:59:52AM 1 point [-]

The question isn't "will studying and honing your rationality make you a better rationalist?" Obviously it will. Likewise practicing and refining your golf swing will probably make you a better golfer; but that's not analogous to Yvain's point at all.

The real question is whether or not becoming a better rationalist will likely make you more successful.

Comment author: ajayjetti 22 July 2009 08:57:14PM 0 points [-]

Fantastic!

Comment author: JulianMorrison 09 April 2009 09:25:26AM *  0 points [-]

X-rationality is the kind you do with math, and humans are crap at casual math, so it's no surprise it becomes a weapon of last resort. (We ought to be using a damn sight more math for more or less everything - the fact that our cognitive architecture doesn't support it will not persuade the universe to let us off lightly.)

(Edit: I removed the second half of this comment because if after a day of thinking I can't pin down what I thought I was referring to, then I'm talking nonsense. Sorry. Next time: engage brain, open mouth, in that order.)

Comment author: adamzerner 04 May 2014 04:12:17PM *  1 point [-]

You say that rationality only slightly correlates with winning. I think that's because incremental increases in rationality don't necessarily lead to incremental increases in winning. Winning is governed by lots of factors, and sometimes you have to get over a critical threshold of rationality to see the results you want.

Comment author: AshwinV 06 May 2014 07:34:58AM 0 points [-]

True. If you look at it that way, x-rationality won't have a standard correlation of 0.1. The correlation will depend on how much of a limiting factor x-rationality really is.
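The threshold idea is easy to simulate. In this toy model (all the numbers are my invention, for illustration only), success is mostly noise from luck and talent, plus a bonus that only kicks in above a rationality threshold; the measured correlation between rationality and success then comes out small, even though crossing the threshold matters a lot to the people who cross it.

```python
import random

random.seed(0)

def success(rationality):
    # Toy model: luck and talent dominate; rationality pays off only
    # above a threshold (here, the top 10% of the scale).
    luck = random.gauss(0, 1)
    talent = random.gauss(0, 1)
    threshold_bonus = 1.0 if rationality > 0.9 else 0.0
    return luck + talent + threshold_bonus

def correlation(xs, ys):
    # Pearson correlation, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

rationality = [random.random() for _ in range(20000)]
outcomes = [success(r) for r in rationality]
print(round(correlation(rationality, outcomes), 2))  # roughly 0.1
```

A correlation near 0.1 is consistent with both claims at once: rationality "barely correlates with winning" in aggregate, and yet is decisive wherever it happens to be the binding constraint.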

Comment author: Dmytry 15 January 2012 04:08:47PM *  1 point [-]

I think the problem with practising rationality as on LessWrong is that people end up abandoning perfectly rational actions and strategies whose rationale they did not understand or have explained to them (usually people pick those up from their environment without the explanation attached). Furthermore, intelligence (e.g. the ability to search a big solution space for solutions) is a key requirement as well, and intelligence is hard to improve with training, especially for already well-trained individuals.

Comment author: NicoleTedesco 15 January 2012 03:40:39PM 1 point [-]

I want to master x-rationality because I want to teach it. I value rational behavior in my fellow human because the historical record is clear: rational behavior is correlated with increased safety, health, and wealth of a society. I want to live in an increasingly safe, healthy, and wealthy society. I understand that "rational" behavior has a saturating plateau, or that it is only so effective, but the masters of rationality must continue to exist in every society, scientific skeptics must be cultivated. I enjoy working with the rational arts because, frankly, I grew up with a very irrational mother and I have lived ever since to do everything in my power (of almost a half a century now) to be everything she was (and still is) NOT. I want to be one of the rationality masters and teachers in my society because I enjoy those arts, find value in them, and find value in safety, health, and wealth.

Pretty selfish, but there it is.

Comment author: [deleted] 18 January 2012 02:23:16AM 1 point [-]

Right. You're basically in the position Yvain describes; you assign value to clarity of mind. However, this doesn't necessarily correlate to practical gains for you beyond that which could be acquired from pedestrian rationality (or at least specialized "business-rationality").

Comment author: xamdam 01 July 2010 01:48:19PM *  1 point [-]

"techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training."

Here is my definition; I attempted to rely less on CapitalNames in it.

X-Rationality:

The ability to behave according to the dictates of rationality in situations where such behavior would be highly discomforting/counter-intuitive.

As an aside, the discomforting part provides an escape hatch for people not to be rational, because they can always claim high utility value for being emotionally comfortable.

Comment author: jason1stlegion 09 December 2016 07:20:06PM *  0 points [-]

As far as I can tell, epistemic and instrumental rationality are two related Arts, even to EY, but are both under the banner of "Rationality" because they both work towards the same goal, that of optimal thinking (I can't cite any specific examples right now, but I'll throw it out there anyway).

Also, another reason for the comparative inefficiency of x-rationality could be lack of information. Epistemic rationality is the Art of filtering/modifying information for greater accuracy. Instrumental rationality is the Art of using all available information to maximize your values. So both techniques increase the amount of benefit you gain from information. But when you don't know all that much, the fine-tuning techniques, x-rationality, would have an extremely low return since they increase your benefit by such a small percentage. There IS an element of akrasia here, in that we could go learn more if we weren't so lazy, but it's not really the same thing.

Goals are yet another problem, which you mentioned already. People just don't need rationality in routine tasks, that's what we have habits for! Would you think rationally about how to brush your teeth? More than once, then? And many of our plans for the future take large amounts of patience but not much thinking to get a 'good enough' result, so most of our focus is on being patient, the rational course of action.

There's no reason for you to change your goals just for the sake of getting to use rationality, but some other ways of getting more out of it (not necessarily the best ones, of course) could be:

  • Low-"short-term"-investment tasks that would force you to study (like installing a productivity program that only allows you to access certain sites, as many people have already done)

  • Increasing the entertainment value of studying, the clichéd option (OpenStax CNX has made textbooks that are slightly more interesting than normal, but I don't think it will be enough for most of the population)

  • Meditation, another cliché. It increases patience, and you can work on analyzing and fixing stray beliefs you find floating around your brain

  • Recording your thoughts, observations, actions, reasons for those actions, etc. in some sort of portable device (like a notebook or phone). I know this was already mentioned by Yvain, but I just want to make a single list here.

  • (If you're willing to do so) Putting those recorded thoughts on LessWrong, especially the actions and their reasons, for critical review

Any other ideas?

Note: Markdown was acting up, but I've fixed it now

Comment author: TheAncientGeek 05 September 2015 03:51:03PM 0 points [-]
Comment author: keen 15 May 2014 12:14:11AM 0 points [-]

We simply don't have the time and computing power to use full rigor on our individual decisions, so we need an alternative strategy. As I understand it, the human brain operates largely on caching. X-rationality allows us to clean and maintain our caches more thoroughly than does traditional rationality. At first, it seems reasonable to expect this to yield higher success rates.

However, our intuition vastly underestimates the size of our personal caches. Furthermore, traditional rationality is simply faster at cleaning, even if it leaves a lot of junk behind. So it would appear that we should do most of the work with traditional rationality, then apply the slower x-rationality process for subtle refinement. But since x-rationality is so much slower and more difficult to run, it takes a whole giant heap of time and effort to get through a significant portion of the cache, and along the way many potential corrections will have already been achieved in the traditional rationality first pass.

But if we leave out the more rigorous methods entirely, deeming them too expensive, we're doomed to hit a pitfall where traditional rationality will not save us from thirty years of pursuing a bad idea. If we can notice these pitfalls quickly, we can apply the slow x-rationality process to that part of the cache right away, and we might only pursue the bad idea for thirty minutes instead.

We need to be able to reason clearly, to identify opportunities for clearer reasoning, and to identify our own terminal goals. A flaw in any of these pieces can limit our effectiveness, in addition to the limits of just being human. What other limiting factors might there be? What methods can we use to improve them? I keep coming back to Less Wrong because I imagine this is the most likely site to provide me with discourse on the matter.

Comment author: TheAncientGeek 15 May 2014 11:57:49AM 2 points [-]

Clear reasoning: there is no evidence that humans have some fixed set of terminal goals (a fortiori, there is no unchanging essence of personhood). There is also no discernible difference between discovering "true" goals and changing goals. And you can't fix reasoning by fixing reasoning alone; you need to fix emotion and habit too.