by [anonymous]
I've been on Less Wrong since its inception, around March 2009. I've read a lot and contributed a lot, and so now I'm more familiar with our jargon, I know of a few more scientific studies, and I might know a couple of useful tricks. Despite all my reading, however, I feel like I'm a far cry from learning rationality. I'm still a wannabe, not an amateur. Less Wrong has tons of information, but I feel like I haven't yet learned the answers to the basic questions of rationality.

I, personally, am a fan of the top-down approach to learning things. Whereas Less Wrong contains tons of useful facts that could, potentially, be put together to answer life's important questions, I really would find it easier if we started with the important questions, and then broke those down into smaller pieces that can be answered more easily.

And so, that's precisely what I'm going to do. Here are, as far as I can tell, the basic questions of rationality—the questions we're actually trying to answer here—along with what answers I've found:

Q: Given a question, how should we go about answering it? A: By gathering evidence effectively, and correctly applying reason and intuition.

  • Q: How can we effectively gather relevant evidence? A: I don't know. (Controlled experiments? Asking people?)
  • Q: How can we correctly apply reason? A: If you have infinite computational resources available, use probability theory.
    • Q: We don't have infinite computational resources available, so what now? A: I don't know. (Apply Bayes' rule anyway? Just try to emulate what a hypercomputer would do?)
  • Q: How can we successfully apply intuition? A: By repairing our biases, and developing habits that point us in the right direction under specific circumstances.
    • Q: How can we find our biases? A: I don't know. (Read Less Wrong? What about our personal quirks? How can we notice those?)
    • Q: Once we find a bias, how can we fix it? A: I don't know. (Apply a correction, test, repeat? Figure out how the bias feels?)
    • Q: How can we find out what habits would be useful to develop? A: I don't know. (Examine our past successes and rationalize them?)
    • Q: Once we decide on a habit, how can we develop it? A: I don't know. (Sheer practice?)
We could answer some of these questions ourselves, through simple practice and straightforward methods. The method "apply a correction, test, repeat", for example, is so generally useful that it deserves to be called the Fundamental Algorithm of Control. Nevertheless, since Less Wrong is devoted to developing human rationality, surely it contains answers to these questions somewhere. Where are they?
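For concreteness, here is a minimal sketch (in Python) of what that "apply a correction, test, repeat" loop might look like; `control_loop`, `measure`, `adjust`, and `target` are placeholder names of my own, not anything established on Less Wrong:

```python
def control_loop(measure, adjust, target, tolerance=0.01, max_steps=100):
    """Repeatedly test the current state and apply a correction
    until the measurement is within tolerance of the target."""
    for _ in range(max_steps):
        error = target - measure()   # test
        if abs(error) <= tolerance:
            return True              # close enough; stop correcting
        adjust(error)                # apply a correction
    return False                     # gave up after max_steps

# Toy usage: steer a number toward 10 by correcting half the error each step.
state = {"x": 0.0}

def adjust(err):
    state["x"] += 0.5 * err

print(control_loop(lambda: state["x"], adjust, target=10.0))  # True
print(state["x"])                                             # close to 10
```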

 

33 comments

Q: How do you notice when there is a question to ask?

An answer to my own question: by learning to notice the tiny sensation that something is not quite right, and attending to it as urgently as you would search out the source of an unexpected smell of burning.

Rationality has many tools, but they are all grasped with the handle of attention.

Good post.

How can we effectively gather relevant evidence?

Break it down:

What is evidence? How do we gather that?

Just try to emulate what a hypercomputer would do?

No. RYK is a terrible chess program, even worse than Yudkowsky.

Every so often, answer a question twice, once using formalisms in some place, the other times using intuition in that place. If they output different answers, notice your confusion. Do not be confident in any course of action (I do not say do not take action) until you think you know why they gave different answers. Learn about formalisms and intuition until you know where you went wrong - and if you got different answers you certainly went wrong somewhere, if you got the same answers you possibly went wrong somewhere.

Apply formalisms and intuition separately.

This seems wrong, in exactly the same way that "apply reasoning and observation separately" is wrong.

[anonymous]

Let me put words in lessdazed's mouth: "Every so often, answer a question twice, once using formalisms in some place, the other times using intuition in that place. If they output different answers, . . ."

Permission granted, edited.

I think this should be on the front page.

[anonymous]

I guess I should discuss, think, and revise.

I'll stick a deadline in my calendar a week from now. If that day comes and I'm not making progress, I'll post without putting in any more work.

The phrase "Given a question" skips what may be the most important (meta-)question:

Q: For what questions should we try to find answers, spending how much effort on each?

For so many mistakes that people make, the failure isn't that they would come up with the wrong answer to a simple decisive question; the failure is that they didn't think to ask the right question. Sometimes problems look easier in hindsight because of a bias, but sometimes solving the problem really is easy, and the hard part is realizing before it's too late that the problem exists.

[anonymous]

Yes, I agree that these are important questions. I think I would break your question up into two:

Q: How can we find good questions to think about?

Q: When should we stop thinking about a question?

Now, I must confess, I'm a fan of the root question that I've already come up with. But after I think about it some, I begin to think it may make sense to ask multiple root questions, all of them fully general, and all referring to each other. The problem of rationality can be looked at in different ways, and I imagine these ways can complement each other.

NTS: the "apply a correction, test, repeat" algorithm may make a good root question.

Q: Given a question, how should we go about answering it? A: By gathering evidence effectively, and correctly applying reason and intuition.

This is a very broad question, and although it's been described in more detail than "gather evidence well then think well", there are still a lot of steps before applying it to any given problem, and given how much questions differ, it kinda has to be this way.

Q: How can we effectively gather relevant evidence? A: I don't know. (Controlled experiments? Asking people?)

Depends on the question. Lukeprog wrote a post about how to do scholarship, often self-experimentation is a quick and easy way of getting data, and sometimes you can just ask someone or Google it.

It really depends on the problem, but I think if you have a list of good ways to gather data it should be pretty obvious which is relevant.

Q: How can we correctly apply reason? A: If you have infinite computational resources available, use probability theory. Q: We don't have infinite computational resources available, so what now? A: I don't know. (Apply Bayes' rule anyway? Just try to emulate what a hypercomputer would do?)

Train your intuition to make sense (start using good heuristics automatically), and frontal override with explicit math where it matters. You want it to feel obvious whether you should switch on some variant of the Monty Hall problem without having to get out pencil and paper.

Also take multiple approaches to the same problem where you can.
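For concreteness, here is a quick simulation of the standard Monty Hall problem, a sketch added purely for illustration (it is not part of the original comment); running it makes the "always switch" answer feel obvious without pencil and paper:

```python
import random

def monty_hall(trials=100_000):
    """Estimate win rates for staying vs. switching in the standard Monty Hall game."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the remaining unopened door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

print(monty_hall())  # roughly (0.33, 0.67): switching wins about two times in three
```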

Q: How can we successfully apply intuition? A: By repairing our biases, and developing habits that point us in the right direction under specific circumstances.

By trying to dodge biases, not confront them. If you realize that you dislike someone and that is affecting your judgement, the wrong way to go about it is to try to fudge-factor it back, like "I hate that Bob is trying to convince me that X is small. I still think it's big, but maybe I should subtract 50 because I'm biased". The problem with this approach is that you can't reverse stupidity. The way you came up with the initial answer has to do with disliking a person, not the territory, so in order to have an accurate and narrow response your fudge factor would have to depend on reality anyway.

It's better to just let the "I hate Bob. Screw that guy!" process run in the background and practice dissociating it from the decision mechanism, which is something completely different. Don't even ask yourself what that part thinks.

If you notice yourself being loss averse, reframe the problem such that it goes away; don't do the right thing and then cringe about it.

Q: How can we find our biases? A: I don't know. (Read Less Wrong? What about our personal quirks? How can we notice those?)

Noticing them can be tougher sometimes. Do more metacognition in general and develop a habit of responding positively when people point them out to you.

Q: Once we find a bias, how can we fix it? A: I don't know. (Apply a correction, test, repeat? Figure out how the bias feels?)

Again, figure out what heuristic you're using ("this is what X was supposed to feel like, does that explain things?"), and if it's no good, find another one and use that. Fudge factors aren't good if you can avoid them.

Q: How can we find out what habits would be useful to develop? A: I don't know. (Examine our past successes and rationalize them?)

Other than the obvious first pass (metacognition, frontal override, training intuitions...), you might want to ask what kind of habits would have prevented this "type" (in the broadest sense you can) of error after you catch yourself making one.

Q: Once we decide on a habit, how can we develop it? A: I don't know. (Sheer practice?)

Decide to do it and that it's going to be easy. Identify with the habit, and with the metahabit of installing habits: "I want to be the kind of person that picks up the habits I want to have, so I'm just going to do it". Practice with the focused intent of it becoming automatic, and test it.

Put yourself in an environment that reinforces it. Use intermittent reinforcement with nicotine if you have to.

[anonymous]

Q: How can we effectively gather relevant evidence? A: I don't know. (Controlled experiments? Asking people?)

Depends on the question. Lukeprog wrote a post about how to do scholarship, often self-experimentation is a quick and easy way of getting data, and sometimes you can just ask someone or Google it.

I should find that post by lukeprog, since it definitely sounds like the sort of thing I'm looking for here. My chain of questions can end either by saying "I don't know", or by linking to another post, and other reading material is obviously more useful than the phrase "I don't know".

Q: How can we successfully apply intuition? A: By repairing our biases, and developing habits that point us in the right direction under specific circumstances.

By trying to dodge biases, not confront them. If you realize that you dislike someone and that is affecting your judgement, the wrong way to go about it is to try to fudge-factor it back, like "I hate that Bob is trying to convince me that X is small. I still think it's big, but maybe I should subtract 50 because I'm biased". The problem with this approach is that you can't reverse stupidity. The way you came up with the initial answer has to do with disliking a person, not the territory, so in order to have an accurate and narrow response your fudge factor would have to depend on reality anyway.

It's better to just let the "I hate Bob. Screw that guy!" process run in the background and practice dissociating it from the decision mechanism, which is something completely different. Don't even ask yourself what that part thinks.

I'm not sure I agree with this part so much. Given a biased heuristic, reversing stupidity would mean reversing the heuristic. (For example, reversing the availability heuristic would mean judging that a phenomenon is more frequent when examples of it come to mind less easily.) Applying a fudge factor isn't reversing stupidity, because the biases themselves are systematically wrong.

So, given a biased heuristic, I can imagine two ways of dealing with it: you can use other heuristics instead, or you can attempt to correct the bias. I think both ways can be useful in certain circumstances. In particular, correcting the bias should be a useful method as long as two things are true: you understand the bias well enough to correct it successfully; and, once you've corrected the bias, you end up with a useful heuristic.

"I don't like Bob, so things he says are probably wrong" is simply an example of a heuristic that, once de-biased, no longer says anything at all, and is thus useless.

I should find that post by lukeprog

I believe that the grandparent is referring to this one: http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

I'm not sure I agree with this part so much. Given a biased heuristic, reversing stupidity would mean reversing the heuristic. (For example, reversing the availability heuristic would mean judging that a phenomenon is more frequent when examples of it come to mind less easily.) Applying a fudge factor isn't reversing stupidity, because the biases themselves are systematically wrong.

So, given a biased heuristic, I can imagine two ways of dealing with it: you can use other heuristics instead, or you can attempt to correct the bias. I think both ways can be useful in certain circumstances. In particular, correcting the bias should be a useful method as long as two things are true: you understand the bias well enough to correct it successfully; and, once you've corrected the bias, you end up with a useful heuristic.

"I don't like Bob, so things he says are probably wrong" is simply an example of a heuristic that, once de-biased, no longer says anything at all, and is thus useless.

I think we actually mostly agree. I'll see if I can make my points clearer.

The first was that if you notice that what you're actually doing is a lot like "Bob bad, so he wrong", then the better solution is to cut that part of your thinking out and separate it from your decision making, not to try to keep it there but add a fudge factor so that the total algorithm has less of this bias.

The way you carved it, I would suggest "use 'other' heuristics", or correct the bias through excision, not through addition of a cancelling bias. When I said "other heuristics" I would count it as a different heuristic if you took the same heuristic and excised the bias from it.

The second was that even if you could perfectly cancel it, you haven't added any substance. You don't want to congratulate yourself on canceling a bias and then fail to notice that you just hold maxent beliefs.

For me there is always the lurking suspicion that my biggest reason for reading LessWrong is that it's a whole lot of fun.

I think the larger question of rationality is, When is it good for us, and when is it bad for us?

I suffer more from too much rationality than too little. I have a hard time making decisions. I spend too much time thinking about things that other people handle competently without much thought. Rationality to the degree you desire may not be an evolutionary stable strategy - your rationality may provide a net benefit to society, and a net cost to you.

On the level of society, we don't know whether a society of rational personal utility maximizers could out-compete a society of members biased in ways that privileged the society over the individual. Defining "rational" as "rational personal utility" is a more radical step than most people realize.

On the even higher level of FAI, we run into the question of whether rationality is a good thing for God to have. Rationality only makes sense if you have values to maximize. If God had many values, it would probably make the universe a more homogeneous and less interesting place.

It's possible that one can learn the wrong kind of rationality first, but I disagree with the idea that rationality can be a bad thing in general.

The first skill ought to be efficient usage of computational resources. For humans, that means calories (no longer a factor outside our ancestral environment) and time. LessWrong has taught me exceptional methods of distinguishing my exact preferences and mapping out every contour of my own utility function. Thus, I can decide exactly which flavor of ice cream will give me more utilons. This is neat, but useless.

But rationality is more than just thinking. It's thinking about thinking. And thinking about thinking about thinking. And it keeps recursing. In learning the difference in utility between ice cream flavors, I learned never to care about which flavor I will get; the utility saved by not consciously calculating it is worth more than the utility of thinking about it. Before learning about rationality, I had been the sort to whinge and waver over flavors at Coldstone. After learning some rationality, I was even worse. But learning a bit more rationality (the exact right bit, that is) made me perform at peak speed and peak utility. It's a lesson I've learned that carries over into more than just food, has saved me countless hours and utilons, and will probably stay with me until the end of my days.

There are many valleys of bad rationality. Whether they take the form of being just clever enough to choose the 'rational' position in game theory versus the 'super-rational', or just smart enough to catch a memetic autoimmune disease without fully deconverting from your religion, rationality valleys can be formidable traps. Though the valley may be long and hard, the far side is always better than where you started.

[anonymous]

In short: a method of answering questions should be judged not only on its benefits, but on its costs. So, another basic question of rationality is:

Q: When should we stop thinking about a question?

Definitely when:

  • You are only going in circles.
    • You need more data; to get it, you should perform an experiment.
  • You can no longer remember or keep track of the best strategies you have come up with.
  • You cannot judge the difference in value between new strategies and existing strategies.
  • You spend a significant fraction of your time tracking and remembering the strategies you have created.
  • There are better questions to consider.
  • The value of answering the question will diminish greatly if you spend more time trying to optimize it.
    • "It is great that you finished the test and got all the right answers, but the test was over a week ago" -- an extreme example; sometimes years, months, weeks, days, hours, minutes, or seconds count.

It can be a hard question to get right in my experience.

I think you mean, "When is it irrational to study rationality explicitly?"

[anonymous]

We could cover this by a specifically geared "how to learn to think" FAQ or a short sequence.

But I don't know, what I've been missing on LessWrong, what makes me sometimes doubt I'm actually becoming less wrong, is something like a series of "rationality tests": perhaps a written exam, an interactive online test, or learning software. I think it would complement useful projects (like, say, levelling IRL, etc.) and regular debate rather nicely.

[anonymous]

We could cover this by a specifically geared "how to learn to think" FAQ or a short sequence.

I think that's what I'm going to do. I'm going to re-tool this post so that it is immediately useful as a source of basic information, and also provokes discussion of these questions.

Q: Given a question, how should we go about answering it? A: By gathering evidence effectively, and correctly applying reason and intuition.

An important point omitted in the proposed answer: Reduce the question into subquestions stated in primitively testable terms. Try to dispel confusions. Make sure the question even makes sense. For a large class of questions (all classical philosophical and ethical questions, most political ones) starting to gather evidence before the question is reduced is a mistake. This is perhaps the most important idea which can be learned from LW.

Q: How can we effectively gather relevant evidence?

When it is clear what relevant evidence is (any fact whose probability strongly depends on the tested hypothesis) and when a reliable intuitive understanding of probability is at hand, this should be easy. (Of course, it depends on what level of effectiveness you are aiming at.) Most common errors in reasoning don't stem from lack of evidence, but from incorrect intuitive probabilistic analysis. Creationists often know a lot of relevant facts.

Q: We don't have infinite computational resources available, so what now? A: I don't know. (Apply Bayes' rule anyway? Just try to emulate what a hypercomputer would do?)

Apply Bayes' rule anyway. The result will not be perfect and you should be aware of that, but in the majority of situations it's still an improvement over intuitive guesses.
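For what it's worth, the mechanical part of "apply Bayes' rule anyway" fits in a few lines; a minimal sketch in Python, with all the numbers invented purely for illustration:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(H | E) given P(H), P(E | H), and P(E | not H)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Illustration only: a 1% prior, and evidence that is ten times more likely
# if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.01, p_evidence_if_true=0.50, p_evidence_if_false=0.05)
print(round(posterior, 3))  # ~0.092: the evidence moves 1% up to roughly 9%
```

The arithmetic is the easy part; choosing the numbers to plug in is where the work is, as the exchange below makes clear.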

Q: How can we find our biases? A: I don't know. (Read Less Wrong? What about our personal quirks? How can we notice those?)

It's hard to do perfectly, of course. But simply by learning about several standard biases I was able to spot such patterns of reasoning in my thoughts, and I believe I don't commit them as often now as in the past. Personal quirks? Listen to the feedback you get from others.

Q: Once we find a bias, how can we fix it?

Retreat to more formalised reasoning, if possible.

[anonymous]

Apply Bayes' rule anyway. The result will not be perfect and you should be aware of that, but in the majority of situations it's still an improvement over intuitive guesses.

How do you determine the relevant probabilities? What if you're looking for, say, the probability of a nuclear attack occurring anywhere in the world in the next 20 years?

Q: Once we find a bias, how can we fix it?

Retreat to more formalised reasoning, if possible.

Yes, but that doesn't remove the bias. Surely if it's at all possible to remove a bias, that's better than circumventing it through formal reasoning, because formal reasoning is much slower than intuition.

How do you determine the relevant probabilities? What if you're looking for, say, the probability of a nuclear attack occurring anywhere in the world in the next 20 years?

What information are you updating from?

[anonymous]

All information available to me.

prase

Such as? I am probably unable to give you a wholly general prescription for P(X | nuclear war is going to happen) valid for all X; I have no idea what such a prescription would even look like, even if infinite computing power were available, short of classifying all sorts of information and all sorts of hypotheses relevant to a nuclear attack. Of course it would be nice to have some general prescription allowing us to mechanically detect what information is relevant, but I think this is a problem different from Bayesian updating.

[anonymous]

"Apply Bayes' rule anyway" is not a method of reasoning unless we have some way of determining what the numbers are. If we don't have a method for finding the numbers, then we still have work to do before calling Bayes' rule a method of reasoning.

prase

I haven't said we have no way of determining the numbers. I have said that I can't concisely formulate a rule whose domain of definition is the set of all possible information. What you are asking for is basically outlining a large part of the code of a general artificial intelligence. This is out of reach, but it doesn't mean we can't update at all. Some of the probabilities plugged in will almost certainly be generated by intuition, but I don't think a method of reasoning has to remove all arbitrariness to be called one.

[anonymous]

What you are asking for is basically outlining a large part of the code of a general artificial intelligence.

Kind of! I'm asking for the best algorithm for human intelligence we can come up with. I guess that indeed, the phrase "apply Bayes' rule" is significantly better than nothing at all.

If we had a formal and efficient answer to this we would have FAI by now* or determined it impossible!

*Answering the question "Given a question, how should we go about answering it?" seems AI-complete, if not superhuman-AI-complete. People answer questions without doing it the right Bayesian-blessed way all the time, with a bit of luck and brute force.

[anonymous]

Answering the question "Given a question, how should we go about answering it?" seems AI-complete, if not superhuman-AI-complete.

Well, yes, that's kind of the point. Rationality is about finding the answers to all useful questions; we can achieve this by finding good answers to fully general questions, like that one.

Q: Once we find a bias, how can we fix it?

Knowing in which situations we should shut up and multiply should help for a large subset of these problems.