Dr_Manhattan comments on Rationality is about pattern recognition, not reasoning - LessWrong

Post author: JonahSinick 26 May 2015 07:23PM


Comments (82)


Comment author: Dr_Manhattan 27 May 2015 04:47:45PM 2 points [-]

Some contrary evidence about usefulness of explicit models: http://www.businessinsider.com/elon-musk-first-principles-2015-1

My take is that you need both, some things are understood better "from first principles" (engineering) others are more suitable for pattern matching (politics).

Comment author: JonahSinick 27 May 2015 06:41:39PM 1 point [-]

Yes, as I say in another comment, my sense had been that what works best is 50% intuition and 50% explicit reasoning, and now I think it's more like 95% vs 5%. If you're spending all of your time thinking, that still leaves roughly an hour a day for explicit reasoning, which is substantially more than usual.
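A quick sanity check of the "roughly an hour a day" figure, assuming about 16 waking hours per day (my assumption, not stated in the comment):

```python
# Back-of-the-envelope check of the 95%/5% split mentioned above.
# The 16 waking hours per day is an assumed figure for illustration.
waking_hours_per_day = 16
explicit_fraction = 0.05  # the 5% allotted to explicit reasoning

explicit_minutes = waking_hours_per_day * explicit_fraction * 60
print(explicit_minutes)  # 48.0 -- i.e. roughly an hour a day
```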

Comment author: btrettel 29 May 2015 01:37:46AM 0 points [-]

I think there might be some confusion over terms here. I don't think "pattern matching" is the best way to phrase this.

Musk seems to be arguing for "rule learning" (figuring out the underlying rule) as opposed to "example learning" (interpolating to the nearest example in your collection). In the book Make it Stick, the authors mention that rule learners tend to be better learners. (These terms come from the psychological literature.)

I don't think this observation is incompatible with the importance of recognizing patterns. You need to "pattern match" which rule to invoke. You also need to recognize the pattern that is the rule in the first place. Recognizing which examples to use could be pattern matching too, which is why I don't think the term is right.

In the same book mentioned previously, the authors write about Kahneman's systems 1 and 2, and I got the impression that mastery often is moving things from system 2 (more careful reasoning) to system 1 (automatic pattern matching, which might simply be precomputed). Here's an example: Vaniver suggested to me before that (if I recall correctly) when playing chess, someone might not explicitly consider a certain number of moves; their brain just has a map that goes from the current state of the board and other information to their next move. Developing this ability requires recognizing the right patterns in the game, which could come from simply having a large library of examples to interpolate from, or whatnot. This is precisely what I thought of when I read that it took (the famous) 10,000 hours for JonahSinick to see the patterns.

(To be fair, you do need both, but it seems that if you can develop good rules, you should use them. Also, developing accurate intuition is useful, whether it uses explicit rules or not.)

Comment author: ChristianKl 27 May 2015 04:59:34PM 0 points [-]

Musk is very interesting in this regard. He didn't start SpaceX and Tesla because he reasoned himself into those projects having a high chance of commercial success.

He chose them because he believed in those goals. He's driven by passion towards those goals.

Comment author: Dr_Manhattan 27 May 2015 07:20:59PM 0 points [-]

Even if I agree with you on the goals (I can claim he used meta-rationality here, in the sense that someone should try to make humans an interplanetary species, even if he thought his chance of success was less than 50%), a lot of the thinking that made him arrive at SpaceX seemed to be "one can actually do this way cheaper than the currently accepted standards, based on cost of materials etc."

Comment author: ChristianKl 27 May 2015 11:46:06PM 0 points [-]

I don't think Jonah or I argue that you should never make calculations. Musk did make many decisions on that path, and from the outside it's hard to get an overview of what drives which decision.