An interesting post. You started with the assumption that formal reasoning is the right way to go and found out that it's not necessarily so. Let me start from the opposite end: the observation that the great majority of people reason all the time by pattern-matching; this is the normal, default, bog-standard way of figuring things out.
You do not need to "retrain" people to think in patterns -- they do so naturally.
Looking at myself, I certainly do think in terms of patterns -- internal maps and structures. Typically I carry a more-or-less coherent map of the subject in my head (with certain areas being fuzzy or incomplete, that's fine) and the map is kinda-spatial. When a new piece of data comes in, I try to fit it into the existing (in my head) structure and see if it's a good fit. If it's not a good fit, it's like a pebble in a shoe -- an irritant and an obvious problem. The problem is fixed either by reinterpreting the data and its implications, or by bending and adjusting the structure so there is a proper place for the new data nugget. Sometimes both happen.
Formal reasoning is atypical for me, that's why I'm not that good at math. I find situations where you have ...
Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes.
Can you explain a bit more why you think that the way people with very high level epistemic rationality process evidence is analogous to how we recognize visual patterns? Do you think these two mental processes are fundamentally using the same algorithms, or just that both are subconscious computations that we don't understand very well?
I agree that a picture of many weak arguments supporting or undermining explicit claims does not capture what humans do: the inferences themselves are much more complex than logical deductions, such that we don't yet know any way of representing the actual objects that are being manipulated. I think this is the mainstream view, certainly in AI now.
I don't know what it means to say that our pattern recognition capabilities are stronger than our logical reasoning; they are two different kinds of cognitive tasks. It seems like saying that we are much better ...
I'd be glad to offer what help I can. Based on other posts of yours, I would definitely practice brevity. This post is over 1000 words long and easily could be condensed to 250 or less.
Per our email exchange, here is the condensed version that uses only your original writing:
..."Our brains' pattern recognition capabilities are far stronger than our ability to reason explicitly. Most people can recognize cats across contexts with little mental exertion. By way of contrast, explicitly constructing a formal algorithm that can consistently cats across contexts requires great scientific ability and cognitive exertion.
Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes. Reasoning plays a role, but a relatively small one. Sufficiently high quality mathematicians don't make their discoveries through reasoning. The mathematical proof is the very last step: you do it to check that your eyes weren't deceiving you, but you know ahead of time that your eyes probably weren't deceiving you.
I have a lot of evidence that this way of thinking is how the most effective people think about the world. I would like to share what I learned. I think that what I've learned is something that lots of people are capable of learning, and that learning it would greatly improve people's effectiveness..."
While I agree that there's value to being able to state the summary of the viewpoint, I can't help but feel that brevity is the wrong approach to take to this subject in particular. If the point is that effective people reason by examples and seeing patterns rather than by manipulating logical objects and functions, then removing the examples and patterns to just leave logical objects and functions is betraying the point!
Somewhat more generally, yes, there is value in telling people things, but they need to be explained if you want to communicate with people who don't already understand them.
This is interesting. I have found that when you are like 16, you often want everything to be super logical and everything that is not feels stupid. And growing up largely means accepting "common sense", which at the end of the day means relying more on pattern recognition. (This is also politically relevant - young radicalism is often about matching everything with a logical sounding ideology, while people when they grow and become more moderate simply care about what typical patterns tend to result in human flourishing more than about ideology.)...
If I understand correctly, Carl correctly estimated Mark Zuckerberg's future net worth as being $100+ million upon meeting him as a freshman at Harvard, before Facebook.
Well, if I understand the post correctly, even as a freshman, Mark apparently had previous experience with owning/running a business, and was deliberately trying to become a tech entrepreneur. Now, given that someone is from a privileged family, is attending school at (almost) the maximally privileged and well-connected institution (at least on the East Coast) for wannabe rich guys, ha...
You have not understood correctly regarding Carl. He claimed, in hindsight, that Zuckerberg's potential could've been distinguished in foresight, but he did not do so.
Thanks for the post, Jonah.
In medical school, I was taught that when you're a novice doctor, you'll make diagnoses and plans using deliberative reasoning, but that experts eventually pattern-match everything.
If that's true, then pattern-matching might arise naturally with experience, or it might be something that's difficult to achieve in many domains at once.
When I read your article, the reasons that I might doubt that you deserve collaborators are:
1) that enthusiastic self-reports of special perceptual-cognitive abilities have a low prior probability
2) ...
What you are describing is my native way of thinking. My mind fits large amounts of information together into an aesthetic whole. It took me a while to figure out that other people don't think this way, and they can't easily just absorb patterns from evidence.
This mode of thinking has been described as Introverted Thinking in Ben Kovitz's obscure psychology wiki about Lenore Thomson's obscure take on Jungian psychology. Some of you are familiar with Jungian functions through MBTI, the Myers-Briggs Type Indicator. Introverted Thinking (abbreviated Ti) is the...
I'd welcome any suggestions for how to find collaborators.
Keep posting the material here. Post to Main. Don't worry about it not being polished enough: you'll get plenty of feedback. Ignore feedback that isn't useful to you.
Some contrary evidence about usefulness of explicit models: http://www.businessinsider.com/elon-musk-first-principles-2015-1
My take is that you need both: some things are understood better "from first principles" (engineering), while others are more suitable for pattern matching (politics).
A new paper may give some support to arguments in this post:
The smart intuitor: Cognitive capacity predicts intuitive rather than deliberate thinking
Cognitive capacity is commonly assumed to predict performance in classic reasoning tasks because people higher in cognitive capacity are believed to be better at deliberately correcting biasing erroneous intuitions. However, recent findings suggest that there can also be a positive correlation between cognitive capacity and correct intuitive thinking. Here we present results from 2 studies that directly con...
Do you really think that this is something that can be taught through writing?
Most intuitive pattern recognition comes through repeated practice, and I think that it might make more sense to create some sort of training regimen/coaching that allows others to have that practice, instead of writing a post about it.
If you did create this training, I'd be incredibly interested in taking it (probably up to about $300 or so, which is admittedly small for this type of thing).
algorithms that people have constructed (within the paradigm of deep learning) are highly nontransparent: nobody's been able to interpret their behavior in intelligible terms.
Not quite true Jonah: http://arxiv.org/pdf/1311.2901.pdf
Does this capture any of what you're talking about? This is my intuitive take away from the post so I want to check if it's not what is intended. An analogy: we know that the lens has flaws and we can learn specific moves to shift the lens a bit so that we can see the flaws more easily. For those with high levels of epistemic rationality, bumping the lens around in just the right ways is, or has become, an automatic process such that they seem to have a magic ability to always catch the flaws right away. We ask them for an algorithm to do that and they poi...
How many bad ideas or ambiguously true ideas do mathematicians have for every good idea they produce? How many people feel "deep certainties" about hypotheses that never pan out? Even when sometimes correct, do their hunches generally do better than chance alone would suggest? I agree with the idea that pattern recognition is important, but think your claims are going too far. My opinion is that successful pattern recognition, even in the hands of the best human experts, relies heavily on explicit reasoning that takes control over the recognition...
Is this what you were referring to in "Is Scott Alexander bad at math?" when you said that being good at math is largely about "aesthetic discernment" rather than "intelligence"? Because if so that seems like an unusual notion of "intelligence", to use it to mean explicit reasoning only and exclude pattern recognition. Like it would seem very odd to say "MIT Mystery Hunt doesn't require much intelligence," even if frequently domain knowledge is more important to spotting its patterns.
Or did you mean somet...
On my return to Caen, for conscience's sake, I verified the result at my leisure.
Why? Because... proofs are needed to persuade other people, I suppose. You don't need proofs and arguments if you are reasoning solipsistically.
Your observation that
the most effective people in the world have a very specific way of thinking. They use their brain's pattern-matching abilities to process the world, rather than using explicit reasoning
is the subject of Malcolm Gladwell's book Blink.
I don't remember Gladwell giving any tips for actually developing one's skills for this type of thinking, but he does have a number of interesting stories and analyses about it. The book also makes the observation that this type of non-explicit reasoning can lead us astray.
I suspect that...
An exploration of the unknown through known first-principles seems to be a good balance between order and chaos.
Oh this is nice. I've also come to realise this over time, in different words, and my mind is extremely tickled by how your formulation puts it on an equal footing with other non-explicit-rationality avenues of thought.
I would love to help you. I am very interested in a passion project right now. And we seem to be classifying similar things as hard-won realisations, though we have very different timelines for different things; talking to you might be all-round interesting for me.
Hi Jonah, this article is very intriguing since I might be going through a similar phase as you. Please add me to any list of collaborators you're drawing up.
This seems valuable—I'm interested in helping (will email).
I want to highlight that "communicating how to do it" might not make sense as a frame. Pattern-matching is closely related to chunking. Ctrl+F yields other people who've mentioned chess, so I'll just point at that and then note that we actually know exactly how to communicate the skill of chunking chessboards: you get the person to practice chess in a certain way. There are of course better and worse ways to do this, but it seems like rather than looking for an insight to communicate you want to look for a learning process and how to make it more efficient by (e.g.) tightening feedback loops.
I have a lot of evidence that this way of thinking is how the most effective people think about the world. Here I'll give two examples. Holden worked under Greg Jensen, the co-CEO of Bridgewater Associates, which is the largest hedge fund in the world.
BW also uses a lot of explicit models: https://www.youtube.com/watch?v=PHe0bXAIuk0
Holden working under Greg is also generally weak evidence about how Greg thinks.
I personally agree with your core thesis that pattern matching is central. I invested a lot of effort into Quantified Self community building and gave press interviews praising the promise of QS. I think at the time I overrated straight data over pattern matching. Today I consider pattern matching much more important. I'm happy to collaborate on developing this line of thought.
I'm wary of whether using the word 'rationality' in this context is useful. Webster defines the word as: 'the quality or state of being agreeable to reason'. Wikipedia says: 'Ratio...
The coolest possible output of a collaboration like this would be some kind of browser-based game you could play that would level up your rationality.
Also, what characteristics/skills does your ideal collaborator have? Maybe what you want to do is find an effective altruist whose work could benefit very strongly from the skills you describe, tutor them in the skills, and, having taught one person, see if you can replicate the most effective bits of the teaching and scale them to a larger audience.
This sounds like an explanation for the old adage: "Go with your gut". If your brain is a lot better at recognizing patterns than it is at drawing conclusions through a chain of reasoning, it seems advisable to trust that which your brain excels at. Something similar is brought up in The Gift of Fear, where the author cites examples where the pattern-recognition signaled danger, but people ignored them because they could not come up with a chain of reasoning to support that conclusion.
...Sufficiently high quality mathematicians don't make their di
What would be the goal of any such collaboration: LessWrong posts, a book, a podcast series? Knowing what you will produce will help you sell yourself to potential collaborators.
Short version (courtesy of Nanashi)
Long version
For most of my life, I believed that epistemic rationality was largely about reasoning carefully about the world. I frequently observed people's intuitions leading them astray. I thought that what differentiated people with high epistemic rationality is Cartesian skepticism: the practice of carefully scrutinizing all of one's beliefs using deductive-style reasoning.
When I met Holden Karnofsky, co-founder of GiveWell, I came to recognize that Holden's general epistemic rationality was much higher than my own. Over the course of years of interaction, I discovered that Holden was not using my style of reasoning. Instead, his beliefs were backed by lots of independent small pieces of evidence, which in aggregate sufficed to instill confidence, even if no individual piece of evidence was compelling by itself. I finally understood this in 2013, and it was a major epiphany for me. I wrote about it in two posts [1], [2].
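To make the "many weak arguments" picture concrete: if the pieces of evidence are roughly independent, their likelihood ratios combine additively in log-odds space, so many individually unconvincing arguments can add up to high confidence. A minimal sketch, with made-up numbers purely for illustration (this is my toy model, not Holden's actual method):

```python
import math

def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent pieces of evidence by adding log-odds.

    Each likelihood ratio is P(evidence | claim true) / P(evidence | claim false).
    Independence is assumed; correlated evidence would be double-counted.
    """
    log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)
    return math.exp(log_odds)

# Ten weak, independent arguments, each only 2:1 in favor of the claim.
weak_evidence = [2.0] * 10
odds = posterior_odds(prior_odds=1.0, likelihood_ratios=weak_evidence)
probability = odds / (1 + odds)
print(f"posterior odds: {odds:.0f}:1, probability: {probability:.3f}")
# -> posterior odds: 1024:1, probability: 0.999
```

No single 2:1 argument is compelling on its own, but ten of them together leave little room for doubt.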
After learning data science, I realized that my "many weak arguments" paradigm was also flawed: I had greatly overestimated the role that reasoning of any sort plays in arriving at true beliefs about the world.
In hindsight, it makes sense. Our brains' pattern recognition capabilities are far stronger than our ability to reason explicitly. Most people can recognize cats across contexts with little mental exertion. By way of contrast, explicitly constructing a formal algorithm that can consistently recognize cats across contexts requires great scientific ability and cognitive exertion. And the best algorithms that people have constructed (within the paradigm of deep learning) are highly nontransparent: nobody's been able to interpret their behavior in intelligible terms.
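To illustrate the nontransparency point: a modern image classifier is not a list of intelligible rules but a stack of learned numeric parameters. A minimal sketch, assuming PyTorch, with a hypothetical toy architecture chosen only to make the point:

```python
import torch
import torch.nn as nn

# A tiny convolutional classifier of the kind used for cat-vs-not-cat tasks.
# Nothing in it encodes "whiskers" or "ears"; its behavior is determined by
# thousands of learned floating-point weights (millions, in real systems),
# which is why such models are hard to interpret in intelligible terms.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # assumes 64x64 input images; 2 classes
)

image_batch = torch.randn(1, 3, 64, 64)  # stand-in for a real photo
logits = model(image_batch)              # "cat" vs "not cat" scores
n_params = sum(p.numel() for p in model.parameters())
print(logits.shape, n_params)            # torch.Size([1, 2]) and ~21k weights
```

Even in this toy version, the "algorithm" lives entirely in the weights, and the weights are found by training rather than written down by anyone.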
Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes. Reasoning plays a role, but a relatively small one. If one has developed the capacity to see in this way, one can construct post hoc explicit arguments for why one believes something, but these arguments aren't how one arrived at the belief.
The great mathematician Henri Poincaré hinted at what I finally learned, over 100 years ago. He described his experience discovering a concrete model of hyperbolic geometry as follows:

"Just at this time I left Caen, where I was then living, to go on a geological excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience's sake, I verified the result at my leisure."
Sufficiently high quality mathematicians don't make their discoveries through reasoning. The mathematical proof is the very last step: you do it to check that your eyes weren't deceiving you, but you know ahead of time that your eyes probably weren't deceiving you. Given that this is true even in math, which is thought of as the most logically rigorous subject, it shouldn't be surprising that the same is true of epistemic rationality across the board.
Learning data science gave me a deep understanding of how to implicitly model the world in statistical terms. I've crossed over into a zone of no longer knowing why I hold my beliefs, in the same way that I don't know how I perceive that a cat is a cat. But I know that it works. It's radically changed my life over a span of mere months. Amongst other things, I finally identified a major blind spot that had underpinned my near total failure to achieve my goals between ages 18 and 28.
I have a lot of evidence that this way of thinking is how the most effective people think about the world. Here I'll give two examples. Holden worked under Greg Jensen, the co-CEO of Bridgewater Associates, which is the largest hedge fund in the world. Carl Shulman is one of the most epistemically rational members of the LW and EA communities. I've had a number of very illuminating conversations with him, and in hindsight, I see that he probably thinks about the world in this way. See Luke Muehlhauser's post Just the facts, ma'am! for hints of this. If I understand correctly, Carl correctly estimated Mark Zuckerberg's future net worth as being $100+ million upon meeting him as a freshman at Harvard, before Facebook.
I would like to share what I learned. I think that what I've learned is something that lots of people are capable of learning, and that learning it would greatly improve people's effectiveness. But communicating the information is very difficult. Abel Prize winner Mikhail Gromov wrote
It took me 10,000+ hours to learn how to "see" patterns in evidence in the way that I can now. Right now, I don't know how to communicate how to do it succinctly. It's too much for me to do as an individual: as far as I know, nobody has ever been able to convey the relevant information to a sizable audience!
In order to succeed, I need collaborators who are open to spending a lot of time thinking carefully about the material, to get to the point of being able to teach others. I'd welcome any suggestions for how to find collaborators.