Short version (courtesy of Nanashi)
Our brains' pattern recognition capabilities are far stronger than our ability to reason explicitly. Most people can recognize cats across contexts with little mental exertion. By way of contrast, explicitly constructing a formal algorithm that can consistently recognize cats across contexts requires great scientific ability and cognitive exertion.
Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes. Reasoning plays a role, but a relatively small one. Sufficiently high quality mathematicians don't make their discoveries through reasoning. The mathematical proof is the very last step: you do it to check that your eyes weren't deceiving you, but you know ahead of time that your eyes probably weren't deceiving you.
I have a lot of evidence that this way of thinking is how the most effective people think about the world. I would like to share what I learned. I think that what I've learned is something that lots of people are capable of learning, and that learning it would greatly improve people's effectiveness. But communicating the information is very difficult.
It took me 10,000+ hours to learn how to "see" patterns in evidence in the way that I can now. Right now, I don't know how to communicate how to do it succinctly. In order to succeed, I need collaborators who are open to spending a lot of time thinking carefully about the material, to get to the point of being able to teach others. I'd welcome any suggestions for how to find collaborators.
Long version
For most of my life, I believed that epistemic rationality was largely about reasoning carefully about the world. I frequently observed people's intuitions leading them astray. I thought that what differentiated people with high epistemic rationality was Cartesian skepticism: the practice of carefully scrutinizing all of one's beliefs using deductive-style reasoning.
When I met Holden Karnofsky, co-founder of GiveWell, I came to recognize that Holden's general epistemic rationality was much higher than my own. Over the course of years of interaction, I discovered that Holden was not using my style of reasoning. Instead, his beliefs were backed by lots of independent small pieces of evidence, which in aggregate sufficed to instill confidence, even if no individual piece of evidence was compelling by itself. I finally understood this in 2013, and it was a major epiphany for me. I wrote about it in two posts [1], [2].
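To make the "many weak arguments" idea concrete: under the idealized assumption that the pieces of evidence are independent, Bayesian updates combine multiplicatively (additively in log-odds), so many individually unimpressive updates can compound into strong confidence. Here is a minimal sketch in Python, with likelihood ratios made up purely for illustration:

```python
import math

# Hypothetical numbers, chosen purely for illustration: ten independent
# pieces of evidence, each only weakly favoring a hypothesis
# (likelihood ratio 1.5:1), starting from even (1:1) prior odds.
prior_odds = 1.0
likelihood_ratios = [1.5] * 10

# Under idealized independence, evidence combines multiplicatively
# (equivalently, additively in log-odds).
posterior_odds = prior_odds * math.prod(likelihood_ratios)
posterior_probability = posterior_odds / (1 + posterior_odds)

print(f"posterior odds: {posterior_odds:.1f}:1")              # ~57.7:1
print(f"posterior probability: {posterior_probability:.2f}")  # ~0.98
```

No single 1.5:1 update is compelling on its own, but ten of them together take you from 50% to about 98%. Real evidence is rarely fully independent, so this overstates the effect, but the qualitative point stands.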
After learning data science, I realized that my "many weak arguments" paradigm was also flawed: I had greatly overestimated the role that reasoning of any sort plays in arriving at true beliefs about the world.
In hindsight, it makes sense. Our brains' pattern recognition capabilities are far stronger than our ability to reason explicitly. Most people can recognize cats across contexts with little mental exertion. By way of contrast, explicitly constructing a formal algorithm that can consistently recognize cats across contexts requires great scientific ability and cognitive exertion. And the best algorithms that people have constructed (within the paradigm of deep learning) are highly nontransparent: nobody's been able to interpret their behavior in intelligible terms.
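As a concrete illustration of that nontransparency, here is a minimal sketch of the kind of deep-learning classifier alluded to above (assuming PyTorch; the architecture and sizes are invented for illustration and don't correspond to any particular published model). The "algorithm" it implements is nothing but a large pile of learned numerical weights, with no legible rules anywhere:

```python
import torch
import torch.nn as nn

# A toy convolutional classifier for "cat vs. not-cat". The sizes here
# are made up; real systems are vastly larger and no more interpretable.
class TinyCatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # After two 2x pools, a 3x64x64 image becomes 32x16x16 features.
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):  # x: a batch of 3x64x64 images
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

model = TinyCatClassifier()
print(sum(p.numel() for p in model.parameters()), "opaque parameters")
```

Even this toy has tens of thousands of parameters; inspecting them tells you essentially nothing about "what a cat is," which is the sense in which such algorithms are nontransparent.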
Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes. Reasoning plays a role, but a relatively small one. If one has developed the capacity to see in this way, one can construct post hoc explicit arguments for why one believes something, but these arguments aren't how one arrived at the belief.
The great mathematician Henri Poincaré hinted at what I finally learned, over 100 years ago. He described his experience discovering a concrete model of hyperbolic geometry as follows:
I left Caen, where I was living, to go on a geological excursion under the auspices of the School of Mines. The incidents of the travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go to some place or other. At the moment when I put my foot on the step, the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience' sake, I verified the result at my leisure.
Sufficiently high quality mathematicians don't make their discoveries through reasoning. The mathematical proof is the very last step: you do it to check that your eyes weren't deceiving you, but you know ahead of time that your eyes probably weren't deceiving you. Given that this is true even in math, which is thought of as the most logically rigorous subject, it shouldn't be surprising that the same is true of epistemic rationality across the board.
Learning data science gave me a deep understanding of how to implicitly model the world in statistical terms. I've crossed over into a zone of no longer knowing why I hold my beliefs, in the same way that I don't know how I perceive that a cat is a cat. But I know that it works. It's radically changed my life over a span of mere months. Amongst other things, I finally identified a major blind spot that had underpinned my near total failure to achieve my goals between ages 18 and 28.
I have a lot of evidence that this way of thinking is how the most effective people think about the world. Here I'll give two examples. Holden worked under Greg Jensen, the co-CEO of Bridgewater Associates, the largest hedge fund in the world. Carl Shulman is one of the most epistemically rational members of the LW and EA communities. I've had a number of very illuminating conversations with him, and in hindsight, I see that he probably thinks about the world in this way. See Luke Muehlhauser's post Just the facts, ma'am! for hints of this. If I understand correctly, Carl correctly estimated Mark Zuckerberg's future net worth at $100+ million upon meeting him as a freshman at Harvard, before Facebook.
I would like to share what I learned. I think that what I've learned is something that lots of people are capable of learning, and that learning it would greatly improve people's effectiveness. But communicating the information is very difficult. Abel Prize winner Mikhail Gromov wrote:
We are all fascinated with structural patterns: periodicity of a musical tune, a symmetry of an ornament, self-similarity of computer images of fractals. And the structures already prepared within ourselves are the most fascinating of all. Alas, most of them are hidden from ourselves. When we can put these structures-within-structures into words, they become mathematics. They are abominably difficult to express and to make others understand.
It took me 10,000+ hours to learn how to "see" patterns in evidence in the way that I can now. Right now, I don't know how to communicate how to do it succinctly. It's too much for me to do as an individual: as far as I know, nobody has ever been able to convey the relevant information to a sizable audience!
In order to succeed, I need collaborators who are open to spending a lot of time thinking carefully about the material, to get to the point of being able to teach others. I'd welcome any suggestions for how to find collaborators.
Continuing a bit…
It’s truly strange seeing you say something like “Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes.” I already compulsively do the thing you’re talking about training yourself to do! I can’t stop seeing patterns. I don’t claim that the patterns I see are always true, just that it’s really easy for me to see them.
For me, thinking is like a gale wind carrying puzzle pieces that dance in the air and assemble themselves in front of me in gigantic structures, without any intervention by me. I do not experience this as an “ability” that I could “train”, because it doesn’t feel like there is any sort of “me” that is doing it: I am merely the passive observer. “Training” pattern recognition sounds as strange to me as training vision itself: all I have to do is open my eyes, and it happens. Apparently it isn’t that way for everyone?
The only ways I’ve discovered to train my pattern recognition are to feed myself more information of higher quality (because garbage in, garbage out), and to train my attention. Once I learn to notice something, I will start to compulsively see patterns in it. For someone who isn’t already compulsively maxing out their pattern recognition, maybe it’s trainable.
Another example: my brain is often lining people up in rows of 3 or 4 according to some collection of traits. There might be “something” where Alice has more of it than Bob, and Bob has more of it than Carol. I see them standing next to each other, kind of like pieces on a chessboard. Basically, I think what my brain is doing is some kind of factor analysis: it identifies unnamed dimensions behind people’s personalities and uses them to make predictions. I’m pretty sure that not everyone is constantly doing this, but I could be wrong.
Perhaps someone smarter than me might be able to visualize a larger number of people in multiple dimensions in people-space. That would be pretty cool.
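For readers who want the analogy made concrete, here is a minimal sketch of that kind of latent-dimension analysis, using scikit-learn’s FactorAnalysis on random stand-in data (the people, traits, and numbers are all invented; real input would be actual observations of people):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Stand-in data: 8 people rated on 6 observed traits. Random numbers
# here purely so the sketch runs; substitute real observations.
rng = np.random.default_rng(0)
traits = rng.normal(size=(8, 6))

# Reduce the 6 observed traits to 2 latent, unnamed dimensions.
fa = FactorAnalysis(n_components=2, random_state=0)
positions = fa.fit_transform(traits)  # each person's coordinates in "people-space"

# Ordering people along one latent dimension is the
# "Alice has more of it than Bob, who has more than Carol" intuition.
order = np.argsort(positions[:, 0])
print("people ranked along factor 1:", order)
```

The factors come out unnamed, just as the felt “something” does; naming them is a separate, optional step.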
On a trivial level, everyone can do pattern-recognition to some degree, merely by virtue of being a human with general intelligence. Yet some people can synthesize larger amounts of information collected over a longer period of time, update their synthesis faster and more frequently, and can draw qualitatively different sorts of connections.
I think that’s what you are getting at when you talk about pattern recognition being important for epistemic rationality. Pattern recognition is like a mental muscle: some people have it stronger, some people have different types of muscles, and it’s probably trainable. There is only one sort of deduction, but perhaps there are many approaches to induction.
Luke’s description of Carl Shulman reminds me of Ben Kovitz’s description of Introverted Thinking as constantly writing and rewriting a book. When you ask Carl Shulman a question on AI, and he starts giving you facts instead of a straight answer, he is revealing part of his book.
“Many weak arguments” is not how this feels from the inside. From the inside, it all feels like one argument. Except the thing you are hearing from Carl Shulman is really only the tip of the iceberg because he cannot talk fast enough. His real answer to your question involves the totality of his knowledge of AI, or perhaps the totality of the contents of his brain.
For another example of taking arguments in totality vs. in isolation, see King On The Mountain, which describes an immature form of Extraverted Thinking.
Some of the failure modes of Introverted Thinking involve seeing imaginary patterns, dealing with corrupted input, or having aesthetic biases (an aesthetic bias is when you are biased towards an explanation that looks neat or harmonious). Communication is also hard, but your true arguments would take a book to describe, if they could even be put into words at all.