What you are describing is my native way of thinking. My mind fits large amounts of information together into an aesthetic whole. It took me a while to figure out that other people don't think this way, and that they can't easily just absorb patterns from evidence.
This mode of thinking has been described as Introverted Thinking in Ben Kovitz's obscure psychology wiki about Lenore Thomson's obscure take on Jungian psychology. Some of you are familiar with Jungian functions through MBTI, the Myers-Briggs Type Indicator. Introverted Thinking (abbreviated Ti) is the dominant function of the INTP type.
It will only take a few quotes to illustrate that you are talking about the same thing:
Introverted Thinking (Ti) is the attitude that beneath the complexity of what is manifest (apparent, observed, experienced) there is an underlying unity: a source or essence that emerges and takes form in different ways depending on circumstances. What is manifest is seen as a manifestation of something. From a Ti standpoint, the way to respond to things is in a way that is faithful to that underlying cause or source and helps it emerge fully and complete, without interference from any notion of self. The way to understand that underlying essence is to learn to simultaneously see many relationships within what is manifest, to see every element in relation to every other element, the relationships being the "signature" of the underlying unity. This can only be experienced directly, not second-hand.
Introverted thinking is a form of mental representation in which every input, every variable, every aspect of things is considered simultaneously and holistically to perceive causal, mathematical, and aesthetic order. What you know by Ti, you know with your hands, your eyes, your muscles, even a tingling sensation "downstairs" because you sense that everything fits. Every variable is fair game to vary, every combination of variables worthy of consideration; the only ultimate arbiter is how well the parts form a unified whole rather than a jumble.
Introverted Thinking (Ti) is contrasted with Extraverted Thinking (Te):
From the Te perspective, anything for which you can't give an operational definition in terms of measurement (an "objective test") doesn't exist. The decision criteria are defined not exactly in terms of the things: they're defined in terms of observations of a sort that anyone can do and get the same result. You put the totality of the real-world situation onto your scales, so that all causal factors come into play--both known and unknown. What's accessible to you is the reading on the scale: that and only that is the basis for your decision.
As a dominant function, Te typically leads one to pursue and collect reliable ways of making decisions to get predictable results. The repeatability of a process becomes one of the main criteria for finding it valuable. Repeatable processes are valuable from a Te perspective because they enable you to make agreements with other people, where there is no doubt as to whether each party has fulfilled its part of the agreement. Making and delivering on promises is often how a Te attitude leads one to understand ethics.
Introverted Thinking about language:
From the Ti standpoint, communication is possible only between people who share some common experience of the things that they're talking about. To say something that you can understand, I need to relate it logically to things in your own experience. To show you how far a piece of wood bends, instead of giving a numerical measure (Te), I'd either encourage you to bend a piece of wood yourself, or find some mathematically similar thing that you know about and relate wood-bending to that. Words cannot be defined prior to the reality that they're about; words and criteria defined independently of the reality would be meaningless. The world itself provides a natural set of reference points, arising from the real, causal structure of things. Ultimately, to talk is to say, "I mean *that*."
Introverted Thinking uses language and concepts merely as pointers to patterns in reality that are incredibly more complex than anything that can be described in words. In contrast, Extraverted Thinking is about step-by-step justification according to shared language and criteria. A common failure mode of Extraverted Thinking is King on the Mountain, which I think everyone will instantly recognize.
Introverted Thinking and Extraverted Thinking, along with Extraverted Intuition and Introverted Intuition, are combined to create rationality. Extraverted Intuition provides the idea generation, Introverted Thinking provides pattern recognition, Extraverted Thinking handles justification, and Introverted Intuition avoids bias. According to the Jung-Thomson-Kovitz theory, all of these modes of thinking provide benefits and failure modes. For example, a failure mode of Introverted Thinking is that since it is aesthetic and subjective, it can be very hard for Introverted Thinkers with different inputs to reconcile worldviews if they differ, whereas Extraverted Thinkers could slowly hammer out agreement step-by-step.
LessWrong seems mostly dominated by INTJs, who have Introverted Intuition and Extraverted Thinking. They are mostly focused on justification and bias. These are important skills, but Introverted Thinking is important for marshaling the priors of the totality of your experience.
Continuing a bit…
It’s truly strange seeing you say something like “Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes.” I already compulsively do the thing you’re talking about training yourself to do! I can’t stop seeing patterns. I don’t claim that the patterns I see are always true, just that it’s really easy for me to see them.
For me, thinking is like a gale wind carrying puzzle pieces that dance in the air and a...
Short version (courtesy of Nanashi)
Long version
For most of my life, I believed that epistemic rationality was largely about reasoning carefully about the world. I frequently observed people's intuitions leading them astray. I thought that what differentiated people with high epistemic rationality is Cartesian skepticism: the practice of carefully scrutinizing all of one's beliefs using deductive-style reasoning.
When I met Holden Karnofsky, co-founder of GiveWell, I came to recognize that Holden's general epistemic rationality was much higher than my own. Over the course of years of interaction, I discovered that Holden was not using my style of reasoning. Instead, his beliefs were backed by lots of independent small pieces of evidence, which in aggregate sufficed to instill confidence, even if no individual piece of evidence was compelling by itself. I finally understood this in 2013, and it was a major epiphany for me. I wrote about it in two posts [1], [2].
After learning data science, I realized that my "many weak arguments" paradigm was also flawed: I had greatly overestimated the role that reasoning of any sort plays in arriving at true beliefs about the world.
In hindsight, it makes sense. Our brains' pattern recognition capabilities are far stronger than our ability to reason explicitly. Most people can recognize cats across contexts with little mental exertion. By way of contrast, explicitly constructing a formal algorithm that can consistently recognize cats across contexts requires great scientific ability and cognitive exertion. And the best algorithms that people have constructed (within the paradigm of deep learning) are highly nontransparent: nobody's been able to interpret their behavior in intelligible terms.
Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes. Reasoning plays a role, but a relatively small one. If one has developed the capacity to see in this way, one can construct post hoc explicit arguments for why one believes something, but these arguments aren't how one arrived at the belief.
The great mathematician Henri Poincaré hinted at what I finally learned, over 100 years ago. He described his experience discovering a concrete model of hyperbolic geometry as follows:
Sufficiently high quality mathematicians don't make their discoveries through reasoning. The mathematical proof is the very last step: you do it to check that your eyes weren't deceiving you, but you know ahead of time that your eyes probably weren't deceiving you. Given that this is true even in math, which is thought of as the most logically rigorous subject, it shouldn't be surprising that the same is true of epistemic rationality across the board.
Learning data science gave me a deep understanding of how to implicitly model the world in statistical terms. I've crossed over into a zone of no longer knowing why I hold my beliefs, in the same way that I don't know how I perceive that a cat is a cat. But I know that it works. It's radically changed my life over a span of mere months. Amongst other things, I finally identified a major blind spot that had underpinned my near total failure to achieve my goals between ages 18 and 28.
I have a lot of evidence that this way of thinking is how the most effective people think about the world. Here I'll give two examples. First, Holden worked under Greg Jensen, the co-CEO of Bridgewater Associates, which is the largest hedge fund in the world. Second, Carl Shulman is one of the most epistemically rational members of the LW and EA communities. I've had a number of very illuminating conversations with him, and in hindsight, I see that he probably thinks about the world in this way. See Luke Muehlhauser's post Just the facts, ma'am! for hints of this. If I understand correctly, Carl correctly estimated Mark Zuckerberg's future net worth as being $100+ million upon meeting him as a freshman at Harvard, before Facebook existed.
I would like to share what I learned. I think that what I've learned is something that lots of people are capable of learning, and that learning it would greatly improve people's effectiveness. But communicating the information is very difficult. Abel Prize winner Mikhail Gromov wrote
It took me 10,000+ hours to learn how to "see" patterns in evidence in the way that I can now. Right now, I don't know how to communicate how to do it succinctly. It's too much for me to do as an individual: as far as I know, nobody has ever been able to convey the relevant information to a sizable audience!
In order to succeed, I need collaborators who are open to spending a lot of time thinking carefully about the material, to get to the point of being able to teach others. I'd welcome any suggestions for how to find collaborators.