If you're right, then in chess it takes years and years of domain-specific practice to get pattern-recognition skills prepared well enough that scrupulous thought is not required when evaluating moves. That doesn't seem like an argument against the importance of scrupulous thought to me; it seems like the opposite. Scrupulous thought is very hard to avoid relying on.
I think you're wrong, however. I think once you reach a certain level of familiarity with a subject, the distinction between pattern recognition and scrupulous reasoning itself breaks down. I don't think chess experts use only the raw processing power of their subconscious minds when evaluating the board; I think they alternate between making bottom-up assessments and top-down judgments. The accounts given in the neurology books are reactions to the popular perception that reasoning abilities are all that matter in chess, but if they've given you the impression that reasoning isn't important in chess, then I feel they may have gone too far in emphasizing their point. Expert chess players certainly feel like they're doing something important with their conscious minds. They regularly give narrative descriptions of their games. I acknowledge that explicit thought is not all there is to playing chess, but I'm not prepared to say experts' accounts of their own thoughts are just egoistic delusions, or anything like that.
I suppose one point I'm trying to make here is that biased, stupid thought and genius, insightful thought feel the same from the inside. And I think even geniuses have biased, stupid thoughts often, even within their fields of expertise, so the importance of rigor should not be downplayed even for them. Genius isn't a quality for avoiding bad thoughts; it's a quality that makes someone capable of having a few good thoughts in addition to all their other bad ones. When genius is paired with good filters, it produces excellence regularly. Without good filters, it's much less reliable.
Finally, when you're dealing with theories about the universe, the situation is different than when dealing with strategy games. You can't make a dumb subargument and then a smart subargument and have the two combine to produce a moderately valuable hypothesis. If you start driving down the wrong street, correctly following the rest of a list of directions will not help you. Rigor is important throughout every step of the entire process. No mistake can lead to success without first being undone (or at least almost none can; there are always exceptions).
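To put rough numbers on this point, here's a toy illustration (my own, with made-up figures, not from the original discussion): if a hypothesis rests on a chain of inferences that all have to hold, even a high per-step reliability decays quickly as the chain grows.

```python
# Toy model: reliability of a conjunctive chain of inferences,
# where every step must be correct for the conclusion to be correct.

def chain_reliability(step_reliability: float, n_steps: int) -> float:
    """Probability that all n independent steps are correct."""
    return step_reliability ** n_steps

print(round(chain_reliability(0.9, 1), 4))   # 0.9
print(round(chain_reliability(0.9, 10), 4))  # 0.3487
```

Ten steps that are each 90% reliable give a conclusion that is right only about a third of the time, which is why one early wrong turn can't be rescued by flawless later steps.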
I think even geniuses have biased stupid thoughts often, even within their fields of expertise, and so the importance of rigor should not be downplayed even for them.
To use the chess analogy once more: this seems to conflict with the fact that in chess, top grandmasters' intuitions are almost always correct (and the rare exceptions almost always involve some absurd-looking move that only gets found after the fact through post-game computer analysis). Quite often, you'll see a chess author touting the importance of "quiet judgment" instead of ...
Short version (courtesy of Nanashi)
Long version
For most of my life, I believed that epistemic rationality was largely about reasoning carefully about the world. I frequently observed people's intuitions leading them astray. I thought that what differentiated people with high epistemic rationality was Cartesian skepticism: the practice of carefully scrutinizing all of one's beliefs using deductive-style reasoning.
When I met Holden Karnofsky, co-founder of GiveWell, I came to recognize that Holden's general epistemic rationality was much higher than my own. Over the course of years of interaction, I discovered that Holden was not using my style of reasoning. Instead, his beliefs were backed by lots of independent small pieces of evidence, which in aggregate sufficed to instill confidence, even if no individual piece of evidence was compelling by itself. I finally understood this in 2013, and it was a major epiphany for me. I wrote about it in two posts [1], [2].
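The "many weak arguments" idea can be sketched in Bayesian terms (my own illustration, with hypothetical numbers): independent pieces of evidence multiply their likelihood ratios, so many pieces that are individually unconvincing can still add up to strong confidence.

```python
# Hypothetical numbers: each piece of evidence is only 1.5x more likely
# under the hypothesis than under its negation -- weak on its own.
prior_odds = 1.0                  # start at even odds
likelihood_ratios = [1.5] * 12    # twelve weak, independent pieces

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr          # independent evidence multiplies odds

posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))   # 0.992
```

No single 1.5x likelihood ratio would justify confidence, but twelve of them push even odds past 99%, provided the pieces really are independent.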
After learning data science, I realized that my "many weak arguments" paradigm was also flawed: I had greatly overestimated the role that reasoning of any sort plays in arriving at true beliefs about the world.
In hindsight, it makes sense. Our brains' pattern-recognition capabilities are far stronger than our capacity for explicit reasoning. Most people can recognize cats across contexts with little mental exertion. By contrast, explicitly constructing a formal algorithm that can consistently recognize cats across contexts requires great scientific ability and cognitive exertion. And the best algorithms that people have constructed (within the paradigm of deep learning) are highly nontransparent: nobody has been able to interpret their behavior in intelligible terms.
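A tiny sketch of this contrast (my own toy example, nothing like a real cat classifier): even the simplest trained classifier stores what it has "learned" as a handful of opaque numbers, while a human would describe the same pattern in words effortlessly.

```python
import numpy as np

# Two synthetic clusters in 2D: class 0 near (-2,-2), class 1 near (2,2).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Train logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
print(w, b)    # the model's entire "understanding": three raw numbers
print(acc)     # yet it classifies the toy data almost perfectly
```

The point scales badly in the other direction: a deep network's "understanding" is millions of such numbers, which is why nobody can read off what it has learned.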
Very high level epistemic rationality is about retraining one's brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes. Reasoning plays a role, but a relatively small one. If one has developed the capacity to see in this way, one can construct post hoc explicit arguments for why one believes something, but these arguments aren't how one arrived at the belief.
The great mathematician Henri Poincaré hinted, over 100 years ago, at what I finally learned. He described his experience discovering a concrete model of hyperbolic geometry as follows:
Sufficiently high-quality mathematicians don't make their discoveries through reasoning. The mathematical proof is the very last step: you do it to check that your eyes weren't deceiving you, but you know ahead of time that your eyes probably weren't deceiving you. Given that this is true even in math, which is thought of as the most logically rigorous subject, it shouldn't be surprising that the same is true of epistemic rationality across the board.
Learning data science gave me a deep understanding of how to implicitly model the world in statistical terms. I've crossed over into a zone of no longer knowing why I hold my beliefs, in the same way that I don't know how I perceive that a cat is a cat. But I know that it works. It's radically changed my life over a span of mere months. Among other things, I finally identified a major blind spot that had underpinned my near-total failure to achieve my goals between ages 18 and 28.
I have a lot of evidence that this way of thinking is how the most effective people think about the world. Here I'll give two examples. Holden worked under Greg Jensen, the co-CEO of Bridgewater Associates, which is the largest hedge fund in the world. Carl Shulman is one of the most epistemically rational members of the LW and EA communities. I've had a number of very illuminating conversations with him, and in hindsight, I see that he probably thinks about the world in this way. See Luke Muehlhauser's post Just the facts, ma'am! for hints of this. If I understand correctly, Carl correctly estimated Mark Zuckerberg's future net worth as being $100+ million upon meeting him as a freshman at Harvard, before Facebook.
I would like to share what I learned. I think that what I've learned is something that lots of people are capable of learning, and that learning it would greatly improve people's effectiveness. But communicating the information is very difficult. Abel Prize winner Mikhail Gromov wrote
It took me 10,000+ hours to learn how to "see" patterns in evidence in the way that I can now. Right now, I don't know how to communicate how to do it succinctly. It's too much for me to do as an individual: as far as I know, nobody has ever been able to convey the relevant information to a sizable audience!
In order to succeed, I need collaborators who are open to spending a lot of time thinking carefully about the material, to get to the point of being able to teach others. I'd welcome any suggestions for how to find collaborators.