A few years back, my great-grandmother died, in her nineties, after a long, slow, and cruel disintegration. I never knew her as a person, but in my distant childhood, she cooked for her family; I remember her gefilte fish, and her face, and that she was kind to me. At her funeral, my grand-uncle, who had taken care of her for years, spoke. He said, choking back tears, that God had called back his mother piece by piece: her memory, and her speech, and then finally her smile; and that when God finally took her smile, he knew it wouldn’t be long before she died, because it meant that she was almost entirely gone.
I heard this and was puzzled, because it was an unthinkably horrible thing to happen to anyone, and therefore I would not have expected my grand-uncle to attribute it to God. Usually, a Jew would somehow just-not-think-about the logical implication that God had permitted a tragedy. According to Jewish theology, God continually sustains the universe and chooses every event in it; but ordinarily, drawing logical implications from this belief is reserved for happier occasions. By saying “God did it!” only when you’ve been blessed with a baby girl, and just-not-thinking “God did it!” for miscarriages and stillbirths and crib deaths, you can build up quite a lopsided picture of your God’s benevolent personality.
Hence I was surprised to hear my grand-uncle attributing the slow disintegration of his mother to a deliberate, strategically planned act of God. It violated the rules of religious self-deception as I understood them.
If I had noticed my own confusion, I could have made a successful surprising prediction. Not long afterward, my grand-uncle left the Jewish religion. (The only member of my extended family besides myself to do so, as far as I know.)
Modern Orthodox Judaism is like no other religion I have ever heard of, and I don’t know how to describe it to anyone who hasn’t been forced to study Mishna and Gemara. There is a tradition of questioning, but the kind of questioning . . . It would not be at all surprising to hear a rabbi, in his weekly sermon, point out the conflict between the seven days of creation and the 13.7 billion years since the Big Bang—because he thought he had a really clever explanation for it, involving three other Biblical references, a Midrash, and a half-understood article in Scientific American. In Orthodox Judaism you’re allowed to notice inconsistencies and contradictions, but only for purposes of explaining them away, and whoever comes up with the most complicated explanation gets a prize.
There is a tradition of inquiry. But you only attack targets for purposes of defending them. You only attack targets you know you can defend.
In Modern Orthodox Judaism I have not heard much emphasis on the virtues of blind faith. You’re allowed to doubt. You’re just not allowed to successfully doubt.
I expect that the vast majority of educated Orthodox Jews have questioned their faith at some point in their lives. But the questioning probably went something like this: “According to the skeptics, the Torah says that the universe was created in seven days, which is not scientifically accurate. But would the original tribespeople of Israel, gathered at Mount Sinai, have been able to understand the scientific truth, even if it had been presented to them? Did they even have a word for ‘billion’? It’s easier to see the seven-days story as a metaphor—first God created light, which represents the Big Bang . . .”
Is this the weakest point at which to attack one’s own Judaism? Read a bit further on in the Torah, and you can find God killing the first-born male children of Egypt to convince an unelected Pharaoh to release slaves who logically could have been teleported out of the country. An Orthodox Jew is most certainly familiar with this episode, because they are supposed to read through the entire Torah in synagogue once per year, and this event has an associated major holiday. The name “Passover” (“Pesach”) comes from God passing over the Jewish households while killing every male firstborn in Egypt.
Modern Orthodox Jews are, by and large, kind and civilized people; far more civilized than the several editors of the Old Testament. Even the old rabbis were more civilized. There’s a ritual in the Seder where you take ten drops of wine from your cup, one drop for each of the Ten Plagues, to emphasize the suffering of the Egyptians. (Of course, you’re supposed to be sympathetic to the suffering of the Egyptians, but not so sympathetic that you stand up and say, “This is not right! It is wrong to do such a thing!”) It shows an interesting contrast—the rabbis were sufficiently kinder than the compilers of the Old Testament that they saw the harshness of the Plagues. But Science was weaker in those days, and so rabbis could ponder the more unpleasant aspects of Scripture without fearing that it would break their faith entirely.
You don’t even ask whether the incident reflects poorly on God, so there’s no need to quickly blurt out “The ways of God are mysterious!” or “We’re not wise enough to question God’s decisions!” or “Murdering babies is okay when God does it!” That part of the question is just-not-thought-about.
The reason that educated religious people stay religious, I suspect, is that when they doubt, they are subconsciously very careful to attack their own beliefs only at the strongest points—places where they know they can defend. Moreover, places where rehearsing the standard defense will feel strengthening.
It probably feels really good, for example, to rehearse one’s prescripted defense for “Doesn’t Science say that the universe is just meaningless atoms bopping around?” because it confirms the meaning of the universe and how it flows from God, etc. Much more comfortable to think about than an illiterate Egyptian mother wailing over the crib of her slaughtered son. Anyone who spontaneously thinks about the latter, when questioning their faith in Judaism, is really questioning it, and is probably not going to stay Jewish much longer.
My point here is not just to beat up on Orthodox Judaism. I’m sure that there’s some reply or other for the Slaying of the Firstborn, and probably a dozen of them. My point is that, when it comes to spontaneous self-questioning, one is much more likely to spontaneously self-attack strong points with comforting replies to rehearse, than to spontaneously self-attack the weakest, most vulnerable points. Similarly, one is likely to stop at the first reply and be comforted, rather than further criticizing the reply. A better title than “Avoiding Your Belief’s Real Weak Points” would be “Not Spontaneously Thinking About Your Belief’s Most Painful Weaknesses.”
More than anything, the grip of religion is sustained by people just-not-thinking-about the real weak points of their religion. I don’t think this is a matter of training, but a matter of instinct. People don’t think about the real weak points of their beliefs for the same reason they don’t touch an oven’s red-hot burners; it’s painful.
To do better: When you’re doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Don’t rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind. Punch yourself in the solar plexus. Stick a knife in your heart, and wiggle to widen the hole. In the face of the pain, rehearse only this:[1]
What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.
[1] Eugene T. Gendlin, Focusing (Bantam Books, 1982).
First off: this is usually considered a very bad sign, and contrary to community norms and/or ethics. Many people would (and will) downvote your comment solely because of the quoted paragraph. My first impulse was to do so, but I'm overriding it in favor of this response, and in light of the rest of your comment, which shows a habit of reasoning that should be strongly encouraged, regardless of the other things I'll get to in a minute.
So, first, before any productive discussion of this can be done (edit: from my end, at least), I have to be reasonably confident that you've read and understood "What Do We Mean By "Rationality"?", which establishes as two separate functions what I believe you're referring to when you say "Rationality as a (near-)universal theory on decision-making."
Alright. Now, assuming you understand the point of that post and the content of "rationality", could you help me pinpoint your exact question? To me, "How has Rationality confronted its most painful weaknesses?" and "What are rationality's weak points?" are incoherent questions - they seem Mysterious - in the same way the same questions would be if asked of thinking, of existence, of souls, of the Peano Axioms, or of basically anything else that needs more context before the question can even be computed.
If you're trying to question the usefulness of the function "be instrumentally rational", then the most salient weakness is that it is theoretically possible that a human could attempt to be instrumentally rational, end up applying it inexactly or inefficiently, waste time, not recurse to a high enough stack, or a slew of other mistakes.
The second most important is that sometimes, even a human properly applying the principles of instrumental rationality will find that their values are more easily fulfilled by doing something else and not applying instrumental rationality. At that point, because "be instrumentally rational" is a polymorphic function, the next instrumentally rational thing to do is to stop being instrumentally rational, since that is what maximizes "winning", which (as described in the first link above) is what instrumental rationality strives for. In this case, if you were already doing the other, value-maximizing thing, using instrumental rationality in the first place was pure opportunity cost: it consumed time, mental energy, and possibly other resources in a quest to figure out that you shouldn't have bothered.
However, if you look at the odds using the tools at your disposal, it seems extremely unlikely that being rational is less effective at achieving your values than other strategies, since maximizing expected utility, over all possible strategies in all possible worlds, is by definition the strategy with the highest expected payoff. This sounds like a trivial theorem that should follow from the standard probability axioms, but I don't recall seeing that particular statement formalized.
By the same simple probability axioms, it is even more unlikely that whatever you're already doing is better than applying instrumental rationality and letting it identify whichever strategy, rational or not, is actually optimal for your values; the expected utility of that procedure dominates the small chance that you'd stumble onto the optimal non-rational strategy unaided.
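The expected-utility argument above can be sketched as a toy calculation. Everything in this snippet (the strategies, world-states, probabilities, and payoffs) is invented purely for illustration; it just shows mechanically what "pick the strategy with the highest expected utility" means:

```python
# Toy sketch of expected-utility maximization. All names and numbers invented.

# Assumed probability distribution over possible world-states.
world_probs = {"boom": 0.25, "bust": 0.75}

# utility[strategy][world] = payoff if you follow that strategy in that world.
utility = {
    "plan_carefully": {"boom": 10, "bust": 4},
    "wing_it":        {"boom": 12, "bust": 1},
}

def expected_utility(strategy):
    """Probability-weighted average payoff of following a strategy."""
    return sum(world_probs[w] * utility[strategy][w] for w in world_probs)

# The instrumentally rational choice is the argmax over expected utility,
# even though "wing_it" has the single highest payoff in one world.
best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # plan_carefully 5.5
```

Note that "wing_it" wins big in the "boom" world (12 vs. 10), yet "plan_carefully" has the higher expectation (5.5 vs. 3.75); maximizing expected utility is not the same as chasing the best possible outcome.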
Basically, the only relevant weaknesses of applied instrumental rationality seem to be: computational (in)tractability; the unlikely chance that some strategy other than maximizing expected winning might actually be better at maximizing winning (which can't be known reliably in advance anyway, unless you defy all probability and by hypothesis already possess the true optimal strategy for the agent your mind implements); and the difficulties and risks of implementing it on buggy, inefficient human hardware.
When this is applied in a meta manner, where you rationally choose which strategies to use rather than applying a naive version of rationality (as in many of the ways described in the Sequences on LessWrong), then, per Bayesian updating and the tools available to us, it seems to be probabilistically the most effective possible strategy for human hardware. Which means that on a statistical level, the only weaknesses of instrumental rationality are that it's hard to understand correctly, hard to actually implement, and hard to apply. The other responses to your comment have more details on the many ways human hardware can fail at this or cause various important problems.