Of course, Vox is not a Catholic, so there is no "we" in his argument.
Moreover, this post is one in a series responding to New Atheists and others who explicitly argue that religious institutions, people and motivations are worse than the secular alternatives. He doesn't introduce the comparison between religious and secular as a counterattack. He is responding to people who have already made that moral comparison and is showing that the calculus doesn't work out as they claimed.
I wouldn't say that this is a fear of an "inaccurate conclusion," as you say. Instead, it's a fear of losing control and becoming disoriented: "losing your bearings," as you said. You're afraid that your most trustworthy asset - your ability to reason through a problem and come out safe on the other side, an asset that should never fail you - will fail you and lead you down a path you don't want to go. In fact, it could lead to Game Over if you let it lead you to kill or be killed, as you highlight in your examples of the Unabomber, Mitchell Heisman and zealot soldiers.
I especially like the orientation metaphor here. And I think that your piece addresses this. First, you need to know where you are. Recognize when you are in far mode and thinking abstractly and when you are in near mode and thinking concretely. Then you can think about where you should be, near or far. Learn to recognize which one is better for your current situation and be able to switch between them. This is also part of being oriented. Finally, have a kill switch if you feel yourself losing control.
I think skeptical people are too quick to say "Forer Effect" when they first do Myers-Briggs. They notice that their type only partially describes them and assume that something fishy is going on. But if you switch all the letters and read the description of the exact opposite type, there is almost nothing that could apply to you. That in itself means that there is some non-trivial classification going on. San Francisco may not be LA, but it sure isn't Moscow.
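To make the letter-flipping concrete, here is a minimal sketch; the four dichotomies (E/I, S/N, T/F, J/P) are the standard MBTI pairs, and the example type is arbitrary.

```python
# Map an MBTI type to its exact opposite by flipping each of the four letters.
OPPOSITE = {"E": "I", "I": "E", "S": "N", "N": "S",
            "T": "F", "F": "T", "J": "P", "P": "J"}

def opposite_type(mbti: str) -> str:
    return "".join(OPPOSITE[letter] for letter in mbti.upper())

print(opposite_type("INTJ"))  # -> ESFP: read that profile and see how little of it fits you
```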
Fixed.
Does it make sense to think of yourself as crazy to the same extent that people of other psychetypes are?
I don't think so. The term captures how radically different the other types are from your own. It's about relative distance between you and others, not an absolute quality.
You mentioned Myers-Briggs types and "the idea that either I was crazy, or everyone else was." I think I had a similar experience but with a different analysis of the MBTI classifications. It was Personality Type: An Owner's Manual by Lenore Thomson and there is a wiki discussion here.
I found the scientific basis fairly flimsy. She connects the 8 cognitive functions to various regions of the brain - left and right, anterior and posterior - but it seems like a just-so story to me. However, I have found it immensely useful as a tool for self-improvement.
The main insight I got from it is that while other people are crazy, they are crazy in a fairly well-defined, reproducible way. Other people see things completely differently from you, but it's fairly internally consistent, so you can simulate it on your own hardware.
There are two ways I think about this:
One, your brain is constantly trying to make sense of all the sensory data that comes in. So it decides that one part is the signal and the rest is the noise, and it tries to minimize the noise and focus on the signal. But then you realize there is a whole other signal in what you thought was noise, that there are people tuned into it, and that they think your signal is actually the noise. If you then tune into that signal, you can understand what other people have been listening to the whole time.
The other is, we are all playing 8 board games simultaneously, where if we roll the dice our piece moves that amount in each of the games. In order to make sense of this, we focus on one of the games, trying to forget about the others, and try to win this one. But other people are focused on trying to win a different game. So when they try to talk to each other about who is winning, they completely talk past each other. But when you realize that someone thinks he is playing a different game and you figure out what it is, you can have a much more productive conversation/relationship.
This sounds like a "Yes, Minister" interpretation. In that series, the British politicians are nominally in charge of the various ministries, being the representatives of the party in charge, but in actuality the civil service bureaucracy runs the show. The minister, Jim Hacker, and the permanent secretary (top civil servant), Sir Humphrey Appleby, are constantly in conflict over some little policy or bureaucratic issue and the latter almost always wins while letting his "superior" feel like he actually got his way.
So consciousness lets us think we are in charge - indeed, it convinces us we are in charge - when in reality we are constantly thwarted by the parts of our brain operating outside conscious awareness.
That's why it can be such an effective tactic when persuading normal people. You can get them to commit to your side, and then they rationalize themselves into believing it's the truth (which it is) because they don't want to admit they were conned.
There is something that bothers me, and I would like to know if it bothers anyone else. I call it "Argument by Silliness."
Consider this quote from the Allais Malaise post: "If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever."
I find this to be a common end point when demonstrating what it means to be rational. Someone will advance a good argument that correctly computes/deduces how you should act, given a certain goal. In the post quoted above, that goal would be maximizing your money. And in order to get their point across, they cite all the obviously silly things you could otherwise do. To a certain extent, it can be more blackmail than argument, because your audience does not want to seem a fool, and so he dutifully agrees that yes, it would be silly to throw your money off Niagara Falls, and he is certainly a reasonable man who would never do that, so of course he agrees with you.
Now, none of the intelligent readers on LW need to be blackmailed this way because we all understand what rationality demands of us and we respond to solid arguments not rhetoric. And Eliezer is just using that bit of trickery to get a basic point across to the uninitiated.
But the argument does little to help those who already grasp the concept improve their understanding. Invoking absurdity does not mean you have correctly carried out a reductio ad absurdum. You have to be careful: he appealed to something that is self-evidently absurd, and you should be wary of anything considered self-evident. Actually, I think it is more a case of being commonly accepted as absurd, but you should be just as wary of anything commonly accepted as silly. And you should be careful about cases where you think it is the former but it is actually the latter.
The biggest problem, however, is that silly is a class in which we put things that can be disregarded. Silly is not a truth statement. It is a value statement. It says things are unimportant, not that they are untrue. It says that according to a given standard, this thing is ranked very low, so low in fact that it is essentially worthless.
Now, disregarding things is important for thinking. It is often impossible to think through the whole problem, so we at first concern ourselves with just a part and put the troublesome cases aside for later. In the Allais Malaise post, Eliezer was concerned just with the minor problem of "How do we maximize money under these particular constraints?" and separating out intuitions was part of having a well-defined, solvable problem to discuss.
But the silliness he cites only proves that the two standards - maximizing money and satisfying your intuitions - conflict in a particular case. It tells you little about any other case or the standards themselves.
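To see that conflict in the narrowest possible terms, here is a minimal sketch using the classic textbook Allais gambles; the exact payoffs in Eliezer's post may differ, and "maximizing money" is reduced here to plain expected value.

```python
# Each gamble is a list of (probability, payoff-in-dollars) pairs.
def expected_value(gamble):
    return sum(p * payoff for p, payoff in gamble)

gamble_1A = [(1.00, 1_000_000)]
gamble_1B = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
gamble_2A = [(0.11, 1_000_000), (0.89, 0)]
gamble_2B = [(0.10, 5_000_000), (0.90, 0)]

for name, g in [("1A", gamble_1A), ("1B", gamble_1B),
                ("2A", gamble_2A), ("2B", gamble_2B)]:
    print(name, expected_value(g))

# Prints: 1A 1000000, 1B 1390000, 2A 110000, 2B 500000.
# Pure money-maximization picks 1B and 2B; the common intuitive pattern
# (1A and 2B) disagrees with it only on the first choice. That is the
# one place where the two standards visibly clash - and it says nothing
# about which standard to adopt elsewhere.
```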
The point I most want to make is "Embrace what you find silly," but this comment has gone on very long, so I am going to break it up into several postings.
I think one place to look for this phenomenon is when, in a debate, you seize upon someone's hidden assumptions. When this happens, it usually feels like a triumph, that you have successfully uncovered an error in their thinking that invalidates a lot of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is both embarrassing and means you have to redo a lot of your thinking.
But hidden assumptions aren't bad. You have to make some assumptions to think through a problem anyway. You can only reason from somewhere to somewhere else. It's a transitive operation. There has to be a starting point. Moreover, assumptions make thinking and computation easier. They decrease the complexity of the problem, which means you can figure out at least part of the problem. Assuming pi is 3.14 is good if you want an estimate of the volume of the Earth. But that is useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.
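For a sense of how cheap that particular assumption is, here is a quick sketch; the radius is just the usual rough mean figure for the Earth.

```python
import math

def sphere_volume(radius_km, pi):
    return (4.0 / 3.0) * pi * radius_km ** 3

r_earth = 6371  # rough mean Earth radius in km

rough = sphere_volume(r_earth, 3.14)
better = sphere_volume(r_earth, math.pi)
print(f"rough:  {rough:.3e} km^3")
print(f"better: {better:.3e} km^3")
print(f"relative error: {abs(rough - better) / better:.4%}")
# The pi = 3.14 assumption costs about 0.05% accuracy - fine for an estimate
# of the Earth's volume, useless if what you need is a proof about pi itself.
```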
When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others' assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.
This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side's assumptions to see how they fit.
I try to treat my emotions in the following way: Emotions just ''are'' and as such carry information only about emotions themselves. They have meaning only in relation to other emotions, both mine and those of others. I've found that to be the most effective way to consistently take the outside view. Once I made that leap, it became much easier to apply rationality in mastering them for my own benefit. I can collect empirical data about my emotions and make predictions about my emotions. I can devise strategies to change my emotions and then assess whether they work. If you feel sad and it's raining today, you might infer that rain leads to an increased probability of sadness. If you feel excited about a job opportunity, you might infer that you will generally be happy on a day to day basis. If I meet someone and feel comfortable talking to them, that's only an indication that I will feel comfortable talking to them in the future. And if you pay attention for long enough, you realize that many emotions are ultimately harmless. If you stop feeding them, they drift away, they pass.
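Here is a toy sketch of the kind of bookkeeping this suggests; the mood log and its numbers are invented for illustration, not real data.

```python
# Treat emotions as observations you can count and then predict from.
mood_log = [
    {"rain": True,  "sad": True},
    {"rain": True,  "sad": False},
    {"rain": False, "sad": False},
    {"rain": True,  "sad": True},
    {"rain": False, "sad": True},
    {"rain": False, "sad": False},
]

def prob(condition, entries):
    entries = list(entries)
    return sum(condition(e) for e in entries) / len(entries) if entries else 0.0

p_sad = prob(lambda e: e["sad"], mood_log)
p_sad_given_rain = prob(lambda e: e["sad"], [e for e in mood_log if e["rain"]])

print(f"P(sad)        = {p_sad:.2f}")
print(f"P(sad | rain) = {p_sad_given_rain:.2f}")
# If P(sad | rain) sits clearly above P(sad), "rain raises my odds of sadness"
# becomes a testable prediction rather than a mood you are stuck inside.
```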
It is partly a dissociative approach, being a spectator to your own emotions (as mentioned by EE43026F). But at the same time, it's like treating your emotions as you treat your toes. They are a part of you, but they're only mildly informative about whether you should change careers.
Looking back on what I just wrote, I should also say that dealing with emotions is a skill. I don't mean to suggest that one little insight outweighs practice. About two and a half years ago I made a commitment to not be so completely oblivious to my emotions, and it has taken a while to develop the skills. The simplest skill is just identifying emotions: at various points in the day, ask yourself how you are feeling. When I started, I literally could not give a verbal response; I could not produce a word describing how I felt.