In the spirit of contrarianism, I'd like to argue against The Bottom Line.
As I understand the post, its idea is that a rationalist should never "start with a bottom line and then fill out the arguments".
It sounds neat, but I don't think it is psychologically feasible. Whenever I actually argue, I find that the conclusion is already written. Without one, the argument has no direction, and an argument without direction goes nowhere.
What actually happens is:
1. I arrive at a conclusion intuitively, as the result of a process that is usually closed to introspection.
2. I write the bottom line and look for a chain of reasoning that supports it.
3. I check the argument and modify or discard it, or parts of it, if any are found defective.
It is at step 3 that the biases really strike. Motivated Stopping makes me stop checking too early, and Motivated Continuation makes me hunt for better arguments when defective ones are found for the conclusion I favor, but not for the alternatives, which leaves the alternatives represented by Straw Men.
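The asymmetry in that checking step can be made concrete with a toy model. This is only an illustrative sketch, not anyone's actual procedure: the argument dicts, the `sound` flag, and both search functions are invented here to show how a motivated checker differs from an even-handed one.

```python
def evaluate(argument):
    # Toy scrutiny: an argument survives checking iff it is flagged sound.
    return argument["sound"]

def motivated_search(arguments, favored):
    """Motivated checking: keep looking until some argument for the
    favored conclusion survives scrutiny; arguments for alternatives
    are never examined at all (the straw-man failure mode)."""
    for arg in arguments:
        if arg["conclusion"] == favored and evaluate(arg):
            return arg
    return None

def unbiased_search(arguments):
    """Even-handed checking: scrutinize every argument, for and
    against, and keep all that survive."""
    return [arg for arg in arguments if evaluate(arg)]

args = [
    {"conclusion": "fix brakes", "sound": False},
    {"conclusion": "fix brakes", "sound": True},
    {"conclusion": "don't fix brakes", "sound": True},
]

print(motivated_search(args, "fix brakes"))  # finds only a congenial argument
print(unbiased_search(args))                 # also surfaces the alternative
```

Both searches apply the same scrutiny per argument; the bias lives entirely in which arguments get scrutinized and when the search stops.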
Ooh! I see a "should" statement. Let's open it up and see what's inside!
*gets out the consequentialist box-cutter*
Hmm. Is that what "The Bottom Line" says?
Let's take a look at what it says about some actual consequences:
In other words, it's not like you're sinning or cheating at the rationality game if you write the bottom line first.
Rather, you ran some algorithm to generate that bottom line. You selected that bottom line out of hypothesis-space somehow. Perhaps you used the availability heuristic. Perhaps you used some ugh fields. Perhaps you used physics. Perhaps you used Tristan Tzara's cut-up technique. Or a Ouija board. Or you followed whatever your car's driver's manual said to do. Or your mom's caring advice.
Well, how good was that algorithm?
Do people who use that algorithm tend to get good consequences, or not?
Once your bottom line is written — once you have made the decision whether or not to fix your brakes — the consequences you experience don't depend on any clever arguments you made up to justify that decision retrospectively.
If you come up with a candidate "bottom line" and then explore arguments for and against it, and sometimes end up rejecting it, then it wasn't really a bottom line — your algorithm hadn't actually terminated. We can then ask, still, how good is your algorithm, including the exploring and maybe-rejecting? This is where questions about motivated stopping and continuation come in.
Oh. That makes sense. So it counts as a bottom line only if I write it and refuse to change it forever after; or, if it was all part of a decision-making process, only if it is the belief on which I actually act in the end.
Guess that's what everybody was telling me... feeling stupid now.