Previously: Avoiding Rationalization
So you've seen reason to suspect you might be rationalizing, and you can't avoid the situation. What now?
Here are some tests you can apply to see whether you were rationalizing.
Reverse the Consequences
Let's explain this one via example:
Some Abstinence Educators like to use the "scotch tape" model of human sexuality. In it, sex causes people to attach to each other emotionally, but decreasingly with successive partners, just like tape is sticky, but less sticky when reused. Therefore, they say, you should avoid premarital sex because it will make you less attached to your eventual spouse.
Do you think this is a reasonable summary of human sexuality? Are people basically scotch tape?
Suppose the postscript had been: therefore you should have lots of premarital sex, so that you're not irrationally attached to someone. That way, when you believe you are in love and ready to commit, you really are.
Does this change your views on the scotch tape model? For many people, it does.
If so, then your views on the model are not driven by your analysis of its own merits, but by either your desire to have premarital sex, or your reluctance to admit Abstinence Educators could ever be right about anything.
(Or, possibly, your emotional revulsion at premarital sex or your affiliation to Abstinence Educators. The point of this section is unaffected.)
The point here is to separate the argument into the piece to be evaluated and a consequence of that piece which logically shouldn't affect the first piece's validity, but somehow does.
If the consequences seem hard to spin backwards, put on your Complete Monster Hat for a quick role-play. Suppose you think third-world charity is breaking the economies it goes to, and therefore you should keep your money for yourself, but this could be a rationalization from an unendorsed value (greed). Imagine yourself as Dick Dastardly, a mustache-twirling villain trying to maximize suffering. Does Mr. Dastardly give generously to charity? Probably not.
I don't want to get into an analysis of economic aid here. If contemplating Mr. Dastardly gives you a more complex result like "I should stop treating all third-world economic aid as equivalent" rather than a simple "I should give", then the intuition pump is working as intended, because it's helping you build a more accurate world-model.
Conservation of Expected Evidence
The examples in the original Conservation of Expected Evidence post cover this pretty well.
To put it in imperative terms: imagine you'd observed the opposite. If you wouldn't have updated in the opposite direction, something has gone wrong. If you would have updated less, that must be balanced by the actual observation having been more surprising.
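For concreteness, the bookkeeping behind that post is a one-line identity of ordinary probability theory (nothing specific to this sequence): writing $H$ for your hypothesis and $E$ for the prospective evidence,

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E).$$

Your current credence is already the expectation of your post-observation credence, so if $E$ would move you toward $H$, then $\neg E$ must move you away, and a weak update in one direction has to be paid for by a strong update from a less likely observation in the other. For instance, with $P(H) = 0.5$ and $P(E) = 0.8$, nudging up to $P(H \mid E) = 0.55$ forces $P(H \mid \neg E) = 0.3$.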
Note that "opposite" can be a little subtle. There can be U-shaped response curves, where "too little" and "too much" are both signs of badness, and only "just right" updates you in favor of something. But unless such a curve is well-known beforehand, the resulting model takes a complexity penalty.
Ask a Friend
A classic way of dealing with flaws in your own thinking is to get someone else's thinking.
Ideally, ask someone with uncorrelated biases. Such a person is easiest to find when a personal connection is weighing on your reasoning, since anyone without that connection will do. (In extreme cases, this can become recusal: have the unconnected person do all the work.)
Be careful, when asking, not to distort the evidence as you present it. Verbosity is your friend here.
You may even find that your friend doesn't need to say anything. When you reach the weak point in your argument, you'll feel it.
One's Never Alone with a Rubber Duck
If your friend doesn't need to say anything, maybe they don't need to be there. Programmers refer to this as "rubber duck debugging".
This has the advantage that your friend can be whomever you want. You can't actually run your ideas past Richard Feynman for a double-check, for several reasons, but you can certainly run them past imaginary Richard Feynman. The ideal person for this is someone whose clear thinking you respect, and whose respect you want (as this will enlist your social instincts in the hunt for flaws).
Be sure, if attempting this, that you explain your entire argument. In imagination, it's possible to fast-forward through the less interesting parts, but those could easily be where the flaw is.
Next: Using Expert Disagreement