This post is pointing at a good tool for identifying bias and motivated reasoning, but I don’t think that the use of “reversal test” here aligns with how the term was coined in the original Bostrom/Ord paper (https://nickbostrom.com/ethics/statusquo.pdf). That use of the term makes the point that if you oppose some upward change in a scalar value, and you have no reason to think that that value is already precisely optimized, then you should want to change that value in the opposite direction.
I used this term because I think the fundamental move being pointed towards is fairly similar (although actually I think the Bostrom/Ord application of this method is incorrect, which maybe means I should have come up with a different name!).
One thing that I've noticed recently is that simple reversal tests can be very useful for detecting bias when it comes to evaluating policy arguments or points made in a debate.
In other words, when encountering an argument it can be useful to think "Would I accept this sort of argument if it were being made for the other side?" or perhaps "If the ideological positions here were reversed, would this sort of reasoning be acceptable?"
This can be a very easy check for whether there is biased thinking going on.
Often one will find that in fact that sort of argument or reasoning would not fly. This is a good way to check your biases -- people are often prone to accepting weak arguments for things they already agree with, or against things they already disagree with, and stopping to ask whether that reasoning would work in the "other direction" is useful.
(Other times, of course, one will find that the reasoning in question does pass the reversal test -- but even so, it can be good to check such things! "Trust but verify" and all that.)
Here are some examples of situations where one might be able to apply this: