A framework for quashing deflection and plausibility mirages

The truth is people lie. Lying isn’t just making untrue statements; it’s convincing others that what’s false is actually true. It’s bad that lies are untrue, because truth is good. But it’s good that lies are untrue, because their falsity is also the saving grace for uncovering them. Lies by definition cannot fully accord with truthful reality, which means there’s always leakage the liar must fastidiously keep squirreled away. But if that’s true, how can anyone successfully lie?

Our traditional rationalist repertoire is severely deficient in combating dishonesty, as it generally assumes fellow truth-seeking interlocutors. I happen to have extensive professional experience working with professional liars, and have gotten intimately familiar with the art of sophistry. As a defense attorney, detecting lies serves both my duty and my clients’ interests. Chasing false leads wastes everyone’s time and risks backfiring spectacularly, and it’s myopic to forget that persuasion is best paired with corroborating evidence rather than naked assertions.

A liar’s repertoire must be robust and adaptable to an ever-changing battlefield. Obfuscation is the main character here, which is why their tactics are primarily geared towards generating a miasma of confusion. All this is buttressed by an array of emotional manipulation techniques. Anger, feigned offense, and pity appeals (“How could you say that about me?”) are all fog machines drafted into the war effort.

Even skilled deceivers (read: any level above ‘nuh-uh’) will inevitably snap against reality’s tether. There will always be a delta between lies and reality, and this is the liar’s fatal vulnerability.

In response to the tireless efforts of my clients (along with their distant cousins, online grifters), I’ve developed a rubric that has been very effective at cutting through the bullshit. It’s a method I call the Miasma-Clearing Protocol. This process doesn’t rely on gut feelings or endlessly tedious debates over plausibility; instead, it systematically evaluates competing theories against all known facts. By forcing each theory to run through this gauntlet, we can separate solid ground from inflatable life rafts and expose the mirages for what they are.


How to Lie

I’ll walk you through how these common tactics manifest. One of my responsibilities is going over all the evidence with my clients, step by step. If a client is either factually innocent or guilty-but-sober-minded, there’s no difficulty getting them to admit the incriminating nature of incriminating evidence. If a client is lying — whether to me, themselves, or just desperately trying to manifest a reality which doesn’t exist — it’s like pulling teeth.

For this cohort, perpetual deflection is a favorite. When confronted with evidence like “Your phone location shows you were at the crime scene,” a liar might claim someone stole their phone that day. Deflections are effective smokescreens because, while implausible, none are strictly impossible. Phones do get stolen, after all.

Deflections have a fatal vulnerability, however. They can only retain their plausible deniability when exploiting a pinhole aperture, viewing one fact at a time in isolation. Let’s say the phone’s location that day also matched every location my client regularly frequents. Viewed in isolation, there is nothing incriminating about this fact, but the stolen-phone explanation cannot survive it.

The easiest way out of this conundrum is confetti — a liar will throw it up in the air and pivot to another point, hoping you don’t notice. The more time between the two confrontations, the easier it is to pull off the pirouette. A less desirable way out is the contortionist act. Sometimes liars will swallow the paradox and ask to amend their theory; not only was their phone stolen but also this mystery thief was stalking them and thus went to the same places they normally would. This new theory is still not strictly impossible, but if you stick to a relentless schedule of other facts, each new barrage will require ever more acrobatic contortions. There’s always a breaking point.

Liars can only twist themselves into so many knots before they snap — best to tighten the rope early. Deflection pairs very well with another common tactic, what I call the “plausibility mirage.” Liars conflate possibility with exclusivity, as if their theory’s mere feasibility rules out all others. It’s as if they’re saying “I’m floating on a life raft, therefore the solid pier you claim to be standing on must not exist.” Through mere fiat and assertions, grounded explanations can just disappear. When laid out so transparently, it’s bizarre that this tactic could work on anyone — but work it does.

So while islands of plausibility might exist in isolation, they cannot form a coherent, traversable path across the ocean of available evidence. And while the plausibility mirage might be an effective distraction, it struggles to unseat a theory that is untarnished by any contradictory evidence. Cutting through all this dense fog requires a more systematic approach.


How To Lie Not

Now that we’ve explored the liar’s playbook, let’s turn the tables and examine how to systematically dismantle their house of cards.

Here’s how the Miasma-Clearing Protocol works. You take the dueling theories and pit them against each other, side by side. Then you run both through a gauntlet of all relevant facts, making a quick-and-dirty determination of whether each fact is congruent with each theory. We don’t get bogged down with likelihood, plausibility, burden of proof, or anything else; we stay within stark binary territory for the sake of simplicity.
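For the programmatically inclined, here’s a minimal sketch of that binary gauntlet in Python. The theory names, fact descriptions, and congruence verdicts are all hypothetical placeholders of mine, purely for illustration:

```python
# A minimal sketch of the gauntlet: theories run side by side
# against every fact, and each fact gets a stark binary verdict --
# congruent (True) or incongruent (False). No probabilities.

def run_gauntlet(theories, facts):
    """facts: list of (description, {theory: is_congruent}) pairs."""
    results = {}
    for theory in theories:
        verdicts = [(desc, table[theory]) for desc, table in facts]
        # A theory survives only if no single fact is incongruent.
        results[theory] = all(ok for _, ok in verdicts)
    return results

# The stolen-phone deflection from earlier, as a two-fact gauntlet:
theories = ["client was there", "phone was stolen"]
facts = [
    ("phone located at the crime scene",
     {"client was there": True, "phone was stolen": True}),
    ("phone also visited every spot the client frequents",
     {"client was there": True, "phone was stolen": False}),
]
print(run_gauntlet(theories, facts))
# {'client was there': True, 'phone was stolen': False}
```

The point of the data shape is that every theory faces every fact; nothing gets evaluated through a pinhole aperture.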

Let’s say one theory is “Jake ate the cookies from the jar” squaring up against “Gillian ate the cookies from the jar” (left and right below, respectively). Two basic relevant facts might look like this:

  1. Jake was home when the cookies went missing: ✅✅
  2. Gillian was home when the cookies went missing: ✅✅

Regardless of which one ate the cookies, the fact that the other person was home at the same time is not a contradiction. Being under the same roof does not preclude someone else from stuffing their face with illicit pastries. We know nothing so far.

The real fun begins as soon as we encounter a vexing fact for the liar, say “Gillian has a debilitating chocolate allergy”. The liar will ask to modify their theory, because whatever could live comfortably among Facts #1 and #2 suddenly has to account for inconvenient Fact #3. This, on its own, is neither a problem nor an indicator of dishonesty. Even bona fide truth-seekers sometimes realize they’ve overlooked important details or made faulty assumptions in their reasoning. Modifications are welcome! However, the two rules are: 1) we start all over again from the very beginning, and 2) anyone may add (never subtract) relevant facts to the evaluation gauntlet. (Both rules show up in the code sketch after the walkthrough below.)

To account for the allergy, the Gillian theory is now amended to “Gillian stole the cookies, but gave them to something or someone else”, then we go through the gauntlet again:

  1. Jake was home when the cookies went missing: ✅✅
  2. Gillian was home when the cookies went missing: ✅✅
  3. Gillian has a debilitating chocolate allergy: ✅✅

Again, we don’t get dragged into a tedious debate about plausibility; we stay on stark binary ground. Right now, both theories are still congruent with all the evaluated facts. Based solely on this quick-and-dirty rubric, there’s no reason to favor one over the other.

If a liar is forward-thinking enough, now is the ideal time to pull a plausibility mirage (“Gillian could’ve given the cookies away, therefore it wasn’t Jake who stole them!”) because the vise will only get tighter. Remember, there’s always a delta between lies and truthful reality, by definition. If you haven’t found an incongruent fact yet, you’re either not dealing with a lie, or you haven’t looked hard enough. Let’s introduce a fourth fact:

  4. The only other entity that could’ve been fed cookies was a dog, and it shows no signs of illness from eating chocolate: ✅❌

The liar faces a conundrum. He can ask to modify the theory again, which is perfectly fine, but whatever he comes up with to accommodate the “dog isn’t sick” fact will directly contradict the preceding “Gillian stole cookies but didn’t eat them” theory. Deflection that may have worked in isolated bursts often looks idiotic when displayed in its full solar glory.

Or maybe the liar can just swallow the demerit but nevertheless argue for the veracity of their theory. It’s fine for them to try, but they face an uphill slog trying to dethrone the truthful theory that has remained untarnished by the muck. What’s wrong with accepting that Jake was the one who ate the cookies?
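Extending the same sketch to the cookie-jar walkthrough and to the two amendment rules from earlier. Again, the theory names and verdict calls here are my own illustrative stand-ins:

```python
# The cookie-jar gauntlet, including an amendment. Rule 1: any
# modification restarts the run from the very beginning. Rule 2:
# facts can only ever be added to the list, never subtracted.

def survives(theory, facts):
    """Rule 1 in action: every re-run walks the full fact list."""
    return all(verdicts[theory] for _, verdicts in facts)

facts = []  # Rule 2: this list only grows.
T1, T2 = "Jake ate them", "Gillian gave them away"  # T2 is the amended theory

facts.append(("Jake was home", {T1: True, T2: True}))
facts.append(("Gillian was home", {T1: True, T2: True}))
facts.append(("Gillian has a chocolate allergy", {T1: True, T2: True}))
print(survives(T1, facts), survives(T2, facts))  # True True -- still tied

# Fact #4 breaks the tie: the amended theory is now incongruent.
facts.append(("the only candidate recipient, the dog, shows no illness",
              {T1: True, T2: False}))
print(survives(T1, facts), survives(T2, facts))  # True False
```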

In conversations with lying clients, it’s around this point that I ask them to specifically point out which part of the theory they’re challenging is contradicted or otherwise incongruent with reality. Stop trying to come up with new excuses, just tell me why this theory is specifically wrong. They can’t. Or maybe they just tell me to fuck off and threaten to fire me.


How?

Why is this framework so effective? It clamps down on the eternal deflection pivot by forcing liars to commit to a single “alternative” theory, rather than a funhouse mirror’s array of deflections. Even if they propose a decent alternative theory with only a few dings, forcing it through the entire gauntlet will still expose just how deficient that theory is compared to the truthful one.

This exercise is never meant to be definitive; it remains strictly provisional. If a theory survives this gauntlet unscathed, that does not mean it’s the last word! Rather, this rubric is intended to be a ruthlessly efficient method of cutting through the chaff. First we clear the fog — only then does nuance have room to breathe.

What are some problems with this approach? Surprisingly, disagreements about whether a fact is congruent or incongruent are very rare. If any particular fact starts to get bogged down in debate, there’s nothing wrong with just skipping it until the end, because it might not even be determinative. The whole point is to avoid the perpetual deflection miasma, so the glib starkness is an intended feature of sorts.

What about cherry-picked facts? This is a problem easily remedied, and calls back to how this exercise is provisional rather than conclusive. If both sides are encouraged to bring their best facts forward, this will not be an issue. What about incomplete or unknowable facts? This is indeed an unavoidable limitation. Some lies are simply impossible to uncover, because they leave no traceable evidence. The only solace is that these undetectable lies are bound to be rare in the grand scheme of things.

Can this exercise still be gamed by dishonest actors? I’ve tried, but I can’t think of any ways. This rubric works explicitly because lies are untrue. If you’re dealing with an actual liar, you’re bound to unearth a vexing fact somewhere out there. Of course, liars can always choose to go down the confetti road and refuse to engage with the exercise, but that’s its own tell.

This is a broadly applicable method, but it’s specifically designed for confronting dishonesty — whether from evasive clients or political spin doctors. Outside of that arena, this method is unnecessary and potentially counterproductive for most situations where all parties are earnestly seeking the truth. If you’re arguing whether to build a bridge out of steel or wood, there are far more robust methods from the traditional rationalist toolbox that don’t involve assuming your engineer is trying to pull a fast one.

This protocol is the lie detector test for conversations — you don't break it out unless you smell something fishy.

23 comments

At first I thought this was a tutorial on how to catch a talented liar, and it didn’t seem that accurate. As I read, I realized that this is a tutorial on how to create common knowledge between yourself and a bad liar that you know they are lying, even if they are very stupid. This is also an interesting task, and I appreciate your insight on how to approach it.

I agree, and if the author also agrees with this or something like it, I think the post would be easier to read if something like that were described in the preface.

I'm unclear on what the distinction is exactly. This is a tutorial that works both for catching a talented liar and for creating common knowledge between yourself and a bad liar.

I am having some difficulty with the possibility that Jake may not have eaten the pastries and may not know what happened. But the adversarial situation asks him to somehow solve a puzzle he had no hand in creating. How do you deal with the fact that 'turning up the heat' like this pretty much begs for speculation anyway?

As soon as I imagine an actually innocent Jake in this situation, his next line is, "Chocolate does not make all dogs sick, nor does it necessarily do so consistently. For that matter, a friend of Gillian's could have stopped by or something. I don't know, I was in my room playing video games."

As I mentioned in another comment, I used an unrealistically convenient example for illustrative purposes. A real-life application of my rubric on a real-life lie would be much more complicated and take multiple detours.

There's still a principle I feel needs to be addressed. An authority figure (or at least someone who is supposed to be my ally) seems to be turning up the heat, adding pressure, escalating, questioning my veracity, fishing around for lies. And in that situation, evasive, defensive (possibly offense-as-defense), and justifying tactics could be a natural response for the innocent.

Simply put, how do you make useful distinctions within that?

Why would lying be a natural response for a non-liar falsely accused of lying?

Jiro:

The response wouldn't actually be lying, but it would be indistinguishable by an outsider from the kind of deflection that you describe here and that you consider part of lying.

And I don't think "this example is unrealistically convenient" lets you handwave this away. The exact response "maybe a friend of Gillian's stopped over" is specific to your example, but that type of response is not. If Jake is innocent in this scenario but accused of lying, the only possible response is to try to come up with ways to explain the available information. That's the exact same thing that would be deflection when done by a guilty person.

I don't consider alternative explanations on their own to be indicative of lying, especially if the alternative theory as a whole more accurately comports with reality. This is why there are two parts to this exercise: surviving the gauntlet of facts and dethroning the other survivor (if any).

Jiro:

But if the innocent person doesn't know what's going on (other than his own innocence), his alternative theory might not comport with reality--because he has no idea what's going on. All he can do is make hypotheses and try to confirm them. It may take several hypotheses before he gets it right. If you're going to "force liars to commit to a single alternative theory", you've put the innocent person in a position where unless he gets lucky and picks the right answer the first time, he can't defend himself because he committed to the theory and it turned out not to be true, and he doesn't get to change it.

If they have no idea what's going on, then there's no need for this exercise. There are other ways to cooperate in truth-seeking.

Jiro:

If they have no idea what is going on, but have been accused, they need to do what they can to maximize the chance of being believed. Sometimes this means responding with a theory. And such responses will look like evasion by your standards.

There aren't (useful) "other ways" for an innocent person with no direct knowledge to act, unless he gets lucky, like by catching the culprit.

I already said I don't consider alternative explanations on their own to be indicative of lying. I don't know where you're getting this notion that speculation is evasion; here's what I said on the matter:

If a client is either factually innocent or guilty-but-sober-minded, there’s no difficulty getting them to admit the incriminating nature of incriminating evidence. If a client is lying — whether to me, themselves, or just desperately trying to manifest a reality which doesn’t exist — it’s like pulling teeth.

Jiro:

I don’t know where you’re getting this notion that speculation is evasion

The liar faces a conundrum. He can ask to modify the theory again, which is perfectly fine, but whatever he comes up with to accommodate “dog isn’t sick” fact will directly contradict the preceding “Gillian stole cookies but didn’t eat them” theory.

You've described this kind of speculation as specific to liars, yet innocent people will end up having to do it too.

If a client is either factually innocent or guilty-but-sober-minded, there’s no difficulty getting them to admit the incriminating nature of incriminating evidence.

If an innocent person was shown evidence, of course he's going to try to explain why the evidence is consistent with his innocence. Why do you think he wouldn't do this?

This would be a lot stronger if it acknowledged how few lies have the convenient fatal flaw of a chocolate allergy.  Many do, and it's a good overall process, but it's nowhere near as robust as implied.

Note that I disagree that it's not applicable when you don't already suspect deception - it's useful to look for details and inconsistency when dealing with any fallible source of information. It doesn't matter whether it's an intentional lie, a confused reporter, or an inapplicable model; truth is the only thing that's consistent with itself and with observations.

The example was intended to be unrealistically convenient, since the goal there was just an illustrative example. Had I used an actual lie narrative from one of my clients (for example) it would've gotten very convoluted and wordy, and more likely to confuse the reader.

I acknowledge there are limitations when you're dealing with unknowable lies. Beyond that, it was really hard to figure out how rare "lies with convenient flaws" really are. I don't know what denominator I'd use (how many lies are in the universe? which ones count?) or how I'd calculate the numerator.

deep:

I think a realistic example would be useful! I suspect a lot of the nuance (nuance that might feel obvious to you) is in how to apply this over a long conversation with lots of data points, amendments on both sides, etc.

I was having some trouble really grokking how to apply this, so I had o3-mini rephrase the post in terms of the Experiential Array:


1. Ability

Name of Ability:

“Miasma-Clearing Protocol” (Systematically cornering liars and exposing contradictions)

Description:

This is the capacity to detect dishonest or evasive claims by forcing competing theories to be tested side-by-side against all relevant facts, thereby revealing contradictions and “incongruent” details that cannot coexist with the lie.


2. Beliefs (The Belief Template)

2.1 Criterion (What is most important?)

  • Criterion:

    “Ensuring that all relevant facts align with a coherent, contradiction-free explanation.”

  • Definition (for that Criterion):

    To “ensure all relevant facts align” means systematically verifying that each piece of evidence is fully accounted for by a theory without requiring impossible or self-contradictory assumptions.

    In practice, this translates to:

    1. Listing out every significant or relevant fact.
    2. Checking each fact against any proposed explanation.
    3. Tracking which theory remains consistent with every fact, and which theory fails on one or more points.

2.2 Cause-Effect

When modeling how someone successfully applies the Miasma-Clearing Protocol, two types of Cause-Effects often emerge:

  1. Enabling Cause-Effects (What makes it possible to satisfy the Criterion?)
    • Cause-Effect #1:

      “By methodically listing all facts side by side with each theory, I create a clear structure that prevents isolated ‘deflections.’”

      In other words, organizing all the evidence in a single framework enables a person to see where a theory’s contradictions lie.

    • Cause-Effect #2:

      “By insisting that we restart from the beginning whenever a theory is modified, we ensure no contradictory details are lost.”

      Thus, forcing a re-check of all facts enables us to capture newly revealed contradictions.

  2. Motivating Cause-Effects (What deeper/higher criteria or values does satisfying the main Criterion lead to?)
    • Cause-Effect #3:

      “When I use the protocol and find a single, unscathed theory, I can be confident I’ve uncovered the truth and avoided being misled.”

      Confidence and clarity motivate the pursuit of the protocol.

    • Cause-Effect #4:

      “Uncovering a lie protects me (or my client) from severe negative consequences (e.g., wasted time, bad decisions, legal jeopardy).”

      This motivates rigorous application of the protocol.

2.3 Supporting Beliefs

These are other beliefs that shape how someone carries out the protocol but are not the main drivers of it:

  • “All lies contain contradictions that will eventually appear if tested systematically.”
  • “If you allow a liar to deflect on one fact at a time, they can remain ‘plausible’ indefinitely.”
  • “Possibility alone does not equate to probability or exclusivity—any single ‘could be’ must still account for all facts.”

These beliefs color the attitude one has while running the protocol (e.g., staying patient, knowing contradictions will emerge).


3. Strategy

A strategy describes the internal/external sequence for ensuring the Criterion (“all facts align with a coherent explanation”) is met.

3.1 Test (How do you know the Criterion is met?)

Test:

  • You see that each relevant fact (phone location, cookie allergy, timeline, etc.) is congruent with a proposed theory.
  • You find no single fact that contradicts that theory.

When the protocol is working, you know the Criterion is satisfied because there is zero incongruence between the theory and any known fact.

3.2 Primary Operation

Primary Operation (the main sequence of steps):

  1. List all relevant facts in a shared framework (e.g., bullet points, spreadsheet).
  2. Identify the competing theory (or theories) under consideration (“Jake ate the cookies” vs. “Gillian ate the cookies”).
  3. Check each fact side by side with each theory. Mark “Congruent” or “Incongruent.”
  4. Note any facts that create contradictions for a theory.
  5. If one theory remains fully congruent and the other is contradicted, highlight the contradiction and invite the person to explain or revise.

3.3 Secondary Operations

These come into play when a contradiction emerges or the liar tries to pivot:

Secondary Operation #1: Re-run the Gauntlet

  • If the theory is modified (“Actually, the phone was stolen and the thief followed me!”), start from scratch with the entire fact list.
  • This ensures no detail is lost in the shuffle.

Secondary Operation #2: Add New Facts

  • If a new piece of evidence surfaces, add it to the list and re-check all theories from the top.
  • A liar’s confetti (irrelevant details or pivoting to new stories) can be turned into new “facts” to test for consistency.

4. Emotions

4.1 Sustaining Emotion

A “background” emotional state that keeps one persistent and systematic:

Sustaining Emotion:

  • Calm Curiosity – The ability to remain unflustered, methodical, and genuinely interested in aligning facts with reality.
  • Determination – The refusal to let emotional manipulation (“How could you say that about me?”) derail the step-by-step analysis.

These emotions maintain the mental environment needed to keep applying the protocol without succumbing to frustration or intimidation.


5. External Behavior

Key Observable Behaviors:

  • Writing or visually mapping out facts and theories (e.g., “Let’s put this on the board.”).
  • Insisting on going one by one through each piece of evidence: “Let’s not skip around; we’ll get to that point after we finish with the first.”
  • Refusing to accept indefinite deflection: “We need to see how your new explanation fits every piece of evidence, not just one.”
  • Asking direct clarifying questions whenever the other person tries to pivot: “Which fact does your new story explain better than the old one?”

6. Contributing Factors

These are abilities or conditions outside the main mental structure but crucial to success:

  • Access to all relevant facts (e.g., phone records, allergy knowledge, timelines, logs).
  • Time and willingness to run through each point systematically.
  • A stable context (e.g., a conversation where you can keep returning to the “map” of facts; a legal proceeding, a negotiation, etc.).
  • Domain knowledge sufficient to interpret facts correctly (e.g., if it’s about cookie-eating, you must know enough about chocolate allergies and how dogs react to chocolate).

7. Putting It All Together

In summary, the Miasma-Clearing Protocol, as framed by the Experiential Array, is the ability to systematically confront a dishonest or evasive person by:

  1. Holding a central Criterion: All facts must align with one coherent explanation.
  2. Maintaining Enabling Beliefs: Contradictions emerge naturally when tested thoroughly.
  3. Following a Strategy of listing facts, comparing them to competing theories, and re-checking whenever a theory is modified.
  4. Sustaining Emotions of calm curiosity and determination so as not to be derailed by emotional manipulation.
  5. Engaging in External Behaviors that keep the process transparent, organized, and methodical.
  6. Leveraging Contributing Factors (full knowledge, time, context) to ensure a robust exploration of all relevant facts.

When done correctly, the protocol exposes incongruences that the liar cannot reconcile without further contradiction. It “clears the miasma” of deflections, so that the truthful theory remains standing, unscathed by contradictory evidence.
 

I've never encountered this framework before but I'm curious. What do you find useful about it?

It gives me everything I need to replicate the ability. I just step by step bring on the motivation, emotions, beliefs, and then follow the steps, and I can do the same thing!

Whereas, just reading your post, I get a sense you have a way of really getting down to the truth, but replicating it feels quite hard.

Planecrash (from Eliezer and Lintamande) seems highly relevant here: the hero, Keltham, tries to determine whether he is in a conspiracy or not. To do that, he basically applies Bayes' theorem to each new fact he encounters: "Is fact F more likely to happen if I am in a conspiracy or if I am not? Hmm, fact F seems more likely to happen if I am not in a conspiracy; let's update my prior a bit towards the 'not in a conspiracy' side".

Planecrash is a great walkthrough on how to apply that kind of thinking to evaluate whether someone is bullshitting you or not, by keeping two alternative worlds that explain what they are saying, and updating the likelihoods as the discussion goes on.

Surely if you start putting probabilities on events such as "someone stole my phone" and "that person then tailed me", and multiply the probability of each new fact, it gets really unlikely really fast. Also relevant: Burdensome Details
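To put made-up numbers on that (these probabilities are purely illustrative, not estimates of anything):

```python
# Illustrative only: invented base rates for each link in the
# stolen-phone story. Conjunctions get improbable fast.

p_phone_stolen   = 0.01   # phones do get stolen, occasionally
p_thief_tails_me = 0.001  # the thief also stalks your daily routine
p_joint = p_phone_stolen * p_thief_tails_me
print(p_joint)  # 1e-05 -- every added detail is a burdensome one
```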

Interesting. I don't understand why one can't continuously modify the theory to accommodate each new fact, even if each amendment becomes increasingly convoluted, as long as it remains technically congruent. You seem to imply that this is improbable because the liar doesn't have an incentive. Could you specify or elaborate?

It's certainly possible to just constantly amend a theory and keep it technically cohesive, but I've found that even dedicated liars eventually throw in the gauntlet after their contortions become too much to bear. Even if a liar refuses to give up, they still have to grapple with trying to unseat the truthful (and much less convoluted) theory. That's why there are two parts to this exercise: surviving the gauntlet and dethroning the other survivor.
