[Epistemic status: quite speculative. I've attended a CFAR workshop including a lesson on double crux, and found it more counterintuitive than I expected. I ran my own 3-day event going through the CFAR courses with friends, including double crux, but I don't think anyone started doing double crux based on my attempt to teach it. I have been collecting notes on my thoughts about double crux so as not to lose any; this is a synthesis of some of those notes.]

This is a continuation of my attempt to puzzle at Double Crux until it feels intuitive. While I think I understand the _algorithm_ of double crux fairly well, and I _have_ found it useful when talking to someone else who is trying to follow the algorithm, I haven't found that I can explain it to others in a way that causes them to do the thing, and I think this reflects a certain lack of understanding on my part. Perhaps others with a similar lack of understanding will find my puzzling useful.

Here's a possible argument for double crux as a way to avoid certain conversational pitfalls. This argument is framed as a sort of "diff" on my current conversational practices, which are similar to those mentioned by CCC. So, here is approximately what I do when I find an interesting disagreement:


  1. We somehow decide who states their case first. (Usually, whoever is most eager.) That person gives an argument for their side, while checking for understanding from the other person and looking for points of disagreement with the argument.
  2. The other person asks questions until they think they understand the whole argument; or, sometimes, they skip to step 3 when a high-value point of disagreement becomes apparent before the full argument is understood.
  3. Recurse into step 1 for the most important-seeming point of disagreement in the argument offered. (Again the person whose turn it is to argue their case will be chosen "somehow"; it may or may not switch.)
  4. If that process is stalling out (the argument is not understood by the other person after a while of trying, or the process is recursing into deeper and deeper sub-points without seeming to get closer to the heart of the disagreement), switch roles; the person who has explained the least of their view should now give an argument for their side.
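
To make the shape of steps 1-4 concrete, here is a minimal sketch in Python. It is purely illustrative: the Claim structure, the "disputed" flag, and the "importance" score are hypothetical stand-ins for judgments the two people actually make out loud, and step 4's role-switching is reduced to a simple depth cutoff.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    text: str
    disputed: bool = False    # does the listener disagree with this claim?
    importance: float = 0.0   # how central it seems to the overall disagreement
    subclaims: List["Claim"] = field(default_factory=list)  # the argument offered for it

def dig_into(claim: Claim, depth: int = 0, max_depth: int = 5) -> Claim:
    """Steps 1-3: recurse into the most important disputed sub-claim.
    Step 4 (switching roles when the process stalls) is represented
    here only by the max_depth cutoff."""
    disputed = [c for c in claim.subclaims if c.disputed]
    if not disputed or depth >= max_depth:
        return claim  # either agreement was reached, or the recursion is stalling out
    most_important = max(disputed, key=lambda c: c.importance)
    return dig_into(most_important, depth + 1, max_depth)

# Toy usage: the argument for D is disputed at C, and the argument for C
# is disputed at "B implies C", so that's where the conversation ends up.
argument_for_D = Claim("D", subclaims=[
    Claim("C implies D"),
    Claim("C", disputed=True, importance=1.0,
          subclaims=[Claim("B implies C", disputed=True, importance=0.5)]),
])
print(dig_into(argument_for_D).text)  # -> "B implies C"
```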

Steps 1-3 can have a range of possible results [using 'you' as the argument-giver and 'they' as the receiver]:
  • In the best case, they accept your argument, perhaps after a little recursion into sub-arguments to clarify.
  • In a very good case, the process finds a lot of common ground (in the form of parts of the argument which are agreed upon) and a precise point of disagreement, X, such that if either person changed their mind about X they'd change their mind about the whole. They can now dig into X in the same way they dug into the overall disagreement, with confidence that resolving X is a good way to resolve the disagreement.
  • In a slightly less good case, a precise disagreement X is found, but it turns out that the argument you gave wasn't your entire reason for believing what you believe. IE, you've given an argument which you believe to be sufficient to establish the point, but not necessary. This means resolving X can only potentially change their mind; if the resolution goes against you, you may simply find that this particular argument fails, in which case you'd give another argument rather than change your mind.
  • In a partial failure case, all the points of disagreement show up right away; IE, you fail to find any common ground for arguments to gain traction. It's still possible to recurse into points of disagreement in this case, and doing so may still be productive, but often this is a sign that you haven't understood the other person well enough or that you've put them on the defensive so that they're biased to disagree.
  • In a failure case, you keep digging down into reasons why they don't buy one point after another, and never really get anywhere. You never make contact with anything which would change their mind, because you're digging into your reasons rather than theirs. Your search for common ground is failing.
  • In a failure case, you've made a disingenuous argument which your motivated cognition thinks they'll have a hard time refuting, but which is unlikely to convince them. A likely outcome is a long, pointless discussion or an outright rejection of the argument without any attempt to point at specific points of disagreement with it.

I think double crux can be seen as a modification of the process of 1-4 which attempts to make the better outcomes more common. You can still give your same argument in double crux, but you're checking earlier to see whether it will convince the other person. Suppose your argument for the disputed claim D is:

"A.

A implies B.

B implies C.

C implies D.

So, D."

In my algorithm, you start by checking for agreement with "A". You then check for agreement with "A implies B". And so on, until a point of disagreement is reached. In double crux, you are helping the other person find cruxes by suggesting cruxes for them. You can ask "If you believed C, would you believe D?" Then, if so, "If you believed B, would you believe D?" and so on. Going through the argument backwards like this, you keep going only so long as you have some assurance that you've connected with their model of D. Going through the argument in the forward direction, as in my method, you may recurse into further and further sub-arguments starting at a point of disagreement like "B implies C" and find that you never make contact with anything in their model which has much to do with their disbelief of D. Also, looking for the other person's cruxes encourages honest curiosity about their thinking, which makes the whole process go better.
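
To pin down the difference between the two orders, here is a toy sketch over the chain above. The helper functions accepts_step and is_crux_for_D are hypothetical oracles standing in for the other person's answers; a real conversation obviously exposes nothing so tidy.

```python
chain = ["A", "B", "C", "D"]

def forward_search(accepts_step):
    """My method: walk the chain forwards and return the first link they reject.
    accepts_step(premise, conclusion) answers "do you buy that premise implies conclusion?"."""
    for premise, conclusion in zip(chain, chain[1:]):
        if not accepts_step(premise, conclusion):
            return (premise, conclusion)  # first point of disagreement found
    return None  # they accept every link

def backward_crux_search(is_crux_for_D):
    """Double-crux flavour: walk backwards from D, asking "if you believed X,
    would you believe D?", and stop as soon as the answer is no."""
    cruxes = []
    for claim in reversed(chain[:-1]):  # C, then B, then A
        if not is_crux_for_D(claim):
            break  # we've lost contact with their model of D
        cruxes.append(claim)
    return cruxes

# Toy usage: suppose believing C would change their mind about D,
# but B doesn't bear on D for them. The backward search stops at C,
# so no time is spent arguing about A or B.
print(backward_crux_search(lambda claim: claim == "C"))  # -> ['C']
```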

Furthermore, you're looking for your own cruxes at the same time. So, you're more likely to think about arguments which are critical to your belief, and much less likely to try disingenuous arguments designed to be merely difficult to refute.

A quote from Feynman's Cargo Cult Science:

The first principle is that you must not fool yourself—and you are the easiest person to fool.  So you have to be very careful about that.  After you’ve not fooled yourself, it’s easy not to fool other scientists.  You just have to be honest in a conventional way after that. 


I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I’m not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being.  We’ll leave those problems up to you and your rabbi.  I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist.  And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.


This kind of "bending over backwards to show how maybe you're wrong" (in service of not fooling yourself) is close to double crux. Listing cruxes puts us in the mindset of thinking about ways we could be wrong.

On the other hand, I notice that in a blog post like this, I have a hard time really explaining how I might be wrong before I've explained my basic position. It seems like there's still a role for making arguments forwards, rather than backwards. In my (limited) experience, double crux still requires each side to explain themselves (which then involves giving some arguments) before/while seeking cruxes. So perhaps double crux can't be viewed as a "pure" technique, and really has to be flexible, mixed with other approaches including the one I gave at the beginning. But I'm not sure what the best way to achieve that mixture is.
