
Comment author: mingyuan 02 February 2017 02:02:29AM *  5 points [-]

Anecdotal data time! We tried this at last week’s Chicago rationality meetup, with moderate success. Here’s a rundown of how we approached the activity, and some difficulties and confusion we encountered.

Approach:

Before the meeting, some of us came up with lists of possibly contentious topics and/or strongly held opinions, and we used those as starting points by just listing them off to the group and seeing if anyone held the opposite view. Some of the assertions on which we disagreed were:

  • Cryonic preservation should be standard medical procedure upon death, on an opt-out basis
  • For the average person, reading the news has no practical value beyond social signalling
  • Public schools should focus on providing some minimum quality of education to all students before allocating resources to programs for gifted students
  • The rationality movement focuses too much of its energy on AI safety
  • We should expend more effort to make rationality more accessible to ‘normal people’

We paired off, with each pair in front of a blackboard, and spent about 15 minutes on our first double crux, after the resolution of which the conversations mostly devolved. We then came together, gave feedback, switched partners, and tried again.

Difficulties/confusion:

  • For the purposes of practice, we had trouble finding points of genuine disagreement – in some cases we found that the argument dissolved after we clarified minor semantic points in the assertion, and in other cases a pair would just sit there and agree on assertion after assertion (though the latter is more a flaw in the way I designed the activity than in the actual technique). However, we all agree that this technique will be useful when we encounter disagreements in future meetings, and even in the absence of disagreement, the activity of finding cruxes was a useful way of examining the structure of our beliefs.

  • We were a little confused as to whether coming up with an empirical test to resolve the issue was a satisfactory endpoint, or if we actually needed to seek out the results in order to consider the disagreement resolved.

  • In one case, when we were debating the cryonics assertion, my interlocutor managed to convince me of all the factual questions on which I thought my disagreement rested, but I still had some lingering doubt – even though I was convinced of the conclusion on an intellectual level, I didn’t grok it. When we learned goal factoring, we were taught not to dismiss fuzzy, difficult-to-define feelings – that they could be genuinely important reasons for our thoughts and behavior. Given its reliance on empiricism, how does Double Crux deal with these feelings, if at all? (Disclaimer: it’s been two years since we learned goal factoring, so maybe we were taught how to deal with this and I just forgot.)

  • In another case, my interlocutor changed his mind on the question of public schools, but when asked to explain the line of argument that led him to change his mind, he wasn’t able to construct an argument that sounded convincing to him. I’m not sure what happened here, but in the future I would place more emphasis on writing down the key points of the discussion as it unfolds. We did make some use of the blackboards, but it wasn’t very systematic.

  • Overall it wasn’t as structured as I expected it to be. People didn’t reference the write-up when immersed in their discussions, and didn’t make use of any of the tips you gave. I know you said we shouldn’t be preoccupied with executing “the ideal double crux,” but I somehow still have the feeling that we didn’t quite do it right. For example, I don’t think we focused enough on falsifiability and we didn’t resonate after reaching our conclusions, which seem like key points. But ultimately the model was still useful, no matter how loosely we adhered to it.

I hope some of that was helpful to you! Also, tell Eli Tyre we miss him!

Comment author: Duncan_Sabien 02 February 2017 04:28:01AM 1 point [-]

Very useful. I don't have the time to give you the detailed response you deserve, but I deeply appreciate the data (and Eli says hi).

Comment author: Robin 10 December 2016 04:44:35AM 0 points [-]

I'm not sure what you mean, and I'm not sure that I'd let a LWer falsify my hypothesis. There are clear systemic biases LWers have which are relatively apparent to outsiders. Ultimately, I am not willing to pay CFAR to validate my claims, and there are biases (sunk cost, among others) which emerge from people who are involved in CFAR, whether as employees or as people who take the courses.

Comment author: Duncan_Sabien 30 December 2016 05:12:50AM 1 point [-]

I can imagine that you might have hesitated to list specifics to avoid controversy or mud-slinging, but I personally would appreciate concrete examples, as it's basically my job to find the holes you're talking about and try to start patching them.

Comment author: CCC 07 December 2016 02:09:57PM *  2 points [-]

> "Uniforms are good because they'll reduce bullying." (A because B, B --> A) "Uniforms are bad, because along with all their costs they fail to reduce bullying." (~A because ~B, ~B --> ~A)

A: "Uniforms are good"

B: "Uniforms reduce bullying"

B->A: "If uniforms reduce bullying, then uniforms are good."

~B->~A : "If uniforms do not reduce bullying, then uniforms are not good."

"A is equivalent to B": "The statement 'uniforms are good' is exactly as true as the statement 'uniforms reduce bullying'."

A->B: "If uniforms are good, then it is possible to deduce that uniforms reduce bullying."

...does that help?

Comment author: Duncan_Sabien 09 December 2016 11:17:20PM *  1 point [-]

Yep. Thanks. =)

I was misunderstanding "equivalency" as "identical in all respects to," rather than seeing equivalency as "exactly as true as."

Comment author: Robin 09 December 2016 07:30:27PM 0 points [-]

I'd take your bet if it were for the general population, not LWers...

My issue with CFAR is it seems to be more focused on teaching a subset of people (LWers or people nearby in mindspace) how to communicate with each other than in teaching them how to communicate with people they are different from.

Comment author: Duncan_Sabien 09 December 2016 11:11:01PM 0 points [-]

That's an entirely defensible impression, but it's also actually false in practice (demonstrably so when you see us at workshops or larger events). Correcting the impression (which again you're justified in having) is a separate issue, but I consider the core complaint to be long-since solved.

Comment author: rational_rob 07 December 2016 12:36:55PM 1 point [-]

I always thought of school uniforms as being a logical extension of the pseudo-fascist/nationalist model of running them. (I mean this in the pre-world war descriptive sense rather than the rhetorical sense that arose after the wars) Lots of schools, at least in America, try to encourage a policy of school unity with things like well-funded sports teams and school pep rallies. I don't know how well these policies work in practice, but if they're willing to go as far as they have now, school uniforms might contribute to whatever effects they hope to achieve. My personal opinion is in favor of school uniforms, but I'm mostly certain that's because I'm not too concerned with fashion or displays of wealth. I'd have to quiz some other people to find out for sure.

Comment author: Duncan_Sabien 07 December 2016 10:37:24PM 0 points [-]

I should note that my own personal opinions on school uniforms are NOT able-to-be-determined from this article.

Comment author: MrMind 06 December 2016 08:10:42AM *  1 point [-]

> They are NOT logically equivalent.

Ah, I think I've understood where the problem lies.
See, we both agree that B --> A and ~B --> ~A. The second statement, as we know from logic, is equivalent to A --> B. So we both agree that B --> A and A --> B, which yields that A is equivalent to B, or in symbols A <--> B.
This is what I was referring to: the crux being equivalent to the original statement, not B --> A being logically equivalent to ~B --> ~A.
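The derivation above (contraposition, plus the two implications combining into a biconditional) can be checked mechanically with a truth table. Here is a quick Python sketch of my own, purely illustrative, not from the thread:

```python
from itertools import product

def implies(p, q):
    # Material implication: "p -> q" is false only when p is true and q is false.
    return (not p) or q

for a, b in product([False, True], repeat=2):
    # Contraposition: "not B -> not A" always agrees with "A -> B".
    assert implies(not b, not a) == implies(a, b)
    # "(B -> A) and (not B -> not A)" holds exactly when A and B have the
    # same truth value, i.e. A <--> B.
    assert (implies(b, a) and implies(not b, not a)) == (a == b)

print("all four truth assignments check out")
```

Running it confirms both steps of the argument across all four truth assignments.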

Comment author: Duncan_Sabien 07 December 2016 12:56:16AM 0 points [-]

I'm probably rustier on my formal logic than you. But I think what's going on here is fuzz around the boundaries where reality gets boiled down and translated to symbolic logic.

"Uniforms are good because they'll reduce bullying." (A because B, B --> A) "Uniforms are bad, because along with all their costs they fail to reduce bullying." (~A because ~B, ~B --> ~A)

Whether this is a minor abuse of language or a minor abuse of logic, I think it's a mistake to go from that to "Uniforms are equivalent to bullying reduction" or "Bullying reductions result in uniforms." I thought that was what you were claiming, and it seems nonsensical to me. I note that I'm confused, and therefore that this is probably not what you were implying, and that I've made some silly mistake, but that leaves me little closer to understanding what you were saying.

Comment author: MrMind 05 December 2016 08:11:36AM 0 points [-]

I hear what you're saying, which is what I hinted at with point 2, but "if B then A" is explicitly written in the post: last paragraph of the "How to play" section.
It seems to me you're arguing against the original poster about what "being crucial" means logically, and although I do not agree with the conclusion you reach, I do agree that the formulation is wrong.

Comment author: Duncan_Sabien 05 December 2016 07:30:37PM 0 points [-]

I'm quite confident that my formulation isn't wrong, and that we're talking past each other (specifically, that you're missing something important that I'm apparently not saying well).

What was explicitly written in the post was "If B then A. Furthermore, if not B then not A." Those are two different statements, and you need both of them. The former is an expression of the belief structure of the person on the left. The latter is an expression of the belief structure of the person on the right. They are NOT logically equivalent. They are BOTH required for a "double crux," because the whole point is for the two people to converge—to zero in on the places where they are not in disagreement, or where one can persuade the other of a causal model.

It's a crux that cuts both ways—B being true implies that A is true, and B being false, rather than being irrelevant as it usually would be, implies that A is false. Speaking strictly logically, if all we know is that B implies A, then not-B has no impact at all on A. But when we're searching for a double crux, we're searching for something where not-B does have causal impact on A—something where not-B implies not-A. That's a meaningfully different and valuable situation, and finding it (in particular, assuming that it exists and can be found, and then going looking for it) is the key ingredient in this particular method of transforming argument into collaborative truth-seeking.
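The distinction here (not-B is uninformative under a one-way implication, but decisive under a double crux) can be made concrete with a small truth-table sketch in Python; the framing is mine, purely illustrative:

```python
from itertools import product

def implies(p, q):
    # Material implication: "p -> q" is false only when p is true and q is false.
    return (not p) or q

# All (A, B) truth assignments where B is false and only "B -> A" is assumed:
one_way = sorted(a for a, b in product([False, True], repeat=2)
                 if implies(b, a) and not b)
print(one_way)  # [False, True]: knowing not-B leaves A completely open

# The same, but with the second crux direction "not B -> not A" added:
double_crux = sorted(a for a, b in product([False, True], repeat=2)
                     if implies(b, a) and implies(not b, not a) and not b)
print(double_crux)  # [False]: under a double crux, not-B forces not-A
```

With only the one-way implication, both truth values of A survive the discovery of not-B; with both crux directions, not-B settles the question.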

Comment author: Rubix 02 December 2016 01:21:23AM 0 points [-]

For the author and the audience: what are your favourite patience- and sanity-inducing rituals?

Comment author: Duncan_Sabien 05 December 2016 07:22:47PM 0 points [-]

For me, sanity always starts with remembering times that I was wrong—confidently wrong, arrogantly wrong, embarrassingly wrong. I have a handful of dissimilar situations cached in my head as memories (there's one story about popsicles, one story about thinking a fellow classmate was unintelligent, one story about judging a student's performance during a tryout, one story about default trusting someone with some sensitive information), and I can lean on all of those to remind myself not to be overconfident, not to be dismissive, not to trust too hard in my feeling of rightness.

As for patience, I think the key thing is a focus on the value of the actual truth. If I really care about finding the right answer, it's easy to be patient, and if I don't, it's a good sign that I should disengage once I start getting bored or frustrated.

Comment author: negamuhia 03 December 2016 02:12:32PM 2 points [-]

Does anyone else have a problem with noticing when the discussion they're having is getting more abstract? I'm often reminded of this fact only mid-debate. This relates to the point on "Narrowing the scope", and to how to notice the need to do it.

Comment author: Duncan_Sabien 05 December 2016 07:20:02PM 0 points [-]

A general strategy of "can I completely reverse my current claim and have it still make sense?" is a good one for this. When you're talking about big, vague concepts, you can usually just flip them over and they still sound like reasonable opinions/positions to take. When you flip it and it seems like nonsense, or seems provably, specifically wrong, that means you're into concrete territory. Try just ... adopting a strategy of doing this 3-5 times per long conversation?

Comment author: CCC 02 December 2016 07:56:32AM 0 points [-]

> If I'm understanding correctly, I think you've made a mistake in your formal logic above—you equated "If B, then A" with "If A, then B" which is not at all the same.

No, he only inferred "If A, then B" from "If not B, then not A" which is a valid inference.

Comment author: Duncan_Sabien 02 December 2016 10:06:50PM *  0 points [-]

1) if B then A

2) if not B, then not A. Which implies if A then B.

... but then he went on to say "How can an equivalent argument have explanatory power?" which seemed, to me, to assume that "if B then A" and "if A then B" are equivalent (which they are not).
