I'm probably rustier on my formal logic than you. But I think what's going on here is fuzz around the boundaries where reality gets boiled down and translated to symbolic logic.

"Uniforms are good because they'll reduce bullying." (A because B, B --> A)
"Uniforms are bad, because along with all their costs they fail to reduce bullying." (~A because ~B, ~B --> ~A)

Whether this is a minor abuse of language or a minor abuse of logic, I think it's a mistake to go from that to "Uniforms are equivalent to bullying reduction" or "Bullying reductions result in uniforms." I thought that was what you were claiming, and it seems nonsensical to me. I note that I'm confused, and therefore that this is probably *not* what you were implying and that I've made some silly mistake -- but that leaves me little closer to understanding what you *were* implying.
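One way to see what's at stake in those two readings: if you take both "A because B" (B --> A) and "~A because ~B" (~B --> ~A) as material implications, the conjunction really is the biconditional A <-> B. A quick enumeration confirms it (a sketch for illustration; none of these names come from the thread):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# A: "uniforms are good", B: "uniforms reduce bullying"
for a, b in product([False, True], repeat=2):
    both = implies(b, a) and implies(not b, not a)  # B -> A together with ~B -> ~A
    # Asserting both implications at once is exactly the biconditional A <-> B.
    assert both == (a == b)

print("B -> A plus ~B -> ~A has the same truth table as A <-> B")
```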

Comment author: CCC
07 December 2016 02:09:57PM
2 points

> "Uniforms are good because they'll reduce bullying." (A because B, B --> A)
>
> "Uniforms are bad, because along with all their costs they fail to reduce bullying." (~A because ~B, ~B --> ~A)

A: "Uniforms are good"

B: "Uniforms reduce bullying"

B->A: "If uniforms reduce bullying, then uniforms are good."

~B->~A: "If uniforms do not reduce bullying, then uniforms are not good."

"A is equivalent to B": "The statement 'uniforms are good' is exactly as true as the statement 'uniforms reduce bullying'."

A->B: "If uniforms are good, then it is possible to deduce that uniforms reduce bullying."

...does that help?
1 point

Yep. Thanks. =)

I was misunderstanding "equivalency" as "identical in all respects to," rather than seeing equivalency as "exactly as true as."

Comment author: OneStep
10 December 2016 06:55:24AM
0 points

There's confusion here between logical implication and reason for belief.

Duncan, I believe, was expressing belief causality -- not logical implication -- when he wrote "If B, then A." This was confusing because "if, then" is the traditional language for logical implication.

With logical implication, it might make sense to translate "A because B" as "B implies A". However, with belief causality, "I believe A because I believe B" is very different from "B implies A".

For example:

A: Uniforms are good.

B: Uniforms reduce bullying.

C: Uniforms cause death.

Let's assume that you believe A because you believe B, and also that you would absolutely not believe A if it turned out that C were true. (That is, ~C is another crux of your belief in A.)

Now look what happens if B and C are both true. (Uniforms reduce bullying to zero because anyone who wears a uniform dies and therefore cannot bully or be bullied.)

C is true, therefore A is false even though B is true. So B can't imply A.

B is only one reason for your belief in A, but other factors could override B and make A false for you in spite of B being true. That's why you can have multiple independent cruxes. If any one of your cruxes for A turns out to be false, then you would have to conclude that A is false. But any one crux being true doesn't by itself imply that A is true, because some other crux could be false, which would make A false.
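This multi-crux structure can be sketched in a few lines of code. Here belief in A is modeled as the conjunction of all cruxes; the crux names and this tiny model are illustrative, not anything OneStep specified:

```python
# Hypothetical cruxes for A = "uniforms are good":
# B = "uniforms reduce bullying" must be true, C = "uniforms cause death" must be false.
def believes_a(b, c):
    """A holds only if every crux holds: B is true and C is false (~C)."""
    return b and not c

# B alone does not settle A: with C true, A is false even though B is true.
assert believes_a(b=True, c=False) is True
assert believes_a(b=True, c=True) is False   # B true, yet A false -> B does not imply A

# But any single crux failing does falsify A (~B -> ~A, and likewise C -> ~A):
assert believes_a(b=False, c=False) is False
```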

So with belief causality, "A because B" does not mean that B implies A. What it actually means is that ~B implies ~A, or equivalently, that A implies B -- which in that form sounds counter-intuitive even though it's right.
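That "~B implies ~A" and "A implies B" are the same claim is just the contrapositive, which is easy to confirm by enumeration (again a sketch, not code from the comment itself):

```python
def implies(p, q):
    # Material implication: only false when p holds and q fails.
    return (not p) or q

for a in (False, True):
    for b in (False, True):
        # Contrapositive: ~B -> ~A has the same truth table as A -> B ...
        assert implies(not b, not a) == implies(a, b)

# ... but it is NOT the same as B -> A: with A false and B true,
# A -> B holds while B -> A fails.
assert implies(False, True) and not implies(True, False)
```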

So for B to be a crux of A means only (in formal logical implication) that A -> B, and definitely not that B -> A. In fact, for a crux to be interesting/useful, you don't want a logical implication of B -> A, because then you've effectively made no progress toward identifying the source of disagreement. To make progress, you want each crux to be "saying less than" A.
I guess we now can get rid of http://lesswrong.com/lw/wj/is_that_your_true_rejection/ .

Comment author: CCC
15 December 2016 07:13:32AM
0 points

In a pure-logic kind of way, finding B where B is exactly equivalent to A means nothing, yes. However, in a human-communication kind of way, it's often useful to stop and rephrase your argument in different words. (You'll recognise when this is helpful if your debate partner says something along the lines of "Wait, is that what you meant? I had it all wrong!")

This has nothing to do with formal logic; it's merely a means of reducing the probability that your axioms have been misunderstood (which is a distressingly common problem).
