
Qiaochu_Yuan comments on "Flinching away from truth" is often about *protecting* the epistemology - Less Wrong

Post author: AnnaSalamon, 20 December 2016 06:39PM (68 points)


Comment author: Qiaochu_Yuan 20 December 2016 07:42:01AM 18 points

The bucket diagrams don't feel to me like the right diagrams to draw. I would be drawing causal diagrams (of aliefs); in the first example, something like "spelled oshun wrong -> I can't write -> I can't be a writer." Once I notice that I feel like these arrows are there, I can ask myself whether they're really there, how I could falsify that hypothesis, etc.
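(The causal diagram above can be made concrete as a tiny directed graph. This is just an illustrative sketch; the edge list and the `downstream` helper are made up for this comment, not from any real library or from the original post.)

```python
# Each edge "A -> B" encodes the alief "A implies/causes B",
# mirroring the chain "spelled oshun wrong -> I can't write -> I can't be a writer".
alief_edges = [
    ("spelled oshun wrong", "I can't write"),
    ("I can't write", "I can't be a writer"),
]

def downstream(node, edges):
    """Return every belief the given node (transitively) feeds into.

    Each returned arrow is a separate hypothesis that can be
    inspected and potentially falsified on its own.
    """
    out = set()
    frontier = [node]
    while frontier:
        current = frontier.pop()
        for src, dst in edges:
            if src == current and dst not in out:
                out.add(dst)
                frontier.append(dst)
    return out

print(downstream("spelled oshun wrong", alief_edges))
# prints {"I can't write", "I can't be a writer"} (in some order)
```

The point of the representation is that the arrows, not the nodes, are the things to question.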

Comment author: devi 20 December 2016 10:30:27PM 9 points

The causal chain feels like a post-hoc justification, not what actually goes on in the child's brain. I expect the computation to use a vaguer sense of similarity that often ends up agreeing with causal chains (at least well enough in domains with good feedback loops). I agree that causal chains are more useful models of how you should think explicitly about things, but it seems to me that the purpose of these diagrams is to give a memorable symbol for the bug described here (use case: recognizing and remembering the applicability of the technique).

Comment author: SatvikBeri 21 December 2016 01:03:28AM 3 points

In my head, it feels mostly like a tree, e.g.:

"I must have spelled oshun right"
- Otherwise I can't write well
  - If I can't write well, I can't be a writer
- Only stupid people misspell common words
  - If I'm stupid, people won't like me

etc. For me, to unravel an irrational alief, I generally have to solve every node below it, e.g. by making sure that I get the benefit from some other alief.
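(A toy rendering of that tree, to make "solve every node below it" concrete. The nested-dict encoding and the `leaves` helper are illustrative inventions for this comment, not a real modeling library.)

```python
# The alief tree from the example: each key is an alief, each value
# a list of child subtrees that support it.
alief_tree = {
    "I must have spelled oshun right": [
        {"Otherwise I can't write well": [
            {"If I can't write well, I can't be a writer": []},
        ]},
        {"Only stupid people misspell common words": [
            {"If I'm stupid, people won't like me": []},
        ]},
    ],
}

def leaves(tree):
    """Collect the leaf aliefs: the nodes that each need to be
    resolved before the root alief can safely be given up."""
    result = []
    for node, children in tree.items():
        if not children:
            result.append(node)
        for child in children:
            result.extend(leaves(child))
    return result

print(leaves(alief_tree))
# prints ["If I can't write well, I can't be a writer", "If I'm stupid, people won't like me"]
```

"Unraveling" the root alief then corresponds to handling each leaf, e.g. by getting its benefit from some other alief.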

Comment author: LawrenceC 20 December 2016 08:44:01PM 1 point

I think they're equivalent in a sense, but that bucket diagrams are still useful. A bucket can also occur when you conflate multiple causal nodes. So in the first example, the kid might not even have a conscious idea that there are three distinct causal nodes ("spelled oshun wrong", "I can't write", "I can't be a writer"), but instead treats them as a single node. If you're able to catch the flinch, introspect, and notice that there are actually three nodes, you're already a big part of the way there.

Comment author: Qiaochu_Yuan 20 December 2016 08:52:05PM 5 points

The bucket diagrams are too coarse, I think; they don't keep track of what's causing what and in what direction. That makes it harder to know what causal aliefs to inspect. And when you ask yourself questions like "what would be bad about knowing X?" you usually already get the answer in the form of a causal alief: "because then Y." So the information's already there; why not encode it in your diagram?

Comment author: LawrenceC 20 December 2016 08:56:48PM 1 point

Fair point.

Comment author: Sniffnoy 20 December 2016 08:07:46PM 1 point

Agreed -- this sort of "bucket error" can be generalized to "invisible, uninspected background assumption." But those assumptions don't necessarily need to be biconditionals.

Comment author: John_Maxwell_IV 22 December 2016 01:13:47PM 0 points

Does anyone know whether something like buckets/causal diagram nodes might have an analogue at the neural level?