In this video, Douglas Crockford (JavaScript MASTER) says:

So I think programming would not be possible without System 1; without the gut. Now, I have absolutely no evidence to support that statement, but my gut tells me it's true, so I believe it.

 

1

I don't think he has "absolutely no evidence". In worlds where DOUGLAS CROCKFORD has a gut feeling about something related to programming, how often does that gut feeling end up being correct? Probably a lot more than 50% of the time. So according to Bayes, his gut feeling is definitely evidence.

The problem isn't that he lacks evidence. It's that he lacks communicable evidence. He can't say "I believe A because X, Y and Z." The best he could do is say, "just trust me, I have a feeling about this".

Well, "just trust me, I have a feeling about this" does qualify as evidence if you have a good track record, but my point is that he can't communicate the rest of the evidence his brain used to produce the resulting belief.

 

2

How do you handle a situation where you're having a conversation with someone and they say, "I can't explain why I believe X; I just do"?

Well, as far as updating beliefs, I think the best you could do is update on the track record of the person. I don't see any way around it. For example, you should update your beliefs when you hear Douglas Crockford say that he has a gut feeling about something related to programming. But I don't see how you could do any further updating of your beliefs. You can't actually see the evidence he used, so you can't use it to update your beliefs. If you do, the Bayes Police will come find you.

Perhaps it's also worth trying to dig the evidence out of the other person's subconscious.

  • If the person has a good track record, maybe you could say, "Hmm, you have a good track record so I'm sad to hear that you're struggling to recall why it is you believe what you do. I'd be happy to wait for you to spend some time trying to dig it up."
  • Maybe there are some techniques that can be used to "dig evidence out of one's subconscious". I don't know of any, but maybe they exist.

 

3

Ok, now let's talk about what you shouldn't do. You shouldn't say, "Well if you can't provide any evidence, you shouldn't believe what you do." The problem with that statement is that it assumes that the person has "no evidence". This was addressed in Section 1. It's akin to saying, "Well Douglas Crockford, you're telling me that you believe X and you have a fantastic track record, but I don't know anything about why you believe it, so I'm not going to update my beliefs at all, and you shouldn't either."

Brains are weird and fantastic thingies. They process information and produce outputs in the form of beliefs (amongst other things). Sometimes they're nice and they say, "Ok Adam - here is what you believe, and here is why you believe it". Other times they're not so nice and the conversation goes like this:

Brain: Ok Adam, here is what you think.

Adam: Awesome, thanks! But wait - why do I think that?

Brain: Fuck you, I'm not telling.

Adam: Fuck me? Fuck you!

Brain: Who the fuck do you think you're talking to?!!!

Just because brains can be mean doesn't mean they should be discounted.

Comments
[anonymous]

I think this is what many people find confusing about Bayesian reasoning: the nods to subjectivity. Some people I've discussed it with often say things like "But it's still subjective!" or "You're pulling those numbers out of your ass!" Well, yes, maybe. But calibration games, immediate-feedback systems like the Good Judgement Project, and general awareness of priors have been shown to do better than chance, and better than unaided intuition, when the RESULTS are objectively measured.

If I had to speculate on why people react so negatively, I'd say it's because of the false dichotomy between "objective" and "subjective." Objective: numbers, math, computer programs. Subjective: fuzzy, things that aren't science. So, saying that something can be evidence in a rational approach AND be subjective...is confusing. They aren't thinking about weighting the evidence accordingly. They're annoyed that it's counted at all.
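For a sense of what "objectively measured" can look like, here's a small sketch using the Brier score, one common calibration metric. The forecasts below are invented; lower scores are better, and always saying 50% scores exactly 0.25.

```python
# Brier score: mean squared error of probabilistic forecasts.
# Lower is better; a coin-flipper who always says 0.5 scores 0.25.

def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome), outcome 0 or 1."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Invented example data: five forecasts and what actually happened.
forecaster = [(0.9, 1), (0.8, 1), (0.7, 0), (0.2, 0), (0.1, 0)]
coin_flipper = [(0.5, outcome) for _, outcome in forecaster]

print(brier_score(forecaster))    # ~0.118 -- beats chance
print(brier_score(coin_flipper))  # 0.25   -- the chance baseline
```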

You shouldn't say, "Well if you can't provide any evidence, you shouldn't believe what you do." The problem with that statement is that it assumes that the person has "no evidence".

Another reason not to do that is that people often come up with fake reasons for believing something and convince themselves that it is the actual reason. The more socially unacceptable it is to fail to provide an adequate reason for a belief, the more this is encouraged.

Unfortunately I cannot communicate why I think Christianity is true; it's a gestalt thing - it just makes sense, it can't be any other way in the light of all the evidence.

-- Any number of quite successful CEOs, neurosurgeons, writers.

[anonymous]

I think you can differentiate between people who say that about a skill, and people who say it about a concept.

Consider driving: I recall driving being a System 2 activity for at least a year while I was learning. It was certainly stressful enough to induce tears on a regular basis (give me a pass, I was a teenager). Slowly, driving under normal conditions became integrated into System 1, and now I don't feel like crying when I have to change lanes. Sufficient practice of any skill can turn it into a System 1 activity.

Currently, programming is a System 2 activity for me. My husband, however, has more than a decade of experience programming. When he helps debug my code, he doesn't painstakingly go over every line, first thing. He glances, skims, says "This doesn't look right..." and then uses a combination of instinct and experience to find my error. I can't imagine being a professional programmer until it's a System 1 activity at least half of the time.

So: the difference between saying "System 1 is integral to my profession and execution of a skill," and "System 1 is all the evidence I need for the existence of a deity" is very large. In the first case, we can take the statement as evidence, appropriately weighted against the speaker's track record with that skill, that System 1 has been beneficial. In the second case, people are saying the equivalent of "My instinct = God, and that's the only test I need!" The weight that bears in your Bayesian calculation should be nothing, or almost nothing, because there is no way to develop a God-detecting skill and integrate it into System 1.

I can't imagine being a professional programmer until it's a System 1 activity at least half of the time.

This. In just about every field. It helps that in programming, and some of the sciences, there is quick feedback. I have a physics degree, but my job is fuzzier than that. After 4 years, a lot of the time judgments start as "This doesn't sound right," and a lot of the work is reconstructing a communicable line of reasoning.

[anonymous]

I'm glad my intuition has been borne out by real people!

[gjm]

When you're thinking "in explicit mode" -- seriously trying to assess probabilities, considering the possibility that your own judgement might be in error, etc. -- you should (1) take account of your feelings and hunches and whatnot, but (2) not take them at face value. E.g., Douglas Crockford should think "This is only a gut feeling, but I've found that my gut feelings in this field are right much more often than they're wrong. So my gut feeling is good evidence."

This evidence is (at least in principle) communicable. In fact, it seems like to some extent it's been successfully communicated to you. If Douglas Crockford ever tells you of a gut feeling he has about something related to good Javascript programming style, you will take it seriously and may well modify how you write your code.

Now, if Crockford is operating in system-1 mode instead, just going with what his gut tells him, then indeed his evidence isn't communicable to you. I don't see why this is of more concern than the fact that when a cricket fielder is catching a ball, the information his eyes are feeding him that helps him catch the ball isn't communicable to you.

I don't see why this is of more concern than the fact that when a cricket fielder is catching a ball, the information his eyes are feeding him that helps him catch the ball isn't communicable to you.

Hmm; seems like that's the whole point of the article. When you're evaluating claims, you prefer communicable information because that's the social standard. The moral is that that's often unnecessary or misguided. So, your response seems like hindsight bias to me.

I wonder if it's worth separating out the track record across the working-by-gut and communicable-evidence channels? Like, if of the last 30 predictions he made, he gave reasoning for 27, and all of those were right but the other 3 were wrong, then that's a bad track record for this sort of thing.

Seems obvious, but it might be over-fitting, especially since 'trust me' cases might be rare.
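If you did have the data, the bookkeeping for the split is trivial. Here's a hypothetical sketch using the 27/3 example from the comment above:

```python
# Sketch of splitting one track record into per-channel accuracies.
# The records are invented to match the example above: 27 predictions
# that came with reasoning (all right) and 3 bare gut calls (all wrong).

from collections import defaultdict

def per_channel_accuracy(records):
    """records: iterable of (channel, was_correct) pairs."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for channel, was_correct in records:
        totals[channel] += 1
        hits[channel] += was_correct
    return {ch: hits[ch] / totals[ch] for ch in totals}

records = [("reasoned", True)] * 27 + [("gut", False)] * 3

print(per_channel_accuracy(records))
# {'reasoned': 1.0, 'gut': 0.0} -- 90% overall, but the gut channel is
# what a bare "trust me" draws on, and 3 samples is thin evidence.
```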

[ike]

You shouldn't say, "Well, if you can't provide any evidence, you shouldn't believe what you do."

There are at least two justifications I can think of for this.

  1. A reductio: basically saying "since you do believe X, you must have some evidence, so see if you can figure out what it is". This can start a discussion on how not all evidence is definite or statistical, etc.
  2. If they don't have a good track record, then you're trying to influence them to drop the belief because of lack of evidence (this rarely works, unfortunately).

I agree with your second point. If your intuitions have proven false more often than true (in a given context), then a new intuition your brain produces in that context is actually evidence against the thing it asserts (sorry if that was poorly worded).

As for the first point, I agree that it'd be nice to make an attempt to figure out what it is, but if the attempt fails, I don't think the observation that "Person X reports an intuitive belief that Y is true" should be ignored as evidence.

I treat conversations like this as a communication problem, since the information should be communicable if it exists somewhere in Douglas Crockford's mind. I try to find what the intuition is based on, which helps i) send me in the right direction and ii) avoid double-counting the evidence if I find it independently.

To me, the labels "skill" or "intuition" mean that something is not understood well enough to be communicated objectively. A total understanding would include the ability to describe it as one or more clear-cut techniques or algorithms.

One normal-world handy use for the phrases 'skill' and 'intuition' is in shortcutting communication, either out of reticence or convenience. For example, if I ask a professional poker player why they made a particular play, and they don't want to get into a meaningful discussion about the sorts of math they work through while playing the game (plus attendant behavioural observations etc.), then essentially either of those two responses is a reasonably polite way of brushing me off.

I'm sure you can think of instances where, regardless of the polite good intentions of a questioner, you've been in a situation where it's not in at least one party's best interests to go into the minutiae of a process - either because you're talking across a vast knowledge gap or because there are other demands on your time.

I'm reminded of the joke variants that mechanically-inclined people tend to make: $1 for hitting your TV with a hammer, $50 for knowing where to hit it. Complex knowledge is valuable!

Another thing to consider - and something I'm guilty of - is using skill/intuition references to short-circuit people from getting sidetracked at an early stage of their learning process. I'm sure an analogy could be made with programming, as in the above responses. When the mastery of a complex field comes through a progression of skills {A, B, C [...] Z}, and you're trying to guide someone from B to C and they spot the shininess of J or K off on the intellectual horizon, as a teacher your pedagogy might lean towards gently nudging them back to the fundamentals at C.

I agree with your main point, and I sometimes use the phrases in the same way. But what do you say when they ask you for details anyways? I mostly interact with non-rationalists, and my experience is that after people make a claim about skill or intuition, they're usually unable to explain further (or unwilling to the point of faking ignorance). If I'm talking to someone I trust to be honest with me and I keep trying to pin down an answer, it seems to eventually reduce to the claim that an explanation is impossible. A few people have said exactly that, but a claim like "you'll just know once you have more experience" is more common.

In a situation like this, what approach would get you to give more detail? I'd be happy with "you need to understand skills D through I before I answer you," but I'm rarely able to get that.

For me personally, a long career in a particular public service sector has made me surprisingly efficient at smilingly, politely ignoring what people say and digging out information from an unwilling audience. When someone drops a blanket 'You must fulfil condition Y to truly understand foo' statement, I respond by seeing it as an interrogatory challenge :-)

When people try to push me off with an 'I can't explain' or 'You need more experience' type of response, I usually deflect it by nodding, smiling broadly, and saying something along the lines of 'I'm very smart, interested in your thought process, and have the patience to sit here while you figure out how to say what it is you want to say,' or 'The best way for me to get experience is to learn from someone with it.' I find in these situations a little bit of an ego jab also works wonders in getting people to enunciate their opinions - YMMV. Refer to your local Zen Master for tips and tricks.

Also, asking leading but open questions can help people articulate their rationalisations in a way that they hadn't considered before. I like to raise contrary hypotheses: 'What would need to be different about the real world for this theory not to work?' / 'If I/you were wrong about X, how would we be able to tell?'

People who have a great depth of expertise in an area will often be cognizant of other people in that ideaspace who they mildly or strongly disagree with, and sometimes, by getting people to differentiate between themselves and other thinkers, they might be able to articulate their points a little more clearly.

If I'm in a teaching situation, I'll usually try and find a gaming metaphor that will fit: 'International share transfer pricing is the end boss of tax law. You're still halfway through the main quest and you don't have all the items you need yet.' More generally, I fall back on car driving / plane flying / SCUBA diving analogies, as they're all pretty universally understood, even in the abstract.

One final alternative - and I use this on precociously inquiring children more than adults - is to deflect into academia: "Gee, that's an interesting question about black holes, what does your Encyclopaedia of Space say?"

Do you go through an explicit updating procedure on a regular basis in conversations like that? I don't and I think most people don't.

When someone says that they believe X but can't explain why, I do update. As for how I update, it isn't much more than querying my intuition to see what their track record is in similar contexts.

I do update

What does that mean in practice? What mental operation do you do?

I say, "X is more likely to be true".

You mean you move your mouth and those words come out?

(I apologize if I'm not understanding your point/question.)

As for what words I actually speak, sometimes I say something along the lines of "It's ok, your intuition still means something".

I asked what you do and you said something that's not what you do.

In my model, most people don't explicitly update at all but let their brains shift beliefs in the way the brain is accustomed to do.

This is certainly what most people do in fact, but it is a bad idea, because it means that normally a person who hears something he disagrees with simply ignores it, even when there are good reasons not to do so.

This is certainly what most people do in fact, but it is a bad idea, because it means that normally a person who hears something he disagrees with simply ignores it, even when there are good reasons not to do so.

In the moment where I hear someone I consider to be a programming expert saying that 'programming needs System 1 to be done well', beliefs in my brain shift pretty automatically in that direction without any direct intervention. I don't think I even have the option to not let it affect my beliefs.

Fine, but that could be because you had no strong opinion opposed to it in the first place, or because you are unusual.

or because you are unusual.

I don't think so. It's a very normal human process that beliefs change when you hear a person you consider authoritative making an argument.

Beliefs mostly get changed by System 1, and we don't even have direct System 2 write access to them.

I disagree; I think we have direct write access to nearly everything that matters about our beliefs.

If that were true, a person with social anxiety could simply overwrite the uncomfortable belief that other people are judging them.

Yes, it's a learnable skill. Stage hypnotists exist.

Yes, it's a learnable skill. Stage hypnotists exist.

In stage hypnosis, people don't change their beliefs themselves but get led by another person to change their beliefs.

More to the point, I wasn't focused on what's theoretically possible but what we do in day to day interactions. In day to day interactions we don't simply write new beliefs directly into our minds.

Let's suppose that you had reason to believe that the sky is blue, but found yourself believing that it was green. This would not stop you from telling people, "I found out that the sky is blue," and giving the reasons that show that it is blue (since we are assuming you had reason to believe that it is blue.) Likewise, suppose someone comes up to you and says, "I would like to bet you $100 that the sky is green and propose the following test..." No matter how you feel about the color of the sky, you are perfectly free to accept the bet and win if the sky is blue.

So in other words, as I said, you have direct write access to pretty much everything that matters about a belief: you can say it is true, argue for it, and act on it.

No matter how you feel about the color of the sky, you are perfectly free to accept the bet and win if the sky is blue.

"perfectly free" basically supposes that you have free will. In practice human's like to believe that they have free will but they don't behave that way in experimental settings. As long as you think about thought experiments with your usual intuition that presupposes free will you don't get to the substance of the argument and understand how beliefs work in practice.

You can argue that we don't have direct write access to anything; but if you want to describe the facts with "we have direct write access to some things," then it is reasonable to include most aspects of our beliefs in that statement.

Let's say you attempt to think nothing for 5 minutes. Will you succeed, simply because you are free enough to do so? More than 99% of people won't. On the other hand, if I want to raise my right arm, I can do that successfully nearly every time.

When it comes to changing beliefs, newspaper corrections are a good example. Alice reads a newspaper saying 'Bob is evil'. The next week the newspaper writes: "We were wrong, Bob isn't evil at all". Does that mean that Alice is now less likely to believe that 'Bob is evil'? That's not an automatic effect. Being reminded of the old belief that 'Bob is evil' can strengthen the belief. That's a fact you have to take into account when you think about how belief change works, but it isn't part of a mental model that assumes people simply make free-will decisions to change their beliefs.

"Think nothing for 5 minutes" is not like "raise my right arm." It is like "hold my arm so still for 5 minutes than a careful observer will not even notice a jitter." It is unlikely that anyone can do either of those things on demand. But I can refrain from most complete thoughts for 5 minutes, and from large motions of my arm. My control over my thoughts is actually very similar to my control over my arm. You find dissimilarity because you are comparing the wrong things.

So, you pick an example with no emotional valence. But let's suppose instead that I have reason to believe that I'm perfectly safe, but find myself believing that someone is going to kill me in my sleep. This would not stop me from telling people I'm perfectly safe, or from giving the reasons that show I'm perfectly safe, or from accepting a similar $100 bet. It might, however, prevent me from getting a good night's sleep.

Is that not a thing that matters about the belief that I'm safe?

I expected ChristianKI might say that you would be lying, if you tell people that the sky is blue in my hypothetical situation. He hasn't responded, so maybe he does not think this. In any case, I would deny that it is a lie to tell people something that you know you have good reasons to believe, however you feel about it when you do it.

In any case, when I first made the claim, I said that we have direct write access to almost everything important about a belief, not to absolutely everything important. And in particular, we don't have write access to how we feel about them. I agree that could be something important, but it is relatively minor compared to all sorts of other things that result from beliefs, like external relationships and real world actions.

In theory we could describe this same situation in two different ways: by saying, "I can't control my beliefs," and then we would be implicitly identifying our beliefs with those feelings. Or by saying, "I can control my beliefs," and then we would be implicitly identifying our beliefs with a pattern of speaking, acting, and consciously controlled thinking. It is pointless to ask which of these is true: either could be true, if that's what we meant by a belief. The question is which is a better idea for practical purposes. And it seems to me better to say, "I can control my beliefs," because saying the other thing tends to make us forget many of our options (for example winning good bets.)

Also, another advantage is that in practice what I am suggesting tends to modify the feelings as well, although indirectly, and not always completely.

I asked what you do and you said something that's not what you do.

You asked what mental operation I do. In my head, I do say "X is more likely to be true".

'Say' is usually a word that refers to verbal expression.

Are you saying that you do have "X is more likely to be true" as a voice in your head?

Are you saying that you do have "X is more likely to be true" as a voice in your head?

Yes.

I had a discussion with Gilbert a while ago about the meaning of updating in the comments on his post here.

Agree with this. Have had to learn to do this with some people I know who intuit a lot of their beliefs, but tend to be right a lot.

This seems like an informal extension of the no-disagreement theorem to situations where the agents are not perfect rationalists.

The claim is not observable in any way and offers no testable predictions or anything that even remotely sounds like advice. It's unprovable because it doesn't talk about objective reality.

There's a sequence about how the scientific method is less powerful than Bayesian reasoning that you should probably read.

I think the point is, how would we tell the difference between worlds in which programming does and does not require System 1?

Which claim? As for the claim that one's intuition is evidence, I predict that in worlds where someone with a good track record has an intuitive belief, the belief will be true more than it will be false.

I predict that if the Pope declares Jesus is God, there will be more worlds in which Jesus is God than worlds in which Jesus is merely the son of God.

If a statement does not say anything about observable reality, there is no objective truth to be determined.

Fair point. I agree that "I have a gut feeling about something non-observable" is a possibility. But so is "I have a gut feeling about something that is observable".

And the only way to distinguish is to find an observation you can make. Crockford's model offers none I can recognize, not even "System 1 coordinates your muscles to move your mouse".