Comments on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields - Less Wrong

73 Post author: Vladimir_M 15 February 2011 09:17AM


Comment author: [deleted] 20 February 2011 05:49:56AM *  0 points

I think that belief is a kind of internal declaration, because it serves essentially the same function internally that a declaration of belief serves externally. Please allow me to explain.

There are two pictures of how the brain works which don't match up comfortably. On one picture, the brain assigns a probability to something. On the other picture, the brain either believes, or fails to believe, something. The reason they don't match up is that in the first picture the range of possible brain-states is continuous, ranging from P=0 to P=1, while in the second picture the range of possible brain-states is binary: one state is the state of belief, the other is the state of failure to believe.

So the question then is, how do we reconcile these two pictures? My current view is that on a more fundamental level, our brains assign probabilities, and on a more superficial level, which is partially informed by the fundamental level, we flip a switch between two states: belief and failure to believe.

I think a key question here is: why do we have these two levels, the continuous level which assigns probabilities, and the binary level which flips a switch between two states? I think the reason for the second level is that action is (usually) binary. If you try to draw a map from probability assignment to best course of action (physical action involving our legs and arms), what you find is that the optimal leg/arm action quite often does not range continuously as probability assignment ranges from 0 to 1. Rather, at some threshold value, the optimal leg/arm action switches from one action to another, quite different action - with nothing in between.

So the level of action is a level populated by distinct courses of action with nothing in between, rather than a continuous range of action. What I think, then, is that the binary level of belief versus failure to believe is a kind of half-way point between probability assignments and leg/arm action. It is a translation of a probability assignment (which ranges continuously from zero to one) into a non-continuous, binary belief which is immediately translatable into decision and then into leg/arm action.
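The two-level picture described above could be caricatured in a few lines of code (a toy sketch, not anything from the comment itself; the 0.999 cutoff is borrowed from the street-crossing figure mentioned below, and the function names are my own):

```python
def binary_belief(p, threshold):
    """Collapse a continuous probability assignment into a binary state:
    True = belief, False = failure to believe."""
    return p >= threshold

def act(p, threshold):
    # Action reads only the binary belief, not the underlying probability:
    # there is nothing in between crossing and waiting.
    return "cross" if binary_belief(p, threshold) else "wait"

print(act(0.9995, 0.999))  # cross
print(act(0.98, 0.999))    # wait
```

The point of the sketch is that `act` never sees `p` directly; the continuous quantity is squashed through the binary switch first.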

But, as I think has been agreed, the optimal course of action does not depend merely on probability assignments. It also depends on value assignments. So, depending on your value assignments, the optimal course of action may switch from A to B at P=60%, or alternatively at P=80%, etc. In the case of crossing the street, I argued that the optimal course of action switches at P>99.9%.
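That dependence of the switch-point on value assignments can be made concrete with a small expected-utility sketch (illustrative numbers only; nothing here is from the original comment):

```python
def action_threshold(cost_of_precaution, cost_of_harm):
    """Probability of harm above which taking the precaution has the
    higher expected utility.

    EU(precaution)    = -cost_of_precaution          (paid regardless)
    EU(no precaution) = -p * cost_of_harm
    The optimal action switches, discontinuously, where these cross:
    p = cost_of_precaution / cost_of_harm.
    """
    return cost_of_precaution / cost_of_harm

# Modest stakes: take the precaution once P(harm) exceeds 40%.
print(action_threshold(10, 25))      # 0.4

# Severe harm: take the precaution once P(harm) exceeds 0.1% -- i.e. go
# without it only when you are more than 99.9% sure you are safe.
print(action_threshold(10, 10_000))  # 0.001
```

Same probability scale, different values, and the switch-point moves, which is exactly the dependence being claimed.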

But binary belief (i.e. belief versus non-belief), I think, is immediately translatable into decision and action; that is its function. In that case, since optimal action switches at different P depending on value assignments, the mind must also switch between belief and failure to believe at different P depending on value assignments.

Comment author: CuSithBell 20 February 2011 06:47:44AM 1 point

Okay, this makes sense, though I think I'd use 'belief' differently.

What does it mean in a situation where I take precautions against two possible but mutually exclusive dangers?

Comment author: [deleted] 20 February 2011 10:58:43AM *  0 points

Here's a concise answer that straightforwardly applies the rule I already stated. Since my rule only applies above 50%, and since P(being shot)=10% (as I recall), we must consider the negation. Suppose P(I will be shot) is 10% and P(I will be stabbed) is 10%, and suppose that (for some reason) "I will be shot" and "I will be stabbed" are mutually exclusive. Since P<50% for each of these we turn it around, and get:

P(I will not be shot) is 90% and P(I will not be stabbed) is 90%. Because the cost of being shot, and the cost of being stabbed, are so very high, the threshold for being convinced must be very high as well - set it to 99.9%. Since P=90% for each of these, neither reaches my threshold for being convinced.

Therefore I am not convinced that I will not be shot and I am not convinced that I will not be stabbed. Therefore I will not go without my bulletproof body armor and I will not go without my stab-proof body armor.
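The rule as applied above can be written out as a check per danger (a sketch using the numbers from this example; the variable names are my own):

```python
CONVICTION_THRESHOLD = 0.999  # very high, because the costs are very high

dangers = {"shot": 0.10, "stabbed": 0.10}  # P(danger); mutually exclusive

for danger, p in dangers.items():
    p_safe = 1 - p                      # P(I will not be <danger>) = 90%
    convinced_safe = p_safe >= CONVICTION_THRESHOLD
    wear_armor = not convinced_safe     # not convinced -> keep the armor on
    print(f"P(not {danger}) = {p_safe:.0%}, wear armor: {wear_armor}")
```

Note that mutual exclusivity never enters the check; each danger's negation is tested against the threshold on its own, which is why both precautions come out recommended.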

So the rule seems to work. The fact that these are mutually exclusive dangers doesn't seem to affect the outcome. [Added: For what I consider to be a more useful discussion of the topic, see my other answer.]