by [anonymous]
I'm here to teach you a new phobia. The phobia concerns phrases such as "for many reasons".

Rational belief updating is a random walk without drift. If you expect your belief to go up (down) in response to evidence, you should instead make it go up (down) right now. If you're not convinced, read about conservation of expected evidence, or the law of iterated expectations.
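The no-drift claim can be checked directly. Here is a minimal sketch (my illustration, not from the post; the prior and likelihood values are arbitrary) showing that the expected posterior equals the prior, which is conservation of expected evidence:

```python
# Expected posterior under Bayesian updating equals the prior (a martingale).
# Numbers are illustrative assumptions, not from the post.
p = 0.3          # prior P(H)
a, b = 0.8, 0.4  # likelihoods P(E|H) and P(E|not-H)

p_e = p * a + (1 - p) * b                # P(E) by total probability
post_if_e = p * a / p_e                  # posterior if E is observed
post_if_not_e = p * (1 - a) / (1 - p_e)  # posterior if E is not observed

# Average posterior, weighted by how likely each observation is:
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(abs(expected_posterior - p) < 1e-12)  # True: no expected drift
```

The identity holds for any prior and any likelihoods, which is why anticipating a directional update means you should already have updated.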

If evidence comes in similar-sized chunks, the number of chunks in the "for" direction follows a binomial distribution with p=.5. Such a distribution can put most of the pieces in the same direction, but if the number of pieces is large, this happens only rarely.

So if you can find, say, ten reasons to do or believe something and no reasons not to, something is going on.

One possibility is it's a one in a thousand coincidence. But let's not dwell on that.
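As a sanity check on the arithmetic (my sketch, using the binomial model with p=.5 described above):

```python
from math import comb

n = 10
# P(all ten chunks point the chosen direction) under Binomial(n, 0.5)
p_all = 0.5 ** n
# For comparison: P(at least nine of ten agree with the chosen direction)
p_nine_plus = (comb(n, 9) + comb(n, 10)) * 0.5 ** n

print(p_all)        # 0.0009765625 -- about one in a thousand
print(p_nine_plus)  # 0.0107421875 -- still only about one in a hundred
```

So "one in a thousand" is exactly 1/1024, and even near-unanimity stays rare under an unbiased reason-generator.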

Another possibility is that the process generating your reasons, while unbiased, is skewed. That is to say, it produces many weak reasons in one direction and a few strong reasons in the other, and it just happened not to produce such a strong reason in your case. And so we have many empirical reasons to think the Sun will rise tomorrow (e.g., it rose on June 3rd 1978 and February 16th 1260), and none that it won't. But this does not seem to describe cases like "what university should I choose", "should I believe in a hard takeoff singularity", or "is global warming harmful on net".

Another possibility (probably a special case of the previous one, but worth stating on its own) is that what you're describing as "many reasons" is really a set of different manifestations of the same underlying reason. Maybe you have a hundred legitimate reasons for not hiring someone, including that he smashes furniture, howls at the moon, and strangles kittens. If so, the reason underlying all these may just be that he's nuts.

Then there's the last, scariest, most important possibility. You may be biased toward finding reasons in one direction, so that you will predictably trend toward your favorite belief. This means you're doing something wrong! Luckily, noticing that you reached for the phrase "for many reasons", and asking why, is exactly what lets you find this out.

In sum, when your brain speaks of "many reasons" all going the same way, grab, shake, and strangle it. It may just barf up a better, more compressed way of seeing the world, or confess to confirmation bias. 

(Incidentally, this also applies to the phrase "in many ways". If you judge someone to be in many ways a weird person, that suggests he has some underlying property that causes many kinds of weirdness, or that you have some underlying property that causes you to judge his traits as weird. Both are noteworthy.)

6 comments

I don't understand what you're saying. The law of conservation of expected evidence applies only to expectations of evidence and is conditional upon your current level of belief.

It seems to me that your possibility that "what you're describing as many reasons is really a set of different manifestations of the same underlying reason" is exactly the point at issue. The underlying reason is that something is true. I can think of many reasons to believe Russia exists. I've seen pictures of it, I know people who have been there, I've read literature by Russian writers, et cetera. I know of no evidence that Russia does not exist. I can think of more reasons to believe Russia exists than that Russia does not exist because Russia exists. Likewise, I can think of many more reasons not to kill a policeman than I can think of reasons to kill a policeman, and this is because killing a policeman is a bad idea.

Am I missing your point?

[anonymous]

Russia falls under the skewed case as I guess does everything where probabilities are close to 1. I disagree with your analysis of the policeman example ("killing a policeman is a bad idea" does not explain the reasons why killing a policeman is a bad idea, but rather is explained by them).

That said, I suspect my thinking in some places confuses predictions and decisions, and I'm strongly considering depublishing. Dammit.

So if you can find, say, ten reasons to do or believe something and no reasons not to, something is going on.

No, if you expect to find ten reasons to believe something, something is going on. If X is true it's perfectly reasonable to have found ten pieces of evidence for X and none against it. Or am I missing something?

I listed one of the things that could be going on as one's distribution being skewed. But that skew standardly happens to the distribution of one's future probabilities as one veers away from .5, so I was confused and shouldn't have listed it as a "thing going on". I also ended up defining "chunks of equal size" in an unreasonable way, by tying them to a binomial with p=.5; the math is correct given that definition, but the definition itself is unreasonable. My brain barfed there; sorry for making you read this. The idea for the post came more from a utility estimate being a random walk as one learns about independent random components of it, and I'm now rewriting the post in a way that's correct.

(Whew, it took me a while to articulate the exact thing I did wrong.)

Hmm. I'm still trying to parse the post, and there seems to be something your argument might not have captured. Does it apply to value judgments? Say, if you find you have found a hundred apparently independent reasons for not hiring someone, does this fact demand an explanation?

[anonymous]

The idea for this post came from something more like independent components in a utility function, and I stupidly extended it to belief updating when my brain barfed, not realizing that probability distributions for future beliefs are usually skewed. I'll depublish and maybe republish the valid part. Sorry for making you read that.