If you have worked your way through most of the sequences you are likely to agree with the majority of these statements:
- When people die we should cut off their heads so we can preserve those heads and make the person come back to life in the (far far) future.
- It is possible to run a person on Conway's Game of Life. This would be a person as real as you or me, who wouldn't be able to tell he's in a virtual world because it looks exactly like ours.
- Right now there exist many copies/clones of you, some of which are blissfully happy and some of which are being tortured and we should not care about this at all.
- Most scientists disagree with this but that's just because it sounds counter-intuitive and scientists are biased against counterintuitive explanations.
- Besides, the scientific method is wrong because it is in conflict with probability theory. Oh, and probability is created by humans, it doesn't exist in the universe.
- Every fraction of a second you split into thousands of copies of yourself. Of course you cannot detect these copies scientifically, but that's because science is wrong and stupid.
- In fact, it's not just people that split but the entire universe splits over and over.
- Time isn't real. There is no flow of time from 0 to now. All your future and past selves just exist.
- Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that's smarter than any human. When this happens humanity will probably be wiped out.
- To protect us against computers destroying humanity we must create a super-powerful computer intelligence that won't destroy humanity.
- Ethics are very important and we must take extreme caution to make sure we do the right thing. Also, we sometimes prefer torture to dust specks.
- If everything goes to plan a super computer will solve all problems (disease, famine, aging) and turn us into super humans who can then go on to explore the galaxy and have fun.
- And finally, the truth of all these statements is completely obvious to those who take the time to study the underlying arguments. People who disagree are just dumb, irrational, miseducated or a combination thereof.
- I learned this all from this website by these guys who want us to give them our money.
In two words: crackpot beliefs.
These statements cover only a fraction of the sequences, and although they're deliberately phrased to incite kneejerk disagreement and ugh-fields, I think most LW readers will find themselves in agreement with almost all of them. And if not, you can always come up with better examples that illustrate some of your non-mainstream beliefs.
Think back for a second to your pre-bayesian days. Think back to the time before your exposure to the sequences. Now the question is, what estimate would you have given that any chain of arguments could persuade you the statements above are true? In my case, it would be near zero.
You can take somebody who likes philosophy and is familiar with the different streams and philosophical dilemmas, who knows computation theory and classical physics, who has a good understanding of probability and math, and who is a naturally curious reductionist. And this person will still roll his eyes and sarcastically dismiss the ideas enumerated above. After all, these are crackpot ideas, and people who believe them are so far "out there", they cannot be reasoned with!
That is really the bottom line here. You cannot explain the beliefs that follow from the sequences because they have too many dependencies. And even if you did have time to go through all the necessary dependencies, explaining a belief is still an order of magnitude more difficult than following an explanation written down by somebody else, because in order to explain something you have to juggle two mental models: your own and the listener's.
Some of the sequences touch on the concept of the cognitive gap (inferential distance). We have all learned the hard way that we can't expect people to just understand what we say, and that we can't expect short inferential distances. In practice there is just no way to bridge the cognitive gap. This isn't a big deal for most educated people, because people don't expect to understand complex arguments in other people's fields, and all educated intellectuals are on the same team anyway (well, most of the time). For crackpot LW beliefs, though, it's a whole different story. I suspect most of us have found that out the hard way.
Rational Rian: What do you think is going to happen to the economy?
Bayesian Bob: I'm not sure. I think Krugman believes that a bigger cash injection is needed to prevent a second dip.
Rational Rian: Why do you always say what other people think, what's your opinion?
Bayesian Bob: I can't really distinguish between good economic reasoning and flawed economic reasoning because I'm a layman. So I tend to go with what Krugman writes, unless I have a good reason to believe he is wrong. I don't really have strong opinions about the economy, I just go with the evidence I have.
Rational Rian: Evidence? You mean his opinion.
Bayesian Bob: Yep.
Rational Rian: Eh? Opinions aren't evidence.
Bayesian Bob: (Whoops, now I have to either explain the nature of evidence on the spot or Rian will think I'm an idiot with crazy beliefs. Okay then, here goes.) An opinion reflects the belief of the expert. These beliefs can either be uncorrelated with reality, negatively correlated or positively correlated. If there is absolutely no relation between what an expert believes and what is true then, sure, it wouldn't count as evidence. However, it turns out that experts mostly believe true things (that's why they're called experts) and so the beliefs of an expert are positively correlated with reality and thus his opinion counts as evidence.
Rational Rian: That doesn't make sense. It's still just an opinion. Evidence comes from experiments.
Bayesian Bob: Yep, but experts have either done experiments themselves or read about experiments other people have done. That's what their opinions are based on. Suppose you take a random scientific statement, you have no idea what it is, and the only thing you know is that 80% of the top researchers in that field agree with that statement, would you then assume the statement is probably true? Would the agreement of these scientists be evidence for the truth of the statement?
Rational Rian: That's just an argument ad populus! Truth isn't governed by majority opinion! It is just religious nonsense that if enough people believe something then there must be some truth to it.
Bayesian Bob: (Ad populum! Populum! Ah, crud, I should've phrased that more carefully.) I don't mean that majority opinion proves that the statement is true, it's just evidence in favor of it. If there is counterevidence the scale can tip the other way. In the case of religion there is overwhelming counterevidence. Scientifically speaking religion is clearly false, no disagreement there.
Rational Rian: There's scientific counterevidence for religion? Science can't prove non-existence. You know that!
Bayesian Bob: (Oh god, not this again!) Absence of evidence is evidence of absence.
Rational Rian: Counter-evidence is not the same as absence of evidence! Besides, stay with the point, science can't prove a negative.
Bayesian Bob: The certainty of our beliefs should be proportional to the amount of evidence we have in favor of the belief. Complex beliefs require more evidence than simple beliefs, and the laws of probability, Bayes specifically, tell us how to weigh new evidence. A statement, any statement, starts out with a 50% probability of being true, and then you adjust that percentage based on the evidence you come into contact with. (I shouldn't have said that 50% part. There's no way that's going to go over well. I'm such an idiot.)
Rational Rian: A statement without evidence is 50% likely to be true!? Have you forgotten everything from math class? This doesn't make sense on so many levels, I don't even know where to start!
Bayesian Bob: (There's no way to rescue this. I'm going to cut my losses.) I meant that in a vacuum we should believe it with 50% certainty, not that any arbitrary statement is 50% likely to accurately reflect reality. But no matter. Let's just get something to eat, I'm hungry.
Rational Rian: So we should believe something even if it's unlikely to be true? That's just stupid. Why do I even get into these conversations with you? *sigh* ... So, how about Subway?
The moral here is that crackpot beliefs are low status. Not just low-status like believing in a deity, but majorly low status. When you believe things that are perceived as crazy and when you can't explain to people why you believe what you believe then the only result is that people will see you as "that crazy guy". They'll wonder, behind your back, why a smart person can have such stupid beliefs. Then they'll conclude that intelligence doesn't protect people against religion either so there's no point in trying to talk about it.
If you fail to conceal your low-status beliefs you'll be punished for it socially. If you think that they're in the wrong and that you're in the right, then you missed the point. This isn't about right and wrong, this is about anticipating the consequences of your behavior. If you choose to talk about outlandish beliefs when you know you cannot convince people that your belief is justified, then you hurt your credibility and get nothing for it in exchange. You cannot repair the damage easily, because even if your friends are patient and willing to listen to your complete reasoning, you'll (accidentally) expose three even crazier beliefs you have.
An important life skill is the ability to get along with other people and to not expose yourself as a weirdo when it isn't in your interest to do so. So take heed and choose your words wisely, lest you fall into the trap.
EDIT - Google Survey by Pfft
PS: intended for /main but since this is my first serious post I'll put it in discussion first to see if it's considered sufficiently insightful.
In your example, before we have any information we'd assume P(A) = 0.5, and after we have information about the alphabet and how X is constructed from the alphabet we can just calculate the exact value of P(A|B). So the "update" here just consists of replacing the initial estimate with the correct answer. I think this is also what you're saying, so I agree that in situations like these, using P(A) = 0.5 as a starting point does not affect the final answer (but I'd still start out with a prior of 0.5).
I'll propose a different example. It's a bit contrived (well, really contrived, but OK).
Frank and his buddies (of which you are one) decide to rob a bank.
Frank goes: "Alright men, in order for us to pull this off 4 things have to go perfectly according to plan."
(you think: a conjunction of 4 things, so 0.5^4 = 0.0625 prior probability of success)
Frank continues: the first thing we need to do is beat the security system (... long explanation follows).
(you think: that plan is genius and almost certain to work; a 0.9 probability of success follows from my Bayesian estimate. I'm updating my confidence to 0.9 × 0.5^3 = 0.1125)
Frank continues: the second thing we need to do is break into the safe (... again a long explanation follows).
(you think: wow, that's a clever solution, 0.7 probability of success. Total probability of success: 0.9 × 0.7 × 0.5^2 = 0.1575)
Frank continues: So! Are you in or are you out?
At this point you have to decide immediately. You don't have the time to work out the plausibility of the remaining two factors, you just have to make a decision. But just by knowing that there are two more things that have to go right, you can confidently say "Sorry Frank, but I'm out."
If you had more time to think you could come up with a better estimate of success. But you don't have time. You have to go with your prior of total ignorance for the last two factors of your estimate.
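The arithmetic in the heist example above can be sketched in a few lines of Python. This is only an illustration of the numbers from the dialogue: each step of the plan not yet examined keeps the ignorance prior of 0.5, and each step already explained replaces that prior with its estimate (0.9 and 0.7 are the hypothetical figures from the example, and `plan_confidence` is a made-up helper name, not anything from the sequences).

```python
import math

PRIOR = 0.5  # prior of total ignorance for each unexamined step

def plan_confidence(step_estimates, total_steps=4):
    """Probability that all steps succeed: the given estimates for the
    steps already explained, the 0.5 ignorance prior for the rest."""
    unknown = total_steps - len(step_estimates)
    probs = list(step_estimates) + [PRIOR] * unknown
    return math.prod(probs)

print(round(plan_confidence([]), 4))          # 0.0625 -- before any details
print(round(plan_confidence([0.9]), 4))       # 0.1125 -- after step 1 is explained
print(round(plan_confidence([0.9, 0.7]), 4))  # 0.1575 -- after step 2 is explained
```

Each new piece of evidence replaces one factor's ignorance prior, which is exactly the "update" described in the dialogue.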
If we were to plot the confidence over time, I think it should start at 0.5, then drop to 0.0625 once we understand that an estimate of a conjunction of 4 parts is to be calculated, and after that more nuanced Bayesian reasoning follows. So if I were to build an AI, I would make it start out with the universal prior of total ignorance and go from there. So I don't think the prior is a purely mathematical trick that has no bearing on the way we reason.
(At the risk of stating the obvious: you're strictly speaking never adjusting based on the prior of 0.5. The moment you have evidence you replace the prior with the estimate based on evidence. When you get more evidence you can update based on that. The prior of 0.5 completely vaporizes the moment evidence enters the picture. Otherwise you would be doing an update on non-evidence.)