It's pretty easy to see how it would work if there are only a finite number of hypotheses, say n: in that case, Ω is basically just the collection of binary strings of length n (assuming the hypothesis space is carved up appropriately), and each map V_A is evaluation at a particular coordinate. Sure enough, at each coordinate, half the elements of Ω evaluate to 1, and half to 0!
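For concreteness, here's that finite model written out as a minimal sketch (Python; the value of n and the coordinate labels are my own arbitrary choices): Ω is the set of all length-n bit strings, V_A just reads off coordinate A, and under the uniform count exactly half the strings are 1 at every coordinate.

```python
from itertools import product

n = 4  # arbitrary small number of hypotheses, just for illustration
omega = list(product([0, 1], repeat=n))  # all 2**n "possible worlds"

for a in range(n):
    # V_a(world) is simply world[a]; count how often it comes out True
    frac_true = sum(world[a] for world in omega) / len(omega)
    print(f"P(V_{a} = 1) = {frac_true}")  # 0.5 at every coordinate
```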
Here are a few problems that I have with this approach:
This approach makes your focus on the case where the hypothesis A is "unspecified" seem very mysterious. Under this model, we have P(V_A = True) = 0.5 even for a hypothesis A that is entirely specified, down to its last bit. So why all the talk about how a true prior probability for A needs to be based on complete ignorance even of the content of A? Under this model, even if you grant complete knowledge of A, you're still assigning it a prior probability of 0.5. Much of the push-back you got seemed to be around the meaningfulness of assigning a probability to an unspecified hypothesis. But you could have sidestepped that issue and still established the claim in the OP under this model, because here the claim is true even of specified hypotheses. (However, you would still need to justify that this model is how we ought to think about Bayesian updating. My remaining concerns address this.)
By having Ω be the collection of all bit strings of length n, you've dropped the condition that the maps v respect logical operations. This is equivalent to dropping the requirement that the possible worlds be logically possible. E.g., your sample space would include maps v such that v(A) = v(~A) for some hypothesis A. But, maybe you figure that this is a feature, not a bug, because knowledge about logical consistency is something that the agent shouldn't yet have in its prior state of complete ignorance. But then ...
... If the agent starts out as logically ignorant, how can it work with only a finite number of hypotheses? It doesn't start out knowing that A, A&A, A&A&A, etc., can all be collapsed down to just A, and that's infinitely many hypotheses right there. But maybe you mean for the n hypotheses to be "atomic" propositions, each represented by a distinct proposition letter A, B, C, ..., with no logical dependencies among them, and all other hypotheses built up out of these "atoms" with logical connectives. It's not clear to me how you would handle quantifiers this way, but set that aside. The more important problem is ...
... How do you ever accomplish any nontrivial Bayesian updating under this model? For suppose that you learn somehow that A is true. Now, conditioned on A, what is the probability of B? Still 0.5. Even if you learn the truth value of every hypothesis except B, you still would assign probability 0.5 to B.
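To make Problem 4 concrete, here's the same toy model again (the hypothesis indices are arbitrary). Under the uniform distribution, conditioning on A, or even on every coordinate other than B, leaves the probability of B at exactly 0.5:

```python
from itertools import product

n = 4
omega = list(product([0, 1], repeat=n))
A, B = 0, 1  # arbitrary choice of which coordinates play the roles of A and B

# Learn that A is true:
given_A = [w for w in omega if w[A] == 1]
print(sum(w[B] for w in given_A) / len(given_A))  # 0.5

# Learn the truth value of every hypothesis except B:
given_rest = [w for w in omega if all(w[i] == 1 for i in range(n) if i != B)]
print(sum(w[B] for w in given_rest) / len(given_rest))  # still 0.5
```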
More generally, one could imagine a probability distribution on the hypothesis space controlling the "weighting" of elements of Ω. For instance, if hypothesis #6 gets its probability raised, then those mappings v in Ω such that v(6) = 1 would be weighted more than those such that v(6) = 0. I haven't checked that this type of arrangement is actually possible, but something like it ought to be.
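One concrete way to realize this (my own construction, not necessarily what you have in mind) is a product measure: give each hypothesis i its own probability p[i] and weight a world by multiplying p[i] or 1 - p[i] according to whether it maps i to 1 or 0. Raising p[6] then shifts weight toward the worlds in which hypothesis #6 is true:

```python
from itertools import product

p = [0.5] * 8
p[6] = 0.9  # hypothesis #6 gets its probability raised

def weight(world):
    # weight of a world = product over coordinates of p[i] or (1 - p[i])
    w = 1.0
    for i, bit in enumerate(world):
        w *= p[i] if bit == 1 else 1 - p[i]
    return w

omega = list(product([0, 1], repeat=len(p)))
total = sum(weight(w) for w in omega)                 # 1.0: the weights form a distribution
mass_6 = sum(weight(w) for w in omega if w[6] == 1)
print(mass_6 / total)                                 # 0.9, as intended
```

So an arrangement of this type is certainly possible; whether it's the right reading of your proposal is what I'm asking below.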
Is this a description of what the prior distribution might be like? Or is it a description of what updating on the prior distribution might yield?
If you meant the former, wouldn't you lose your justification for claiming that the prior probability of an unspecified hypothesis is exactly 0.5? For, couldn't it be the case that most hypotheses are true in most worlds (counted by weight), so that an unknown random hypothesis would be more likely to be true than not?
If you meant the latter, I would like to see how this updating would work in more detail. I especially would like to see how Problem 4 above could be overcome.
If you have worked your way through most of the sequences you are likely to agree with the majority of these statements:
In two words: crackpot beliefs.
These statements cover only a fraction of the sequences, and although they're deliberately phrased to incite knee-jerk disagreement and ugh-fields, I think most LW readers will find themselves in agreement with almost all of them. And if not, you can always come up with better examples that illustrate some of your non-mainstream beliefs.
Think back for a second to your pre-Bayesian days. Think back to the time before your exposure to the sequences. Now the question is: what probability would you have given that any chain of arguments could persuade you the statements above are true? In my case, it would have been near zero.
You can take somebody who likes philosophy and is familiar with the different streams and philosophical dilemmas, who knows computation theory and classical physics, who has a good understanding of probability and math, and who is a naturally curious reductionist. This person will still roll his eyes and sarcastically dismiss the ideas enumerated above. After all, these are crackpot ideas, and people who believe them are so far "out there" they cannot be reasoned with!
That is really the bottom line here. You cannot explain the beliefs that follow from the sequences because they have too many dependencies, and even if you did have time to go through all the necessary dependencies, explaining a belief is still an order of magnitude more difficult than following an explanation written down by somebody else, because in order to explain something you have to juggle two mental models: your own and that of the listener.
Some of the sequences touch on the concept of the cognitive gap (inferential distance). We have all learned the hard way that we can't expect people to just understand what we say, and we can't expect short inferential distances. In practice there is just no way to bridge the cognitive gap. This isn't a big deal for most educated people, because people don't expect to understand complex arguments in other people's fields, and all educated intellectuals are on the same team anyway (well, most of the time). For crackpot LW beliefs, though, it's a whole different story. I suspect most of us have found that out the hard way.
Rational Rian: What do you think is going to happen to the economy?
Bayesian Bob: I'm not sure. I think Krugman believes that a bigger cash injection is needed to prevent a second dip.
Rational Rian: Why do you always say what other people think, what's your opinion?
Bayesian Bob: I can't really distinguish between good economic reasoning and flawed economic reasoning because I'm a layman. So I tend to go with what Krugman writes, unless I have a good reason to believe he is wrong. I don't really have strong opinions about the economy; I just go with the evidence I have.
Rational Rian: Evidence? You mean his opinion.
Bayesian Bob: Yep.
Rational Rian: Eh? Opinions aren't evidence.
Bayesian Bob: (Whoops, now I have to either explain the nature of evidence on the spot or Rian will think I'm an idiot with crazy beliefs. Okay then, here goes.) An opinion reflects the belief of the expert. These beliefs can be uncorrelated with reality, negatively correlated, or positively correlated. If there were absolutely no relation between what an expert believes and what is true then, sure, it wouldn't count as evidence. However, it turns out that experts mostly believe true things (that's why they're called experts), so the beliefs of an expert are positively correlated with reality, and thus his opinion counts as evidence.
Rational Rian: That doesn't make sense. It's still just an opinion. Evidence comes from experiments.
Bayesian Bob: Yep, but experts have either done experiments themselves or read about experiments other people have done. That's what their opinions are based on. Suppose you take a random scientific statement, you have no idea what it is, and the only thing you know is that 80% of the top researchers in that field agree with that statement, would you then assume the statement is probably true? Would the agreement of these scientists be evidence for the truth of the statement?
Rational Rian: That's just an argument ad populus! Truth isn't governed by majority opinion! It's just religious nonsense that if enough people believe something then there must be some truth to it.
Bayesian Bob: (Ad populum! Populum! Ah, crud, I should've phrased that more carefully.) I don't mean that majority opinion proves that the statement is true, it's just evidence in favor of it. If there is counterevidence the scale can tip the other way. In the case of religion there is overwhelming counterevidence. Scientifically speaking religion is clearly false, no disagreement there.
Rational Rian: There's scientific counterevidence for religion? Science can't prove non-existence. You know that!
Bayesian Bob: (Oh god, not this again!) Absence of evidence is evidence of absence.
Rational Rian: Counter-evidence is not the same as absence of evidence! Besides, stay with the point, science can't prove a negative.
Bayesian Bob: The certainty of our beliefs should be proportional to the amount of evidence we have in favor of them. Complex beliefs require more evidence than simple beliefs, and the laws of probability, Bayes' theorem specifically, tell us how to weigh new evidence. A statement, any statement, starts out with a 50% probability of being true, and then you adjust that percentage based on the evidence you come into contact with. (I shouldn't have said that 50% part. There's no way that's going to go over well. I'm such an idiot.)
Rational Rian: A statement without evidence is 50% likely to be true!? Have you forgotten everything from math class? This doesn't make sense on so many levels, I don't even know where to start!
Bayesian Bob: (There's no way to rescue this. I'm going to cut my losses.) I meant that in a vacuum we should believe it with 50% certainty, not that any arbitrary statement is 50% likely to accurately reflect reality. But no matter. Let's just get something to eat, I'm hungry.
Rational Rian: So we should believe something even if it's unlikely to be true? That's just stupid. Why do I even get into these conversations with you? *sigh* ... So, how about Subway?
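For what it's worth, the update Bob was groping toward is easy enough to write down. A toy sketch, with made-up numbers for how reliable an expert's agreement is:

```python
prior = 0.5              # the "in a vacuum" starting point Bob mentioned
p_agree_if_true = 0.8    # assumed: experts usually endorse statements that are true
p_agree_if_false = 0.3   # assumed: they sometimes endorse false ones too

# Bayes' theorem: P(statement is true | expert agrees)
posterior = (prior * p_agree_if_true) / (
    prior * p_agree_if_true + (1 - prior) * p_agree_if_false
)
print(posterior)  # ~0.73: the opinion moved the estimate, so it counted as evidence
```

None of which helps Bob at the dinner table.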
The moral here is that crackpot beliefs are low status. Not just low status like believing in a deity, but majorly low status. When you believe things that are perceived as crazy, and when you can't explain to people why you believe what you believe, the only result is that people will see you as "that crazy guy". They'll wonder, behind your back, why a smart person can have such stupid beliefs. Then they'll conclude that intelligence doesn't protect people against religion either, so there's no point in trying to talk about it.
If you fail to conceal your low-status beliefs, you'll be punished for it socially. If you think that they're in the wrong and that you're in the right, then you've missed the point. This isn't about right and wrong; this is about anticipating the consequences of your behavior. If you choose to talk about outlandish beliefs when you know you cannot convince people that your belief is justified, then you hurt your credibility and you get nothing for it in exchange. You cannot repair the damage easily, because even if your friends are patient and willing to listen to your complete reasoning, you'll (accidentally) expose three even crazier beliefs you have.
An important life skill is the ability to get along with other people and not to expose yourself as a weirdo when it isn't in your interest to do so. So take heed and choose your words wisely, lest you fall into the trap.
EDIT - Google Survey by Pfft
PS: Intended for /main, but since this is my first serious post I'll put it in Discussion first to see if it's considered sufficiently insightful.