Sorry for the confusion, but I couldn't recast your argument in any formal language whatsoever.
Sorry too. That was the risk I took by inventing a notation on the spot. The original comment used "believes X" excessively, which I basically replaced. I'm not aware of, or trained in, a notation for compactly writing nested probability assertions.
I'm still trying to resolve the expansion and nesting issues I totally glossed over.
What is B[a][b]? Is it that b believes that a believes B, or that a believes that b believes B?
[] is like a parameterized suffix. It can be bracketed as X[a][b] = (X[a])[b] if that is clearer. I borrowed this from programming languages.
Note: There seems to be a theory of beliefs that might be applicable but which uses a different notation (it looks like X[a] == K_aX): http://en.wikipedia.org/wiki/Epistemic_modal_logic
So what does B[a] mean? B[a] means we are reasoning about the probability assignment P_a(B) of the actor a, and we ask for variants of P(P_a(B) = p).
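To make this concrete, here is one way to spell out the nesting (my own reading of the bracket notation, not a standard one):

```latex
% "B[a] = p" abbreviates the event that actor a assigns probability p to B,
% so our own credence in it is a second-order probability:
P(B[a] = p) = P(P_a(B) = p)
% Since X[a][b] = (X[a])[b], "B[a][b] = q" is b's belief about a's belief:
% the event that b's credence in the event "P_a(B) = p" equals q.
P(B[a][b] = q) = P(P_b(P_a(B) = p) = q)
```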
First: I glossed over a lot of the required P(...) wrappers, assuming (in my eagerness to address the issue) that they'd be clear from context. In general, instead of writing e.g.
P((A & B)[p]) ~= P(A[p] & B[p])
I just wrote
(A & B)[p] ~= A[p] & B[p]
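As a side note on why this is only ~= and not = (my own unpacking): for a probabilistic believer p, believing a conjunction and believing each conjunct separately are not the same constraint, since the joint credence is not fixed by the marginals:

```latex
% "p believes (A and B)" constrains the joint credence P_p(A \wedge B), while
% "p believes A, and p believes B" constrains only the marginals P_p(A), P_p(B).
% The two sides coincide, e.g. via
P_p(A \wedge B) = P_p(A) \, P_p(B)
% only under an extra assumption such as independence of A and B under P_p.
```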
What is B0(a)? Is it the same as B[a]?
No. The 0 was meant to indicate a prior (which was hidden in the fragment "a) an apriori B0 of a person"). Instead of writing the statement that Bob's prior probability of B is b (needed in the original post) as
P_{Bob}(B) = b
I just wrote
B0(Bob)
That is, informally, I represented my belief about another actor's prior p for some fact F as a fact in itself (calling it F0), instead of representing all beliefs of that actor as relative to the assignment P(F) = p.
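Written out, the abbreviation is (my reconstruction):

```latex
% B0(Bob) names, as a standalone fact, the statement about Bob's prior:
B0(\mathrm{Bob}) :\Leftrightarrow (P_{\mathrm{Bob}}(B) = b)
% so my credence in Bob's prior becomes an ordinary first-order probability:
P(B0(\mathrm{Bob})) = P(P_{\mathrm{Bob}}(B) = b)
```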
This allowed me to simplify the never-written-out long form of P(B|X(Bob)). I'm still working on this.
What is X0(a)? Is it the same as X[a], so that X is a relational variable?
Yes. For all prior belief expressions X0, it is plausible to approximate another person's prior probability as less strict than your own.
Is X(a) different from X[a]?
Yes. X(a) is the X of person a. This is mostly relevant for the priors.
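Side by side, the two readings are (again my own gloss):

```latex
X(a) : \text{the } X \text{ of person } a \text{ (a fact about a, e.g. the prior X0(a))}
X[a] : \text{the event } P_a(X) = p \text{ (a fact about a's credence in X)}
```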
What I now see, after trying to clean up all the issues I glossed over, is that this possibly doesn't make sense, at least not in this incomplete form. Please stay tuned.
Please stay tuned.
I will!
The main problem (not in your post, but in the general discussion) seems to me to be that there is no way to talk about probabilities and beliefs clearly and in a nested way: after all, a belief just is the assignment of a probability, yet such assignments cannot be directly targeted in the base logic.
This article is going to be in the form of a story, since I want to lay out all the premises in a clear way. There's a related question about religious belief.
Let's suppose that there's a country called Faerie. I have a book about this country which describes all people living there as rational individuals (in a traditional sense). Furthermore, it states that some people in Faerie believe that there may be some individuals there known as sorcerers. No one has ever seen one, but they may or may not interfere in people's lives in subtle ways. Sorcerers are believed to be such that there can't be more than one of them around and they can't act outside of Faerie. There are 4 common belief systems present in Faerie:
This is completely exhaustive, because everyone believes there can be at most one sorcerer. Of course, some individuals within each group have different ideas about what their sorcerer is like, but within each group they all absolutely agree with their dogma as stated above.
Since I don't believe in sorcery, a priori I assign a very high probability to case 4, and a very low (and equal) probability to each of the other three.
I can't visit Faerie, but I am permitted to do a scientific phone poll. I call some random person, named Bob. It turns out he believes in Bright. Since P(Bob believes in Bright | case 1 is true) is higher than the unconditional probability, I believe I should adjust the probability of case 1 up, by Bayes' rule. Does everyone agree? Likewise, the probability of case 3 should go up, since disbelief in Dark is evidence for the existence of Dark in exactly the same way, although perhaps to a smaller degree. I also think cases 2 and 4 have to lose some probability, since everything adds up to 1. If I then call a second person, Daisy, who turns out to believe in Dark, I should adjust all the probabilities in the opposite direction. I am not asking either of them about the actual evidence they have, just what they believe.
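To check the direction of these updates, here is a minimal numerical sketch. All priors and likelihoods are invented for illustration, since the story gives no numbers; only the signs of the shifts matter.

```python
# Minimal Bayes-update sketch for the poll described above.
# All numbers are invented; only the direction of the shifts matters.

# Hypotheses: case 1 (Bright exists), case 2, case 3 (Dark exists),
# case 4 (no sorcerer). A skeptical prior concentrates on case 4.
prior = {"case1": 0.01, "case2": 0.01, "case3": 0.01, "case4": 0.97}

# Assumed likelihoods P(a random citizen believes in Bright | case):
# highest under case 1, and somewhat elevated under case 3 if, as argued,
# disbelief in Dark (implied by belief in Bright) is evidence for Dark.
likelihood = {"case1": 0.60, "case2": 0.20, "case3": 0.35, "case4": 0.25}

def bayes_update(prior, likelihood):
    """One Bayes step: posterior is prior times likelihood, renormalized."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = bayes_update(prior, likelihood)
for h in prior:
    print(f"{h}: {prior[h]:.4f} -> {posterior[h]:.4f}")
# Cases 1 and 3 gain probability, cases 2 and 4 lose some,
# matching the direction argued in the text.
```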
I think this is straightforward so far. Here's the confusing part. It turns out that both Bob and Daisy are themselves aware of this argument. So Bob says that one of the reasons he believes in Bright is that his belief is itself positive evidence for Bright's existence. And Daisy believes in Dark despite her belief being evidence against his existence (presumably because she has some other evidence that is overwhelming).
Here are my questions:
I am looking forward to your thoughts.