Hmm, the first case seems reducible to "moving things around in the world", and the second sounds like it might be solvable by Robin Hanson's pre-rationality.
How about if Bob has a sort of "sorcerous experience", something like an epiphany. I don't want to go off to Zombie-land with this, but let's say it could be caused either by his brain doing its mysterious thing, or by a sorcerer. Does that still count as "moving things around in the world"?
Is it sane for Bob and Daisy to be in such a positive or negative feedback loop? How is this resolved?
It is not sane.
If you use a belief (say A) to change the value of another belief (say B), then depending on how many times you use A, you arrive at different values. That is, if you use A or A,A,A,A as evidence, you get different results.
It would be as if:
P(B|A) ≠ P(B|A,A,A,A)
But the logic underlying Bayesian reasoning is classical, so that A ↔ A∧A∧A∧A, and, by Jaynes' requirement IIIc (see page 19 of The Logic of Science):
P(B|A) = P(B|A,A,A,A)
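The double-counting error can be made numerical. A minimal sketch (the prior, the likelihood ratio, and the `update` helper are all made up for illustration): feeding the same evidence A through Bayes' rule twice multiplies in its likelihood ratio twice, and the resulting number is not P(B|A,A,A,A) but simply a mistake.

```python
def update(prior, likelihood_ratio):
    """One Bayes update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

prior = 0.5
lr = 3.0  # P(A | B) / P(A | not-B), an invented value

once = update(prior, lr)               # correct: P(B | A) = 0.75
twice = update(update(prior, lr), lr)  # double-counting: treats one A as two observations

print(once, twice)  # 0.75 vs 0.9 -- the second number answers no well-posed question
```

Since A and (A, A, A, A) are the same proposition, the only coherent answer is the single-update one; any procedure that gives a different number per repetition is broken.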
I am not certain that it's the same A. Suppose I say to you: here's a book that proves P=NP. You go and read it, and it's full of math, and you can't fully process it. Later, you come back and read it again, and this time you are actually able to fully comprehend it. Even later you come back again, and not only comprehend it, but are able to prove some new facts, using no external sources, just your mind. Those are not all the same "A". So, you may have some evidence for/against a sorcerer, but not be able to accurately estimate the probability. After some reflection, you derive new facts, and then update again. Upon further reflection, you derive more facts, and update. Why should this process stop?
Like cousin_it, I'm assuming no mind tampering is involved, only evidence tampering.
Is it sane for Bob and Daisy to be in such a positive or negative feedback loop? How is this resolved?
I don't think the feedback loops exist. Bob saying "the fact that I believe in Bright is evidence of Bright's existence" is double-counting the evidence; deducing "and therefore, Bright exists" doesn't bring in any new information.
It's not that different from saying "I believe it will rain tomorrow, and the fact that I believe that is evidence that it will rain tomorrow, so I'll increase my degree of belief. But wait, that makes the evidence even stronger!".
If Bob and Daisy took the evidence provided by their belief into account already, how does this affect my own evidence updating? Should I take it into account regardless, or not at all, or to a smaller degree?
Just ignore the whole "belief in dark is evidence against dark" thing, Daisy already took that information into account when determining her own belief, you don't want to double count it.
Treat it the same way as you'd treat hearing Bob tell you that in Faery, the sky is Blue, and Daisy telling you that in Faery, the sky is Green.
It's not that different from saying "I believe it will rain tomorrow, and the fact that I believe that is evidence that it will rain tomorrow, so I'll increase my degree of belief. But wait, that makes the evidence even stronger!".
This is completely different. My belief about the rain tomorrow is in no way evidence for actual rain tomorrow, as you point out - it's already factored in. Tomorrow's rain is in no way able to affect my beliefs, whereas a sorcerer can, even without mind tampering. He can, for instance, manufacture evidence so as to mislead me, and if he is sufficiently clever, I'll be misled. But I am also aware that my belief state about sorcerers is not as reliable because of possible tampering.
Here, by me, I mean a person living in Faerie, not "me" as in the original post.
A sorcerer has two ways to manipulate people:
1) Move things around in the world.
2) Directly influence people's minds.
I'm not going to talk about option 2 because it stops people from being perfect reasoners. (If there's a subset of option 2 that still lets people be perfect reasoners, I'd love to hear it - that might be the most interesting part of the puzzle). That leaves option 1.
Here's a simple model of option 1. Nature shuffles a deck of cards randomly, then a sorcerer (if one exists) has a chance to rearrange the cards somehow, then the deck is shown to an observer, who uses it as Bayesian evidence for or against the sorcerer's existence. We will adopt the usual "Nash equilibrium" assumption that the observer knows the sorcerer's strategy in advance. This seems like a fair idealization of "moving things around in the world". What would the different types of sorcerers do?
Note that if both Bright and Dark might exist, the game becomes unpleasant to analyze, because Dark can try to convince the observer that Bright exists, which would mean Dark doesn't exist. To simplify the game, we will let the observer know which type of sorcerer they might be playing against, so they only need to determine if the sorcerer exists.
A (non-unique) best strategy for Bright is to rearrange the cards into perfect order, so the observer can confidently say "either Bright exists or I just saw a very improbable coincidence". A (non-unique) best strategy for Dark is to leave the deck alone, regardless of the observer's prior. Invisible has the same set of best strategies as Dark. I won't spell out the proofs here; anyone sufficiently interested should be able to work them out.
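For intuition, here is a rough numerical sketch of the Bright case. Every parameter is an assumption made for illustration: Bright, if he exists, always sorts the deck into perfect order, and with no sorcerer a perfectly ordered 52-card deck comes up with probability 1/52!. Under those assumptions, one sorted deck swamps almost any skeptical prior.

```python
from math import factorial

# Assumed strategy: Bright (if he exists) always rearranges the deck into
# perfect order; Nature's unassisted shuffle is sorted with probability 1/52!.
def posterior_bright(prior, saw_sorted_deck):
    p_sorted_if_bright = 1.0
    p_sorted_if_none = 1.0 / factorial(52)
    if saw_sorted_deck:
        num = prior * p_sorted_if_bright
        den = num + (1 - prior) * p_sorted_if_none
    else:
        num = prior * (1 - p_sorted_if_bright)  # Bright never leaves it shuffled
        den = num + (1 - prior) * (1 - p_sorted_if_none)
    return num / den

# Even a deeply skeptical observer is all but convinced by a sorted deck:
print(posterior_bright(1e-30, True))   # ≈ 1.0
# ...and an unsorted deck rules Bright out entirely under this strategy:
print(posterior_bright(0.5, False))    # 0.0
```

Note the second result is an artifact of the deterministic strategy assumed here; a Bright who only sometimes intervenes would leave the unsorted-deck posterior merely lowered, not zeroed.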
To summarize: if sorcerers can only move things around in the world and cannot influence people's minds directly, then Bright does as much as possible, Invisible and Dark do as little as possible, and the observer only looks at things in the world and doesn't do anything like "updating on the strength of their own beliefs". The latter is only possible if sorcerers can directly influence minds, which stops people from being perfect reasoners and is probably harder to model and analyze.
Overall it seems like your post can generate several interesting math problems, depending on how you look at it. Good work!
That's a very interesting analysis. I think you are taking the point of view that sorcerers are rational, or that they are optimizing solely for proving or disproving their existence. That wasn't my assumption. Sorcerers are mysterious, so people can't expect their cooperation in an experiment designed for this purpose. Even under your assumption you can never distinguish between Bright and Dark existing: they could behave identically, to convince you that Bright exists. Dark would sort the deck whenever you query for Bright, for instance.
The way I was thinking about it is that you have other beliefs about sorcerers and your evidence for their existence is primarily established based on other grounds (e.g. see my comment about kittens in another thread). Then Bob and Daisy take into account the fact that Bright and Dark have these additional peculiar preferences for people's belief in them.
Adding some structure to this hypothetical: At time t=0, Bob and Daisy have certain priors for their beliefs on sorcery, which they have not adjusted for this argument. Bob's belief was Position 1, with reduced strength, and Daisy's was Position 3, with greater strength.
I'll call your argument A0.
At time t=1, Bob and Daisy are both made aware of A0 and its implications for adjusting their beliefs. They update; Bob's belief in 1 increases, and Daisy's belief in 3 decreases.
More arguments:
A1: If Position 1 is true, then Bright is likely to cause you to increase your belief in him, therefore increasing your belief in Bright is evidence for Position 1.
A1': Corollary: Decreasing your belief in Bright is evidence against Position 1.
A2: If Position 3 is true, then Dark is likely to cause you to decrease your belief in him, therefore decreasing your belief in Dark is evidence for Position 3.
A2': Corollary: Increasing your belief in Dark is evidence against Position 3.
At time t=2, Bob and Daisy are exposed to A1 and A2, and their corollaries A1' and A2'. If they believe these, they should both increase credence for Positions 1 and 3, following A1 and A2, then increase credence for Position 1 and decrease it for Position 3, following A1' and A2', then follow A1 and A2 again, etc. This might be difficult to resolve, as you mention in your first question.
However, there is a simple reason to reject A1 and A2: their influence is totally screened off! Bob and Daisy know why they revised their beliefs, and it was because of the valid argument A0. Unless Bright and Dark can affect the apparent validity of logical arguments (in which case your thoughts can't be trusted anyway), A0 is valid independent of which position is true. So they begin a feedback loop, but it stops after a single iteration.
There is a valid reason they might want to continue a weaker loop.
A3: That you have encountered A0 is evidence for the sorcerers whose goals are served by having you be influenced by A0.
A3': That you have encountered A3 is evidence for the sorcerers whose goals are served by having you be influenced by A3.
A3'': That you have encountered A3' is evidence for the sorcerers whose goals are served by having you be influenced by A3'.
etc.
But this is only true if they didn't reason out A0 or A3 for themselves, and even then A3', A3'', etc. should be considered obvious implications of A3 for a well-reasoned thinker. (In fact, A3 is properly more like "That you have encountered a valid argument is evidence for the sorcerers whose goals are served by having you be influenced by that argument.") So that adds at most one more layer, barring silly Tortoise-and-Achilles arguments.
Given all that, for your second question, you still should take their beliefs into account, but possibly to a slightly lesser degree.
A point I'm confused on: when you, based on A0, update based on their A0-updated belief, are you double-counting A0? If so, you should update to a lesser degree. But is that so?
I don't think I completely follow everything you say, but let's take a concrete case. Suppose I believe that Dark is extremely powerful and clever and wishes to convince me he doesn't exist. From this I think you can conclude that if I believe he exists, he can't possibly exist (because he'd have found a way to convince me otherwise), so I conclude he can't exist (or at least that the probability is very low). Now I've convinced myself he doesn't exist. But maybe that's exactly how he operates! So I have new evidence that he does in fact exist. I think there's some sort of paradox in this situation. You can't say that this evidence is screened off, since I haven't considered the result of my reasoning until I have arrived at it. It seems to me that your belief either oscillates between two numbers, or else your updates get smaller and you converge to some number in between.
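The converging case can be sketched as a damped fixed-point iteration. The update map and every constant below are invented for illustration, not derived from any actual model of Dark: each round of reflection pushes credence in the opposite direction, but the map's slope has magnitude below 1, so the oscillation settles on an intermediate value instead of cycling forever.

```python
# Toy model (all constants invented): credence p that Dark exists gets remapped
# by each round of reflection. The map is decreasing -- the more I believed,
# the stronger the "he'd have hidden from me" counter-evidence -- but its
# slope (-base * k = -0.54) has magnitude below 1, so the oscillation damps out.

def reflect(p, k=0.6, base=0.9):
    return base * (1 - k * p)

p = 0.7
history = [p]
for _ in range(50):
    p = reflect(p)
    history.append(p)

fixed_point = 0.9 / (1 + 0.9 * 0.6)  # solves p = 0.9 * (1 - 0.6 * p)
print(round(p, 6), round(fixed_point, 6))  # both ≈ 0.584416
```

If the slope's magnitude were exactly 1 the belief would oscillate between two numbers forever, matching the other horn of the dilemma; which regime you are in depends on how strongly each reflection step responds to the previous one.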
I believe he's not assuming similar priors.
I am not assuming they are necessarily Bayesians, but I think it's fine to take this case too. Let's suppose that Bob finds that whenever he calls upon Bright for help (in his head, so nobody can observe this), he gets an unexpectedly high success rate in whatever he tries. Let's further suppose that it's believed that Dark hates kittens (and this is more important to him than trying to hide his existence), and Daisy is Faerie's chief veterinarian and is aware of a number of mysterious deaths of kittens that she can't rationally explain. She is afraid to discuss this with anyone, so it's private. For numeric probabilities you can take, say, 0.7, for each.
With respect to question 1, Aumann's Agreement Theorem would require that if they are acting rationally, as you stated, and have common knowledge (and common priors), they would have to agree. That being the case, according to the formalism, question 2 is ill-posed.
Your proposed state of affairs could hold if they lack common knowledge (including lack of common knowledge of internal utility functions despite common knowledge of otherwise external facts and including differing prior probabilities). To resolve question 2 in that case you would have to assign probabilities to the various forms that the shared and unshared knowledge could take, to determine which state of affairs most probably prevails. For example, you may use your best estimation and determine that the state of affairs that prevails in Faerie is similar to the state of affairs that prevails in your own world/country/whatever, in which case you should weight the evidence provided by each person similar to how you would rate it if you were surveying your fellow countrymen. This is all fairly abstract because approaching such a thing formally is currently well outside our capabilities.
Thanks. I am of course assuming they lack common knowledge. I understand what you are saying, but I am interested in a qualitative answer (for #2): does the fact they have updated their knowledge according to this meta-reasoning process affect my own update of the evidence, or not?
Circular belief updating
This article is going to be in the form of a story, since I want to lay out all the premises in a clear way. There's a related question about religious belief.
Let's suppose that there's a country called Faerie. I have a book about this country which describes all people living there as rational individuals (in a traditional sense). Furthermore, it states that some people in Faerie believe that there may be some individuals there known as sorcerers. No one has ever seen one, but they may or may not interfere in people's lives in subtle ways. Sorcerers are believed to be such that there can't be more than one of them around and they can't act outside of Faerie. There are 4 common belief systems present in Faerie:
- Some people believe there's a sorcerer called Bright who (among other things) likes people to believe in him and may be manipulating people or events to do so. He is not believed to be universally successful.
- Or, there may be a sorcerer named Invisible, who interferes with people only in such ways as to provide no information about whether he exists or not.
- Or, there may be an (obviously evil) sorcerer named Dark, who would prefer that people don't believe he exists, and interferes with events or people for this purpose, likewise not universally successfully.
- Or, there may either be no sorcerers at all, or perhaps some other sorcerers that no one knows about, or perhaps some other state of things holds, such as that there are multiple sorcerers, or that these sorcerers don't obey the above rules. However, everyone who lives in Faerie and is in this category simply believes there's no such thing as a sorcerer.
This is completely exhaustive, because everyone believes there can be at most one sorcerer. Of course, some individuals within each group have different ideas about what their sorcerer is like, but within each group they all absolutely agree with their dogma as stated above.
Since I don't believe in sorcery, a priori I assign very high probability for case 4, and very low (and equal) probability for the other 3.
I can't visit Faerie, but I am permitted to do a scientific phone poll. I call some random person, named Bob. It turns out he believes in Bright. Since P(Bob believes in Bright | case 1 is true) is higher than the unconditional probability, I believe I should adjust the probability of case 1 up, by Bayes' rule. Does everyone agree? Likewise, the probability of case 3 should go up, since disbelief in Dark is evidence for the existence of Dark in exactly the same way, although perhaps to a smaller degree. I also think cases 2 and 4 have to lose some probability, since the total must add up to 1. If I then call a second person, Daisy, who turns out to believe in Dark, I should adjust all probabilities in the opposite direction. I am not asking either of them about the actual evidence they have, just what they believe.
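To make the poll update concrete, here is a sketch with invented likelihoods chosen to match the reasoning above: belief in Bright is more common if Bright exists, belief in Dark is less common if Dark exists, and Dark's suppression nudges would-be believers toward other answers, so professing belief in Bright is also weak evidence for case 3.

```python
# Priors and likelihoods are all made-up numbers for illustration.
priors = {"bright": 0.01, "invisible": 0.01, "dark": 0.01, "none": 0.97}

# P(a random citizen professes this belief | each case is true)
p_says_bright = {"bright": 0.50, "invisible": 0.25, "dark": 0.30, "none": 0.25}
p_says_dark   = {"bright": 0.15, "invisible": 0.20, "dark": 0.10, "none": 0.20}

def bayes_update(dist, likelihoods):
    """Multiply each case's probability by its likelihood and renormalize."""
    unnorm = {c: dist[c] * likelihoods[c] for c in dist}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

after_bob = bayes_update(priors, p_says_bright)     # Bob: "I believe in Bright"
after_daisy = bayes_update(after_bob, p_says_dark)  # Daisy: "I believe in Dark"

# Bob's answer raises both case 1 and case 3; Daisy's pushes both back down.
print(after_bob["bright"], after_bob["dark"])
print(after_daisy["bright"], after_daisy["dark"])
```

With these numbers, Bob's answer raises case 1 (and, more weakly, case 3), and Daisy's answer then lowers both, exactly the two-step pattern described in the paragraph above; the magnitudes, of course, depend entirely on the assumed likelihood tables.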
I think this is straightforward so far. Here's the confusing part. It turns out that both Bob and Daisy are themselves aware of this argument. So, Bob says, one of the reasons he believes in Bright is that this belief is itself positive evidence for Bright's existence. And Daisy believes in Dark despite that being evidence against his existence (presumably because there's some other evidence that's overwhelming).
Here are my questions:
- Is it sane for Bob and Daisy to be in such a positive or negative feedback loop? How is this resolved?
- If Bob and Daisy took the evidence provided by their belief into account already, how does this affect my own evidence updating? Should I take it into account regardless, or not at all, or to a smaller degree?
I am looking forward to your thoughts.
I think your degree of belief in their rationality (and their trustworthiness in terms of not trying to mislead you, and their sanity in terms of having priors at least mildly compatible with yours) should have a very large effect on how much you update based on the evidence that they claim a belief.
The fact that they know of each other and still have wildly divergent beliefs indicates that they don't trust in each other's reasoning skills. Why would you give them much more weight than they gave each other?
For this experiment, I don't want to get involved in the social aspect of this. Suppose they aren't aware of each other, or it's very impolite to talk about sorcerers, or whatever. I am curious about their individual minds, and about an outside observer that can observe both (i.e. me).