paper-machine comments on Personal Evidence - Superstitions as Rational Beliefs - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't care to respond to the rest of your post, but I feel I should point out that saying a theorem is wrong because the hypotheses are not true is bad logic.
I'm interested in whether the axioms or theorem are even wrong in this case.
Why isn't this covered under the general observation "your observations [of haunting] are very little information and move an outsider's beliefs by [very small amount], and if your own beliefs don't converge, you're just demonstrating your irrationality by overweighting your experience and ignoring how many thousands of people throughout history have felt equally freaked out by 'haunted houses' only for detailed investigation to find nothing."?
I'm curious now whether and how the agreement theorem holds in cases where the environment includes agents that are selectively presenting different evidence to different rational observers. You'd think that'd ruin the result along the same lines as the no free lunch theorems.
If they're presenting false evidence and are otherwise indistinguishable from truth-tellers, then I would guess that agreement would fall a lot or cease to happen; if they're the equivalent of random noise, then I'm not sure what would happen, but probably bad stuff if we go by Hanson's paper on communicating rare evidence; and if they're merely being selective about evidence, you can still infer stuff from their reports (the Bullock thesis in my backfire effect page would be relevant here).
(This is obvious, but it took me a bit to explicitly notice: deceptive agents in the environment is exactly the same formally speaking as irrational agents in the notionally Bayesian community, so of course the agreement theorem doesn't apply.)
Imagine, for a moment, a society of N people, N being currently undefined but large. Every year 1 person out of this population is randomly selected for an award.
For what value of N does it become more likely that the process is nonrandom, provided you are chosen?
You're acting as though evidence should be evaluated strictly objectively. If this is truly the case, you shouldn't update your beliefs upon winning the award -for any value of N-, because no matter what happens to you personally, it had to happen to -somebody-. However, for a sufficiently large value of N, there comes a point when it is more likely that your win is -simulated- than that you actually won the award. You expect -somebody- to win the award, so if somebody wins the award nothing unusual has happened. However, you cannot expect -yourself- to win the award, and if you do you should update your priors to reflect this fact.
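The award argument can be put in explicitly Bayesian terms. A minimal sketch (the one-in-a-million prior for "the process favors me" is an assumed placeholder, not a figure from the thread): under a fair process the likelihood of -you- winning is 1/N, so your win carries a Bayes factor of N in favor of a you-favoring process, which for large enough N swamps even a tiny prior.

```python
from fractions import Fraction

def posterior_rigged(n, prior_rigged):
    """Posterior that the selection process favors you, given that you won.

    Likelihoods: P(I win | rigged for me) = 1, P(I win | fair) = 1/n,
    so winning multiplies the prior odds by a Bayes factor of n.
    """
    p = Fraction(prior_rigged)
    odds = (p / (1 - p)) * n          # posterior odds = prior odds * n
    return odds / (1 + odds)

# With an assumed tiny prior of 1/1,000,000 that the process favors you:
prior = Fraction(1, 10**6)
print(posterior_rigged(10**3, prior))   # small N: posterior stays tiny
print(posterior_rigged(10**9, prior))   # large N: posterior approaches 1
```

Exact fractions are used so the comparison isn't muddied by floating-point rounding.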
Huh? What does simulation have to do with this?
I'll use your example against you: Mary Panick recently won $250k from a lottery. Should she increase her belief that someone in the lottery commission crooked the lottery to favor her? How much, exactly?
Objectively, no; as previously mentioned, it shouldn't surprise us that somebody won the lottery. Subjectively, yes; I would certainly update my odds that something other than pure chance is at work if I happened to win the lottery.
And simulation is coming from Robin Hanson's assertion that if you're an important person in the world, you should probably update your priors to suggest you are being simulated; it's a related argument. If the world is ever capable of simulating individual people, any given important person is more likely a simulation than the real thing - so, given that I'm not particularly important, I can probably assume I'm not simulated, unless something exceptionally unlikely happens to me. But if I were, say, Obama, maybe I -should- think I'm living in a simulation. From the outside, there's a president of the United States, so it's not particularly unusual that -somebody- is the president of the United States. From the inside, it would be unusual that -I- am the president of the United States. Same thing.
Again, why? Suppose we are comparing two models: in one world, there are 1000 haunted houses which are all explained by gaslamping and sleepwalking etc; in the second world, there are 1000 haunted houses and they are all supernatural etc. Upon encountering a haunted house, would you update in favor of 'I am in world two and houses are supernatural'? Would someone reading your experience update? I propose that neither would update, because the evidence is equally consistent with both worlds; so far so good.
Now, if in world 1 there are 1000 frightening houses with the mundane explanations mentioned, and in world 2 there are 1000 frightening houses with the mundane explanations (human biology and mentality and the laws of probability etc having not changed) plus 1000 frightening houses due to supernatural influences, upon encountering a frightening house would you update?
Of course; in world 2 there are more frightening houses and you have encountered a frightening house, which is twice as likely in world 2 than in world 1 (2000 houses versus 1000 houses), and so you are now more inclined to think you are in world 2 from whatever you were thinking before. But so would an observer reading your experience!
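The 2:1 update described above can be made explicit. A minimal sketch using the comment's numbers (1000 frightening houses in world 1, 2000 in world 2); the total number of houses is the same in both worlds, so it cancels out of the likelihood ratio:

```python
from fractions import Fraction

def update_on_frightening_house(prior_w2, houses_w1=1000, houses_w2=2000):
    """One Bayes update after encountering a frightening house.

    P(frightening | world i) is proportional to the count of frightening
    houses in world i, so the Bayes factor for world 2 is
    houses_w2 / houses_w1 = 2.
    """
    p = Fraction(prior_w2)
    odds = p / (1 - p) * Fraction(houses_w2, houses_w1)
    return odds / (1 + odds)

# Starting from even odds between the two worlds:
print(update_on_frightening_house(Fraction(1, 2)))  # -> 2/3
```

As the comment notes, an outside observer reading the report performs exactly the same update, since the likelihood ratio is the same for both.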
So where does this unique unconveyable evidence (that your post claims your experience has given you) come from?
Ah. It's coming from anthropics. You're making the claim that Aumannian agreement cannot convey anthropic information.
You realize that both the SIA and SSA are hotly debated because either seems to lead to absurd conclusions, right? While Aumann just leads to the conclusion 'people are irrational', which certainly doesn't seem absurd to me.
And since one man's modus ponens is another man's modus tollens, why isn't your post just further evidence that anthropic reasoning as currently understood by most people is completely broken and cannot be trusted in anything?
I think there are non-anthropic problems with even rational!humans communicating evidence.
One is that it's difficult to communicate that you're not lying, and it is also difficult to communicate that you're competent at assessing evidence. A rational agent may have priors saying that OrphanWilde is an average LW member, including the associated wide distribution in propensity to lie and competence at judging evidence. On the other hand, rational!OrphanWilde would (hopefully) have a high confidence assessment of himself (herself?) along both dimensions. However, this assessment is difficult to communicate, since there are strong incentives to lie about these assessments (and also a lot of potential for someone to turn out to not be entirely rational and just get these assessments wrong). So, the rational agent may read this post and update to believing it's much more likely that OrphanWilde either lies to people for fun (just look at all those improbable details!) or is incompetent at assessing evidence and falls prey to apophenia a lot.
This might not be an issue were it not for the second problem, which is that communication is costly. If communication were free, OrphanWilde could just tell us every single little detail about his life (including in this house and in other houses), and we could then ignore the problem of him potentially being a poor judge of evidence. Alternatively, he could probably perform some very large volume evidence-assessment test to prove that he is, in fact, competent. However, since communication is costly, this seems to be impractical in reality. (The lying issue is slightly different, but could perhaps be overcome with some sort of strong precommitment or an assumption constraining possible motivations combined with a lot of evidence.)
This doesn't invalidate Aumann agreement as such, but certainly seems to limit its practical applications even for rational agents.
I don't rule out mundane explanations. Hence my repeated disclaimers on each use of the word "haunted." If anything "supernatural" exists, it isn't supernatural, it's merely natural, and we simply haven't pinned down what's going on yet. Empiricism and reductionism don't get broken.
And anthropic reasoning is -unnecessary- to the logic, it simply provides the simplest examples. I could construct related examples without any anthropic reasoning at all:
You're shipwrecked on a deserted island with a friend, Johnny. You see a ship in the distance; Johnny's eyesight is not as good as yours, and he cannot. You've been trying to cheer him up for the past three days, because he's fallen into depression. He doesn't believe you when you tell him there's a ship; he cannot see it, and he believes it's just another attempt by you to cheer him up. -You cannot share the true evidence, because you cannot show him the ship he cannot see-.
Or, to put it in terms of framing:
You flip a coin ten times. It comes up heads each time.
versus
You say a coin will land heads-up ten times. You flip it ten times, and it comes up heads each time.
Even though the two outcomes are, strictly speaking, equally likely, the framing of the first proposition in fact makes it [edited: less significant, not more likely]; you would have been similarly impressed if it had come up tails each time. So the second scenario is twice as significant as the first scenario.
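The factor-of-two claim is easy to check. A small sketch: without a prior call, any all-same run of ten flips would impress equally, an event of probability 2/2^10; calling heads first and then seeing ten heads has probability 1/2^10:

```python
from fractions import Fraction

# Uncalled run: all heads OR all tails would be equally striking.
p_ten_of_the_same_side = Fraction(2, 2**10)   # 2/1024

# Called run: you named heads in advance, then saw ten heads.
p_called_then_ten_heads = Fraction(1, 2**10)  # 1/1024

# The called run is half as probable, i.e. twice as significant:
print(p_ten_of_the_same_side / p_called_then_ten_heads)  # -> 2
```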
The fact that it is happening to me, rather than another person, is a kind of contextual framing, in much the same sense that calling heads first frames the coin-flipping event.
Fine, replace 'mundane causes' with 'mundane causes minus cause X' and 'supernatural with 'cause X' in my examples. -_-
And they fail. In the desert island case, Aumann is perfectly applicable: if you have more evidence than he does, then this will be incorporated appropriately; in fact, the desert island case is a great example of Aumann in practice: that you've been lying to him merely shows that 'disagreements are not honest' (you are the dishonest party here).
How so? You didn't predict it would be a haunted house before you went, to point out the most obvious disanalogy.
I feel like your disagreement is getting a little slippery here.
My rejection of Aumann is that there is no common knowledge of our posteriors. It's not necessary for me to have lied to him before, after all; I could have been trying to cheer him up entirely honestly.
If I -had- predicted it would be a haunted house, I'd be suspicious of any evidence that suggested it was. The point isn't the prediction - prediction is just one mechanism of framing an outcome. The point is in the priors; my prior of -somebody- experiencing a series of weird events in a given house is pretty high, as there are a lot of people out there to experience such weird events, and some of them will experience several. My prior odds of -me- experiencing a series of weird events in a given house should be pretty low. It's thus much more significant for -me- to experience a series of weird events in a given house than for some stranger who I wouldn't have known about except for their reporting such. If I'm not updating my priors after being surprised, what am I doing?
Then why does he distrust you? If you have never lied and would never lie in trying to cheer him up, then he is wrong to distrust you and this is simply an example of irrationality and not uncommunicable knowledge; if he is right to suspect that you or people like you would lie in such circumstances, then 'disagreements are not honest' and this is again not uncommunicable knowledge.
And you talk about me being slippery. We're right back to where we began:
You have not shown any examples which simultaneously involve uncommunicable knowledge which does not involve anthropics (what you are claiming is possible) and rationality and honesty on the part of all participants.
Suppose I type 693012316 693012316. Maybe I typed the same number twice, maybe I used a quantum random number generator and got them separately on the first try. You use the equality as evidence of the former, even if you believe in many-worlds, where it is basically a lottery played by parallel yous. Likewise, the winner of the lottery observes the same number twice, which is some evidence for various crazy hypotheses where the selection of "I" necessarily coincides with the winner. edit: you're totally correct, though, that such crazy hypotheses are quite improbable to begin with.
In my example of two worlds, the odds of observing the observed evidence are the same in both worlds, and so there is no update.
What set of worlds are you postulating for your "two numbers" example? Because your example, as far as I understand it, doesn't seem at all analogous.
I'm talking specifically about supernatural explanations for you winning the lottery; I don't see why people opt for supernatural explanations for hauntings either.
Suppose we do something like Solomonoff induction, dealing with codes that match observations verbatim. There's a theory that reads bits off the tape to produce the ticket number, then more bits to produce the lottery draw, and there's a theory that reads bits off the tape and produces both numbers as equal. Suppose the lottery has a size of 2^20, about 1 million. Then the former theory will need 40 lucky bits to match the observation, whereas the latter theory will need only 20 lucky bits. For almost everyone the latter theory will be eliminated, but for the lottery winner it will linger, and now, with the required lucky bits, the difference in length between the theories will decrease by 20 bits.

An S.I.-using learning agent (AIXI and variations of it) which won the lottery will literally expect a higher probability of victory in the next lottery, because it didn't eliminate various "I always win" hypotheses. edit: and indeed, given a sufficiently big N, the extra code required for the "I always win" hack will be smaller than log2(N), so it may well become the dominant hypothesis after a single victory. Things like S.I. are only guaranteed to be eventually correct for almost everyone; if there are enough instances, the wrongmost ones can be arbitrarily wrong.
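The bit-counting in the comment can be sketched directly. This is not an implementation of Solomonoff induction; it just tallies the "lucky bits" each theory needs to match the winner's observation, given the 2^20-sized lottery above:

```python
LOTTERY_BITS = 20   # lottery of size 2^20, about 1 million, as in the comment

# Tape bits each theory must get "lucky" on to reproduce the winner's
# observation "ticket number == drawn number":
independent_theory_bits = 2 * LOTTERY_BITS   # ticket (20 bits) + draw (20 bits)
repeat_theory_bits = LOTTERY_BITS            # one 20-bit number, emitted twice

# Under a 2^-length prior, the repeat ("both numbers equal") theory predicts
# the winner's observation better by this many bits, so for the winner the
# length gap between the theories shrinks by exactly that amount:
gained_bits = independent_theory_bits - repeat_theory_bits
likelihood_ratio = 2 ** gained_bits
print(gained_bits, likelihood_ratio)   # prints: 20 1048576
```

For every non-winner the repeat theory simply fails to match the data and is eliminated, which is why the effect is confined to the winner.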
At the end of the day it's just how the agents learn - if you were constantly winning lotteries, at some point you would start believing you got supernatural powers, or MWI is true plus the consciousness preferentially transfers specifically to the happy winner, or the like. Any learning agent is subject to risk of learning wrong things.
edit: more concise explanation: if you choose a person by some unknown method, and then they win the lottery, that's distinct from you not choosing some person, then someone winning the lottery. Namely, in the former case you got evidence in favour of the hypothesis that "unknown method" picks lottery winners. For a lottery winner, their place in the world was chosen by some unknown method.
So let's see if I'm understanding you here.
You treat a lottery output as a bitstring and ask about SI on it. We can imagine a completely naive agent with no previous observations; what will this ignorant agent predict? Well, it seems reasonable that one of the top predictions will be for the initial bitstring to be repeated; this seems OK by Occam's razor (events often repeating are necessary for induction), and I understand from empirical investigation of simple Turing machines that many (most? all?) terminating programs will repeat output. It will definitely rank the 'sequence repeats' hypotheses above that of possible PRNGs, or very complex physical theories encompassing atmospheric noise and balls dropping into baskets etc.
So far, so good.
I think I lose you when you go on to talk about inferring that you will always win and stuff like that. The repeating hypotheses aren't contingent on who they happen to. If the particular bitstring emitted by the lottery had also included '...and this number was picked by Jain Farstrider', then SI would seem to then also predict that this Jain will win the next one as well, by the same repeating logic. It certainly will not predict that the agent will win, and the hypothesis 'the agent (usually) wins' will drop.
Remember that my trichotomy was that you need to either 1) invoke anthropics; 2) break Aumann via something like dishonesty/incompetence; or 3) you actually do have communicable knowledge.
These SI musings don't seem to invoke anthropics or break the Aumannian requirements, and, looking at them, they seem communicable. 'AIXI-MC-MML*, why do you think Jain will win the lottery a second time?' '[translated from minimum-message-length model+message] Well, he won it last time, and since I am ignorant of everything in the world, it seems reasonable that he will win it again.' 'Hmm, that's a good point.' And ditto if AIXI-MC-MML happened to be the beneficiary.
* I bring up minimum-message length because Patrick Robotham is supposed to be working on a version of AIXI-MC using MML so one would be able to examine the model of the world(s) a program has devised so far and so one could potentially ask 'why' it is making the predictions it is. Having a comprehensible approximation of SI would be pretty convenient for discussing what SI would or would not do.
The point is that if the lottery is biased it's more likely to be biased in such a way that the same number repeats.
"Important" people in most MMOs tend to be NPCs. You can't have every PC be King of Orgrimmar or whatever...
Well, the theorem calls for Bayesian agents, which humans are not...
It says if agents are rational, they will agree. Not agreeing then implies not being rational, which given the topic of OP hardly seems like a reason to modus tollens rather than modus ponens the result...
If my interpretation of your complaint is correct, it's probably a good thing I didn't do that, then.
ETA: My interpretation being that you're complaining that I rejected the theorem as an incorrect derivation. The theorem is a perfectly fine derivation from its assumptions; I can't find anything wrong with its logic, and have no interest in trying to. My statement was simply meant to reflect the fact that the conclusion is wrong, which follows from my rejection of the assumption that all pertinent evidence can in every case be shared.
I don't really know much about this, but from what I recall the theorem doesn't require the hypothesis that info can be shared. The theorem says that two Bayesians with common priors and common knowledge of their posteriors have the same posteriors. They don't actually need to communicate their evidence at all, so the evidence need not be communicable.
In practice, though, how are they going to attain knowledge of each other's posteriors without communicating?
Actually, to agree on a proposition, they only need to have common knowledge of their posteriors for that proposition. (At least this is how Aumann describes his result.) And they can communicate those posteriors without communicating their evidence.
You're right, of course. It was wrong of me to confuse communicating their posterior with communicating their evidence.
If the objection is true, and the hypothesis is false, that seems like a great objection! If, on the other hand, he provided no evidence towards his objection, then it seems that the bad logic is in not offering evidence, not attacking the hypothesis directly.
Am I missing something, or just reading this in an overly pedantic way?
You're missing something by reading this in an insufficiently pedantic way.
The pedantic way is as follows. The theorem's claim is "If A, then B", where A is the hypothesis. Claiming that A is false does not invalidate the theorem; in fact, if A could be proven to be false, then "If A, then B" would be vacuously true, and so in a way, arguing with the hypotheses only supports the theorem.
You could, however, claim that the theorem is useless if the hypothesis never holds. One example of this is "If 2+2=5, then the moon is made of green cheese". This is a true statement, but it doesn't tell us anything about the moon because 2+2 is not 5.
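The vacuous-truth point can be illustrated with material implication directly. A tiny sketch:

```python
def implies(a, b):
    """Material implication: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# "If 2+2=5, then the moon is made of green cheese" is vacuously true,
# because the hypothesis 2+2=5 is false:
print(implies(2 + 2 == 5, False))   # prints: True
```

So disproving a theorem's hypothesis can never disprove the theorem itself; only a true hypothesis paired with a false conclusion could do that.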