Imagine that I write a computer program that starts by choosing a random integer W between 0 and 2. It then generates 10^(3W) random simple math problems, numbering each one and placing them in a list P. It then chooses a random math problem from P and presents it to me, without telling me that problem's number.
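For concreteness, here is a minimal Python sketch of that program (the helper name and the exact form of the "simple math problems" are just illustrative):

```python
import random

def make_random_problem():
    # Stand-in for "a random simple math problem"; the exact form doesn't matter.
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a} + {b} = ?"

def first_model():
    # The program as described: W in {0, 1, 2}, then 10^(3W) numbered problems.
    W = random.randint(0, 2)
    P = {n: make_random_problem() for n in range(1, 10 ** (3 * W) + 1)}
    n = random.choice(list(P))      # a uniformly random problem number
    return W, P[n]                  # I'm shown P[n] without being told n
```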
In this case, being presented with a single math problem tells me nothing about the state of W - I expect the program to present one regardless of W. If I subsequently find out that I was shown P(50), that rules out W=0 and makes W=1 a thousand times more likely than W=2.
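That update is just Bayes' theorem with a uniform prior over W, which a few lines can check:

```python
from fractions import Fraction

prior = {w: Fraction(1, 3) for w in (0, 1, 2)}   # uniform prior over W
# P(shown problem #50 | W) under the first model: 1/10^(3W) if the list is
# long enough to contain a problem #50, otherwise 0.
likelihood = {w: Fraction(1, 10 ** (3 * w)) if 10 ** (3 * w) >= 50 else Fraction(0)
              for w in (0, 1, 2)}

joint = {w: prior[w] * likelihood[w] for w in (0, 1, 2)}
total = sum(joint.values())
posterior = {w: joint[w] / total for w in (0, 1, 2)}

print(posterior[0])                 # 0: W=0 is ruled out
print(posterior[1] / posterior[2])  # 1000: W=1 a thousand times likelier than W=2
```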
Given that W represents which world we're in, each math problem in P represents a unique person, and being presented with a math problem represents experiencing being that person or knowing that that person exists, the self-indication assumption says that my model is flawed.
According to the self-indication assumption, my program needs an extra step to be a proper representation: after it generates the list of math problems, it needs to choose a second random number, X, and present me with a math problem only if there is a problem numbered X. In this case, being presented with a math problem (or not) does tell me something about W - I have a much higher chance of getting a math problem if W=2 and a much lower chance if W=0 - and finding out that the one math problem I was presented with was P(50) tells me much more about X than it does about W.
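Here is the same sketch with the extra step added. I'm assuming X is drawn from 1 to 10^6, i.e. up to the largest possible list, since the description only says "a second random number":

```python
import random

def make_random_problem():
    # Same helper as in the sketch above.
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a} + {b} = ?"

def sia_model():
    W = random.randint(0, 2)
    P = {n: make_random_problem() for n in range(1, 10 ** (3 * W) + 1)}
    X = random.randint(1, 10 ** 6)   # the extra step: a second random number
    if X in P:
        return W, X, P[X]            # presented with the problem numbered X
    return W, X, None                # no problem presented at all
```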
I don't see why this is a proper representation, or why my first model is flawed, though I suspect it relates to thinking about the issue in terms of specific people rather than any person in the relevant set, and I tend to get lost in the math of the usual discussions. Help?
Neal's approach (even according to Neal) doesn't work in Big Worlds, because then every observation occurs at least once. But full non-indexical conditioning tells us with near certainty that we are in a Big World. So if you buy the approach, it immediately tells you with near certainty that you're in the conditions under which it doesn't work.
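One way to see the first half of that: roughly speaking, FNC conditions on "at least one observer somewhere has exactly my observations", and in a big enough world that event has probability near 1 under every hypothesis, so conditioning on it stops discriminating between them. A toy calculation (the numbers are arbitrary):

```python
# Toy numbers: each observer independently has exactly my observations with
# tiny probability p; the world contains N observers.
p = 1e-12
for N in (10 ** 6, 10 ** 12, 10 ** 18, 10 ** 24):
    print(N, 1 - (1 - p) ** N)   # P(my observations occur at least once)
# Once N is large enough that this probability is ~1 under every hypothesis,
# conditioning on it - which is all FNC does - no longer tells them apart.
```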
Sure, that's a fair criticism.
What I especially like about FNC is that it refuses to play the anthropic game at all. That is, it doesn't pretend that you can 'unwind all of a person's observations' while retaining their Mind Essence and thereby return to an anthropic prior under which 'I' had just as much chance of being you as me. (In other words, it doesn't commit you to believing that you are an 'epiphenomenal passenger'.)
FNC is just 'what you get if you try to answer those questions for which anthropic reasoning is typically used, without doing something…'