The justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically.
That is a justification for it, yes.
When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible.
Roko, on what do you base that statement? Non-actual observers do not participate in bets.
The SIA is not an example of anthropic reasoning; "anthropic" implies actual observers, not "non-actual observers".
See this post for an example of the difference, showing why the SIA is false.
I just wanted to follow up on this remark I made. There is a subtle anthropic selection effect that I didn't include in my original analysis. As we will see, the result I derived applies when the time after the killing is long enough, which is the limit relevant to the SIA.
Let the amount of time before the killing be T1, and the time after it (until all observers die) be T2. So if there were no killing, P(after) = T2/(T2+T1): the total measure of observer-moments after the killing divided by the total measure (after + before).
If the 1 red observer is killed (heads), then P(after|heads) = 99 T2 / (99 T2 + 100 T1)
If the 99 blue observers are killed (tails), then P(after|tails) = 1 T2 / (1 T2 + 100 T1)
P(after) = P(after|heads) P(heads) + P(after|tails) P(tails)
For example, if T1 = T2, we get P(after|heads) = 0.497, P(after|tails) = 0.0099, and P(after) = 0.497 × 0.5 + 0.0099 × 0.5 = 0.254.
So here P(tails|after) = P(after|tails) P(tails) / P(after) = 0.0099 × 0.5 / 0.254 = 0.0195, or about 2%. So if we are after the killing, we can be 98% confident that we are blue observers. Note that it is not 99%.
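For anyone who wants to check the arithmetic, here is a small Python sketch of the T1 = T2 case. The variable names are my own; the setup (1 red observer, 99 blue observers, a fair coin deciding which group is killed) is as described above.

```python
# Worked example from above: T1 = T2, with 1 red and 99 blue observers and a fair coin.
T1 = T2 = 1.0

# Observer-moment measures: before the killing all 100 observers exist for time T1;
# after it, 99 remain (heads kills the red one) or 1 remains (tails kills the 99 blue).
p_after_heads = 99 * T2 / (99 * T2 + 100 * T1)   # ~0.497
p_after_tails = 1 * T2 / (1 * T2 + 100 * T1)     # ~0.0099

p_after = 0.5 * p_after_heads + 0.5 * p_after_tails   # ~0.254
p_tails_given_after = 0.5 * p_after_tails / p_after    # ~0.0195, i.e. about 2%

print(p_after_heads, p_after_tails, p_after, p_tails_given_after)
```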
Now, in the limit relevant to the SIA, T2 >> T1, we get P(after|heads) ~ 1, P(after|tails) ~ 1, and P(after) ~ 1.
In this limit, P(tails|after) = P(after|tails) P(tails) / P(after) ~ P(tails) = 0.5.
So the SIA is false.
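And a quick numerical check of the limit (the ratio T2/T1 = 10^6 below is just an arbitrary stand-in for T2 >> T1):

```python
# Limit check: T2 much larger than T1 (ratio chosen arbitrarily).
T1, T2 = 1.0, 1.0e6

p_after_heads = 99 * T2 / (99 * T2 + 100 * T1)   # ~0.999999
p_after_tails = 1 * T2 / (1 * T2 + 100 * T1)     # ~0.9999

p_after = 0.5 * p_after_heads + 0.5 * p_after_tails
print(0.5 * p_after_tails / p_after)             # ~0.49998, i.e. P(tails|after) ~ 0.5
```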