Both worlds have a prior probability of 50% of being correct. Should World B instead be given 10:1 odds, due to there being ten times the number of people and the anthropic principle?
I'm failing to parse this sentence. Is the prior probability .5 or .9?
If there are a million people in one possible world, but only one in another, it would seem to be an amazing coincidence for you to be that one.
Ah, this is true if this hidden assumption is true: you're equally likely to be any one of the observers across all the possible universes. It's the old Sleeping Beauty problem all over again.
The problem is: what counts as an instance?
Gedankenexperiment: in universe A there are two observers, one who wakes up in a green room and one who wakes up in a red room; in universe B, a million observers wake up in green rooms and a trillion observers wake up in red rooms. You wake up in a green room. Universe A or B?
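For what it's worth, here is a quick sketch of why the answer turns on exactly that question (my own calculation, using the Self-Sampling and Self-Indication Assumptions as the two candidate rules for counting instances):

```python
# Illustrative only: posterior over universes, given waking in a green room,
# under two different answers to "what counts as an instance".
green = {"A": 1, "B": 10**6}            # green-room observers per universe
total = {"A": 2, "B": 10**6 + 10**12}   # all observers per universe
prior = {"A": 0.5, "B": 0.5}

# SSA: given the universe, you are a random observer within it.
ssa = {u: prior[u] * green[u] / total[u] for u in prior}
# SIA: universes are additionally weighted by how many observers they contain,
# which cancels the denominator, leaving prior * (number of green observers).
sia = {u: prior[u] * green[u] for u in prior}

for name, post in (("SSA", ssa), ("SIA", sia)):
    z = sum(post.values())
    print(name, {u: round(post[u] / z, 6) for u in post})
# SSA overwhelmingly favours universe A; SIA overwhelmingly favours B.
```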
I can’t actually remember the exact process/theorem in order to determine probabilities from betting odds. Can anyone link it to me?
From this article:
http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/
In the usual way of writing probabilities, probabilities are between 0 and 1. A coin might have a probability of 0.5 of coming up tails, or the weatherman might assign probability 0.9 to rain tomorrow.
This isn't the only way of writing probabilities, though. For example, you can transform probabilities into odds via the transformation O = (P / (1 - P)). So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1. To take odds back to probabilities you use P = (O / (1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping. Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience.
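For concreteness, the quoted transformation is easy to sanity-check in code (a trivial sketch of the two formulas above, not something from the linked article):

```python
import math

def prob_to_odds(p: float) -> float:
    """O = P / (1 - P): 0.5 -> 1 (i.e. 1:1), 0.9 -> 9 (i.e. 9:1)."""
    return p / (1 - p)

def odds_to_prob(o: float) -> float:
    """P = O / (1 + O): the inverse transformation."""
    return o / (1 + o)

# The mapping is reversible (an isomorphism), up to floating-point error:
assert math.isclose(odds_to_prob(prob_to_odds(0.9)), 0.9)
```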
The problem with that thought experiment is that an actual observer can't have this information. We can't know how many beings are in the universe, and there are probably infinitely many of them. But in your experiment you know how many of them exist, and also that there are not infinitely many of them. So the experiment models a completely impossible world, and its results are irrelevant.
But by the same logic we may claim that I belong to the biggest class of all actually existing observers (if SSA is true: https://en.wikipedia.org/wiki/Self-sampling_assumption) or of all possible observers (if SIA is true: https://en.wikipedia.org/wiki/Self-indication_assumption). In an infinitely big universe all possible observers exist, so there will be no difference between SIA and SSA.
The model does not depend on there being infinitely many beings. Admittedly, if you believe that two universes contain the same number of people whenever there is a bijection between their people, then you'll get the result that this doesn't work when there are infinitely many people. That is the way infinities are usually dealt with in mathematics, but I would argue that this is flawed, and that it is actually possible for one infinity to be twice another.
Anyway, there is no requirement to know exactly how many observers exist. We can just guess.
In my last post, I wrote about how the anthropic principle is often misapplied: it cannot be used within a single model, only for comparing two or more models. This post will explain why I think the anthropic principle is valid in every case where we aren't making those mistakes.
There have been many probability problems discussed on this site, and one popular viewpoint is that probabilities cannot be discussed as existing by themselves, but only in relation to a series of bets. Imagine that there are two worlds: World A has 10 people and World B has 100. Both worlds have a prior probability of 50% of being correct. Should World B instead be given 10:1 odds, due to there being ten times the number of people and the anthropic principle? This sounds surprising, but I would say yes: a correct bet in World A would have to pay each person 10 times as much as a correct bet in World B for you to be indifferent between the two. What this means is that if there is a bet that gains or loses you money according to whether you are in World A or World B, you should bet as though the probability that you are in World B is 10 times as great. That doesn't quite show that the odds are 10:1, but it comes rather close. I can’t actually remember the exact process/theorem in order to determine probabilities from betting odds. Can anyone link it to me?
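Here is a minimal sketch of that betting argument, assuming the hidden assumption flagged in the comments: that you are equally likely to be any of the 110 people across the two worlds.

```python
# People in each world and the 50% prior on which world is actual.
n_a, n_b = 10, 100
prior_a = prior_b = 0.5

# Probability that *you* are in each world, weighting worlds by population:
z = prior_a * n_a + prior_b * n_b
p_you_in_a = prior_a * n_a / z   # 1/11
p_you_in_b = prior_b * n_b / z   # 10/11

print(p_you_in_b / p_you_in_a)   # 10.0, i.e. 10:1 odds on World B

# Indifference check: a bet paying $10 per correct person in World A and
# $1 per correct person in World B has the same expected value either way.
print(10 * p_you_in_a, 1 * p_you_in_b)   # both ~0.909
```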
Another way to show that the anthropic principle is probably correct is to note that if World A had 0 people instead, then there would be a 100% chance of observing World B rather than World A. This doesn’t prove much, but it does prove that anthropic effects exist on some level.
Suppose now that World A has 1 person and World B has 1 million people. Maybe you aren’t convinced that you are more likely to observe World B. Consider an equivalent formulation: World A has 1 person who is extremely unobservant and has only a 1 in a million chance of noticing the giant floating A in their world, while World B also has a single person, but one with a 100% chance of noticing the giant floating B in their world. I think it is clear that you are more likely to notice a giant floating B than an A.
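A quick Bayes'-theorem check of this reformulation (my own arithmetic, using the post's numbers):

```python
prior = {"A": 0.5, "B": 0.5}
p_notice = {"A": 1e-6, "B": 1.0}   # chance of noticing the floating letter

# Bayes' theorem: P(world | noticed) is proportional to
# P(world) * P(noticed | world).
joint = {w: prior[w] * p_notice[w] for w in prior}
z = sum(joint.values())
posterior = {w: joint[w] / z for w in joint}

print(posterior)   # B is about a million times more likely than A
```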
One more formulation: let World A have 10 humans and 90 cyborgs, and World B have 100 humans. We can then ask about the probability of being in World B given that you are a human observing the world. It seems clear that, given that you are a human, you are 10 times as likely to be in World B as in World A. This should be equivalent to the original problem, since the cyborgs don’t change anything.
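The same update can be written in odds form, which ties back to the odds/probability conversion quoted earlier: posterior odds equal prior odds times the likelihood ratio. A rough sketch:

```python
prior_odds_b = 0.5 / 0.5                      # 1:1 between the two worlds
likelihood_ratio = (100 / 100) / (10 / 100)   # P(human | B) / P(human | A)

posterior_odds_b = prior_odds_b * likelihood_ratio
print(posterior_odds_b)                            # 10.0, i.e. 10:1 in favour of B
print(posterior_odds_b / (1 + posterior_odds_b))   # back to a probability: ~0.909
```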
I admit that none of this is fully rigorous philosophical reasoning, but I thought that I’d post it anyway: a) to get feedback, and b) to see if anyone denies the use of the anthropic principle in this way (as opposed to the way described in my last post), which would give me more motivation to try to make all of this more formal.
Update: I thought it was worth adding that applying the anthropic principle to two models is very similar to null hypothesis testing to determine whether a coin is biased. If there are a million people in one possible world, but only one in another, it would seem to be an amazing coincidence for you to be that one.
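To gesture at that analogy in code (a rough sketch with an illustrative coin of my own choosing, and the same population-weighting assumption as before): both cases compare two models through a likelihood ratio.

```python
import math

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of k heads in n flips of a coin with heads-probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Coin case: 9 heads in 10 flips, fair coin vs a hypothetical 90%-heads coin.
lr_coin = binom_pmf(9, 10, 0.9) / binom_pmf(9, 10, 0.5)
print(lr_coin)   # ~39.7: the data favour the biased-coin model

# Anthropic case: under the population-weighting assumption used above,
# finding yourself to be an observer is a million times more likely if the
# million-person world is actual, so being "that one" person would be an
# amazing coincidence if the one-person world were true.
lr_worlds = 1_000_000 / 1
print(lr_worlds)
```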