I think that this is a really good approach. In terms of philosophy, I'd suggest starting with the philosophy of mathematics, since being exposed to a mathematical proof will sometimes make you realise that what you previously believed was completely wrong.
If you are trying to calculate the probability of X being true by using bets, the way to do that is by finding the point where you are indifferent between receiving $A if X is true and $B if X is false, and then applying maths. You can't calculate probability using a weird utility function; if you do, you end up calculating something completely different.
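To make that concrete, here's a minimal sketch (the function name and the linear-utility assumption are mine, not from the comment): indifference between $A-if-X and $B-if-not-X means p*A = (1-p)*B, so p = B/(A+B).

```python
def implied_probability(a: float, b: float) -> float:
    """Probability of X implied by indifference between receiving
    $a if X is true and $b if X is false, assuming utility is
    linear in money (a "weird" utility function breaks this)."""
    # Indifference means p * a == (1 - p) * b, so p = b / (a + b).
    return b / (a + b)

# If you are indifferent between $30 if X and $10 if not-X,
# your implied probability of X is 10 / (30 + 10) = 0.25.
print(implied_probability(30, 10))  # 0.25
```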
So let's look at what happens in this process.
t=1: You know that you are the original.
t=2: We create a clone in such a way that you don't know whether you are a clone or not. At this time you have a subjective probability of 50% of being a clone.
t=3: We tell clone 1 that they are a clone. Your subjective probability of being a clone is now 0%, since you were not informed that you were a clone.
t=4: We create another clone, which brings your subjective probability of being a clone back up to 50%.
t=5: Clone 2 finds out that they are a clone. Since you weren't told you were a clone, you know you aren't one, so your subjective probability of being the original goes back up to 100%.
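As a sanity check, here's a toy model of the timeline above (the representation is mine, not from the post): your credence of being a clone is uniform over everyone whose information state matches yours.

```python
def credence_clone(people, my_info):
    """P(I am a clone | my information state), uniform over
    everyone whose information state matches mine."""
    matching = [who for who, info in people if info == my_info]
    clones = [who for who in matching if who != "original"]
    return len(clones) / len(matching)

# t=2: a clone exists; neither of you has been told anything.
people = [("original", "untold"), ("clone1", "untold")]
print(credence_clone(people, "untold"))  # 0.5

# t=3: clone1 is told they are a clone; the original was told
# nothing, so the original's information state is now unique.
people = [("original", "untold"), ("clone1", "told")]
print(credence_clone(people, "untold"))  # 0.0

# t=4: a second untold clone is created.
people = [("original", "untold"), ("clone1", "told"), ("clone2", "untold")]
print(credence_clone(people, "untold"))  # 0.5

# t=5: clone2 is told as well; only the original remains untold.
people = [("original", "untold"), ("clone1", "told"), ("clone2", "told")]
print(credence_clone(people, "untold"))  # 0.0
```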
Let's now imagine that we want no-one to know whether they are clones. People will initially know that they are not the new clone, but this information is then erased.
t=1: We copy the original person, so that we have two indistinguishable people. We erase any information that would indicate who is the original.
t=2: We create a third clone, but allow the first two people to know that they aren't the third clone.
t=3: We erase the first two people's information about whether or not they are the third clone.
At t=1, you have a 50% chance of being a clone and a 50% chance of being the original.
At t=2, you still have a 50% chance, as you know you aren't the third clone.
At t=3, you have lost the information about whether you are the third clone. You could now be any of the three people, and there is no distinguishing information, so your probability of being the original becomes 1/3. Note that probability mass isn't just redistributed from the chance of you being the original, but also from the chance of you being the first clone.
Generalising, when there are n indistinguishable people in total, your odds of being the original will be 1/n.
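Adapting the same toy model to the second scenario makes the redistribution explicit (again, the representation is my own):

```python
def credence(people, me, my_info):
    """P(I am `me` | my information state), uniform over
    everyone whose information state matches mine."""
    matching = [who for who, info in people if info == my_info]
    return 1 / len(matching) if me in matching else 0.0

# t=2: original and clone1 are indistinguishable from each other,
# but both know they aren't the newly created clone2.
people = [("original", "old"), ("clone1", "old"), ("clone2", "new")]
for who in ("original", "clone1", "clone2"):
    print(who, credence(people, who, "old"))
# original 0.5, clone1 0.5, clone2 0.0

# t=3: the distinguishing information is erased; all three now
# share a single information state.
people = [("original", "?"), ("clone1", "?"), ("clone2", "?")]
for who in ("original", "clone1", "clone2"):
    print(who, credence(people, who, "?"))
# each 1/3 -- mass moves from *both* the original and clone1, and
# with n indistinguishable people the credence of being any
# particular one of them is 1/n.
```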
It makes no difference whether the steps at t=2 and t=3 occur separately or together; I simply separated them to show that it was the loss of information about identity, not the cloning itself, that changed the probability.
So if the clones weren't informed about their number after cloning, we would get the same result whether we produced 99 clones at once or one at a time.
Lastly, let's suppose that the clone is told that they are a clone, but the original doesn't know that clones get told. Not being told then carries no information for the original, so this only affects the subjective probabilities of the clones, and again there isn't a paradox.
This paradox is based upon a misunderstanding of how cloning actually works. Once this is modelled as information loss, the solution is straightforward.
I downvoted this because it seems to be missing a very obvious point - that the reason an early filter would be good is that we've already passed it. If we hadn't passed it, then of course we'd want the filter to be as late as possible.
On the other hand, I notice that this post has 15 upvotes. So I am wondering whether I have missed anything - generally posts that are this flawed do not get upvoted this much. I read through the comments and thought about this post a bit more, but I still came to the conclusion that this post is incredibly flawed.
I was recently reading Outlawing Anthropics and I thought of a very similar technique (that one random person would be given a button that would change what everyone did). I think that it is a shame that this post didn't receive much attention given that it seems to resolve these problems rather effectively.
There probably could have been a bit more justifying this argument, beyond the fact that it works. I think a reasonable argument would note that we can either hold the group's choices fixed and ask whether an individual would want to change their own choice given those fixed choices, or give an individual the ability to change everyone's choice (i.e. be a dictator) and ask whether they'd want to change the choices then. The problem in Outlawing Anthropics is that it mixes and matches: it gives the decision to multiple individuals, but calculates the individual benefit from the decision as though each person were solely responsible for the choice, and so it double-counts.
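Here's a toy illustration of that double-counting; the numbers are mine rather than from either post:

```python
n = 20           # hypothetical number of people who all make the same choice
benefit = 100.0  # hypothetical total benefit produced by that joint choice

# Dictator accounting: one person controls everyone's choice, so the
# whole benefit is attributable to that single decision.
print(benefit)            # 100.0

# Shared accounting: n people jointly produce the benefit, so each
# decision is responsible for benefit / n of it.
print(n * (benefit / n))  # 100.0 -- attributions sum to the actual benefit

# Mix-and-match (the error): all n people decide, but each credits the
# full benefit to their own decision as if solely responsible.
print(n * benefit)        # 2000.0 -- the benefit is counted n times over
```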
I now think I've got a formalisation that works; I'll put it up in a subsequent post.
Was this ever written up?
I will bite the first horn of the trilemma. I will argue that the increase in subjective probability results from losing information, and that it is no different from other situations where you lose information in such a way as to make subjective probabilities seem higher. For example, if you watch the lotto draw, but then forget every number except those that match your ticket, your subjective probability that you won will be much higher than it was originally.
Let's imagine that if you win the lottery, a billion copies of you will be created.
t=0: The lottery is drawn.
t=1: If you won the lottery, a billion clones are created. The original remembers that they are the original, as they see the clones being created; but the clones don't know that they are clones, and don't know that the original knows they are the original, so they can't figure it out that way.
t=2: You have a bad memory, and so you forget whether you are the original or a clone.
t=3: If any clones exist, they are all killed off.
t=4: Everyone is informed about whether or not they won the lottery.
Let's suppose that you know you are the original and that you are at t=1. Your chances of winning the lottery are still 1 in a million as the creation of clones does not affect your probability of waking up to a win at t=4 if you know that you are not a clone.
Now let's consider the probability at t=2. Your subjective odds of winning the lottery have risen massively, since you are most probably a copy. Even though there is only a one-in-a-million chance that copies were made, the fact that a billion copies would be made more than cancels this out.
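As a rough check with these numbers (the person-weighted counting below is my reading of the argument, not something stated explicitly in the post):

```python
p_win = 1e-6            # chance the ticket wins
copies = 1_000_000_000  # clones created on a win (plus the original)

# Weight each world by how many people in it share your
# post-amnesia information state at t=2.
w_win = p_win * (copies + 1)  # people who could be "you" in a winning world
w_lose = (1 - p_win) * 1      # just the original in a losing world

print(w_win / (w_win + w_lose))  # ~0.999 -- you almost certainly won
```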
What we have identified is that the information loss is the key feature. Of course you can increase your subjective probabilities by erasing any information to the contrary. What is interesting about cloning is that if we can create clones with exactly the same information, we can effectively remove knowledge without touching your brain. That is, even if you know that you are not a clone, once we have cloned you exactly you no longer know this, unless someone tells you or you saw it happen.
Now at t=3 we kill off/merge any remaining clones. If you are still alive, you've gained information by learning that you weren't killed off. In fact, you've been retaught the very information you'd forgotten.
More near-equivalent reformulations of the problem (in support of the second horn):
A trillion copies will be created, believing they have won the lottery. All but one will be killed (a 1-in-a-trillion chance that your current state leads directly to your future state). If you add some unimportant differentiation between the copies - give each one a separate number - then the situation is clearer: there is one chance in a trillion that your future self will remember your number (so your unique contribution has a 1-in-a-trillion chance of surviving), while he is certain to believe he has won the lottery (he gets that belief from everyone).
A trillion copies are created, each altruistically happy that one among the group has won the lottery. One of them at random is designated the lottery winner. Then everyone else is killed.
Follow the money: you (and your copies) are not deriving utility from winning the lottery, but from spending the money. If each copy is selfish, there is no dilemma: the lottery winnings divided amongst a trillion cancel out the trillion copies. If each copy is altruistic, then the example is the same as above, in which case there is a mass of utility generated by the copies, which vanishes when the copies vanish. But this extra mass of utility is akin to the utility generated by: "It's wonderful to be alive. Quick, I copy myself, so now many copies feel it's wonderful to be alive. Then I delete the copies, so the utility goes away".
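A quick back-of-the-envelope check on the selfish case, assuming (my assumption, not stated in the comment) that utility is linear in money:

```python
winnings = 1e9  # hypothetical prize in dollars
copies = 1e12   # copies created

# Split interpretation: each copy spends winnings / copies.
per_copy = winnings / copies
print(copies * per_copy)  # 1e9 -- total spending power is unchanged

# Survivor interpretation: each copy has a 1/copies chance of being
# the one who keeps the whole prize.
expected = (1 / copies) * winnings
print(copies * expected)  # 1e9 -- same total either way
```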
"You (and your copies) are not deriving utility from winning the lottery, but from spending the money"
I would say that you derive utility from knowing that you've won money you can spend. But, if you only get $1, you haven't won very much.
I think that a better problem would be one where you are split if your favourite team wins the Super Bowl. Then you'd have a high probability of experiencing this happiness, and the number of copies wouldn't reduce it.
Just an aside - this is obviously something that Eliezer - someone highly intelligent and thoughtful - has thought deeply about, and has had difficulty answering.
Yet most of the answers - including my own - seem to be of the "this is the obvious solution to the dilemma" sort.
People often miss a solution that is obvious in retrospect.
You'd want to defect, but you'd also happily trade away your ability to defect so that you both choose heads. But if you could, you'd happily pretend to trade away your ability to defect, then actually defect.