I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list, and sent off a "copying related probability question" (which is still in my "sent" folder but apparently no longer archived anywhere that Google can find). Both Eliezer and Nick were also participants in that discussion. What are the chances that we're still trying to figure this out 12 years later?
My current position, for what it's worth, is that anticipation and continuity of experience are both evolutionary adaptations that will turn maladaptive when mind copying/merging becomes possible. Theoretically, evolution could have programmed us to use UDT, in which case this dilemma wouldn't exist now, because anticipation and continuity of experience are not part of UDT.
So why don't we just switch over to UDT, and consider the problem solved (assuming this kind of self-modification is feasible)? The problem with that is that many of our preferences are specified in terms of anticipated experience, and there is no obvious way to map them onto UDT preferences. For example, suppose you're about to be tortured in an hour. Should you make as...
I just happened to see this whilst it happens to be 12 years later. I wonder what your sense of this puzzle is now (object-level as well as meta-level).
I'm not really aware of any significant progress since 12 years ago. I've mostly given up working on this problem, or most object-level philosophical problems, due to the slow pace of progress and perceived opportunity costs. (Spending time on ensuring a future where progress on such problems can continue to be made, e.g., fighting against x-risk and value/philosophical lock-in or drift, seems a better bet even for the part of me that really wants to solve philosophical problems.) It seems like there's been a decline in other LWers' interest in the problem, maybe for similar reasons?
My thread of subjective experience is a fundamental part of how I feel from the inside. Exchanging it for something else would be pretty much equivalent to death - death in the human, subjective sense. I would not wish to exchange it unless the alternative was torture for a googol years or something of that ilk.
Why would you wish to switch to UDT?
Seeing that others here are trying to figure out how to make probabilities of anticipated subjective experiences work, I should perhaps mention that I spent quite a bit of time near the beginning of those 12 years trying to do the same thing. As you can see, I eventually gave up and decided that such probabilities shouldn't play a role in a decision theory for agents who can copy and merge themselves.
This isn't to discourage others from exploring this approach. There could easily be something that I overlooked, that a fresh pair of eyes can find. Or maybe someone can give a conclusive argument that explains why it can't work.
BTW, notice that UDT not only doesn't involve anticipatory probabilities, it doesn't even involve indexical probabilities (i.e. answers to "where am I likely to be, given my memories and observations?" as opposed to "what should I expect to see later?"). It seems fairly obvious that if you don't have indexical probabilities, then you can't have anticipatory probabilities. (See ETA below.) I tried to give an argument against indexical probabilities, which apparently nobody (except maybe Nesov) liked. Can anyone do better?
ETA: In the Absent-M...
This presumably places anticipation together with excitement and fear -- an aspect of human experience, but not a useful concept for decision theory.
Whatever the correct answer is, the first step towards it has to be to taboo words like "experience" in sentences like, "But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?"
What making copies is, is creating multiple instances of the same pattern. If you make two copies of a pattern, there are twice as many instances but only one pattern, obviously.
Are there, then, two of 'you'? Depends what you mean by 'you'. Has the weight of experience increased? Depends what you mean by 'experience'. Think in terms of patterns and instances of patterns, and these questions become trivial.
I feel a bit strange having to explain this to Eliezer Yudkowsky, of all people.
I think the conflict is resolved by backing up to the point where you say that multiple copies of yourself count as more subjective experience weight (and therefore a higher chance of experiencing).
But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?
Let's suppose that three copies get three times as much experience. (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)
I have a top-level post partly written up where I attempt to reduce "subjective experience" and show why your reductio about the Boltzmann Brain doesn't follow, but here's a summary of my reasoning:
Subjective experience appears to require a few components: first, forming mutual information with its space/time environment. Second, forming M/I with its past states, though of course not perfectly.
Now, look at the third trilemma horn: Britney Spears's mind does not have M/I with your past memories. So it is flat-out incoherent to ...
Actually, let me revise that: I made it more complicated than it needs to be. Unless I'm missing something (and this does seem too simple), you can easily resolve the dilemma this way:
Copying your upload self does multiply your identities but adds nothing to your anticipated probabilities that stem from quantum branching.
So here's what you should expect:
-There's still a 1 in a billion chance of experiencing winning the lottery.
-In the event you win the lottery, you will also experience being among a trillion copies of yourself, each of whom also have this experience. Note the critical point: since they all wake up in the same Everett branch, their subjective experience does not get counted in at the same "level" as the experience of the lottery loser.
-If you merge after winning the lottery you should expect, after the merge, to remember winning the lottery, and some random additional data that came from the different experiences the different copies had.
-This sums to: ~100% chance of losing the lottery, 1 in a billion chance of winning the lottery plus forgetting a few details.
-Regarding the implications of self-copying in general: Each copy (or original or instantiatio...
At some point you will surely admit that we now have 2 people and not just 1
Actually I won't. While I grok your approach completely, I'd rather say my concept of 'an individual' breaks down once I have two minds with one bit's difference, or two identical minds, or any of these borderline cases we're so fond of.
Say I have two optimisers with one bit's difference. If that bit means one copy converts to Sufism and the other to Mennonism, then sure, two different people. If that one bit is swallowed up in later neural computations due to the coarse-grained-ness of the wetware, then we're back to one person since the two are, once again, functionally identical. Faced with contradictions like that, I'm expecting our idea of personal identity to go out the window pretty fast once tech like this actually arrives. Greg Egan's Diaspora pretty much nails this for me, have a look.
All your 'contradictions' go out the window once you let go of the idea of a mind as an indivisible unit. If our concept of identity is to have any value (and it really has to) then we need to learn to think more like reality, which doesn't care about things like 'one bit's difference'.
Words are just labels, but in order to be able to converse at all, we have to hold at least most of them in one place while we play with the remainder. We should try to avoid emulating Humpty Dumpty. Someone who calls a tail a leg is either trying to add to the category originally described by "leg" (turning it into the category now identified with "extremity" or something like that), or is appropriating a word ("leg") for a category that already has a word ("tail"). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying "Let's evaluate the content of the word "leg" and maybe revise it for consistency." The second is juvenile code invention.
To condense my response to a number of comments here:
It seems to me that there's some level on which, even if I say very firmly, "I now resolve to care only about future versions of myself who win the lottery! Only those people are defined as Eliezer Yudkowskys!", and plan only for futures where I win the lottery, then, come the next day, I wake up, look at the losing numbers, and say, "Damnit! What went wrong? I thought personal continuity was strictly subjective, and I could redefine it however I wanted!"
You reply, "But that's just because you're defining 'I' the old way in evaluating the anticipated results of the experiment."
And I reply, "...I still sorta think there's more to it than that."
To look at it another way, consider the Born probabilities. In this case, Nature seems to have very definite opinions about how much of yourself flows where, even though both copies exist. Now suppose you try to redefine your utility function so you only care about copies of yourself that see the quantum coin land heads up. Then you are trying to send all of your measure to the branch where the coin lands up heads, by exercising your right to red...
I'm sorry, I don't think I can help. It's not that I don't believe in personal continuity, it's that I can't even conceive of it.
At t=x there's an Eliezer pattern and there's a Bill Gates pattern. At t=x+1 there's an Eliezer+1 pattern and a Bill Gates+1 pattern. A few of the instances of those patterns live in worlds in which they won the lottery, but most don't. There's nothing more to it than that. How could there be?
Some Eliezer instances might have decided to only care about Eliezer+1 instances that won the lottery, but that wouldn't change anything. Why would it?
I can't be the only one who sees this discussion as parallel to the argument over free will, right down to the existence of people who proudly complain that they can't see the problem.
Do you see how this is the same as saying "Of course there's no such thing as free will; physical causality rules over the brain"? Not false, but missing completely that which actually needs to be explained: what it is that our brain does when we 'make a choice', and why we have a deeply ingrained aversion to the first question being answered by some kind of causality.
If I jumped off a cliff and decided not to care about hitting the ground, I would still hit the ground. If I played a quantum lottery and decided not to care about copies who lost, almost all of me would still see a screen saying "You lose". It seems to me that there is a rule governing what I see happen next, which does not care what I care about. I am asking how that rule works, because it does so happen that I care about it.
You-now doesn't want to jump off the cliff because, among all the blobs of protoplasm that will exist in 5 minutes, you-now cares especially about one of them: the one that is causally connected in a certain way to the blob that is you-now. You-now evidently doesn't get to choose the nature of the causal connection that induces this concern. That nature was probably fixed by natural selection. That is why all talk about "determining to be the person who doesn't jump off the cliff" is ineffectual.
The question for doubters is this. Suppose, contrary to your intuition, that matters were just as I described. How would what-you-are-experiencing be any different? If you concede that there would be no difference, perhaps your confusion is just with how to talk about "your" future experiences. So then, what precisely is lost if all such talk is in terms of the experiences of future systems causally connected to you-now in certain ways?
Of course, committing to think this way highlights our ignorance about which causal connections are among these "certain ways". But our ignorance about this question doesn't mean that there isn't a determinate answer. There most likely is a determinate answer, fixed by natural selection and other determinants of what you-now cares about.
It's helpful in these sorts of problems to ask the question "What would evolution do?". It always turns out to be coherent, reality-based actions. Even though evolution, to the extent that it "values" things, values different things than I do, I'd like my actions to be comparably coherent and reality-based.
Regarding the first horn: Regardless of whether simple algorithms move "subjective experience" around like a fluid, if the simple algorithms take some resources, evolution would not perform them.
Regarding the second horn: If there was an organism that routinely split, merged, played lottery, and priced side-bets on whether it had won the lottery, then, given zero information about whether it had won the lottery, it would price the side-bet at the standard lottery odds. Splitting and merging, so long as the procedure did not provide any new information, would not affect its price.
Regarding the third horn: Evolution would certainly not create an organism that hurls itself off cliffs without fear. However, this is not because it "cares" about any thread of subjective experience; rather, it is because of physical continuity. Compare this with evolution's choice in an environment where there are "transporters" that accurately convey entities by molecular disassembly/reassembly. Creatures which had evolved in that environment would certainly step through those transporters without fear.
I can't answer the fourth or fifth horns; I'm not sure I understand them.
When you wake up, you will almost certainly have won (a trillionth of the prize). The subsequent destruction of winners (sort of - see below) reduces your probability of being the surviving winner back to one in a billion.
Merging N people into 1 is the destruction of N-1 people - the process may be symmetrical but each of the N can only contribute 1/N of themself to the outcome.
The idea of being (N-1)/N th killed may seem a little odd at first, but less so if you compare it to the case where half of one person's brain is merged with half of a different person's (and the leftovers discarded).
EDIT: Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.
Whenever I read about "weight of experience", "quantum goo", "existentness" etc., I can't keep myself from also thinking of "vital spark", "phlogiston", "ether" and other similar stuff... And it somehow spoils the whole fun...
In the history of mankind, hard-looking (meta)physical dilemmas have much more often been resolved by elimination than by the introduction of new "essences". The moral of the history of physics so far is that relativity typically trumps absoluteness in the long run.
For example, I would not be surprised at all if it turned out that the experienced Born probabilities are not absolute, but depend on some reference frame (in a very high dimensional space), just as the experience of time, speed, mass, etc. depends on the relativistic frame of reference.
Following Nominull and Furcas, I bite the third bullet without qualms for the perfectly ordinary obvious reasons. Once we know how much of what kinds of experiences will occur at different times, there's nothing left to be confused about. Subjective selfishness is still coherent because you're not just an arbitrary observer with no distinguishing characteristics at all; you're a very specific bundle of personality traits, memories, tendencies of thought, and so forth. Subjective selfishness corresponds to only caring about this one highly specific bundle: only caring about whether someone falls off a cliff if this person identifies as such-and-such and has such-and-these specific memories and such-and-those personality traits: however close a correspondence you need to match whatever you define as personal identity.
The popular concepts of altruism and selfishness weren't designed for people who understand materialism. Once you realize this, you can just recast whatever it was you were already trying to do in terms of preferences over histories of the universe. It all adds up to, &c., &c.
Just an aside - this is obviously something that Eliezer - someone highly intelligent and thoughtful - has thought deeply about, and has had difficulty answering.
Yet most of the answers - including my own - seem to be of the "this is the obvious solution to the dilemma" sort.
Bonus: if you're uncomfortable with merging/deleting copies, you can skip that part! Just use the lottery money to buy some computing equipment and keep your extra copies running in lockstep forever. Is this now an uncontroversial algorithm for winning the lottery, or what?
More near-equivalent reformulations of the problem (in support of the second horn):
A trillion copies will be created, believing they have won the lottery. All but one will be killed (a 1/trillion chance that your current state leads directly to your future state). If you add some unimportant differentiation between the copies - give each one a separate number - then the situation is clearer: you have one chance in a trillion that the future self will remember your number (so your unique contribution will have a 1/trillion chance of happening), while he will be certain to believe he has won the lottery (he gets that belief from everyone).
A trillion copies are created, each altruistically happy that one among the group has won the lottery. One of them at random is designated the lottery winner. Then everyone else is killed.
Follow the money: you (and your copies) are not deriving utility from winning the lottery, but from spending the money. If each copy is selfish, there is no dilemma: the lottery winnings divided amongst a trillion cancel out the trillion copies. If each copy is altruistic, then the example is the same as above; in which case there is a mass of utility generated from the copies, which vanishes when the copies vanish. But this extra mass of utility is akin to the utility generated by: "It's wonderful to be alive. Quick, I copy myself, so now many copies feel it's wonderful to be alive. Then I delete the copies, so the utility goes away".
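A quick sketch of that cancellation, using the post's billion-to-one odds and trillion copies; the prize size is an assumed, purely illustrative number:

```python
# Toy arithmetic for the "follow the money" point above. Copy-weighted
# anticipation of money is unchanged by copying, because the prize is split
# among the copies. PRIZE is my own illustrative figure.

P_WIN = 1e-9       # the post's billion-to-one lottery
PRIZE = 1e9        # assumed prize size, purely illustrative
N_COPIES = 1e12    # copies created in the winning branch

# No copying: one instance collects the full prize.
expected_money_single = P_WIN * PRIZE

# With copying and copy-weighted anticipation: winning is "experienced"
# N_COPIES times over, but each selfish copy only gets PRIZE / N_COPIES.
expected_money_copied = (P_WIN * N_COPIES) * (PRIZE / N_COPIES)

print(expected_money_single, expected_money_copied)   # both 1.0
```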
Time and (objective or subjective) continuity are emergent notions. The more basic notion they emerge from is memory. (Eliezer, you read this idea in Barbour's book, and you seemed to like it when you wrote about that book.)
Considering this: yes, caring about the well-being of "agents that have memories of formerly being me" is incoherent. It is just as incoherent as caring about the well-being of "agents mostly consisting of atoms that currently reside in my body". But in typical cases, both of these lead to the same well known and evolutionarily useful heuristics.
I don't think any of the above implies that "thread of subjective experience" is a ridiculous thing, or that you can turn into being Britney Spears. Continuity being an emergent phenomenon does not mean that it is a nonexistent one.
As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing
Here's the problem, as far as I can see. You shouldn't feel a thing, but that would also be true if none of you ever woke up again. "I won't notice being dead" is not an argument that you won't be dead, so lottery winners should anticipate never waking up again, though they won't experience it (we don't anticipate living forever in the factual world, even though no one ever notices being dead).
I'm sure there's some reason this is considered invalid, since quantum suicide is looked on so favorably around here. :)
The reason is simply that, in the multiple worlds interpretation, we do survive-- we just also die. If we ask "Which of the two will I experience?" then it seems totally valid to argue "I won't experience being dead."
I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
I don't really understand your reasoning here. It's not a different person that will experience the consequences of hitting the ground, it's Eliezer+5. Sure, Eliezer+5 is not identical to Eliezer, but he's really, really, really similar. If Eliezer is selfish, it makes perfect sense to care about Eliezer+5 too, and no sense at all to care equally about Furcas+5, who is really different from Eliezer.
...And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears. Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.
Oddly, I feel myself willing to bite all three bullets. Maybe I am too willing to bite bullets? There is a meaningful sense in which I can anticipate myself being one of the future people who will remember being me, though perhaps there isn't a meaningful way to talk about which of those many people I will be; I will be all of them.
I suggested that, in some situations, questions like "What is your posterior probability?" might not have answers, unless they are part of decision problems like "What odds should you bet at?" or even "What should you rationally anticipate to get a brain that trusts rational anticipation?". You didn't comment on the suggestion, so I thought about problems you might have seen in it.
In the suggestion, the "correct" subjective probability depends on a utility function and a UDT/TDT agent's starting probabilities, which ...
I'm terribly sorry, but I had to delete that comment because it used the name of M-nt-f-x, which, if spoken in plain text, causes that one to appear. He Googles his name.
Here was the original comment by outlawpoet:
Marc Geddes was a participant in the SL4 list back when it was a bit more active, and kind of proverbial for degenerating into posting very incoherent abstract theory. He's basically carrying on the torch for M-nt-f-x and Archimedes Plutonium and other such celebrities of incommunicable genius.
Now please stop replying to this thread!
The problem is that copying and merging is not as harmless as it seems. You are basically doing invasive surgery on the mind, but because it's performed using intuitively "non-invasive" operations, it looks harmless. If, for example, you replaced the procedure with rewriting "subjective probability" by directly modifying the brain, the fact that you'd have different "subjective probability" as a result won't be surprising.
Thus, on one hand, there is an intuition that the described procedure doesn't damage the brain, and on the...
The problem with anthropic reasoning and evidence is that unlike ordinary reasoning and evidence, it can't be transferred between observers. Even if "anthropic psychic powers" actually do work, you still should expect all other observers to report that they don't.
I bite the third bullet. I am not as gifted with words as you are to describe why biting it is just and good and even natural if you look at it from a certain point of view, but...
You are believing in something mystical. You are believing in personal identity as something meaningful in reality, without giving any reason why it ought to be, because that is how your algorithm feels from the inside. There is nothing special about your brain as compared to your brain+your spinal cord, or as compared to your brain+this pencil I am holding. How could there be...
Let's explore this scenario in computational rather than quantum language.
Suppose a computer with infinite working memory, running a virtual world with a billion inhabitants, each of whom has a private computational workspace consisting of an infinite subset of total memory.
The computer is going to run an unusual sort of 'lottery' in which a billion copies of the virtual world are created, and in each one, a different inhabitant gets to be the lottery winner. So already the total population after the lottery is not a billion, it's a billion billion, spre...
....you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities....If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1.
If copying increases your measure, merging decreases it. When you notice yourself winning the lottery, you are almost certainly going to cease to exist after ten seconds.
I don't think you have the third horn quite right. It's not that you're equally likely to wake up as Britney Spears. It's that the only meaningful idea of "you" exists only right now. Your subjective anticipation of winning the lottery in five minutes should be zero. You clearly aren't winning the lottery right now, and the only meaningful "you" won't be around in five minutes anyway.
Also, isn't that more of a quintlemma?
Maybe I am too late to comment here and it is already covered in collapsed comments, but it looks like it is possible to do this experiment in real life.
Imagine that instead of copying, I use waking up. If I win, I will be woken up three times, informed that I won, and given a drug that makes me forget the awakening. If I lose, I will be woken only once and informed that I lost. Now winning produces three observer-moments in which I am informed of winning, versus one for losing.
This setup is exactly the Sleeping Beauty problem, with all its pros and cons, which I will not try to explore here.
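If it helps, here is a quick Monte Carlo of that wake-up protocol (a Sleeping Beauty variant); the 50/50 lottery odds are my illustrative choice, not part of the comment:

```python
# Monte Carlo of the wake-up protocol described above: three awakenings
# (with amnesia in between) on a win, one awakening on a loss. The 50/50
# win probability is an illustrative assumption.

import random

P_WIN = 0.5
N_TRIALS = 100_000

told_won = 0    # observer-moments informed "you won"
told_lost = 0   # observer-moments informed "you lost"

for _ in range(N_TRIALS):
    if random.random() < P_WIN:
        told_won += 3    # woken three times, told of the win each time
    else:
        told_lost += 1   # woken once, told of the loss

print(told_won / told_lost)   # ~3.0: "won" observer-moments outnumber "lost" ones three to one
```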
After thinking about the Anthropic Trilemma for a while, I've come up with an alternative resolution which I think is better than, or at least simpler than, any of the other resolutions. Rather than try to construct a path for consciousness to follow inductively forwards through time, start at the end and go backwards: from the set of times at which an entity I consider to be a copy of me dies, choose one at random weighted by quantum measure, then choose uniformly at random from all paths ending there.
The trick is that this means that while copying your m...
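A rough sketch of how that backward rule could be implemented, with a made-up path structure and toy numbers standing in for the post's billion/trillion figures:

```python
# A toy implementation of the backward-sampling rule above: pick the place
# where a copy of you eventually ends, weighted by quantum measure, then
# pick uniformly among the histories ending there. The structure and the
# numbers are illustrative, not from the original comment.

import random

terminal_copies = [
    {"ending": "lost the lottery",      "measure": 1 - 1e-9,
     "paths": ["original -> loser"]},
    {"ending": "merged lottery winner", "measure": 1e-9,
     "paths": [f"original -> copy {i} -> merged" for i in range(1000)]},  # 1000 stands in for a trillion
]

def sample_anticipated_history(terminals):
    """Choose an ending weighted by quantum measure, then a path to it uniformly."""
    ending = random.choices(terminals, weights=[t["measure"] for t in terminals], k=1)[0]
    return ending["ending"], random.choice(ending["paths"])

print(sample_anticipated_history(terminal_copies))
# Making extra intermediate copies only adds paths; it never changes the
# terminal measures, so it can't buy you anthropic lottery-winning powers.
```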
I deny that increasing the number of physical copies increases the weight of an experience. If I create N copies of myself, there is still just one of me, plus N other agents running my decision-making algorithms. If I then merge all N copies back into myself, the resulting composite contains the utility of each copy weighted by 1/(N+1).
My feeling about the Boltzmann Brain is: I cheerfully admit that there is some chance that my experience has been produced by a random experience generator. However, in those cases, nothing I do matters anyway. Thus I d...
I don't know the answer either. My best guess is that the question turns out to involve comparing incommensurable things, but I haven't pinned down which things. (As I remarked previously, I think the best answer for policy purposes is to just optimize for total utility, but that doesn't answer the question about subjective experience.)
But one line of attack that occurs to me is the mysterious nature of the Born probabilities.
Suppose they are not fundamental, suppose the ultimate layer of physics -- maybe superstrings, maybe something else -- generates the...
The third option seems awfully close to the second. In the second, you anticipate winning the lottery for a few seconds, and then going back to not. In the third, the universe anticipates winning the lottery for a few seconds, and then going back to Britney.
Now you could just bite this bullet. You could say, "Sounds to me like it should work fine." You could say, "There's no reason why you shouldn't be able to exert anthropic psychic powers." You could say, "I have no problem with the idea that no one else could see you exerting your anthropic psychic powers, and I have no problem with the idea that different people can send different portions of their subjective futures into different realities."
I think there are other problems that may prevent the "anthropic psychic powers" example from working (maybe co
...The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing "You won!"
As I see it, the odds of being any one of those trillion "me"s in 5 seconds is 10^21 to one (one trillion...
I'm coming in a bit late, and not reading the rest of the posts, but I felt I had to comment on the third horn of the trilemma, as it's an option I've been giving a lot of thought.
I managed to independently invent it (with roughly the same reasoning) back in high school, though I haven't managed to convince myself of it or, for that matter, to explain it to anyone else. Your explanation is better, and I'll be borrowing it.
At any rate. One of your objections seems to be "...to assert that you can hurl yourself off a cliff without fear, because whoever ...
In quantum copying and merging, every "branch" operation preserves the total measure of the original branch,
Maybe branch quantum operations don't make new copies, but represent already existing but identical copies "becoming" no longer identical?
In the computer program analogy: instead of having one program at time t and n slightly different versions at time t+1, start out with n copies already existing (but identical) at time t, and have each one change in the branching. If you expect a t+2, you need to start with at least n^2 copie...
The "second horn" seems to be phrased incorrectly. It says:
"you can coherently anticipate winning the lottery after five seconds, anticipate the experience of having lost the lottery after fifteen seconds, and anticipate that once you experience winning the lottery you will experience having still won it ten seconds later."
That's not really right - the fate of most of those agents that experience a win of the lottery is to be snuffed out of existence. They don't actually win the lottery - and they don't experience having won it eleven...
I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
It strikes me that this is a p...
I will bite the first horn of the trilemma. I will argue that the increase in subjective probability results from losing information, and that it is no different from other situations where you lose information in such a way as to make subjective probabilities seem higher. For example, if you watch the lotto draw, but then forget every number except those that match your ticket, your subjective probability that you won will be much higher than originally.
Let's imagine that if you win the lottery that a billion copies of you will be created.
t=0: The lottery i...
1) The probability of ending up in the set of winners is 1/billion.
2) The probability of being a specific one of the trillion copies is 1/(billion * trillion) = 1E-21.
The probability of being a specific copy as in 2), given that you are awake, is
P(2 | awake) = P(awake | 2) * P(2) / P(awake) = (1 * 1E-21) / 1 = very small.
Buy a ticket. Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere. Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don't win the lottery, then just wake up automatically.
How would you do that???
The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion.
This seems perhaps too obvious, but how can branches multiply probability by anything greater than 1? Conditional branches follow the rules of conjunctive probability . . .
Probability in regards to the future is simply a matter of counting branches. The subset of branches in which you win is always only one in a billion of all branches - and any f...
The flaw is that anticipation should not be treated as a brute thing. Anticipation should be a tool used in the service of your decision theory. Once you bring in some particular decision theory and utility function, the question is dissolved (if you use TDT and your utility function is just the total quality of simulated observer moments, then you can reverse engineer exactly Nick Bostrom's notion of "anticipate." So if I had to go with an answer, that would be mine.)
Two people disagreeing about what they should anticipate is like two people arguing about whether a tree falling in an empty forest makes a sound. They disagree about what they anticipate, yes, but they behave identically.
I'm curious why no one mentioned the Solomonoff prior here. Anticipation of subjective experience can be expressed as: what is the probability of experiencing X, given my prior experiences? Thus we "swap" the ontological status of objective reality and subjective experience, and then we can use the Solomonoff prior to infer probabilities.
When one wakes up as a copy, one experiences instantaneous, arbitrary space-time travel, so the Solomonoff prior for this experience should be lower than that of the wake-up-as-the-original one (if the original can wake up at all).
Given that approach, it seems that our subjective experience will tend to be as "normal" as is allowed by the simplest computable laws of physics.
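A hand-rolled toy of that idea (real Solomonoff induction is uncomputable; the hypotheses and description lengths below are invented purely for illustration):

```python
# Toy complexity-weighted prediction over "what I experience next": weight
# each hypothesis about the generator of my experience stream by
# 2^-(description length) and normalize. Hypotheses and bit counts are
# made up for illustration.

from collections import defaultdict

# (hypothesis, assumed description length in bits, predicted next experience)
hypotheses = [
    ("physics continues normally; I wake up as the original", 10, "wake as original"),
    ("I am an upload copy instantiated elsewhere",             25, "wake as copy"),
    ("I wake up as Britney Spears",                            60, "wake as Britney"),
]

def complexity_weighted_prediction(hyps):
    weights = defaultdict(float)
    for _, bits, outcome in hyps:
        weights[outcome] += 2.0 ** -bits
    total = sum(weights.values())
    return {outcome: w / total for outcome, w in weights.items()}

print(complexity_weighted_prediction(hypotheses))
# Simpler continuations of the experience stream dominate, which is the
# sense in which experience stays "normal" under this prior.
```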
a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely
Which is why Roger Penrose is so keen to show that consciousness is a quantum phenomenon.
We have a strong subjective sense of personal experience which is optimized for passing on genes, and which thus coincides with the Born probabilities. In addition, it seems biased toward "only one of me" thinking (evidence: most people's intuitive rejection of MWI as absurd even before hearing any of the physics, and most people's intuitive sense that if duplicated, 'they' will be the original and 'someone else' will be the copy). The plausible ev-psych explanation for this, ISTM, is that you won't ever encounter another version of your actual...
The third horn is a fake, being capable of being defined in or out of existence at will. If I am indifferent to my state ten seconds from now, it is true; if my current utility function includes a term for my state ten seconds from now, it is false.
The 'thread of subjective experience' is not the issue; whether I throw myself off the cliff will depend on whether I am currently indifferent as to whether my future self will die.
"I still have trouble biting that bullet for some reason.... I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."
Replace "jump off a cliff" with "heroin overdose." Sure you could, and many do. Not caring about the future is actually very common in humans, and it's harder for smart people to understand this attitude because many of us have very good impulse control. But I still find it strange that you seem to want ...
Is there a contradiction in supposing that the total subjective weight increases as unconnected threads of subjective experience come into existence, but copies branching off of an existing thread of subjective experience divide up the weight of the parent thread?
Sentences in this comment asserted with only 95% probability of making sense, read on at own peril.
There's a mainstream program to derive the Born probabilities from physics and decision theory which David Wallace, especially, has done a lot of work on. If I remember correctly, he distinguishes two viewpoints:
Since I have a theory of Correlated decision making, let's use it! :-)
Let's look longer at the Nick Bostrom solution. How much contribution is there towards "feeling I will have won the lottery ten seconds from now" from "feeling I have currently won the lottery"? By the rules of this set-up, each of the happy copies contributes one trillionth towards that result.
(quick and dirty argument to convince you of that: replace the current rules by one saying "we will take the average feeling of victory across the trillion copies"; s...
I've thought about this before and I think I'd have to take the second horn. Argument: assuming we can ignore quantum effects for the moment, imagine setting up a computer running one instance of some mind. There're no other instances anywhere. Shut the machine down. Assuming no dust theory style immortality (which, if there was such a thing, would seem to violate born stats, and given that we actually observe the validity of born stats...), total weight/measure/reality-fluid assigned to that mind goes from 1 to 0, so it looks reasonable that second horn t...
I have an Othello/Reversi playing program.
I tried making it better by applying probabilistic statistics to the game tree, quite like anthropic reasoning. It then became quite bad at playing.
Ordinary minimax with alpha-beta pruning did very well.
Game algorithms that ignore density of states in the game tree, and only focus on minimaxing, do much better. This is a close analogy to the experience trees of Eliezer, and therefore a hint that anthropic reasoning here has some kind of error.
Kim0
Re: "But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?"
Do they get three times as much "weight" - in some moral system?
Er, that depends on the moral system in question.
Speaking of problems I don't know how to solve, here's one that's been gnawing at me for years.
The operation of splitting a subjective worldline seems obvious enough - the skeptical initiate can consider the Ebborians, creatures whose brains come in flat sheets and who can symmetrically divide down their thickness. The more sophisticated need merely consider a sentient computer program: stop, copy, paste, start, and what was one person has now continued on in two places. If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red or green when you wake up with 50% probability. That is, it's a known fact that different versions of you will see red, or alternatively green, and you should weight the two anticipated possibilities equally. (Consider what happens when you're flipping a quantum coin: half your measure will continue into either branch, and subjective probability will follow quantum measure for unknown reasons.)
But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?
Let's suppose that three copies get three times as much experience. (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)
Just as computer programs or brains can split, they ought to be able to merge. If we imagine a version of the Ebborian species that computes digitally, so that the brains remain synchronized so long as they go on getting the same sensory inputs, then we ought to be able to put two brains back together along the thickness, after dividing them. In the case of computer programs, we should be able to perform an operation where we compare each two bits in the program, and if they are the same, copy them, and if they are different, delete the whole program. (This seems to establish an equal causal dependency of the final program on the two original programs that went into it. E.g., if you test the causal dependency via counterfactuals, then disturbing any bit of the two originals, results in the final program being completely different (namely deleted).)
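A minimal sketch of that bitwise merge, treating each program as a list of bits (my own toy code, just to make the counterfactual dependence concrete):

```python
# Merge two program instances bit by bit: identical bits are copied through;
# any difference "deletes" the whole merged program (returns None).

from typing import Optional, Sequence

def merge_programs(a: Sequence[int], b: Sequence[int]) -> Optional[list]:
    if len(a) != len(b):
        return None
    merged = []
    for bit_a, bit_b in zip(a, b):
        if bit_a != bit_b:
            return None        # one differing bit deletes everything
        merged.append(bit_a)   # matching bits are copied into the result
    return merged

# Disturbing any single bit of either original flips the outcome from a full
# copy to None, which is the counterfactual dependence described above.
print(merge_programs([1, 0, 1, 1], [1, 0, 1, 1]))  # [1, 0, 1, 1]
print(merge_programs([1, 0, 1, 1], [1, 0, 0, 1]))  # None
```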
So here's a simple algorithm for winning the lottery:
Buy a ticket. Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere. Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don't win the lottery, then just wake up automatically.
The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing "You won!" As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing.
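For concreteness, here is the back-of-the-envelope arithmetic behind "reasonably sure", under the copy-weighting assumption the post is questioning (the arithmetic is mine):

```python
# Copy-weighted "subjective" probability of waking up to "You won!", assuming
# amount of experience scales with the number of copies.

P_WIN = 1e-9        # a billion-to-one lottery
N_COPIES = 1e12     # copies woken in the winning branch

weight_win = P_WIN * N_COPIES     # 1e3: winning branch, trillion copies
weight_lose = (1 - P_WIN) * 1     # ~1: losing branch, a single copy

print(weight_win / (weight_win + weight_lose))   # ~0.999
```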
Now you could just bite this bullet. You could say, "Sounds to me like it should work fine." You could say, "There's no reason why you shouldn't be able to exert anthropic psychic powers." You could say, "I have no problem with the idea that no one else could see you exerting your anthropic psychic powers, and I have no problem with the idea that different people can send different portions of their subjective futures into different realities."
I find myself somewhat reluctant to bite that bullet, personally.
Nick Bostrom, when I proposed this problem to him, offered that you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds.
To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities. If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1. And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1. But we don't have p("experience win after 15s") = p("experience win after 15s"|"experience win after 5s")*p("experience win after 5s") + p("experience win after 15s"|"experience not-win after 5s")*p("experience not-win after 5s").
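Plugging the scenario's numbers into that identity makes the conflict explicit; the ≈0.999 figure is my arithmetic under the copy-weighting assumption, while the conditionals are the ones stated above:

```latex
\begin{align*}
P(\text{win at 15s})
  &= P(\text{win at 15s} \mid \text{win at 5s}) \, P(\text{win at 5s})
   + P(\text{win at 15s} \mid \text{not-win at 5s}) \, P(\text{not-win at 5s}) \\
  &\approx 1 \times 0.999 + 0 \times 0.001 \approx 0.999 .
\end{align*}
```

Anticipating a loss after fifteen seconds instead puts P(win at 15s) near one in a billion, so biting this bullet really does mean giving up the product rule for subjective probabilities.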
I'm reluctant to bite that bullet too.
And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears. Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.
There are no threads connecting subjective experiences. There are simply different subjective experiences. Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.
I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
Bound to my naive intuitions that can be explained away by obvious evolutionary instincts, you say? It's plausible that I could be forced down this path, but I don't feel forced down it quite yet. It would feel like a fake reduction. I have rather the sense that my confusion here is tied up with my confusion over what sort of physical configurations, or cascades of cause and effect, "exist" in any sense and "experience" anything in any sense, and flatly denying the existence of subjective continuity would not make me feel any less confused about that.
The fourth horn of the trilemma (as 'twere) would be denying that two copies of the same computation had any more "weight of experience" than one; but in addition to the Boltzmann Brain problem in large universes, you might develop similar anthropic psychic powers if you could split a trillion times, have each computation view a slightly different scene in some small detail, forget that detail, and converge the computations so they could be reunified afterward - then you were temporarily a trillion different people who all happened to develop into the same future self. So it's not clear that the fourth horn actually changes anything, which is why I call it a trilemma.
I should mention, in this connection, a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely, if you tried the analogue using quantum branching within a large coherent superposition (e.g. a quantum computer). If you quantum-split into a trillion copies, those trillion copies would have the same total quantum measure after being merged or converged.
It's a remarkable fact that the one sort of branching we do have extensive actual experience with - though we don't know why it behaves the way it does - seems to behave in a very strange way that is exactly right to avoid anthropic superpowers and goes on obeying the standard axioms for conditional probability.
In quantum copying and merging, every "branch" operation preserves the total measure of the original branch, and every "merge" operation (which you could theoretically do in large coherent superpositions) likewise preserves the total measure of the incoming branches.
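A minimal bookkeeping sketch of that conservation property (a toy model of the stated rule, not of quantum mechanics itself):

```python
# "Branching" splits a branch's measure into pieces that sum to the original;
# "merging" adds the incoming measures. Total measure is invariant, unlike
# naive copy counting, where copying multiplies the total.

def branch(measure: float, n: int) -> list:
    """Split one branch into n branches of equal measure."""
    return [measure / n] * n

def merge(measures) -> float:
    """Merge branches; the result carries the sum of the incoming measures."""
    return sum(measures)

total = 1.0
branches = branch(total, 10**6)
assert abs(sum(branches) - total) < 1e-9   # branching preserved the total
print(merge(branches))                     # 1.0 -- merging preserved it too
```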
Great for QM. But it's not clear to me at all how to set up an analogous set of rules for making copies of sentient beings, in which the total number of processors can go up or down and you can transfer processors from one set of minds to another.
To sum up:
I will be extremely impressed if Less Wrong solves this one.