Speaking of problems I don't know how to solve, here's one that's been gnawing at me for years.
The operation of splitting a subjective worldline seems obvious enough - the skeptical initiate can consider the Ebborians, creatures whose brains come in flat sheets and who can symmetrically divide down their thickness. The more sophisticated need merely consider a sentient computer program: stop, copy, paste, start, and what was one person has now continued on in two places. If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red with 50% probability and seeing green with 50% probability when you wake up. That is, it's a known fact that different versions of you will see red, or alternatively green, and you should weight the two anticipated possibilities equally. (Consider what happens when you're flipping a quantum coin: half your measure will continue into either branch, and subjective probability will follow quantum measure for unknown reasons.)
But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors get three times as much weight as someone who runs on one processor?
Let's suppose that three copies get three times as much experience. (If not, then in a Big universe large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)
Just as computer programs or brains can split, they ought to be able to merge. If we imagine a version of the Ebborian species that computes digitally, so that the brains remain synchronized so long as they go on getting the same sensory inputs, then we ought to be able to put two brains back together along the thickness, after dividing them. In the case of computer programs, we should be able to perform an operation that compares the two programs bit by bit, copies each pair of bits that match, and deletes the whole program if any pair differs. (This seems to establish an equal causal dependency of the final program on the two original programs that went into it. E.g., if you test the causal dependency via counterfactuals, then disturbing any bit of either original results in the final program being completely different - namely, deleted.)
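Here is a minimal sketch of that merge operation, assuming the suspended programs can be serialized to byte strings; the name `merge_programs` and the representation are just illustrative:

```python
from typing import Optional

def merge_programs(image_a: bytes, image_b: bytes) -> Optional[bytes]:
    """Compare two suspended program images bit by bit: copy each pair of
    bits that agree, and delete the whole program if any pair differs.
    (So flipping any single bit of either input changes the output all the
    way from "a program" to "nothing", which is the point of the
    counterfactual test above.)"""
    if len(image_a) != len(image_b):
        return None                    # differing lengths count as a difference
    merged = bytearray()
    for byte_a, byte_b in zip(image_a, image_b):
        if byte_a != byte_b:           # some bit in this pair of bytes differs...
            return None                # ...so the merged program is deleted
        merged.append(byte_a)          # the bits agree: copy them into the result
    return bytes(merged)
```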
So here's a simple algorithm for winning the lottery:
Buy a ticket. Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere. Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don't win the lottery, then just wake up automatically.
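In code, the protocol might look something like the sketch below. Everything in it is hypothetical: `suspend`, `make_copies`, `wake`, and `merge` stand for whatever primitives the computational environment would need to provide; here they're stubbed out with trivial placeholders just so the skeleton runs.

```python
import time

def suspend(thing):
    """Placeholder for a hypothetical 'pause this program (or programs)' primitive."""
    return thing

def make_copies(mind, n):
    """Placeholder: track the copy count rather than materializing 10**12 objects."""
    return (mind, n)

def wake(thing):
    """Placeholder for 'start running this program (or these programs)'."""
    return thing

def merge(copies):
    """Placeholder for the bit-compare merge sketched earlier: since the copies
    are identical, they collapse back into a single image."""
    image, _count = copies
    return image

def lottery_protocol(won_the_drawing, mind, n_copies=10**12):
    """The procedure described above: suspend before the quantum drawing, and in
    the winning branch briefly run a trillion copies before merging them."""
    mind = suspend(mind)
    if won_the_drawing:
        copies = wake(make_copies(mind, n_copies))   # a trillion of you see "You won!"
        time.sleep(10)                               # ten subjective seconds of winning
        mind = merge(suspend(copies))                # merge back down to a single copy
    return wake(mind)                                # if you lost, just wake up directly
```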
The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing "You won!" As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing.
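To spell out the arithmetic under the earlier assumption that copies multiply experience (treating subjective anticipation as proportional to measure-weighted branch probability, which is precisely the assumption at issue):

```python
# Toy numbers: weight each branch by (objective probability) x (number of copies
# experiencing that branch during the ten seconds).
p_win = 1e-9                  # a billion-to-one lottery
copies_if_win = 1e12          # a trillion copies woken in the winning branch
copies_if_lose = 1.0          # just the one of you in the losing branch

win_measure = p_win * copies_if_win              # 1e3
lose_measure = (1 - p_win) * copies_if_lose      # ~1.0

p_subjectively_see_win = win_measure / (win_measure + lose_measure)
print(p_subjectively_see_win)                    # ~0.999: you anticipate "You won!"
```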
Now you could just bite this bullet. You could say, "Sounds to me like it should work fine." You could say, "There's no reason why you shouldn't be able to exert anthropic psychic powers." You could say, "I have no problem with the idea that no one else could see you exerting your anthropic psychic powers, and I have no problem with the idea that different people can send different portions of their subjective futures into different realities."
I find myself somewhat reluctant to bite that bullet, personally.
Nick Bostrom, when I proposed this problem to him, offered that you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds.
To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities. If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1. And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1. But we don't have p("experience win after 15s") = p("experience win after 15s"|"experience win after 5s")*p("experience win after 5s") + p("experience win after 15s"|"experience not-win after 5s")*p("experience not-win after 5s").
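With toy numbers: the measure-weighted calculation above gives p("experience win after 5s") of roughly 0.999, and the two conditional probabilities are ~1 and ~0, so the right-hand side comes out to ~0.999; but after the merge your measure is back to one copy per branch, so the left-hand side is just the ordinary billion-to-one odds:

```python
p_win_at_5s = 0.999                    # from the measure-weighted calculation above
p_win_at_15s_given_win_at_5s = 1.0     # winners still remember winning after the merge
p_win_at_15s_given_lose_at_5s = 0.0    # losers don't spontaneously remember a win

rhs = (p_win_at_15s_given_win_at_5s * p_win_at_5s
       + p_win_at_15s_given_lose_at_5s * (1 - p_win_at_5s))   # ~0.999
lhs = 1e-9     # after the merge, measure is back to one copy per branch,
               # so this is just the ordinary odds of having won
print(lhs, rhs)                        # 1e-9 vs ~0.999: the identity fails
```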
I'm reluctant to bite that bullet too.
And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears. Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.
There are no threads connecting subjective experiences. There are simply different subjective experiences. Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.
I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
Bound to my naive intuitions that can be explained away by obvious evolutionary instincts, you say? It's plausible that I could be forced down this path, but I don't feel forced down it quite yet. It would feel like a fake reduction. I have rather the sense that my confusion here is tied up with my confusion over what sort of physical configurations, or cascades of cause and effect, "exist" in any sense and "experience" anything in any sense, and flatly denying the existence of subjective continuity would not make me feel any less confused about that.
The fourth horn of the trilemma (as 'twere) would be denying that two copies of the same computation had any more "weight of experience" than one; but in addition to the Boltzmann Brain problem in large universes, you might develop similar anthropic psychic powers if you could split a trillion times, have each computation view a slightly different scene in some small detail, forget that detail, and converge the computations so they could be reunified afterward - then you were temporarily a trillion different people who all happened to develop into the same future self. So it's not clear that the fourth horn actually changes anything, which is why I call it a trilemma.
I should mention, in this connection, a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely, if you tried the analogue using quantum branching within a large coherent superposition (e.g. a quantum computer). If you quantum-split into a trillion copies, those trillion copies would have the same total quantum measure after being merged or converged.
It's a remarkable fact that the one sort of branching we do have extensive actual experience with - though we don't know why it behaves the way it does - seems to behave in a very strange way that is exactly right to avoid anthropic superpowers and goes on obeying the standard axioms for conditional probability.
In quantum copying and merging, every "branch" operation preserves the total measure of the original branch, and every "merge" operation (which you could theoretically do in large coherent superpositions) likewise preserves the total measure of the incoming branches.
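As a toy numerical check of that claim (my own simplification: "splitting into N branches" is a unitary that spreads the winning amplitude evenly over N orthogonal sub-states, and "merging" is applying the inverse of that unitary):

```python
import numpy as np

p_win = 1e-9
amp_win = np.sqrt(p_win)                 # amplitude of the winning branch

n_copies = 10**6                         # split the win branch into a million sub-branches
sub_amps = np.full(n_copies, amp_win / np.sqrt(n_copies))

measure_before = amp_win**2              # 1e-9
measure_split = np.sum(sub_amps**2)      # still ~1e-9, not 1e-9 * n_copies

# Merging = applying the inverse of the splitting unitary; the merged amplitude
# is the overlap with the uniform superposition, not a plain sum of amplitudes.
merged_amp = np.sum(sub_amps) / np.sqrt(n_copies)
measure_merged = merged_amp**2           # back to ~1e-9

print(measure_before, measure_split, measure_merged)
```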
Great for QM. But it's not clear to me at all how to set up an analogous set of rules for making copies of sentient beings, in which the total number of processors can go up or down and you can transfer processors from one set of minds to another.
To sum up:
- The first horn of the anthropic trilemma is to confess that there are simple algorithms whereby you can, undetectably to anyone but yourself, exert the subjective equivalent of psychic powers - use a temporary expenditure of computing power to permanently send your subjective future into particular branches of reality.
- The second horn of the anthropic trilemma is to deny that subjective joint probabilities behave like probabilities - you can coherently anticipate winning the lottery after five seconds, anticipate the experience of having lost the lottery after fifteen seconds, and anticipate that once you experience winning the lottery you will experience having still won it ten seconds later.
- The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".
- The fourth horn of the anthropic trilemma is to deny that increasing the number of physical copies increases the weight of an experience, which leads into Boltzmann brain problems, and may not help much (because alternatively designed brains may be able to diverge and then converge as different experiences have their details forgotten).
- The fifth horn of the anthropic trilemma is to observe that the only form of splitting we have accumulated experience with, the mysterious Born probabilities of quantum mechanics, would seem to avoid the trilemma; but it's not clear how analogous rules could possibly govern information flows in computer processors.
I will be extremely impressed if Less Wrong solves this one.
I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list, and sent off a "copying related probability question" (which is still in my "sent" folder but apparently no longer archived anywhere that Google can find). Both Eliezer and Nick were also participants in that discussion. What are the chances that we're still trying to figure this out 12 years later?
My current position, for what it's worth, is that anticipation and continuity of experience are both evolutionary adaptations that will turn maladaptive when mind copying/merging becomes possible. Theoretically, evolution could have programmed us to use UDT, in which case this dilemma wouldn't exist now, because anticipation and continuity of experience are not part of UDT.
So why don't we just switch over to UDT, and consider the problem solved (assuming this kind of self-modification is feasible)? The problem with that is that many of our preferences are specified in terms of anticipation of experience, and there is no obvious way to map those onto UDT preferences. For example, suppose you’re about to be tortured in an hour. Should you make as many copies of yourself as you can (who won’t be tortured) before the hour is up, in order to reduce your anticipation of the torture experience? You have to come up with a way to answer that question before you can switch to UDT.
One approach that I think is promising, which Johnicholas already suggested, is to ask "what would evolution do?" The way I interpret that is: whenever there’s an ambiguity in how to map our preferences onto UDT, or where our preferences are incoherent, pick the UDT preference that maximizes evolutionary success.
But a problem with that is that what evolution does depends on where you look. For example, suppose you sample Reality using some weird distribution. (Let’s say you heavily favor worlds where lottery numbers always come out to be the digits of pi.) Then you might find a bunch of Bayesians who use that weird distribution as their prior (or the UDT equivalent of that), since they would be the ones having the most evolutionary success in that part of Reality.
The next thought is that perhaps algorithmic complexity and related concepts can help here. Maybe there is a natural way to define a measure over Reality, to say that most of Reality is here, and not there. And then say we want to maximize evolutionary success under this measure.
How to define “evolutionary success” is another issue that needs to be resolved in this approach. I think some notion of “amount of Reality under one’s control/influence” (and not “number of copies/descendants”) would make the most sense.
I'm not really aware of any significant progress since 12 years ago. I've mostly given up working on this problem, or most object-level philosophical problems, due to the slow pace of progress and perceived opportunity costs. (Spending time on ensuring a future where progress on such problems can continue to be made, e.g., fighting against x-risk and value/philosophical lock-in or drift, seems a better bet even for the part of me that really wants to solve philosophical problems.) It seems like there's been a decline in other LWers' interest in the problem, maybe for similar reasons?