Wei_Dai comments on The Anthropic Trilemma - Less Wrong
I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list and sent off a "copying-related probability question" (which is still in my "sent" folder but apparently no longer archived anywhere that Google can find). Both Eliezer and Nick were also participants in that discussion. What are the chances that we're still trying to figure this out 12 years later?
My current position, for what it's worth, is that anticipation and continuity of experience are both evolutionary adaptations that will become maladaptive once mind copying/merging is possible. Theoretically, evolution could have programmed us to use UDT, in which case this dilemma wouldn't exist now, because anticipation and continuity of experience are not part of UDT.
So why don't we just switch over to UDT and consider the problem solved (assuming this kind of self-modification is feasible)? The problem is that many of our preferences are specified in terms of anticipation of experience, and there is no obvious way to map those onto UDT preferences. For example, suppose you're about to be tortured in an hour. Should you make as many copies of yourself as you can (copies who won't be tortured) before the hour is up, in order to reduce your anticipation of the torture experience? You have to find a way to answer that question before you can switch to UDT.
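To make the torture example concrete, here is a toy calculation under one assumed model of anticipation: that it spreads uniformly over all successors who share your current memories. The function name and the uniform weighting are illustrative assumptions for this sketch, not part of any settled theory.

```python
def anticipated_torture_probability(num_copies_made: int) -> float:
    """Toy model: you will be tortured in an hour, but first you make
    N copies who won't be.  If anticipation is distributed uniformly
    over the N + 1 successors (the original plus the copies), the
    'probability' of finding yourself as the one tortured is 1/(N+1)."""
    return 1 / (num_copies_made + 1)

# Under this model, copying drives the anticipated probability toward
# zero, even though exactly one person still gets tortured:
for n in (0, 1, 9, 999):
    print(n, anticipated_torture_probability(n))
```

This is what makes the mapping non-obvious: the UDT-style payoff (one torture happens regardless) is unchanged by N, while the anticipation-based quantity falls with every copy made.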
One approach that I think is promising, which Johnicholas already suggested, is to ask "what would evolution do?" The way I interpret that is, whenever there’s an ambiguity in how to map our preferences onto UDT, or where our preferences are incoherent, pick the UDT preference that maximizes evolutionary success.
But a problem with that is that what evolution does depends on where you look. For example, suppose you sample Reality using some weird distribution (say, one that heavily favors worlds where lottery numbers always come out to be the digits of pi). Then you might find a bunch of Bayesians who use that weird distribution as their prior (or the UDT equivalent of it), since they would be the ones having the most evolutionary success in that part of Reality.
The next thought is that perhaps algorithmic complexity and related concepts can help here. Maybe there is a natural way to define a measure over Reality, to say that most of Reality is here, and not there. And then say we want to maximize evolutionary success under this measure.
How to define “evolutionary success” is another issue that needs to be resolved in this approach. I think some notion of “amount of Reality under one’s control/influence” (and not “number of copies/descendants”) would make the most sense.
Seeing that others here are trying to figure out how to make probabilities of anticipated subjective experiences work, I should perhaps mention that I spent quite a bit of time near the beginning of those 12 years trying to do the same thing. As you can see, I eventually gave up and decided that such probabilities shouldn't play a role in a decision theory for agents who can copy and merge themselves.
This isn't to discourage others from exploring this approach. There could easily be something I overlooked that a fresh pair of eyes can find. Or maybe someone can give a conclusive argument explaining why it can't work.
BTW, notice that UDT not only doesn't involve anticipatory probabilities, it doesn't even involve indexical probabilities (i.e. answers to "where am I likely to be, given my memories and observations?" as opposed to "what should I expect to see later?"). It seems fairly obvious that if you don't have indexical probabilities, then you can't have anticipatory probabilities. (See ETA below.) I tried to give an argument against indexical probabilities, which apparently nobody (except maybe Nesov) liked. Can anyone do better?
ETA: In the Absent-Minded Driver problem, suppose after you make the decision to EXIT or CONTINUE, you get to see which intersection you're actually at (and this is also forgotten by the time you get to the next intersection). Then clearly your anticipatory probability for seeing 'X', if it exists, ought to be the same as your indexical probability of being at X.
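For readers without the Absent-Minded Driver problem in front of them, here is a short sketch using the standard payoffs from Piccione and Rubinstein's formulation (exiting at the first intersection X pays 0, continuing and then exiting at the second intersection Y pays 4, continuing at both pays 1); the grid search for the planning-optimal continue-probability is my own illustrative addition.

```python
def expected_payoff(p: float) -> float:
    """Expected payoff for a driver who CONTINUEs with probability p
    at every intersection (he cannot tell X from Y, so he must use
    the same p at both).  Exit at X: payoff 0; continue at X then
    exit at Y: payoff 4; continue at both: payoff 1."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Planning-optimal policy: maximize 4p - 3p^2, which peaks at p = 2/3
# with expected payoff 4/3.  A simple grid search recovers this:
best_p = max((i / 10000 for i in range(10001)), key=expected_payoff)
print(round(best_p, 4), round(expected_payoff(best_p), 4))
```

Note also that a driver who continues with probability p always visits X but visits Y only with probability p, so a naive indexical probability of currently being at X is 1/(1+p); the ETA's point is that an anticipatory probability of later seeing 'X', if one exists at all, would have to agree with that indexical quantity.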