Johnicholas comments on The Anthropic Trilemma - Less Wrong
I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list, and sent off a "copying related probability question" (which is still in my "sent" folder but apparently no longer archived anywhere that Google can find). Both Eliezer and Nick were also participants in that discussion. What are the chances that we're still trying to figure this out 12 years later?
My current position, for what it's worth, is that anticipation and continuity of experience are both evolutionary adaptations that will turn maladaptive when mind copying/merging becomes possible. Theoretically, evolution could have programmed us to use UDT, in which case this dilemma wouldn't exist now, because anticipation and continuity of experience are not part of UDT.
So why don't we just switch over to UDT and consider the problem solved (assuming this kind of self-modification is feasible)? The problem is that many of our preferences are specified in terms of anticipated experience, and there is no obvious way to map those onto UDT preferences. For example, suppose you're about to be tortured in an hour. Should you make as many copies of yourself as you can (copies who won't be tortured) before the hour is up, in order to reduce your anticipation of the torture experience? You have to come up with a way to answer that question before you can switch to UDT.
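To make the puzzle concrete, here is the naive "count the successor-observers" rule that the copying question implicitly invokes. This is a toy formalization of my own (the function name and the equal weighting of copies are my choices, not anything settled), and whether the rule is legitimate is exactly what's at issue:

```python
# Naive copy-counting rule (my toy formalization): each copy is one
# equally-weighted continuation, so N untortured copies dilute the
# anticipated probability of torture to 1/(N+1).

def anticipated_torture_probability(n_copies: int) -> float:
    """One tortured continuation out of (n_copies + 1) total successors."""
    return 1 / (n_copies + 1)

for n in (0, 1, 9, 999):
    print(f"{n} copies -> anticipate torture with p = "
          f"{anticipated_torture_probability(n):.4f}")
```

UDT has no native slot for "anticipation" to dilute this way, which is why the mapping is unclear.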
One approach that I think is promising, which Johnicholas already suggested, is to ask "what would evolution do?" The way I interpret that is: whenever there's an ambiguity in how to map our preferences onto UDT, or where our preferences are incoherent, pick the UDT preference that maximizes evolutionary success.
But a problem with that is that what evolution does depends on where you look. For example, suppose you sample Reality using some weird distribution. (Let's say you heavily favor worlds where lottery numbers always come out to be the digits of pi.) Then you might find a bunch of Bayesians who use that weird distribution as their prior (or the UDT equivalent of that), since they would be the ones having the most evolutionary success in that part of Reality.
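A toy simulation may make the dependence vivid. Everything in the setup below is my own construction (the five-digit lottery, the betting-based fitness, both agents); it just illustrates the logic that the ranking of agents flips with the sampling measure:

```python
import random

# Toy illustration: which prior "wins evolutionarily" depends on the
# measure used to sample worlds. A "world" is a five-digit lottery draw.

PI_DIGITS = [3, 1, 4, 1, 5]  # stand-in for "lottery numbers are digits of pi"

def sample_world(weird_measure: bool) -> list:
    if weird_measure:
        return PI_DIGITS[:]  # the weird measure only ever samples pi-worlds
    return [random.randrange(10) for _ in range(5)]  # uniform measure

def fitness(prior: str, world: list) -> float:
    """Agents bet all resources per their prior; payoff is the probability
    mass they assigned to the world that actually occurred."""
    if prior == "weird":
        return 1.0 if world == PI_DIGITS else 0.0
    return (1 / 10) ** 5  # the uniform prior spreads mass over 10^5 draws

def mean_fitness(prior: str, weird_measure: bool, n: int = 10_000) -> float:
    return sum(fitness(prior, sample_world(weird_measure))
               for _ in range(n)) / n

for weird in (False, True):
    print("measure:", "weird" if weird else "uniform",
          "| uniform agent:", mean_fitness("uniform", weird),
          "| weird agent:", mean_fitness("weird", weird))
# Under the uniform measure, the uniform-prior agent does better; under
# the pi-favoring measure, the weird-prior agent dominates. "Evolutionary
# success" is relative to the sampling measure.
```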
The next thought is that perhaps algorithmic complexity and related concepts can help here. Maybe there is a natural way to define a measure over Reality, to say that most of Reality is here and not there, and then to say that we want to maximize evolutionary success under this measure.
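For concreteness, here is the flavor of measure being gestured at, in a deliberately crude sketch. True Kolmogorov complexity is uncomputable, so this substitutes zlib-compressed length as a rough upper bound; that substitution, and the example "worlds", are my own simplifications:

```python
import zlib

# Sketch of a Solomonoff-style measure: weight each "world" (here, a byte
# string) by 2^-(description length in bits), approximating description
# length with zlib-compressed size. This is only an upper bound on the
# true (uncomputable) complexity.

def description_length_bits(world: bytes) -> int:
    return 8 * len(zlib.compress(world))

def unnormalized_measure(world: bytes) -> float:
    return 2.0 ** -description_length_bits(world)

simple_world = b"01" * 100        # highly regular, compresses well
messy_world = bytes(range(200))   # same length, much less compressible

print(unnormalized_measure(simple_world) > unnormalized_measure(messy_world))
# -> True: simple worlds get exponentially more measure, cashing out
# "most of Reality is here and not there" as "most measure sits on
# simple worlds".
```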
How to define “evolutionary success” is another issue that needs to be resolved in this approach. I think some notion of “amount of Reality under one’s control/influence” (and not “number of copies/descendants”) would make the most sense.
Note - there is a difference between investigating "what would evolution do?" as a jumping-off point for other strategies, and recommending "we should do what evolution does".
Why is it that if I set up a little grid-world on my computer and evolve little agents, I seem to get answers to the question "what does evolution do"? Am I encoding "where to look" into the grid-world somehow?
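Here is a minimal grid-world of the kind described, entirely my own construction: agents carry a single gene (the probability of stepping east), and the experimenter picks where the food is. Move FOOD_AT from the east edge to the west edge and evolution's "answer" flips, which suggests the grid-world's rules are exactly where "where to look" gets encoded:

```python
import random

# Toy grid-world: a 1-D strip of cells. Each agent's genome is a single
# number p_east, the probability of stepping east. The experimenter
# chooses FOOD_AT; that choice (like the strip size, step count, and
# fitness rule) is part of "where to look".

SIZE = 10
FOOD_AT = SIZE - 1  # food on the east edge -- the experimenter's choice

def fitness(p_east: float, trials: int = 50) -> float:
    """Fraction of biased random walks that end on the food cell."""
    hits = 0
    for _ in range(trials):
        x = SIZE // 2
        for _ in range(2 * SIZE):
            x += 1 if random.random() < p_east else -1
            x = max(0, min(SIZE - 1, x))  # stay on the strip
        hits += (x == FOOD_AT)
    return hits / trials

def evolve(pop_size: int = 60, generations: int = 30) -> float:
    pop = [random.random() for _ in range(pop_size)]  # random initial genes
    for _ in range(generations):
        weights = [fitness(p) + 1e-9 for p in pop]    # selection pressure
        pop = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))  # mutation
               for p in random.choices(pop, weights=weights, k=pop_size)]
    return sum(pop) / pop_size

print(f"mean p_east after evolution: {evolve():.2f}")
# Typically close to 1.0 with food in the east; set FOOD_AT = 0 and the
# gene evolves toward 0.0 instead. The "answer evolution gives" was fixed
# the moment the world was specified.
```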