The Anthropic Trilemma

Post author: Eliezer_Yudkowsky 27 September 2009 01:47AM

Speaking of problems I don't know how to solve, here's one that's been gnawing at me for years.

The operation of splitting a subjective worldline seems obvious enough - the skeptical initiate can consider the Ebborians, creatures whose brains come in flat sheets and who can symmetrically divide down their thickness.  The more sophisticated need merely consider a sentient computer program: stop, copy, paste, start, and what was one person has now continued on in two places.  If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red or green when you wake up with 50% probability.  That is, it's a known fact that different versions of you will see red, or alternatively green, and you should weight the two anticipated possibilities equally.  (Consider what happens when you're flipping a quantum coin: half your measure will continue into either branch, and subjective probability will follow quantum measure for unknown reasons.)

But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?  Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?

Let's suppose that three copies get three times as much experience.  (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)

Just as computer programs or brains can split, they ought to be able to merge.  If we imagine a version of the Ebborian species that computes digitally, so that the brains remain synchronized so long as they go on getting the same sensory inputs, then we ought to be able to put two brains back together along the thickness, after dividing them.  In the case of computer programs, we should be able to perform an operation where we compare each two bits in the program, and if they are the same, copy them, and if they are different, delete the whole program.  (This seems to establish an equal causal dependency of the final program on the two original programs that went into it.  E.g., if you test the causal dependency via counterfactuals, then disturbing any bit of the two originals, results in the final program being completely different (namely deleted).)
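A toy sketch of that compare-and-merge operation (illustrative only; it compares byte-by-byte rather than bit-by-bit, which is equivalent for the equality test):

```python
def merge(program_a: bytes, program_b: bytes):
    """Merge two program images by comparison.

    If every bit matches, return the single merged program; if any
    bit differs, delete the whole thing.  The result thus depends
    counterfactually on every bit of both originals: disturb either
    one anywhere, and the output changes completely (to nothing).
    """
    if len(program_a) != len(program_b):
        return None  # differing lengths count as a difference
    for byte_a, byte_b in zip(program_a, program_b):
        if byte_a != byte_b:
            return None  # any differing bit deletes the program
    return program_a  # identical copies: the merged program
```
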

So here's a simple algorithm for winning the lottery:

Buy a ticket.  Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere.  Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery.  Then suspend the programs, merge them again, and start the result.  If you don't win the lottery, then just wake up automatically.
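Under the assumption that subjective measure is proportional to copy count, the bookkeeping behind this recipe can be caricatured in a few lines (everything here is illustrative arithmetic, not a real procedure):

```python
ODDS = 10**9      # one-in-a-billion lottery
COPIES = 10**12   # copies woken for ten seconds in the winning branch

def subjective_win_probability():
    """Chance of 'waking up' as a winner, if experience is weighted
    by the number of copies running at wake-up time."""
    winning_measure = (1 / ODDS) * COPIES  # one branch, a trillion copies
    losing_measure = (1 - 1 / ODDS) * 1    # every other branch, one copy
    return winning_measure / (winning_measure + losing_measure)

# With a trillion copies against billion-to-one odds, the winning
# branch carries measure ~1000 against ~1 for all the losing branches,
# so the "subjective" chance of waking up a winner is about 1000/1001.
```
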

The odds of winning the lottery are ordinarily a billion to one.  But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion.  So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing "You won!"  As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing.

Now you could just bite this bullet.  You could say, "Sounds to me like it should work fine."  You could say, "There's no reason why you shouldn't be able to exert anthropic psychic powers."  You could say, "I have no problem with the idea that no one else could see you exerting your anthropic psychic powers, and I have no problem with the idea that different people can send different portions of their subjective futures into different realities."

I find myself somewhat reluctant to bite that bullet, personally.

Nick Bostrom, when I proposed this problem to him, offered that you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds.

To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities.  If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1.  And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1.  But we don't have p("experience win after 15s") = p("experience win after 15s"|"experience win after 5s")*p("experience win after 5s") + p("experience win after 15s"|"experience not-win after 5s")*p("experience not-win after 5s").
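A quick numerical check of the identity Bostrom's answer has to give up, using the lottery setup above treated as subjective measures (a sketch; the numbers are the post's, the bookkeeping is mine):

```python
ODDS = 10**9      # billion-to-one lottery
COPIES = 10**12   # trillion copies in the winning branch

# Subjective probabilities at 5 seconds, with measure = copy count:
p_win_5s = (COPIES / ODDS) / (COPIES / ODDS + (1 - 1 / ODDS))  # ~0.999
p_lose_5s = 1 - p_win_5s

# Conditionals: whoever wins stays a winner, whoever loses stays a loser.
p_win_15s_given_win_5s = 1.0
p_win_15s_given_lose_5s = 0.0

# The product rule then demands:
p_win_15s_product_rule = (p_win_15s_given_win_5s * p_win_5s
                          + p_win_15s_given_lose_5s * p_lose_5s)  # ~0.999

# But after the merge, measure is back to one copy per branch, so
# Bostrom's anticipation is the ordinary lottery probability:
p_win_15s_direct = 1 / ODDS  # 1e-9

# ~0.999 versus 1e-9: the joint subjective probabilities are not the
# product of the conditionals, which is the bullet the second horn bites.
```
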

I'm reluctant to bite that bullet too.

And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears.  Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears.  In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.

There are no threads connecting subjective experiences.  There are simply different subjective experiences.  Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.

I still have trouble biting that bullet for some reason.  Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?"  I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own.  I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.

Bound to my naive intuitions that can be explained away by obvious evolutionary instincts, you say?  It's plausible that I could be forced down this path, but I don't feel forced down it quite yet.  It would feel like a fake reduction.  I have rather the sense that my confusion here is tied up with my confusion over what sort of physical configurations, or cascades of cause and effect, "exist" in any sense and "experience" anything in any sense, and flatly denying the existence of subjective continuity would not make me feel any less confused about that.

The fourth horn of the trilemma (as 'twere) would be denying that two copies of the same computation had any more "weight of experience" than one; but in addition to the Boltzmann Brain problem in large universes, you might develop similar anthropic psychic powers if you could split a trillion times, have each computation view a slightly different scene in some small detail, forget that detail, and converge the computations so they could be reunified afterward - then you were temporarily a trillion different people who all happened to develop into the same future self.  So it's not clear that the fourth horn actually changes anything, which is why I call it a trilemma.

I should mention, in this connection, a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely, if you tried the analogue using quantum branching within a large coherent superposition (e.g. a quantum computer).  If you quantum-split into a trillion copies, those trillion copies would have the same total quantum measure after being merged or converged.

It's a remarkable fact that the one sort of branching we do have extensive actual experience with - though we don't know why it behaves the way it does - seems to behave in a very strange way that is exactly right to avoid anthropic superpowers and goes on obeying the standard axioms for conditional probability.

In quantum copying and merging, every "branch" operation preserves the total measure of the original branch, and every "merge" operation (which you could theoretically do in large coherent superpositions) likewise preserves the total measure of the incoming branches.
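A minimal numerical sketch of that bookkeeping (pure linear algebra, no interpretation): splitting a branch of amplitude a into n equal sub-branches gives each amplitude a/sqrt(n), and total squared-amplitude measure is unchanged by both the split and the merge.

```python
import math

def split(amplitude: float, n: int) -> list:
    """Split one branch into n equal-amplitude sub-branches."""
    return [amplitude / math.sqrt(n)] * n

def merge(branches: list) -> float:
    """Coherently recombine equal sub-branches into one branch."""
    return sum(branches) / math.sqrt(len(branches))

def measure(branches: list) -> float:
    """Total Born measure: sum of squared amplitudes."""
    return sum(abs(a) ** 2 for a in branches)

a = 1 / math.sqrt(10**9)   # a one-in-a-billion branch
copies = split(a, 1000)    # split it (a trillion ways in the story;
                           # a thousand here, the arithmetic is the same)
# measure([a]) == measure(copies) == measure([merge(copies)]),
# up to floating point: splitting buys no extra measure.
```
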

Great for QM.  But it's not clear to me at all how to set up an analogous set of rules for making copies of sentient beings, in which the total number of processors can go up or down and you can transfer processors from one set of minds to another.

To sum up:

  • The first horn of the anthropic trilemma is to confess that there are simple algorithms whereby you can, undetectably to anyone but yourself, exert the subjective equivalent of psychic powers - use a temporary expenditure of computing power to permanently send your subjective future into particular branches of reality.
  • The second horn of the anthropic trilemma is to deny that subjective joint probabilities behave like probabilities - you can coherently anticipate winning the lottery after five seconds, anticipate the experience of having lost the lottery after fifteen seconds, and anticipate that once you experience winning the lottery you will experience having still won it ten seconds later.
  • The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".
  • The fourth horn of the anthropic trilemma is to deny that increasing the number of physical copies increases the weight of an experience, which leads into Boltzmann brain problems, and may not help much (because alternatively designed brains may be able to diverge and then converge as different experiences have their details forgotten).
  • The fifth horn of the anthropic trilemma is to observe that the only form of splitting we have accumulated experience with, the mysterious Born probabilities of quantum mechanics, would seem to avoid the trilemma; but it's not clear how analogous rules could possibly govern information flows in computer processors.

I will be extremely impressed if Less Wrong solves this one.

Comments (218)

Comment author: Furcas 27 September 2009 03:03:24AM *  17 points [-]

Whatever the correct answer is, the first step towards it has to be to taboo words like "experience" in sentences like, "But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?"

What making copies is, is creating multiple instances of the same pattern. If you make two copies of a pattern, there are twice as many instances but only one pattern, obviously.

Are there, then, two of 'you'? Depends what you mean by 'you'. Has the weight of experience increased? Depends what you mean by 'experience'. Think in terms of patterns and instances of patterns, and these questions become trivial.

I feel a bit strange having to explain this to Eliezer Yudkowsky, of all people.

Comment author: Eliezer_Yudkowsky 27 September 2009 04:42:58AM 8 points [-]

Are there, then, two of 'you'? Depends what you mean by 'you'.

Can I redefine what I mean by "me" and thereby expect that I will win the lottery? Can I anticipate seeing "You Win" when I open my eyes? It still seems to me that expectation exists at a level where I cannot control it quite so freely, even by modifying my utility function. Perhaps I am mistaken.

Comment author: SilasBarta 28 September 2009 06:11:10PM 9 points [-]

I think the conflict is resolved by backing up to the point where you say that multiple copies of yourself count as more subjective experience weight (and therefore a higher chance of experiencing).

But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?

Let's suppose that three copies get three times as much experience. (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)

I have a top-level post partly written up where I attempt to reduce "subjective experience" and show why your reductio about the Boltzmann Brain doesn't follow, but here's a summary of my reasoning:

Subjective experience appears to require a few components: first, forming mutual information with its space/time environment. Second, forming M/I with its past states, though of course not perfectly.

Now, look at the third trilemma horn: Britney Spears's mind does not have M/I with your past memories. So it is flat-out incoherent to speak of "you" bouncing between different people: the chain of mutual information (your memories) is your subjective experience. This puts you in the position of having to say that "I know everything about the universe's state, but I also must posit a causally-impotent thing called the 'I' of Silas Barta" -- which is an endorsement of epiphenomenalism.

Now, look back at the case of copying yourself: these copies retain mutual information with each other. They have each other's exact memory. They are experiencing (by stipulation) the same inputs. So they have a total of one being's subjective experience, and only count once. From the perspective of some computer that runs the universe, it does not need additional data to store each copy, but rather, just the first.

The reason the Boltzmann Brain scenario doesn't follow is this: while each copy knows the output of a copy, they would still not have mutual information with the far-off Big Universe copy, because they don't know where it is! In the same way, a wall's random molecular motions do not have a copy of me, even though, under some interpretation, they will emulate me at some point.

Comment author: Eliezer_Yudkowsky 28 September 2009 07:40:36PM 4 points [-]

I see! So you're identifying the number of copies with the number of causally distinct copies - distinct in the causality of a physical process. So copying on a computer does not produce distinct people, but spontaneous production in a distant galaxy does. Thus real people would outweigh Boltzmann brains.

But what about causally distinct processes that split, see different tiny details, and then merge via forgetting?

(Still, this idea does seem to me like progress! Like we could get a bit closer to the "magical rightness" of the Born rules this way.)

Comment author: SilasBarta 03 October 2009 04:00:31AM *  9 points [-]

Actually, let me revise that: I made it more complicated than it needs to be. Unless I'm missing something (and this does seem too simple), you can easily resolve the dilemma this way:

Copying your upload self does multiply your identities but adds nothing to your anticipated probabilities that stem from quantum branching.

So here's what you should expect:

-There's still a 1 in a billion chance of experiencing winning the lottery.

-In the event you win the lottery, you will also experience being among a trillion copies of yourself, each of whom also has this experience. Note the critical point: since they all wake up in the same Everett branch, their subjective experience does not get counted in at the same "level" as the experience of the lottery loser.

-If you merge after winning the lottery you should expect, after the merge, to remember winning the lottery, and some random additional data that came from the different experiences the different copies had.

-This sums to: ~100% chance of losing the lottery, 1 in a billion chance of winning the lottery plus forgetting a few details.

-Regarding the implications of self-copying in general: Each copy (or original or instantiation or whatever -- I'll just say "copy" for brevity) feels just like you. Depending on how the process was actually carried out, the group of you could trace back which one was the source, and which one's algorithm was instilled into an empty shell. If the process was carried out while you were asleep, you should assign an equal probability of being any given copy.

After the copy, your memories diverge and you have different identities. Merging combines the post-split memories into one person and then deletes such memories until you're left with as much subjective time-history as if you had been one person the whole time, meaning you forget most of what happened in any given copy -- kind of like the memory you have of your dreams when you wake up.

Comment author: UnholySmoke 29 September 2009 03:42:50PM 2 points [-]

Yeah I get into trouble there. It feels as though two identical copies of a person = 1 pattern = no more people than before copying. But flip one bit and do you suddenly have two people? Can't be right.

That said, the reason we value each person is because of their individuality. The more different two minds, the closer they are to two separate people? Erk.

Silas, looking forward to that post.

Comment author: gwern 10 October 2009 02:43:15AM *  4 points [-]

But flip one bit and do you suddenly have two people? Can't be right.

Why not? Imagine that bit is the memory/knowledge of which copy they are. After the copying, each copy naturally is curious what happened, and recalls that bit. Now, if you had 1 person appearing in 2 places, it should be that every thought would be identical, right? Yet one copy will think '1!'; the other will think '0!'. As 1 != 0, this is a contradiction.

Not enough of a contradiction? Imagine further that the original had resolved to start thinking about hot sexy Playboy pinups if it was 1, but to think about all his childhood sins if 0. Or he decides quite arbitrarily to become a Sufi Muslim if 0, and a Mennonite if 1. Or... (insert arbitrarily complex mental processes contingent on that bit).

At some point you will surely admit that we now have 2 people and not just 1; but the only justifiable step at which to say they are 2 and not 1 is the first difference.

Comment author: UnholySmoke 12 October 2009 03:49:55PM 8 points [-]

At some point you will surely admit that we now have 2 people and not just 1

Actually I won't. While I grok your approach completely, I'd rather say my concept of 'an individual' breaks down once I have two minds with one bit's difference, or two identical minds, or any of these borderline cases we're so fond of.

Say I have two optimisers with one bit's difference. If that bit means one copy converts to Sufism and the other to Mennonism, then sure, two different people. If that one bit is swallowed up in later neural computations due to the coarse-grained-ness of the wetware, then we're back to one person since the two are, once again, functionally identical. Faced with contradictions like that, I'm expecting our idea of personal identity to go out the window pretty fast once tech like this actually arrives. Greg Egan's Diaspora pretty much nails this for me, have a look.

All your 'contradictions' go out the window once you let go of the idea of a mind as an indivisible unit. If our concept of identity is to have any value (and it really has to) then we need to learn to think more like reality, which doesn't care about things like 'one bit's difference'.

Comment author: gwern 12 October 2009 11:21:49PM 0 points [-]

If that one bit is swallowed up in later neural computations due to the coarse-grained-ness of the wetware, then we're back to one person since the two are, once again, functionally identical.

Ack. So if I understand you right, your alternative to bit-for-bit identity is to loosen it to some sort of future similarity, which can depend on future actions and outcomes; or in other words, there's a radical indeterminacy about even the minds in our example: are they same or are they different, who knows, it depends on whether the Sufism comes out in the wash! Ask me later; but then again, even then I won't be sure whether those 2 were the same when we started them running (always in motion the future is).

That seems like quite a bullet to bite, and I wonder whether you can hold to any meaningful 'individual', whether the difference be bit-wise or no. Even 2 distant non-borderline minds might grow into each other.

Comment author: UnholySmoke 13 October 2009 01:19:00PM 0 points [-]

I wonder whether you can hold to any meaningful 'individual', whether the difference be bit-wise or no.

Indeed, that's what I'm driving at.

Harking back to my earlier comment, changing a single bit and suddenly having a whole new person is where my problem arises. If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn't accept that. With that in mind, I prefer to bite that bullet than talk about degrees of person-hood.

Comment author: gwern 14 October 2009 12:38:50AM 0 points [-]

If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn't accept that.

Here's an intuition for you: you take the number 5 and add 1 to it; then you subtract 1 from it; don't you have what you started with?

With that in mind, I prefer to bite that bullet than talk about degrees of person-hood.

Well, I can't really argue with that. As long as you realize you're biting that bullet, I think we're still in a situation where it's just dueling intuitions. (Your intuition says one thing, mine another.)

Comment author: Psy-Kosh 10 October 2009 02:47:49AM 0 points [-]

The downside is that it's not really that reductionistic.

What if you flip a bit in part of an offline memory store that you're not consciously thinking about at the time or such?

Comment author: gwern 10 October 2009 03:05:16AM 1 point [-]

What if I hack & remove $100 from your bank account. Are you just as wealthy as you were before, because you haven't looked? If the 2 copies simply haven't looked or otherwise are still unaware, that doesn't mean they are the same. Their possible futures diverge.

And, sure, it's possible they might never realize - we could merge them back before they notice, just as I could restore the money before the next time you checked, but I think we would agree that I still committed a crime (theft) with your money; why couldn't we feel that there was a crime (murder) in the merging?

Comment author: UnholySmoke 19 February 2010 12:03:44PM 1 point [-]

What if I hack & remove $100 from your bank account. Are you just as wealthy as you were before, because you haven't looked?

Standard Dispute. If wealthy = same amount of money in the account, no. If wealthy = how rich you judge yourself to be, yes. The fact that 'futures diverge' is irrelevant up until the moment those two different pieces of information have causal contact with the brain. Until that point, yes, they are 'the same'.

Comment author: Psy-Kosh 10 October 2009 05:22:19AM 0 points [-]

Huh? My point is a bitflip in a non conscious part, before it influences any of the conscious processing, well, if prior to that bit flip you would have said there was only one being, then I'd say after that they'd still not yet diverged. Or at least, not entirely.

As far as a merging, well, in that case who, precisely, is the one that's being killed?

Comment author: gwern 10 October 2009 03:54:43PM 0 points [-]

So only anything in immediate consciousness counts? Fine, let's remove all of the long-term memories of one of the copies - after all, he's not thinking about his childhood...

As far as a merging, well, in that case who, precisely, is the one that's being killed?

Obviously whichever one isn't there afterwards; if the bit is 1, then 0 got killed off & vice versa. If we erase both copies and replace with the original, then both were killed.

Comment author: SilasBarta 28 September 2009 09:30:58PM 1 point [-]

I don't know; I'm still working through the formalism and drawing causal networks. And I just realized I should probably re-assimilate all the material in your Timeless Identity post, to see the relationship between identity and subjective experience. My brain hurts.

For now, let me just mention that I was trying to do something similar to what you did when identifying what d-connects the output of a calculator on Mars and Venus doing the same calculation. There's an (imperfect) analog to that, if you imagine a program "causing" its two copies, which each then get different input. They can still make inferences about each other despite being d-separated by knowledge of their pre-fork state. The next step is to see how this mutual information relates to the kind between one sentient program's subsequent states.

And, for bonus points, make sure to eliminate time by using the thermodynamic arrow and watch the entropy gain from copying a program.

Comment author: Eliezer_Yudkowsky 29 September 2009 03:55:13PM 0 points [-]

...okay, that part didn't make any particular sense to me.

Comment author: SilasBarta 29 September 2009 04:24:42PM *  0 points [-]

Heh, maybe you just had read more insight into my other comment than there actually was. Let me try to rephrase the last:

I'm starting from the perspective of viewing subjective experience as something that forms mutual information with its space/time surroundings, and with its past states (and has some other attributes I'll add later). This means that identifying which experience you will have in the future is a matter of finding which bodies have mutual information with which.

M/I can be identified by spotting inferences in a Bayesian causal network. So what would a network look like that has a sentient program being copied? You'd show the initial program as being the parent of two identical programs. But, as sentient programs with subjective experience, they remember (most of) their state before the split. This knowledge has implications for what inferences one of them can make about the other, and therefore how much mutual information they will have, which in turn has implications for how their subjective experiences are linked.

My final sentence was noting the importance of checking the thermodynamic constraints on the processes going on, and the related issue, of making time removable from the model. So, I suggested that instead of phrasing questions about "previous/future times", you should phrase such questions as being about "when the universe had lower/higher total entropy". This will have implications for what the sentience will regard as "its past".

Furthermore, the entropy calculation is affected by copy (and merge) operations. Copying involves deleting to make room for the new copies, whereas merging throws away information if the copies aren't identical.

Now, does that make it any clearer, or does it just make it look like you overestimated my first post?

Comment author: Nominull 27 September 2009 03:27:53PM 3 points [-]

Can I redefine what I mean by "me" and thereby expect that I will win the lottery?

Yes? Obviously? You can go around redefining anything as anything. You can redefine a ham sandwich as a steel I-beam and thereby expect that a ham sandwich can support hundreds of pounds of force. The problem is that in that case you lose the property of ham sandwiches that says they are delicious.

In the case of redefining you as someone who wins the lottery, the property you are likely to lose is the property of generating warm fuzzy feelings of identification inside Eliezer Yudkowsky.

Comment author: Alicorn 27 September 2009 03:33:49PM 5 points [-]

"If you call a tail a leg, how many legs does a dog have...? Four. Calling a tail a leg doesn't make it one."

Comment author: Furcas 27 September 2009 04:27:19PM *  2 points [-]

That was said by someone who didn't realize that words are just labels.

Comment author: RichardChappell 27 September 2009 08:31:09PM 6 points [-]

Speakers Use Their Actual Language, so someone who uses 'leg' to mean leg or tail speaks truly when they say 'dogs have five legs.' But it remains the case that dogs have only four legs, and nobody can reasonably expect a ham sandwich to support hundreds of pounds of force. This is because the previous sentence uses English, not the counterfactual language we've been invited to imagine.

Comment author: Alicorn 27 September 2009 04:48:58PM 7 points [-]

Words are just labels, but in order to be able to converse at all, we have to hold at least most of them in one place while we play with the remainder. We should try to avoid emulating Humpty Dumpty. Someone who calls a tail a leg is either trying to add to the category originally described by "leg" (turning it into the category now identified with "extremity" or something like that), or is appropriating a word ("leg") for a category that already has a word ("tail"). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying "Let's evaluate the content of the word "leg" and maybe revise it for consistency." The second is juvenile code invention.

Comment author: SilasBarta 28 September 2009 02:07:03PM *  2 points [-]

Someone who calls a tail a leg is either trying to add to the category originally described by "leg" (turning it into the category now identified with "extremity" or something like that), or is appropriating a word ("leg") for a category that already has a word ("tail"). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying "Let's evaluate the content of the word "leg" and maybe revise it for consistency." The second is juvenile code invention.

What about if evolution repurposed some genus's tail to function as a leg? The question wouldn't be so juvenile or academic then. And before you roll your eyes, I can imagine someone saying,

"How many limbs does a mammal have, if you count the nose as a limb? Four. Calling a nose a limb doesn't make it one."

And then realizing they forgot about elephants, whose trunks have muscles that allow them to grip things as if they had hands.

Comment author: Alicorn 28 September 2009 03:24:22PM 3 points [-]

That looks like category reevaluation, not code-making, to me. If you think an elephant's trunk should be called a limb, and you think that elephants have five limbs, that's category reevaluation; if you think that elephant trunks should be called limbs and elephants have one limb, that's code.

Comment author: Christian_Szegedy 30 September 2009 10:00:59PM 10 points [-]

Whenever I read about "weight of experience", "quantum goo", "existentness" etc. I can't keep myself from also thinking of "vital spark", "phlogiston", "ether" and other similar stuff... And it somehow spoils the whole fun...

In the history of mankind, hard looking (meta-)physical dilemmas were much more often resolved by means of elimination rather than by introduction of new "essences". The moral of the history of physics so far is that relativity typically trumps absoluteness in the long run.

For example, I would not be surprised at all, if it turned out that experienced Born probabilities would not be absolute, but would depend on some reference frame (in a very high dimensional space) just like the experience of time, speed, mass, etc. depends on the relativistic frame of reference.

Comment author: Eliezer_Yudkowsky 30 September 2009 10:37:20PM 12 points [-]

Marcello and I use the term "reality-fluid" to remind ourselves that we're confused.

Comment author: Psy-Kosh 30 September 2009 10:09:56PM *  2 points [-]

Dunno about others, but I agree that terms like that seem to indicate a serious confusion, or at least something that I am very confused about, and it seems others here are too. We're more using them as a way of talking about our confusion. Just noting "there's something here we're failing to comprehend" isn't enough to help us comprehend it. It's more a case of "we're not sure what concepts to replace those terms with", at least for me.

Comment author: Stuart_Armstrong 27 September 2009 04:53:18PM *  9 points [-]

More near-equivalent reformulations of the problem (in support of the second horn):

  • A trillion copies will be created, believing they have won the lottery. All but one will be killed (a 1/trillion chance that your current state leads directly to your future state). If you add some unimportant differentiation between the copies - give each one a separate number - then the situation is clearer: there is one chance in a trillion that the future self will remember your number (so your unique contribution has a 1/trillion chance of happening), while he will be certain to believe he has won the lottery (he gets that belief from everyone).

  • A trillion copies are created, each altruistically happy that one among the group has won the lottery. One of them at random is designated the lottery winner. Then everyone else is killed.

  • Follow the money: you (and your copies) are not deriving utility from winning the lottery, but from spending the money. If each copy is selfish, there is no dilemma: the lottery winnings divided amongst a trillion copies cancel out the trillion copies. If each copy is altruistic, then the example is the same as above; in which case there is a mass of utility generated from the copies, which vanishes when the copies vanish. But this extra mass of utility is akin to the utility generated by: "It's wonderful to be alive. Quick, I copy myself, so now many copies feel it's wonderful to be alive. Then I delete the copies, so the utility goes away."
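Stuart's first reformulation lends itself to a quick sanity check. Below is a Monte Carlo sketch (mine, not Stuart's; it uses a thousand copies in place of a trillion, and the trial count is arbitrary) of the asymmetry between "remembers your number" and "believes he won":

```python
import random

def trial(n_copies=1000):
    # Make n_copies, each tagged with a unique number and each holding
    # the belief "I won the lottery"; then kill all but one at random.
    survivor = random.randrange(n_copies)
    my_number = 0  # the number given to your particular copy
    remembers_my_number = (survivor == my_number)
    believes_won = True  # every copy holds this belief, so the survivor does too
    return remembers_my_number, believes_won

trials = 100_000
remembered = sum(trial()[0] for _ in range(trials))
print(f"P(survivor remembers your number) ~ {remembered / trials:.4f}  (expect 1/1000)")
print("P(survivor believes they won) = 1 by construction")
```

The first probability shrinks as 1/N while the second stays pinned at 1, which is the gap the reformulation turns on.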

Comment author: casebash 16 April 2016 05:18:43AM 1 point [-]

"You (and your copies) are not deriving utility from winning the lottery, but from spending the money"

I would say that you derive utility from knowing that you've won money you can spend. But, if you only get $1, you haven't won very much.

I think a better problem would be if you split when your favourite team won the Super Bowl. Then you'd have a high probability of experiencing this happiness, and the number of copies wouldn't reduce it.

Comment author: Stuart_Armstrong 18 April 2016 02:03:01PM 0 points [-]

I think a better problem would be if you split when your favourite team won the Super Bowl. Then you'd have a high probability of experiencing this happiness, and the number of copies wouldn't reduce it.

Neat!

Comment author: Stuart_Armstrong 27 September 2009 09:32:15AM 9 points [-]

Just an aside - this is obviously something that Eliezer - someone highly intelligent and thoughtful - has thought deeply about, and has had difficulty answering.

Yet most of the answers - including my own - seem to be of the "this is the obvious solution to the dilemma" sort.

Comment author: DanArmak 27 September 2009 01:03:09PM 5 points [-]

...Only each obvious solution proposed is different.

Comment author: Stuart_Armstrong 27 September 2009 04:46:21PM 0 points [-]

Of course...

Comment author: casebash 16 April 2016 05:07:37AM 0 points [-]

People often miss a solution that is obvious in retrospect.

Comment author: Johnicholas 27 September 2009 02:39:19AM 8 points [-]

It's helpful in these sorts of problems to ask the question "What would evolution do?". The answer always turns out to be a coherent, reality-based course of action. Even though evolution, to the extent that it "values" things, values different things than I do, I'd like my actions to be comparably coherent and reality-based.

Regarding the first horn: Regardless of whether simple algorithms move "subjective experience" around like a fluid, if the simple algorithms take some resources, evolution would not perform them.

Regarding the second horn: If there was an organism that routinely split, merged, played lottery, and priced side-bets on whether it had won the lottery, then, given zero information about whether it had won the lottery, it would price the side-bet at the standard lottery odds. Splitting and merging, so long as the procedure did not provide any new information, would not affect its price.

Regarding the third horn: Evolution would certainly not create an organism that hurls itself off cliffs without fear. However, this is not because it "cares" about any thread of subjective experience. Rather, it is because of the physical continuity. Compare this with evolution's choice in an environment where there are "transporters" that accurately convey entities by molecular disassembly/reassembly. Creatures which had evolved in that environment would certainly step through those transporters without fear.

I can't answer the fourth or fifth horns; I'm not sure I understand them.

Comment author: DanielVarga 29 September 2009 11:53:03PM *  6 points [-]

Time and (objective or subjective) continuity are emergent notions. The more basic notion they emerge from is memory. (Eliezer, you read this idea in Barbour's book, and you seemed to like it when you wrote about that book.)

Considering this: yes, caring about the well-being of "agents that have memories of formerly being me" is incoherent. It is just as incoherent as caring about the well-being of "agents mostly consisting of atoms that currently reside in my body". But in typical cases, both of these lead to the same well known and evolutionarily useful heuristics.

I don't think any of the above implies that "thread of subjective experience" is a ridiculous thing, or that you can turn into being Britney Spears. Continuity being an emergent phenomenon does not mean that it is a nonexistent one.

Comment author: Steve_Rayhawk 28 September 2009 12:31:03AM *  6 points [-]

I suggested that, in some situations, questions like "What is your posterior probability?" might not have answers, unless they are part of decision problems like "What odds should you bet at?" or even "What should you rationally anticipate to get a brain that trusts rational anticipation?". You didn't comment on the suggestion, so I thought about problems you might have seen in it.

In the suggestion, the "correct" subjective probability depends on a utility function and a UDT/TDT agent's starting probabilities, which never change. The most important way the suggestion is incomplete is that it doesn't itself explain something we do naturally: we care about the way our "existentness" has "flowed" to us, and if we learn things about how "existentness" or "experiencedness" works, we change what we care about. So when we experiment on quantum systems, and we get experimental statistics that are more probable under a Born rule with a power of 2 than (hand-waving normalization problems) under a Born rule with a power of 4, we change our preferences, so that we care about what happens in possible future worlds in proportion to their integrated squared amplitude, and not in proportion to the integral of the fourth power. But, if there were people who consistently got experimental statistics that were more probable under a Born rule with a power of 4 (whatever that would mean), we would want them to care about possible future worlds in proportion to the integral of the fourth power of their amplitude.

This can even be done in classical decision theory. Suppose you were creating an agent to be put into a world with Ebborean physics, and you had uncertainty about whether, in the law relating world-thickness ratios (at splitting time) to "existentness" ratios, the power was 2 or 4. It would be easy to put a prior probability of 1/2 on each power, and then have "the agent" update from measurements of the relative thicknesses of the sides of the split worlds it (i.e. its local copy) ended up on. But this doesn't explain why you would want to do that.
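The classical version of this update is easy to sketch. Here is a minimal Bayesian toy model (my own setup: thickness fractions drawn uniformly, the true power set to 2) in which an agent with prior probability 1/2 on each power updates on which side of each split it finds itself:

```python
import random

def side_prob(t, p):
    """Probability of finding yourself on the side with thickness
    fraction t, if "existentness" scales as thickness to the power p."""
    return t**p / (t**p + (1 - t)**p)

random.seed(0)
true_p = 2                       # the power Nature actually uses
posterior = {2: 0.5, 4: 0.5}     # prior probability 1/2 on each power
for _ in range(200):
    t = random.uniform(0.1, 0.9)             # one side's thickness fraction
    on_t_side = random.random() < side_prob(t, true_p)
    obs = t if on_t_side else 1 - t          # thickness of the side you're on
    for p in posterior:
        posterior[p] *= side_prob(obs, p)    # Bayesian update on the evidence
    norm = sum(posterior.values())
    for p in posterior:
        posterior[p] /= norm

print(posterior)  # the posterior concentrates on the true power
```

As the text says, the mechanics are easy; the puzzle is why the agent's designer would want "existentness" measurements treated as evidence in the first place.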

What would a UDT/TDT prior belief distribution or utility function have to look like in order to define agents that can "update" in this way, while only thinking in terms of copying and not subjective probability? Suppose you were creating an agent to be put into a world with Ebborean physics, and you had uncertainty about whether, in the relation between world thickness ratios and "existentness" ratios, the power was 2 or 4. And this time, suppose the agent was to be an updateless decision theory agent. I think a UDT agent which uses "probability" can be converted by an expected utility calculation into a behaviorally equivalent UDT agent which uses no probability. Instead of probability, the agent uses only "importances": relative strengths of its (linearly additive) preferences about what happens in the various deterministic worlds the agent "was" copied into at the time of its creation. To make such an agent in Ebborean physics "update" on "evidence" about existentness, you could take the relative importance you assigned to influencing world-sheets, split it into two halves, and distribute each half across world-sheets in a different way. Half of the importance would be distributed in proportion to the cumulative products of the squares of the worlds' thickness ratios at their times of splitting, and half of the importance would be distributed in proportion to the cumulative products of the fourth powers of the worlds' thickness ratios at their times of splitting. Then, in each world-sheet, the copy of the agent in that world-sheet would make some measurements of the relative thicknesses on its side of a split, and it would use those measurements to decide what kinds of local futures it should prioritize influencing.

But, again, this doesn't explain why you would want to do that. (Maybe you wanted the agents to take a coordinated action at the end of time using the world-sheets they controlled, and you didn't know which kinds of world-sheets would become good general-purpose resources for that action?)

I think there was another way my suggestion is incomplete, which has something to do with the way your definition of altruism doesn't work without a definition of "correct" subjective probability. But I don't remember what your definition of altruism was or why it didn't work without subjective probability.

I still think the right way to answer the question, "What is the correct subjective probability?" might be partly to derive "Bayesian updating" as an approximation that can be used by computationally limited agents implementing an updateless or other decision theory, with a utility function defined over mathematical descriptions of worlds containing some number of copies of the agent, when the differences in utility that result from the agent's decisions fulfill certain independence and linearity assumptions. I need to mathematically formalize those assumptions. "Subjective probability" would then be a variable used in that approximation, which would be meaningless or undefined when the assumptions failed.

Comment author: cousin_it 01 August 2011 10:52:15AM *  5 points [-]

Bonus: if you're uncomfortable with merging/deleting copies, you can skip that part! Just use the lottery money to buy some computing equipment and keep your extra copies running in lockstep forever. Is this now an uncontroversial algorithm for winning the lottery, or what?

Comment author: Wei_Dai 27 September 2009 09:40:16AM 18 points [-]

I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list, and sent off a "copying related probability question" (which is still in my "sent" folder but apparently no longer archived anywhere that Google can find). Both Eliezer and Nick were also participants in that discussion. What are the chances that we're still trying to figure this out 12 years later?

My current position, for what it's worth, is that anticipation and continuity of experience are both evolutionary adaptations that will turn maladaptive when mind copying/merging becomes possible. Theoretically, evolution could have programmed us to use UDT, in which case this dilemma wouldn't exist now, because anticipation and continuity of experience is not part of UDT.

So why don't we just switch over to UDT, and consider the problem solved (assuming this kind of self-modification is feasible)? The problem with that is that many of our preferences are specified in terms of anticipation of experience, and there is no obvious way to map those onto UDT preferences. For example, suppose you’re about to be tortured in an hour. Should you make as many copies as you can of yourself (who won’t be tortured) before the hour is up, in order to reduce your anticipation of the torture experience? You have to come up with a way to answer that question before you can switch to UDT.

One approach that I think is promising, which Johnicholas already suggested, is to ask "what would evolution do?" The way I interpret that is, whenever there’s an ambiguity in how to map our preferences onto UDT, or where our preferences are incoherent, pick the UDT preference that maximizes evolutionary success.

But a problem with that, is that what evolution does depends on where you look. For example, suppose you sample Reality using some weird distribution. (Let’s say you heavily favor worlds where lottery numbers always come out to be the digits of pi.) Then you might find a bunch of Bayesians who use that weird distribution as their prior (or the UDT equivalent of that), since they would be the ones having the most evolutionary success in that part of Reality.

The next thought is that perhaps algorithmic complexity and related concepts can help here. Maybe there is a natural way to define a measure over Reality, to say that most of Reality is here, and not there. And then say we want to maximize evolutionary success under this measure.

How to define “evolutionary success” is another issue that needs to be resolved in this approach. I think some notion of “amount of Reality under one’s control/influence” (and not “number of copies/descendants”) would make the most sense.

Comment author: DanArmak 27 September 2009 01:07:01PM 9 points [-]

My thread of subjective experience is a fundamental part of how I feel from the inside. Exchanging it for something else would be pretty much equivalent to death - death in the human, subjective sense. I would not wish to exchange it unless the alternative was torture for a googol years or something of that ilk.

Why would you wish to switch to UDT?

Comment author: Wei_Dai 27 September 2009 02:04:23PM 4 points [-]

That's a good point. I probably wouldn't want to give up my thread of subjective experience either. But unless I switch (or someone comes up with a better solution than UDT), when mind copying/merging becomes possible I'm probably going to start making some crazy decisions.

I'm not sure what the solution is, but here's one idea. A UDT agent doesn't use anticipation or continuity of experience to make decisions, but perhaps it can run some computations on the side to generate the qualia of anticipation and continuity.

Another idea, which may be more intuitively acceptable, is don't make the switch yourself. Create a copy, and have the copy switch to UDT (before it starts running). Then give most of your resources to the copy and live a single-threaded life under its protection. (I guess the copy in this case isn't so much a copy but more of a personal FAI.)

Comment author: DanArmak 27 September 2009 02:32:52PM *  1 point [-]

That's what I was thinking, too. You make tools the best way you can. The distinction between tools that are or aren't part of you will ultimately become meaningless anyway. We're going to populate the galaxy with huge Jupiter brains that are incredibly smart and powerful but whose only supergoal is to protect a tiny human-nugget inside.

Comment author: Eliezer_Yudkowsky 27 September 2009 04:51:30PM 6 points [-]

So why don't we just switch over to UDT, and consider the problem solved

Because we can't interpret UDT's decision algorithm as providing epistemic advice. It says to never update our priors and even to go on putting weight on logical impossibilities after they're known to be impossible. UDT tells us what to do - but not what to anticipate seeing happen next.

Comment author: Vladimir_Nesov 27 September 2009 04:58:02PM 8 points [-]

This presumably places anticipation together with excitement and fear -- an aspect of human experience, but not a useful concept for decision theory.

Comment author: Eliezer_Yudkowsky 28 September 2009 04:20:33PM 4 points [-]

I'm not convinced that "It turns out that pi is in fact greater than three" is a mere aspect of human experience.

Comment author: Vladimir_Nesov 28 September 2009 04:29:33PM *  0 points [-]

If you appeal to intuitions about rigor, it's not so much an outlier since fear and excitement must be aspects of rigorously reconstructed preference as well.

Comment author: UnholySmoke 28 September 2009 12:42:44PM 2 points [-]

I find myself simultaneously convinced and unconvinced by this! Anticipation (dependent, of course, on your definition) is surely a vital tool in any agent that wants to steer the future? Or do you mean 'human anticipation' as differentiated from other kinds? In which case, what demarcates that from whatever an AI would do in thinking about the future?

However, Dai, your top level comment sums up my eventual thoughts on this problem very well. I've been trying for a long time to resign myself to the idea that a notion of discrete personal experience is incompatible with what we know about the world. Doesn't make it any easier though.

My two cents - the answer to this trilemma will come from thinking about the system as a whole rather than personal experience. Can we taboo 'personal experience' and find a less anthropocentric way to think about this?

Comment author: Wei_Dai 29 September 2009 12:10:58PM 2 points [-]

UDT tells us what to do - but not what to anticipate seeing happen next.

Ok, we can count that as a disadvantage when comparing UDT with alternative solutions, but why is it a deal-killer for you, especially since you're mainly interested in decision theory as a tool for programming FAI? As long as the FAI knows what to do, why do you care so much that it doesn't anticipate seeing what happens next?

Comment author: Eliezer_Yudkowsky 29 September 2009 03:56:11PM 3 points [-]

Because I care about what I see next.

Therefore the FAI has to care about what I see next - or whatever it is that I should be caring about.

Comment author: Vladimir_Nesov 29 September 2009 05:36:51PM *  0 points [-]

There is no problem with FAI looking at both past and future you -- intuition only breaks down when you speak of first-person anticipation. You don't care what FAI anticipates to see for itself and whether it does. The dynamic of past->future you should be good with respect to anticipation, just as it should be good with respect to excitement.

Comment author: Nick_Tarleton 29 September 2009 06:24:25PM 1 point [-]

There is no problem with FAI looking at both past and future you -- intuition only breaks down when you speak of first-person anticipation.

But part of the question is: must past/future me be causally connected to me?

Comment author: Vladimir_Nesov 29 September 2009 06:28:32PM 0 points [-]

Part of which question? And whatever you call "causally connected" past/future persons is a property of the stuff-in-general that FAI puts into place in the right way.

Comment author: Wei_Dai 29 September 2009 10:10:49PM 0 points [-]

Ok, but that appears to be the same reason that I gave (right after I asked the question) for why we can't switch over to UDT yet. So why did you give another answer without reference to mine? That seems needlessly confusing. Here's how I put it:

The problem with that is that many of our preferences are specified in terms of anticipation of experience, and there is no obvious way to map those onto UDT preferences.

There's more in that comment where I explored one possible approach to this problem. Do you have any thoughts on that?

Also, do you agree (or think it's a possibility) that specifying preferences in terms of anticipation (instead of, say, world histories) was an evolutionary "mistake", because evolution couldn't anticipate that one day there would be mind copying/merging technology? If so, that doesn't necessarily mean we should discard such preferences, but I think it does mean that there is no need to treat it as somehow more fundamental than other kinds of preferences, such as, for example, the fear of stepping into a teleporter that uses destructive scanning, or the desire not to be consigned to a tiny portion of Reality due to "mistaken" preferences.

Comment author: Eliezer_Yudkowsky 29 September 2009 11:02:39PM 2 points [-]

I can't switch over to UDT because it doesn't tell me what I'll see next, except to the extent it tells me to expect to see pi < 3 with some measure. It's not that it doesn't map. It's that UDT goes on assigning measure to 2 + 2 = 5, but I'll never see that happen. UDT is not what I want to map my preferences onto, it's not a difficulty of mapping.

Comment author: Wei_Dai 29 September 2009 11:12:17PM *  1 point [-]

UDT goes on assigning measure to 2 + 2 = 5

That's not what happens in my conception of UDT. Maybe in Nesov's, but he hasn't gotten it worked out, and I'm not sure it's really going to work. My current position on this is still that you should update on your own internal computations, but not on input from the outside.

ETA:

UDT is not what I want to map my preferences onto, it's not a difficulty of mapping.

Is that the same point that Dan Armak made, which I responded to, or a different one?

Comment author: Vladimir_Nesov 29 September 2009 11:11:54PM *  0 points [-]

I can't switch over to UDT because it doesn't tell me what I'll see next, except to the extent it tells me to expect to see pi < 3 with some measure.

It's not you who should use UDT, it's the world. This is a salient point of departure between FAI and humanity. FAI is not in the business of saying in words what you should expect. People are stuff of the world, not rules of the world or strategies to play by those rules. Rules and strategies don't depend on particular moves, they specify how to handle them, but plays consist of moves, of evidence. This very distinction between plays and strategies is the true origin of updatelessness. It is the failure to make this distinction that causes the confusion UDT resolves.

Comment author: Wei_Dai 30 September 2009 12:20:04AM 6 points [-]

Nesov, your writings are so hard to understand sometimes. Let me take this as an example and give you some detailed feedback. I hope it's useful to you to determine in the future where you might have to explain in more detail or use more precise language.

It's not you who should use UDT, it's the world.

Do you mean "it's not only you", or "it's the world except you"? If it's the latter, it doesn't seem to make any sense. If it's the former, it doesn't seem to answer Eliezer's objection.

This is a salient point of departure between FAI and humanity.

Do you mean FAI should use UDT, and humanity shouldn't?

FAI is not in the business of saying in words what you should expect.

Ok, this seems clear. (Although why not, if that would make me feel better?)

People are stuff of the world, not rules of the world or strategies to play by those rules.

By "stuff", do you mean "part of the state of the world"? And people do in some sense embody strategies (what they would do in different situations), so what do you mean by "people are not strategies"?

Rules and strategies don't depend on particular moves, they specify how to handle them, but plays consist of moves, of evidence. This very distinction between plays and strategies is the true origin of updatelessness. It is the failure to make this distinction that causes the confusion UDT resolves.

This part makes sense, but I don't see the connection to what Eliezer wrote.

Comment author: Vladimir_Nesov 30 September 2009 12:48:58AM *  1 point [-]

It's not you who should use UDT, it's the world.

Do you mean "it's not only you", or "it's the world except you"? If it's the latter, it doesn't seem to make any sense. If it's the former, it doesn't seem to answer Eliezer's objection.

I mean the world as substrate, with "you" being implemented on the substrate of FAI. FAI runs UDT, you consist of FAI's decisions (even if in the sense of "influenced by", there seems to be no formal difference). The decisions are output of the strategy optimized for by UDT, two levels removed from running UDT themselves.

Do you mean FAI should use UDT, and humanity shouldn't?

Yes, in the sense that humanity runs on the FAI-substrate that uses UDT or something on the level of strategy-optimization anyway, but humanity itself is not about optimization.

By "stuff", do you mean "part of the state of the world"? And people do in some sense embody strategies (what they would do in different situations), so what do you mean by "people are not strategies"?

I suspect that people should be found in plays (what actually happens given the state of the world), not strategies (plans for every eventuality).

Comment author: SilasBarta 29 September 2009 04:19:22PM 0 points [-]

Unless I'm misunderstanding UDT, isn't speed another issue? An FAI must know what's likely to be happening in the near future in order to prioritize its computational resources so they're handling the most likely problems. You wouldn't want it churning through the implications of the Loch Ness monster being real while a mega-asteroid is headed for the earth.

Comment author: Eliezer_Yudkowsky 29 September 2009 05:33:07PM *  2 points [-]

Wei Dai should not be worrying about matters of mere efficiency at this point. First we need to know what to compute via a fast approximation.

(There are all sorts of exceptions to this principle, and they mostly have to do with "efficient" choices of representation that affect the underlying epistemology. You can view a Bayesian network as efficiently compressing a raw probability distribution, but it can also be seen as committing to an ontology that includes primitive causality.)

Comment author: SilasBarta 29 September 2009 11:33:25PM *  1 point [-]

Wei Dai should not be worrying about matters of mere efficiency at this point. First we need to know what to compute via a fast approximation.

But that path is not viable here. If UDT claims to make decisions independently of any anticipation, then it seems it must be optimal on average over all the possibilities it's prepared to compute an output for. That means it must be sacrificing optimality in this world-state (by No Free Lunch), even given infinite computing time, so having a quick approximation doesn't help.

If an AI running UDT is just as prepared to find Nessie as to find out how to stop the incoming asteroid, it will be inferior to a program designed just to find out how to stop asteroids. Expand the Nessie possibility to improbable world-states, and the asteroid possibility to probable ones, and you see the problem.

Though I freely admit I may be completely lost on this.

Comment author: Wei_Dai 28 September 2009 11:19:08PM *  5 points [-]

Seeing that others here are trying to figure out how to make probabilities of anticipated subjective experiences work, I should perhaps mention that I spent quite a bit of time near the beginning of those 12 years trying to do the same thing. As you can see, I eventually gave up and decided that such probabilities shouldn't play a role in a decision theory for agents who can copy and merge themselves.

This isn't to discourage others from exploring this approach. There could easily be something that I overlooked, that a fresh pair of eyes can find. Or maybe someone can give a conclusive argument that explains why it can't work.

BTW, notice that UDT not only doesn't involve anticipatory probabilities, it doesn't even involve indexical probabilities (i.e. answers to "where am I likely to be, given my memories and observations?" as opposed to "what should I expect to see later?"). It seems fairly obvious that if you don't have indexical probabilities, then you can't have anticipatory probabilities. (See ETA below.) I tried to give an argument against indexical probabilities, which apparently nobody (except maybe Nesov) liked. Can anyone do better?

ETA: In the Absent-Minded Driver problem, suppose after you make the decision to EXIT or CONTINUE, you get to see which intersection you're actually at (and this is also forgotten by the time you get to the next intersection). Then clearly your anticipatory probability for seeing 'X', if it exists, ought to be the same as your indexical probability of being at X.
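The indexical probability in this variant can be checked by simulation. A small sketch (the continue-probability q = 0.9 is an arbitrary choice of mine; in the standard two-intersection layout, every drive passes X and reaches Y only by continuing, so P(at X) = 1/(1+q)):

```python
import random

random.seed(1)
q = 0.9                       # probability of choosing CONTINUE
visits = {"X": 0, "Y": 0}
for _ in range(100_000):
    visits["X"] += 1          # every drive passes intersection X
    if random.random() < q:   # CONTINUE at X means you also reach Y
        visits["Y"] += 1

p_at_X = visits["X"] / (visits["X"] + visits["Y"])
print(f"P(at X | at an intersection) ~ {p_at_X:.3f}, analytic 1/(1+q) = {1/(1+q):.3f}")
```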

Comment author: MichaelHoward 27 September 2009 01:44:40PM 3 points [-]

I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list... What are the chances that we're still trying to figure this out 12 years later?

Not small. I read that list and similar forums in the early 90s before becoming an AGI relinquishmentarian till about 2 years ago. When I came back to the discussions, I was astonished that most of the topics under discussion were essentially the same ones I remembered from 15 years earlier.

Comment author: Johnicholas 27 September 2009 01:50:41PM 2 points [-]

Note - there is a difference between investigating "what would evolution do?", as a jumping-off point for other strategies, and recommending "we should do what evolution does".

But a problem with that, is that what evolution does depends on where you look.

Why is it that if I set up a little grid-world on my computer and evolve little agents, I seem to get answers to the question "what does evolution do"? Am I encoding "where to look" into the grid-world somehow?
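The kind of grid-world Johnicholas describes can be caricatured in a few lines. A toy sketch (all parameters are arbitrary choices of mine): agents carry one heritable gene, reproduce in proportion to it, and mutate slightly; "what evolution does" falls out without any explicit choice of where to look:

```python
import random

random.seed(2)
pop = [random.random() for _ in range(100)]   # each agent's heritable gene
for generation in range(200):
    # Fitness-proportional reproduction with a small Gaussian mutation,
    # clamped to keep genes in [0, 1].
    pop = [min(1.0, max(0.0,
                        random.choices(pop, weights=pop)[0]
                        + random.gauss(0, 0.02)))
           for _ in range(100)]

mean_gene = sum(pop) / len(pop)
print(f"mean gene after selection: {mean_gene:.2f}")  # driven toward 1
```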

Comment author: Eliezer_Yudkowsky 27 September 2009 04:47:58AM 11 points [-]

To condense my response to a number of comments here:

It seems to me that there's some level on which, even if I say very firmly, "I now resolve to care only about future versions of myself who win the lottery! Only those people are defined as Eliezer Yudkowskys!", and plan only for futures where I win the lottery, then, come the next day, I wake up, look at the losing numbers, and say, "Damnit! What went wrong? I thought personal continuity was strictly subjective, and I could redefine it however I wanted!"

You reply, "But that's just because you're defining 'I' the old way in evaluating the anticipated results of the experiment."

And I reply, "...I still sorta think there's more to it than that."

To look at it another way, consider the Born probabilities. In this case, Nature seems to have very definite opinions about how much of yourself flows where, even though both copies exist. Now suppose you try to redefine your utility function so you only care about copies of yourself that see the quantum coin land heads up. Then you are trying to send all of your measure to the branch where the coin lands up heads, by exercising your right to redefine personal continuity howsoever you please; whereas Nature only wants to send half your measure there. Now flip the coin a hundred times. I think Nature is gonna win this one.
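Nature's bookkeeping for the hundred coin flips is plain arithmetic: whatever utility function you adopt, the squared-amplitude measure on the all-heads branch is 2^-100, and the measure summed over all branches is exactly 1. A sketch:

```python
from math import comb

n = 100
all_heads = 0.5 ** n          # measure on the branch where every flip is heads
print(f"all-heads branch measure: {all_heads:.3e}")

# Redefining which branches count as "you" doesn't create measure:
# summing over every head-count, weighted by the number of flip
# sequences producing it, gives exactly 1.
total = sum(comb(n, k) * 0.5**n for k in range(n + 1))
print(f"total measure: {total}")
```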

Tired of being poor? Redefine personal continuity so that tomorrow you continue as Bill Gates and Bill Gates continues as you - just better hope Gates doesn't swap again the next day.

It seems to me that experience and anticipation operate at a more primitive level than my utility function. Perhaps I am wrong. But I would like a cleaner demonstration of how I am wrong, than pointing out how convenient it would be if there were no question.

Of course it must be a wrong question - it is unanswerable, therefore, it is a wrong question. That is not the same as there being no question.

Comment author: Furcas 27 September 2009 05:20:46AM 10 points [-]

I'm sorry, I don't think I can help. It's not that I don't believe in personal continuity, it's that I can't even conceive of it.

At t=x there's an Eliezer pattern and there's a Bill Gates pattern. At t=x+1 there's an Eliezer+1 pattern and a Bill Gates+1 pattern. A few of the instances of those patterns live in worlds in which they won the lottery, but most don't. There's nothing more to it than that. How could there be?

Some Eliezer instances might have decided to only care about Eliezer+1 instances that won the lottery, but that wouldn't change anything. Why would it?

Comment author: orthonormal 28 September 2009 06:50:58PM 9 points [-]

I can't be the only one who sees this discussion as parallel to the argument over free will, right down to the existence of people who proudly complain that they can't see the problem.

Do you see how this is the same as saying "Of course there's no such thing as free will; physical causality rules over the brain"? Not false, but missing completely that which actually needs to be explained: what it is that our brain does when we 'make a choice', and why we have a deeply ingrained aversion to the first question being answered by some kind of causality.

Comment author: Furcas 28 September 2009 07:09:38PM 5 points [-]

There's a strong similarity, all right. In both cases, the bullet-biters describe reality as we have every reason to believe it is, and ask the deniers how reality would be different if free will / personal continuity existed. The deniers don't have an answer, but they're very insistent about this feeling they have that this undefined free will or continuity thing exists.

Explaining this feeling could be interesting, but it has very little to do with the question of whether what the feeling is about, is real.

Comment author: Eliezer_Yudkowsky 28 September 2009 05:20:44PM 3 points [-]

Okay, let me try another tack.

One of the last greatest open questions in quantum mechanics, and the only one that seems genuinely mysterious, is where the Born statistics come from - why our probability of seeing a particular result of a quantum experiment, ending up in a particular decoherent blob of the wavefunction, goes as the squared modulus of the complex amplitude.

Is it the case that the Born probabilities are necessarily explained - can only be explained - by some hidden component of our brain which says that we care about the alternatives in proportion to their squared modulus?

Since (after all) if we only cared about results that went a particular way, then, from our perspective, we would always anticipate seeing the results go that way? And so what we anticipate seeing, is entirely and only dependent on what we care about?

Or is there a sense in which we end up seeing results with a certain probability, a certain measure of ourselves going into those worlds, regardless of what we care about?

If you look at it closely, this is really about an instantaneous measure of the weight of experience, not about continuity between experiences. But why don't the same arguments on continuity work on measure in general?

Comment author: Christian_Szegedy 03 October 2009 06:26:34PM *  4 points [-]

Is it the case that the Born probabilities are necessarily explained - can only be explained - by some hidden component of our brain which says that we care about the alternatives in proportion to their squared modulus?

I have been thinking about this quite a bit in the last few days and I have to say, I find this close to impossible.

The solution must be much more fundamental: assumptions like the above ignore that the Born rule is also necessary for almost everything to work: for example, the workings of our most basic building blocks are tied to this rule. It is much more than just our psychological "caring". Everything in our "hardware" and environment would immediately cease to exist if the rule were different.

Based on this, I think that attempts (like David Wallace's, even if correct) to derive the Born rule from rationality and decision theory have no chance of being conclusive or convincing. A good theory explaining the rule should also explain why we see reality as we see it, even if we never really make conscious measurements on particles.

In our lives, we (may) see two different types of apparent randomness:

  • incomplete information
  • inherent (quantum) randomness

To some extent these two types of randomness are connected and look isomorphic on the surface (in the macro-world).

The real question is: "Why are they connected?"

Or more specifically: "Why does the amplitude of the wave function result in (measured) probabilities that resemble those of random geometric perturbations of the wave function?"

If you flip a real coin, for you it does not look very different from flipping a quantum coin. However, the 50/50 chance of heads and tails can be explained purely by considering the geometric symmetry of the object. If you assume that the random perturbing events are distributed in a geometrically uniform way, you will immediately deduce the necessity of even chances. I think the key to the Born rule will be using similar geometric considerations to relate perturbation-based probability to quantum probability.
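As a toy illustration of this symmetry argument (a sketch with made-up modeling choices, not anything from the comment itself): if a symmetric coin's outcome is fixed by a uniformly distributed perturbation angle, the even chance falls straight out of the geometry.

```python
import random

# Toy model of the symmetry argument: a symmetric coin's outcome is
# determined by a uniformly distributed perturbation angle. Because the
# object is symmetric, heads and tails each claim half the angle space,
# so the 50/50 probability is forced by geometry alone.
def flip(rng: random.Random) -> str:
    angle = rng.uniform(0.0, 360.0)  # uniformly distributed perturbation
    return "heads" if angle < 180.0 else "tails"

rng = random.Random(0)
n = 100_000
heads = sum(flip(rng) == "heads" for _ in range(n))
print(heads / n)  # close to 0.5, as the uniform/symmetric setup dictates
```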

Comment author: Vladimir_Nesov 03 October 2009 07:46:19PM *  2 points [-]

Quantum probability is only "inherent" because by default you are looking at it from the system that only includes one world. With a coin, the probability is merely "epistemic" because there is a definite answer (heads or tails) in the system that includes one world, but this same probability is as inherent for the system that only includes you, the person who is uncertain, and doesn't include the coin. The difference between epistemic and inherent randomness is mainly in the choice of the system for which the statement is made, with epistemic probability meaning the same thing as inherent probability with respect to the system that doesn't include the fact in question. (Of course, this doesn't take into account the specifics of QM, but is right for the way "quantum randomness" is usually used in thought experiments.)

Comment author: Christian_Szegedy 03 October 2009 09:17:52PM *  0 points [-]

I don't dispute this. Still, my posting implicitly assumed the MWI.

My argument is that the brain, as an information-processing unit, has a generic way of estimating probabilities based on a single worldline of the Multiverse. This worldline contains randomness stemming both from missing information and from quantum branching, but our brain does not differentiate between these two kinds of randomness.

The question is how to calibrate our brain's expectation of the quantum branch it will end up in. What I speculate is that quantum randomness to some extent approximates an "incomplete information" type of randomness on the large scale. I don't know the math (if I knew it, I'd be writing a paper :)), but I have a very specific intuitive idea that could be turned into a concrete mathematical argument:

I expect the calibration to be performed based on geometric symmetries of our 3-dimensional space: if we construct a sufficiently symmetric but unstable physical process (e.g. throwing a coin), then we can deduce a 50/50 probability for the outcome, assuming a uniform geometric distribution of possible perturbations. Such a process must somehow be related to the magnitudes of the wave function and has to be shown to behave similarly on the macro level.

Admittedly, this is just speculation, but it is not really philosophical in nature; rather, it is an intuitive starting point for what I think has a fair chance of ending up as a concrete mathematical explanation of the Born probabilities in a formal setting.

Comment author: Eliezer_Yudkowsky 04 October 2009 10:50:51PM 0 points [-]

Does your notion of "incomplete information" take into account Bell's Theorem? It seems pretty hard to make the Born probabilities represent some other form of uncertainty than indexical uncertainty.

Comment author: Christian_Szegedy 05 October 2009 06:22:33AM 0 points [-]

I don't suggest hidden variables. The idea is that quantum randomness should resemble incomplete-information randomness on the large scale, and the reason we perceive the world according to the Born rule is that our brain can't distinguish between the two kinds of randomness.

Comment author: Johnicholas 28 September 2009 05:48:07PM 1 point [-]

QM has to add up to normality.

We know it is a dumb idea to attempt (quantum) suicide. We're pretty confident it is a dumb idea to do simple algorithms increasing one's redundancy before pleasant realizations and reducing it afterward.

It sounds as if you are refusing to draw inferences from normal experience regarding (the correct interpretation of) QM. There is no "Central Dogma" that inferences can only go from micro-scale to macro-scale.

From the macro-scale values that we do hold (e.g. we care about macro-scale probable outcomes), we can derive the micro-scale values that we should hold (e.g. care about Born weights).

I don't have an explanation for why Born weights are nonlinear - but the science is almost completely irrelevant to the decision theory and the ethics. The mysterious, nonintuitive nature of QM doesn't percolate up that much. That is why we have different fields called "physics", "decision theory", and "ethics".

Comment author: Wei_Dai 29 September 2009 08:00:35AM 1 point [-]

There are beings out there in other parts of Reality, who either anticipate seeing results with non-Born probabilities, or care about future alternatives in non-Born proportions. But (as I speculated earlier) those beings have much less measure under a complexity-based measure than us.

Or is there a sense in which we end up seeing results with a certain probability, a certain measure of ourselves going into those worlds, regardless of what we care about?

In other words, what you're asking is: is there an objective measure over Reality, or is it just a matter of how much we care about each part of it? I've switched positions on this several times, and I'm still undecided now. But here are my current thoughts.

First, considerations from algorithmic complexity suggest that the measure we use can't be completely arbitrary. For example, we certainly can't use one that takes an infinite amount of information to describe, since that wouldn't fit into our brain.

Next, it doesn't seem to make sense to assign zero measure to any part of Reality. Why should there be a part of it that we don't care about at all?

So that seems to narrow down the possibilities quite a bit, even if there is no objective measure. Maybe we can find other considerations to further narrow down the list of possibilities?

If you look at it closely, this is really about an instantaneous measure of the weight of experience, not about continuity between experiences.

I'd say that "continuity between experiences" is a separate problem. Even if the measure problem is solved, I might still be afraid to step into a transporter based on destructive scanning and reconstruction, and need to figure out whether I should edit that fear away, tell the FAI to avoid transporting me that way, or do something else.

But why don't the same arguments on continuity work on measure in general?

I don't understand this one. What "arguments on continuity" are you referring to?

Comment author: Psy-Kosh 28 September 2009 05:49:39PM 0 points [-]

Since (after all) if we only cared about results that went a particular way, then, from our perspective, we would always anticipate seeing the results go that way? And so what we anticipate seeing, is entirely and only dependent on what we care about?

I read that part several times, and I'm still not quite following. Mind elaborating or rephrasing that bit? Thanks.

Comment author: Nominull 27 September 2009 04:53:25AM 2 points [-]

I would be careful about declaring future yous who don't win the lottery to not be you; if Many-Worlds happens not to be true, you've just committed a different sort of quantum suicide.

Comment author: Z_M_Davis 27 September 2009 04:55:52AM *  1 point [-]

But all the resulting observers who see the coin come up tails aren't you. You just specified that they weren't. Who cares what they think?

Comment author: Eliezer_Yudkowsky 27 September 2009 10:20:46AM 10 points [-]

If I jumped off a cliff and decided not to care about hitting the ground, I would still hit the ground. If I played a quantum lottery and decided not to care about copies who lost, almost all of me would still see a screen saying "You lose". It seems to me that there is a rule governing what I see happen next, which does not care what I care about. I am asking how that rule works, because it does so happen that I care about it.

Comment author: Tyrrell_McAllister 27 September 2009 09:28:20PM 7 points [-]

You-now doesn't want to jump off the cliff because, among all the blobs of protoplasm that will exist in 5 minutes, you-now cares especially about one of them: the one that is causally connected in a certain way to the blob that is you-now. You-now evidently doesn't get to choose the nature of the causal connection that induces this concern. That nature was probably fixed by natural selection. That is why all talk about "determining to be the person who doesn't jump off the cliff" is ineffectual.

The question for doubters is this. Suppose, contrary to your intuition, that matters were just as I described. How would what-you-are-experiencing be any different? If you concede that there would be no difference, perhaps your confusion is just with how to talk about "your" future experiences. So then, what precisely is lost if all such talk is in terms of the experiences of future systems causally connected to you-now in certain ways?

Of course, committing to think this way highlights our ignorance about which causal connections are among these "certain ways". But our ignorance about this question doesn't mean that there isn't a determinate answer. There most likely is a determinate answer, fixed by natural selection and other determinants of what you-now cares about.

Comment author: CronoDAS 28 September 2009 05:46:21PM *  1 point [-]

Wouldn't that just mean that, there was someone who was very much like Eliezer Yudkowsky and who remembered being Eliezer Yudkowsky, but woke up and discovered they were no longer Eliezer Yudkowsky?

/me suspects that he just wrote a very confused sentence

It seems to me as though our experience of personal continuity has an awful lot to do with memory... I remember being me, so I'm still the same me. I think.

It feels like there's a wrong question in here somewhere, but I don't know what it is!

Comment author: Mitchell_Porter 27 September 2009 05:42:49AM *  4 points [-]

Let's explore this scenario in computational rather than quantum language.

Suppose a computer with infinite working memory runs a virtual world with a billion inhabitants, each of whom has a private computational workspace consisting of an infinite subset of total memory.

The computer is going to run an unusual sort of 'lottery' in which a billion copies of the virtual world are created, and in each one, a different inhabitant gets to be the lottery winner. So already the total population after the lottery is not a billion, it's a billion billion, spread across a billion worlds.

Virtual Yu'el perceives that he could utilize his workspace as described by Eliezer: pause himself, then have a single copy restored from backup if he didn't win the lottery, but have a trillion copies made if he did. So first he wonders whether it's correct to see this as making his victory in the lottery all but certain. Then he notices that if, after winning, he then does a merge, the certain victory turns back into near-certain loss, and he becomes really worried about the fundamental soundness of his decision procedures, his understanding of probability, etc.
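Plugging the scenario's stipulated numbers into the naive copy-counting measure makes Yu'el's worry concrete (a quick back-of-the-envelope sketch; the figures are just the ones given above):

```python
# Naive copy-counting measure for Yu'el's scheme: a 1-in-a-billion
# lottery, a trillion copies made on a win, a single copy restored
# from backup on a loss.
participants = 10**9
win_copies   = 10**12
lose_copies  = 1

winner_fraction = win_copies / (win_copies + (participants - 1) * lose_copies)
print(winner_fraction)   # ~0.999: victory looks "all but certain" by copy count

# After merging the trillion winning copies back into one, the count reverts:
merged_fraction = 1 / (1 + (participants - 1))
print(merged_fraction)   # back to 1e-9: the certain victory evaporates
```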

Stating the scenario in these concrete terms brings out, for me, aspects that aren't so obvious in the original statement. For example: If everyone else has the same option (the trillionfold copying), Yu'el is no longer favored. Is the trilemma partly due to supposing that only one lottery participant has this radical existential option? Also, it seems important to keep the other worlds where Yu'el loses in sight. By focusing on that one special world, where we go from a billion people, to a trillion people, mostly Yu'els, and then back to a billion, we are not even thinking about the full population elsewhere.

I think a lot of the assumptions going into this thought experiment as originally proposed are simply wrong. But there might be a watered-down version involving copies of decision-making programs on a single big computer, etc, to which I could not object. The question for me is how much of the impression of paradox will remain after the problem has been diluted in this fashion.

Comment author: Z_M_Davis 27 September 2009 04:39:32AM 8 points [-]

Following Nominull and Furcas, I bite the third bullet without qualms for the perfectly ordinary obvious reasons. Once we know how much of what kinds of experiences will occur at different times, there's nothing left to be confused about. Subjective selfishness is still coherent because you're not just an arbitrary observer with no distinguishing characteristics at all; you're a very specific bundle of personality traits, memories, tendencies of thought, and so forth. Subjective selfishness corresponds to only caring about this one highly specific bundle: only caring about whether someone falls off a cliff if this person identifies as such-and-such and has such-and-these specific memories and such-and-those personality traits: however close a correspondence you need to match whatever you define as personal identity.

The popular concepts of altruism and selfishness weren't designed for people who understand materialism. Once you realize this, you can just recast whatever it was you were already trying to do in terms of preferences over histories of the universe. It all adds up to, &c., &c.

Comment author: Wei_Dai 30 September 2009 10:09:41AM *  2 points [-]

I agree that giving up anticipation does not mean giving up selfishness. But as Dan Armak pointed out there is another reason why you may not want to give up anticipation: you may prefer to keep the qualia of anticipation itself, or more generally do not want to depart too much from the subjective experience of being human.

Eliezer, if you are reading this, why do you not want to give up anticipation? Do you still think it means giving up selfishness? Is it for Dan Armak's reason? Or something else?

Comment author: komponisto 27 September 2009 09:41:56PM *  1 point [-]

The (only) trouble with this is that it doesn't answer the question about what probabilities you_0 should assign to various experiences 5 seconds later. Personal identity may not be ontologically fundamental, it may not even be the appropriate sort of thing to be programmed into a utility function -- but at the level of our everyday existence (that is, at whatever level we actually do exist), we still have to be able to make plans for "our own" future.

Comment author: Z_M_Davis 28 September 2009 03:36:28AM 3 points [-]

I would say that the ordinarily very useful abstraction of subjective probability breaks down in situations that involve copying and remerging people, and that our intuitive morality breaks down when it has to deal with measure of experience. In the current technological regime, this isn't a problem at all, because the only branching we do is quantum branching, and there we have this neat correspondence between quantum measure and subjective probability, so you can plan for "your own" future in the ordinary obvious way. How you plan for "your own" future in situations where you expect to be copied and merged depends on the details of your preferences about measure of experience. For myself, I don't know how I would go about forming such preferences, because I don't understand consciousness.

Comment author: Douglas_Knight 28 September 2009 04:58:18AM 1 point [-]

In the current technological regime, this isn't a problem...[the future] depends on the details of your preferences about measure of experience.

Quantum suicide is already a problem in the current regime, if you allow preference over measure.

Splitting and merging adds another problem, but I think it is a factual problem, not an ethical problem. At least, I think that there is a factual problem before you come to the ethical problem, which may be the same as for Born measure.

Comment author: Furcas 27 September 2009 03:25:45AM *  7 points [-]

I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.

I don't really understand your reasoning here. It's not a different person that will experience the consequences of hitting the ground, it's Eliezer+5. Sure, Eliezer+5 is not identical to Eliezer, but he's really, really, really similar. If Eliezer is selfish, it makes perfect sense to care about Eliezer+5 too, and no sense at all to care equally about Furcas+5, who is really different from Eliezer.

Comment author: orthonormal 28 September 2009 04:54:58PM *  9 points [-]

Suppose I'm duplicated, and both copies are told that one of us will be thrown off a cliff. While it makes some kind of sense for Copy 1 to be indifferent (or nearly indifferent) to whether he or Copy 2 gets tossed, that's not what would actually occur. Copy 1 would probably prefer that Copy 2 gets tossed (as a first-order thing; Copy 1's morals might well tell him that if he can affect the choice, he ought to prefer getting tossed to seeing Copy 2 getting tossed; but in any case we're far from mere indifference).

There's something to "concern for my future experience" that is distinct from concern for experiences of beings very like me.

Comment author: Furcas 28 September 2009 07:11:48PM 2 points [-]

I have the same instincts, and I would have a very hard time overriding them, were my copy and I put in the situation you described, but those instincts are wrong.

Comment author: orthonormal 28 September 2009 07:18:05PM 1 point [-]

Emend "wrong" to "maladapted for the situation" and I'll agree.

Comment author: DanielVarga 29 September 2009 09:53:49PM 4 points [-]

These instincts are only maladapted to situations found in very contrived thought experiments. For example, you have to assume that Copy 1 can inspect Copy 2's source code; otherwise she could be tricked into believing that she has an identical copy. (What a stupid way to die.) I think our intuitions are already failing us when we try to imagine such source code inspections. (To put it another way: we have very little in common with agents that can do such things.)

Comment author: orthonormal 30 September 2009 11:13:43PM 1 point [-]

For example, you have to assume that Copy 1 can inspect Copy 2's source code.

It would suffice, instead, to have strong evidence that the copying process is trustworthy; in the limit as the evidence approaches certainty, the more adaptive instinct would approach indifference between the cases.

Comment author: solipsist 05 November 2013 02:51:08AM *  3 points [-]

...you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities... If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1.

If copying increases your measure, merging decreases it. When you notice yourself winning the lottery, you are almost certainly going to cease to exist after ten seconds.

Comment author: ESRogs 20 December 2013 06:04:06AM 1 point [-]

Which is to pick Nick's suggestion, and something like horn #2, except perhaps without the last part where you "anticipate that once you experience winning the lottery you will experience having still won it ten seconds later."

Scanning through the comments for "quantum suicide," it sounds like a few others agree with you.

Comment author: KatjaGrace 19 May 2011 04:59:02PM 3 points [-]
Comment author: jimrandomh 27 September 2009 03:25:25PM 3 points [-]

The problem with anthropic reasoning and evidence is that unlike ordinary reasoning and evidence, it can't be transferred between observers. Even if "anthropic psychic powers" actually do work, you still should expect all other observers to report that they don't.

Comment author: DanArmak 27 September 2009 03:40:58PM 3 points [-]

And not only do they report that they don't see it working for you. The bigger problem is that they report that it doesn't even work for them when they try it.

Comment author: Nominull 27 September 2009 02:37:03AM *  3 points [-]

I bite the third bullet. I am not as gifted with words as you are to describe why biting it is just and good and even natural if you look at it from a certain point of view, but...

You are believing in something mystical. You are believing in personal identity as something meaningful in reality, without giving any reason why it ought to be, because that is how your algorithm feels from the inside. There is nothing special about your brain as compared to your brain+your spinal cord, or as compared to your brain+this pencil I am holding. How could there be? Reality doesn't know what a brain is. Brains are not baked into the fundaments of reality, they are emergent from within. How could there be anything meaningful about them?

Consider this thought experiment. We take "you", and for a brief timespan, say, epsilon seconds, we replace "you" with "Britney Spears". Then, after the epsilon seconds have passed, we swap "you" back in. Does this have a greater than order epsilon effect on anything? If so, what accounts for this discontinuity?

Comment author: Johnicholas 27 September 2009 02:45:32AM 1 point [-]

EY seems to have equated the third bullet with throwing oneself off of cliffs. Do you throw yourself off of cliffs? Why or why not?

Comment author: Nick_Tarleton 27 September 2009 03:49:38AM *  6 points [-]

It sounds to me like EY is equating the second bullet with "perfect altruism is coherent, as is caring only about one's self at the current moment, but nothing in between is." To that, though, as Furcas says, one can be selfish according to similarity of pattern rather than ontologically privileged continuity.

Comment author: torekp 02 April 2010 02:22:57PM 1 point [-]

Or one could be selfish according to a non-fundamental, ontologically reducible continuity. At least, I don't see why not. Has anyone offered an argument for pattern over process?

randallsquared has it dead right, I think.

Comment author: Z_M_Davis 27 September 2009 04:41:57AM 3 points [-]

I don't throw myself off cliffs for very roughly the same reason I don't throw other people off cliffs.

Comment author: SilasBarta 28 September 2009 04:21:45PM 7 points [-]

And for the same reason you buy things for yourself more often than for other people? And for the same reason you (probably) prefer that someone else fall off a cliff rather than you?

Comment author: Z_M_Davis 28 September 2009 08:35:31PM 2 points [-]

I was trying to be cute.

Comment author: SilasBarta 28 September 2009 09:13:34PM 3 points [-]

Considering that your cute comment was consistent with your other comments in this discussion, I think I can be forgiven for thinking you were serious.

Actually, which of your other comments here are just being cute?

Comment author: Z_M_Davis 29 September 2009 12:23:21AM 4 points [-]

Right, so of course I'm rather selfish in the sense of valuing things-like-myself, and so of course I buy more things for myself than I do for random strangers, and so forth. But I also know that I'm not ontologically fundamental; I'm just a conjunction of traits that can be shared by other observers to various degrees. So "I don't throw myself off cliffs for very roughly the same reason I don't throw other people off cliffs" is this humorously terse and indirect way of saying that identity is a scalar, not a binary attribute. (Notice that I said "very roughly the same reason" and not "exactly the same reason"; that was intentional.)

Comment author: SilasBarta 29 September 2009 12:46:08AM *  -1 points [-]

And ... you expected everyone else to get that out of your cute comment?

You know, sometimes you just have to throw in the towel and say, "Oops. I goofed."

ETA: I'm sure that downmod was because this comment was truly unhelpful to the discussion, rather than because it made someone look bad.

Comment author: Z_M_Davis 29 September 2009 02:37:22AM *  2 points [-]

Oops. I goofed.

Comment author: Douglas_Knight 29 September 2009 02:50:11AM *  3 points [-]

I am sad to see this comment. Perhaps you were mistaken in how clear the comment was to how broad an audience, but I think the original comment was valuable and that we lose a lot of our ability to communicate if we are too careful.

Comment author: Alicorn 29 September 2009 01:00:47AM *  1 point [-]

Is there any chance that you will soon mature / calm down / whatever it is you need to do to stop being so hostile, so frequently? This is only the latest example of you coming down on people with the utmost contempt for fairly minor offenses, if they're offenses at all. It looks increasingly like you think anyone who conducts themselves in the comments differently than you prefer ought to be beheaded or something. It's really unpleasant to read, and I don't think it's likely to be effective in getting people to adopt your standards of behavior. (I can think of few posters I'm less motivated to emulate than you, as one data point.)

Edit: I downvoted the parent of this comment because I would like to see fewer comments that resemble it.

Comment author: SilasBarta 29 September 2009 01:29:09AM *  0 points [-]

I hate to play tu quoque, but it's rather strange of you to make this criticism considering that just a few months ago you gave a long list of very confining restrictions you wanted on commenters, enforced by bannings. Despite their best efforts, no one could discern the pattern behind what made something beyond-the-pale offensive, so you were effectively asking for unchecked, arbitrary authority to remove comments you don't like.

You even went so far as to ask to deputize your own hand-picked cadre of posters "with their heads on right" to assist in classifying speech as unacceptable!

Yes, Alicorn, I've been very critical of those who claim objectivity in modding people they're flaming, but I don't think I've ever demanded the sort of arbitrary authority over the forum that you feel entitled to.

I will gladly make my criticisms more polite in the future, but I'm not going to apologize for having lower karma than if I abused the voting system the way some of you seem to.

And in the meantime, perhaps you could make it a habit to check whether the criticisms you make of others could apply to yourself. I'm not asking that you be perfect or unassailable in this respect. I'm not even asking that you try to adhere to your own standards. I just ask that you check whether you're living up to them.

Edit: I didn't downvote the parent of this comment because I'm not petty like that.

Comment author: SilasBarta 29 September 2009 01:55:12AM -2 points [-]

I downvoted the parent of this comment because I would like to see fewer comments that resemble it.

Yes, you would like to see fewer comments that have "SilasBarta" at the top.

Comment author: orthonormal 28 September 2009 04:40:06PM 0 points [-]

The SIA predicts that you will say "no".

Comment author: Nominull 27 September 2009 02:51:25AM 0 points [-]

A lot of people, especially religious people, equate lack of belief in a fundamental meaning of life with throwing oneself off cliffs. Eliezer is committing the same sort of mistake.

Comment author: loqi 27 September 2009 12:16:44PM 3 points [-]

No, I think he's just pointing out that the common intuitions behind anticipatory fear are grossly violated by the third horn.

I'd like to see you chew this bullet a bit more, so try this version. You are to be split (copied once). One of you is randomly chosen to wake up in a red room and be tortured for 50 years, while the other wakes up in a green room and suffers a mere dust speck. Ten minutes will pass for both copies before the torture or specking commences.

How much do you fear being tortured before the split? Does this level of fear go up/down when you wake up in a red/green room? To accept the third horn seems to imply that you should feel no relief upon waking in the green room.

Comment author: Nick_Tarleton 27 September 2009 04:48:47PM 2 points [-]

To accept the third horn seems to imply that you should feel no relief upon waking in the green room.

Only if you assume feelings of relief should bind to reality (or reality+preferences) in a particular way.

Comment author: loqi 27 September 2009 07:59:37PM 3 points [-]

Good point, I should have phrased that differently: "To accept the third horn seems to imply that any relief you feel upon waking in the green room is just 'legacy' human intuition, rather than any rational expectation of having avoided future suffering."

Comment author: gwern 10 October 2009 03:26:15AM 1 point [-]

You know, your example is actually making that horn look more attractive: replace the torture of the person with '50000 utilities subtracted from the cosmos', etc., and then it's obvious that the green room is no grounds for relief since the -50000 is still a fact. More narrowly, if you valued other persons equal to yourself, then the green room is definitely no cause for relief.

We could figure out how much you value other people by varying how bad the torture is, and maybe adding a deal where if the green-room person will flip a fair coin (heads, the punishment is swapped; tails, no change), the torture is lessened by n. If you value the copy equal to yourself, you'll be willing to swap for any difference right down to 1, since if it's tails, there's no loss or gain, but if it's heads, there's a profit of n.

Now, of course even if the copy is identical to yourself, and even if we postulate that somehow the 2 minds haven't diverged (we could do this by making the coinflip deal contingent on being the tortured one - 2 identical rooms, neither of which knows whether they are the tortured one; by making it contingent, there's no risk in not taking the bet), I think essentially no human would take the coinflip for just +1 - they would only take it if there was a major amelioration of the torture. Why? Because pain is so much realer and overriding to us, which is a fact about us and not about agents we can imagine.

(If you're not convinced, replace the punishments with rewards and modify the bet to increase the reward but possibly switch it to the other fellow; and imagine a parallel series of experiments being run with rational agents who don't have pain/greed. After a lot of experiments, who will have more money?)
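gwern's deal can be put into a toy expected-disutility calculation. All numbers here are illustrative, and the weight `w` on the copy's suffering is a hypothetical parameter I'm introducing (w = 1 means you value the copy exactly as yourself), not anything gwern specifies:

```python
# Toy model of gwern's coin-flip deal, from the green-room (dust-speck)
# copy's perspective, ignoring the speck itself. `w` in [0, 1] is a
# hypothetical weight on the other copy's suffering.
def expected_disutility(accept_flip, w, torture=50_000, n=1):
    if not accept_flip:
        return w * torture           # the red-room copy suffers it all
    heads = torture - n              # swap: you take the reduced torture
    tails = w * torture              # no change
    return (heads + tails) / 2       # fair coin

# Valuing the copy fully, accepting is better for any reduction n >= 1:
assert expected_disutility(True, w=1, n=1) < expected_disutility(False, w=1)
# Valuing the copy at zero, accepting is strictly worse:
assert expected_disutility(True, w=0, n=1) > expected_disutility(False, w=0)
```

This matches the logic in the comment: with w = 1 the flip is a sure (small) profit in expectation, while a purely selfish agent refuses it for any n short of a major amelioration.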

Comment author: loqi 10 October 2009 08:38:59PM 0 points [-]

More narrowly, if you valued other persons equal to yourself, then the green room is definitely no cause for relief.

Yes, and this hypothesis can even be weakened a bit, since the other persons involved are nearly identical to you. All it takes is a sufficiently "fuzzy" sense of self.

Now, of course even if the copy is identical to yourself, and even if we postulate that somehow the 2 minds haven't diverged [...] I think essentially no human would take the coinflip for just +1 - they would only take it if there was a major amelioration of the torture.

To clarify what you mean by "haven't diverged"... does that include the offer of the flip? E.g., both receive the offer, but only one of the responses "counts"? Because I can't imagine not taking the flip if I knew I was in such a situation... my anticipation would be cleanly split between both outcomes due to indexical uncertainty. It's a more complicated question once I know which room I'm in.

Comment author: gwern 10 October 2009 09:17:42PM 0 points [-]

To clarify what you mean by "haven't diverged"... does that include the offer of the flip? E.g., both receive the offer, but only one of the responses "counts"? Because I can't imagine not taking the flip if I knew I was in such a situation... my anticipation would be cleanly split between both outcomes due to indexical uncertainty. It's a more complicated question once I know which room I'm in.

Well, maybe I wasn't clear. I'm imagining that there are 2 green rooms, say, however, one room has been secretly picked out for the torture and the other gets the dustspeck.

Each person now is made the offer: if you flip this coin, and you are not the torture room, the torture will be reduced by n and the room tortured may be swapped if the coin came up heads; however, if you are the torture room, the coin flip does nothing.

Since the minds are the same, in the same circumstances, with the same offer, we don't need to worry about what happens if the coins fall differently or if one accepts and the other rejects. The logic they should follow is: if I am not the other, then by taking the coin flip I am doing myself a disservice by risking torture, and I gain under no circumstance and so should never take the bet; but if I am the other as well, then I lose under no circumstance so I should always take the bet.

(I wonder if I am just very obtusely reinventing the prisoner's dilemma or newcomb's paradox here, or if by making the 2 copies identical I've destroyed an important asymmetry. As you say, if you don't know whether "you" have been spared torture, then maybe the bet does nothing interesting.)

Comment author: loqi 11 October 2009 05:24:12AM *  0 points [-]

The logic they should follow is: if I am not the other, then by taking the coin flip I am doing myself a disservice by risking torture, and I gain under no circumstance and so should never take the bet; but if I am the other as well, then I lose under no circumstance so I should always take the bet.

I'm not sure what "not being the other" means here, really. There may be two underlying physical processes, but they're only giving rise to one stream of experience. From that stream's perspective, its future is split evenly between two possibilities, so accepting the bet strictly dominates. Isn't this just straightforward utility maximization?

The reason the question becomes more complicated if the minds diverge is that the concept of "self" must be examined to see how the agent weights the experiences of an extremely similar process in its utility function. It's sort of a question of which is more defining: past or future. A purely forward-looking agent says "ain't my future" and evaluates the copy's experiences as those of a stranger. A purely backward-looking agent says "shares virtually my entire past" and evaluates the copy's experiences as though they were his own. This all assumes some coherent concept of "selfishness" - clearly a purely altruistic agent would take the flip.

I wonder if I am just very obtusely reinventing the prisoner's dilemma or newcomb's paradox here, or if by making the 2 copies identical I've destroyed an important asymmetry.

The identical copies scenario is a prisoner's dilemma where you make one decision for both sides, and then get randomly assigned to a side. It's just plain crazy to defect in a degenerate prisoner's dilemma against yourself. I think this does destroy an important asymmetry - in the divergent scenario, the green-room agent knows that only his decision counts.

Speaking for my own values, I'm still thoroughly confused by the divergent scenario. I'd probably be selfish enough not to take the flip for a stranger, but I'd be genuinely unsure of what to do if it was basically "me" in the red room.

Comment author: randallsquared 28 September 2009 12:00:57PM 6 points [-]

As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing

Here's the problem, as far as I can see. You shouldn't feel a thing, but that would also be true if none of you ever woke up again. "I won't notice being dead" is not an argument that you won't be dead, so lottery winners should anticipate never waking up again, though they won't experience it (we don't anticipate living forever in the factual world, even though no one ever notices being dead).

I'm sure there's some reason this is considered invalid, since quantum suicide is looked on so favorably around here. :)

Comment author: abramdemski 28 September 2009 08:26:49PM 7 points [-]

The reason is simply that, in the multiple worlds interpretation, we do survive-- we just also die. If we ask "Which of the two will I experience?" then it seems totally valid to argue "I won't experience being dead."

Comment author: randallsquared 01 October 2009 01:08:35PM 1 point [-]

So it basically comes back to pattern-as-identity instead of process-as-identity. Those of me who survive won't experience being dead. I think you can reach my conclusions by summing utility across measure, though, which would make an abrupt decrease in measure equivalent to any other mass death.

Comment author: Nubulous 27 September 2009 07:43:10PM *  6 points [-]

When you wake up, you will almost certainly have won (a trillionth of the prize). The subsequent destruction of winners (sort of - see below) reduces your probability of being the surviving winner back to one in a billion.

Merging N people into 1 is the destruction of N-1 people - the process may be symmetrical but each of the N can only contribute 1/N of themself to the outcome.

The idea of being (N-1)/N th killed may seem a little odd at first, but less so if you compare it to the case where half of one person's brain is merged with half of a different person's (and the leftovers discarded).

EDIT: Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.

Comment author: PlatypusNinja 02 October 2009 12:25:49AM 2 points [-]

Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.

Suppose that, instead of winning the lottery, you want your friend to win the lottery. (Or you want your random number generator to crack someone's encryption key, or you want a meteor to fall on your hated enemy, etc.) Then each of the trillion people would experience the full satisfaction from whatever random result happened.

Comment author: snarles 20 May 2011 07:40:18PM *  1 point [-]

This.

How does Yudkowsky's careless statement "Just as computer programs or brains can split, they ought to be able to merge" not immediately light up as the weakest link of the entire post?

If you think merging ought to work, then why not also think that quantum suicide ought to work?

Comment author: JamesAndrix 29 September 2009 04:43:01AM 1 point [-]

In the case where the people are computer programs, none of that works.

Comment author: Nubulous 29 September 2009 07:34:41PM 0 points [-]

If you mean that a quantitative merge on a digital computer is generally impossible, you may be right. But the example I gave suggests that merging is death in the general case, and is presumably so even for identical merges, which can be done on a computer.

Comment author: JamesAndrix 29 September 2009 10:06:35PM 1 point [-]

I fail to see why that is the general case.

For that matter, I fail to see why losing some (many, most) of my atoms and having them be quickly replaced by atoms doing the exact same job should be viewed as me dying at all.

Comment author: Nubulous 01 October 2009 05:27:27AM *  3 points [-]

I fail to see why that is the general case.

If you have two people to start with, and one when you've finished, without any further stipulation about which people they are, then you have lost a person somewhere. To come to a different conclusion would require an additional rule, which is why it's the general case.
That additional rule would have to specify that a duplicate doesn't count as a second person. But since that duplicate could subsequently go on to have a separate different life of its own, the grounds for denying it personhood seem quite weak.

For that matter, I fail to see why losing some(many, most) of my atoms and having them be quickly replaced by atoms doing the exact same job should be viewed as me dying at all.

It's not dying in the sense of there no longer being a you, but it is still dying in the sense of there being fewer of you.
To take the example of you being merged with someone, those atoms you lose, together with the ones you don't take from the other person, make enough atoms, doing the right jobs, to make a whole new person. In the symmetrical case, a second "you". That "you" could have gone on to live its own life, but now won't. Hence a "you" has died in the process.

In other words, merge is equivalent to "swap pieces then kill".
The above looks as though it will work just as well with bits, or the physical representation of bits, rather than atoms (for the symmetrical case).

Comment author: JamesAndrix 01 October 2009 06:39:01AM 2 points [-]

If a person were running on an inefficiently designed computer with transistors and wires much larger than they needed to be, it would be possible to peel away and discard (perhaps) half of the atoms in the computer without affecting its operation or the person. This would be much like ebborian reproduction, but merely a shedding of atoms.

In any sufficiently large information processing device, there are two or more sets of atoms (or whatever it's made of) processing the same information, such that they could operate independently of each other if they weren't spatially intertwined.

Why are they one person when spatially intertwined, but two people when they are apart? That they 'could have' gone on independently is a counterfactual in the situation that they are both receiving the same inputs. You 'could' be peeled apart into two people, but both halves of your parts are currently still making up 1 person.

Personhood is in the pattern. Not the atoms or memory or whatever. There's only another person when there is another sufficiently different pattern.

merge is equivalent to 'spatially or logically reintegrate, then shed atoms or memory allocation as desired'

Comment author: RobinHanson 27 September 2009 11:11:32PM 5 points [-]

Oddly, I feel myself willing to bite all three bullets. Maybe I am too willing to bite bullets? There is a meaningful sense in which I can anticipate myself being one of the future people who will remember being me, though perhaps there isn't a meaningful way to talk about which of those many people I will be; I will be all of them.

Comment author: DanielLC 17 February 2011 04:57:02PM 2 points [-]

I don't think you have the third horn quite right. It's not that you're equally likely to wake up as Britney Spears. It's that the only meaningful idea of "you" is the one that exists right now. Your subjective anticipation of winning the lottery in five minutes should be zero. You clearly aren't winning the lottery, nor are you around in five minutes.

Also, isn't that more of a quintlemma?

Comment author: PlatypusNinja 28 September 2009 09:04:05PM 2 points [-]

I deny that increasing the number of physical copies increases the weight of an experience. If I create N copies of myself, there is still just one of me, plus N other agents running my decision-making algorithms. If I then merge all N copies back into myself, the resulting composite contains the utility of each copy weighted by 1/(N+1).

My feeling about the Boltzmann Brain is: I cheerfully admit that there is some chance that my experience has been produced by a random experience generator. However, in those cases, nothing I do matters anyway. Thus I don't give them any weight in my decision-making algorithm.

This solution still works correctly if the N copies of me have slightly different experiences and then forget them.

Comment author: Douglas_Knight 27 September 2009 03:28:18AM 2 points [-]

The third option seems awfully close to the second. In the second, you anticipate winning the lottery for a few seconds, and then going back to not. In the third, the universe anticipates winning the lottery for a few seconds, and then going back to Britney.

Comment author: Vladimir_Nesov 27 September 2009 10:00:53AM *  5 points [-]

The problem is that copying and merging is not as harmless as it seems. You are basically doing invasive surgery on the mind, but because it's performed using intuitively "non-invasive" operations, it looks harmless. If, for example, you replaced the procedure with rewriting "subjective probability" by directly modifying the brain, the fact that you'd have different "subjective probability" as a result won't be surprising.

Thus, on one hand, there is an intuition that the described procedure doesn't damage the brain, and on the other the intuition about what subjective probability should look like in an undamaged brain, no matter in what form this outcome is delivered (that is, probability is always the same, you can just learn about it in different ways, and this experiment is one of them). The problem is that the experiment is not an instance of normal experience to which one can generalize the rule that subjective probability works fine, but an instance of arbitrary modification of the brain, from which you can expect anything.

Assuming that the experiment with copying/merging doesn't damage the brain, the resulting subjective probability must be correct, and so we get a perception of modifying the correct subjective probability arbitrarily.

Thought experiments with doing strange things to decision-theoretic agents are only valid if the agents have an idea about what kind of situation they are in, and so can try to find a good way out. Anything less, and it's just phenomenology: throw a rat in magma and see how it burns. Human intuitions about subjective expectation are optimized for agents who don't get copied or merged.

Comment author: Ishaan 26 September 2013 04:20:43AM *  2 points [-]

And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears. Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.

There are no threads connecting subjective experiences. There are simply different subjective experiences. Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.

I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.

I do bite the bullet, but I think you are wrong about the implications of biting this bullet.

Eliezer Yudkowsky cares about what happens to Eliezer Yudkowsky+5 seconds, in a way that he doesn't care about what happens to Ishaan+5 or Britney+5.

E+5 holds a special place in E's utility function. To E, universes in which E+5 is happy are vastly superior to universes in which E+5 is unhappy or dead.

It makes no difference to E that E+5 is not identical to E. E still cares about E+5, and E aught not need any magic subjective thread connecting E and E+5 to justify this preference. It's not incoherent to prefer a future where certain entities that are causally connected to you continue to thrive - That's all "selfishness" really means.

E anticipates the universe that E+5 will experience. E+5 will carry the memory of this anticipation. If there are lotteries and clones, E will anticipate a universe with a 1% chance of a bunch of E+5 clones winning the lottery and a 99% chance of no E+5 clones winning the lottery. Anticipation is expectation concerning what you+5 will experience in the future. You're basically imagining your future self and experiencing a specialized and extreme version of "empathy". It doesn't matter whether or not there is a magical thread tying you to your future self. If you strip the emotional connotation on "anticipation" and just call it "prediction", you can even predict what happens after you die (it's just that there is no future version of you to "empathize" with anymore)

There are no souls. That holds spatially and temporally.

Comment author: kim0 28 September 2009 07:04:36AM 1 point [-]

I have an Othello/Reversi playing program.

I tried making it better by applying probabilistic statistics to the game tree, quite like anthropic reasoning. It then became quite bad at playing.

Ordinary minimax with A-B did very well.

Game algorithms that ignore density of states in the game tree, and focus only on minimaxing, do much better. This is a close analogy to the experience trees of Eliezer, and therefore a hint that anthropic reasoning here has some kind of error.

Kim0
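The contrast kim0 draws can be sketched with a toy two-ply game tree. The move names and leaf values below are purely hypothetical, chosen so the two evaluation rules disagree:

```python
# Minimax assumes the opponent singles out the best reply; density-of-states
# averaging treats the opponent's replies as equiprobable.
tree = {"move A": [100, -1],   # great if the opponent blunders, bad otherwise
        "move B": [2, 2]}      # safe either way

minimax_choice = max(tree, key=lambda m: min(tree[m]))
average_choice = max(tree, key=lambda m: sum(tree[m]) / len(tree[m]))
```

Here minimax picks "move B" (guaranteed 2), while averaging picks "move A" (mean 49.5) and, against a minimizing opponent, walks into the -1 leaf — which is rwallace's point in the reply: averaging over states is only appropriate when the branch taken is genuinely random rather than adversarially chosen.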

Comment author: rwallace 28 September 2009 01:47:39PM 2 points [-]

That's because those games are nonrandom, and your opponent can be expected to single out the best move.

Algorithms for games like backgammon and poker that have a random element, do pay attention to density of states.

(Oddly enough, so nowadays do the best known algorithms for Go, which surprised almost everyone in the field when this discovery was made. Intuitively, this can be seen as being because the game tree of Go is too large and complex for exhaustive search to work.)

Comment author: timtyler 27 September 2009 08:41:49AM *  1 point [-]

The "second horn" seems to be phrased incorrectly. It says:

"you can coherently anticipate winning the lottery after five seconds, anticipate the experience of having lost the lottery after fifteen seconds, and anticipate that once you experience winning the lottery you will experience having still won it ten seconds later."

That's not really right - the fate of most of those agents that experience a win of the lottery is to be snuffed out of existence. They don't actually win the lottery - and they don't experience having won it eleven seconds later either. The chances of the lottery staying won after it has been experienced as being won are slender.

Either that "horn" needs rephrasing - or another "horn" needs to be created with the correct answer on it.

Comment author: Johnicholas 27 September 2009 01:38:09PM 2 points [-]

If I understand the proposed merging procedure correctly, the procedure treats the trillion observers who experience a win of the lottery symmetrically. None of them are "snuffed" any more than any other. For each of the observers, there is a continuous space-time-causality "worm" connecting to the future self who spends the money.

This space-time-causality worm is supposed to be as analogous as possible to the one that connects any ordinary moment in your life to your future self. The difference is that this one merges (symmetrically) with almost a trillion others, all identical.

Comment author: timtyler 27 September 2009 06:01:12PM 0 points [-]

I see, I think. I can't help wondering what the merge procedure does with any flipped bits in the diff, though. Anyway, horn 2 now seems OK - I think it describes the situation.

Comment author: timtyler 25 November 2009 10:32:59PM -1 points [-]

Rereading the comments on this thread, the problem is more subtle than I had thought - and I had better retract the above comment. I am inclined towards the idea that copying doesn't really alter the pattern - but that kind of anthropic reasoning seems challenging to properly formalise under the given circumstances.

Comment author: Eliezer_Yudkowsky 27 September 2009 04:56:10PM 0 points [-]

Yup! If you can't do the merge without killing people, then the trilemma is dissolved.

Comment author: Nominull 27 September 2009 02:47:04AM 1 point [-]

I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.

It strikes me that this is a problem of comparing things at different levels of meta. You are talking about your "motivations" as if they were things that you could freely choose based on your a priori determinations about how one well and truly ought to act. You sound, forgive me for saying this, I do respect you deeply, almost Kantian here.

The underlying basis for your ethical system, or even further, your motivational system, does not lie in this abstract observer that is Eliezer today and Britney Spears tomorrow. I think I'm ripping this off of some videogame I've played, but think of the brain as a container, and this "observerstuff" (dare I call it a soul?) as water poured into the container. Different containers have different shapes and shape the water differently, even though it is the same water. But the key point is, motivations are a property of the container, not of the water, and their referents are containers, not water. Eliezer-shaped water cares about what happens to the Eliezer-container, not what happens to the water in it. That's just not how the container is built.

Comment author: jschulter 02 March 2011 11:02:55PM *  1 point [-]

The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing "You won!"

As I see it, the odds of being any one of those trillion "me"s in 5 seconds is 10^21 to one (one trillion times one billion). Since there are a trillion ways for me to be one of those, the total probability of experiencing winning is still a billion to one. To be more formal:

P("experiencing winning") = sum over n of P("winning" | "being me #n") * P("being me #n") = sum over n of P("winning" and "being me #n") = 10^12 * 10^-21 = 10^-9, since "being me #n" partitions the space.

Overall this means I:

  • anticipate not winning at 5 sec.

  • anticipate not winning at 15 sec.

  • don't have super-psychic-anthropic powers

  • don't see why anyone has an issue with this

Checking consistency just in case:

p("experience win after 15s") = p("experience win after 15s" | "experience win after 5s") * p("experience win after 5s") + p("experience win after 15s" | "experience not-win after 5s") * p("experience not-win after 5s").

p("experience win after 15s") = (~1)*(10^-9) + (~0)*(1 - 10^-9) = ~10^-9 = ~p("experience win after 5s")

Additionally, I should note that the total amount of "people who are me who experience winning" will be 1 trillion at 5 sec. and exactly 1 at 15 sec. This is because those trillion "me"s must all have identical experiences for merging to work, meaning the merged copy only has one set of consistent memories of having won the lottery. I don't see this as a problem, honestly.
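jschulter's sum can be checked mechanically. A sketch using exact rationals; note that the even split of the winning branch's measure across the trillion copies is the comment's premise, not an established fact:

```python
from fractions import Fraction

n_copies = 10**12                  # copies made in the winning branch
p_win = Fraction(1, 10**9)         # ordinary lottery odds

# If the copies split the branch's measure evenly, any one particular
# copy gets probability 10^-21:
p_being_copy_n = p_win / n_copies

# Summing over the trillion-way partition recovers the original odds:
p_experience_win = n_copies * p_being_copy_n
assert p_experience_win == p_win   # still one in a billion
```

On this accounting the copying buys no anthropic leverage at all, which is exactly the comment's conclusion.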

Comment author: jacob_cannell 12 December 2011 09:21:14PM 0 points [-]

I have nearly the same viewpoint and was surprised to find what seems to me to be the obvious solution so far down this thread.

One little nitpick:

Additionally, I should note that the total amount of "people who are me who experience winning" will be 1 trillion at 5 sec. and exactly 1 at 15 sec.

From your analysis, I think you mean you expect there is a 1 in a billion chance there will be 1 trillion "people who are me who experience winning" at 5 sec.

Comment author: jimrandomh 12 March 2010 09:51:11PM 1 point [-]

After thinking about the Anthropic Trilemma for a while, I've come up with an alternative resolution which I think is better than, or at least simpler than, any of the other resolutions. Rather than try to construct a path for consciousness to follow inductively forwards through time, start at the end and go backwards: from the set of times at which an entity I consider to be a copy of me dies, choose one at random weighted by quantum measure, then choose uniformly at random from all paths ending there.

The trick is that this means that while copying your mind increases the probability of ending up in the universe where you're copied, merging back together cancels it out perfectly. This means that you can send information back in time and influence your path through the universe by self-copying, but only log(n) bits of information or influence for n copies that you can't get rid of without probably dying.

Comment author: Baughn 08 November 2009 10:30:00AM *  1 point [-]

I'm coming in a bit late, and not reading the rest of the posts, but I felt I had to comment on the third horn of the trilemma, as it's an option I've been giving a lot of thought.

I managed to independently invent it (with roughly the same reasoning) back in high school, though I haven't managed to convince myself of it or, for that matter, to explain it to anyone else. Your explanation is better, and I'll be borrowing it.

At any rate. One of your objections seems to be "...to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".

For that to make sense would require that, while you can anticipate subjective experiences from just about anywhere, you would only anticipate experiencing a limited subset of them; 1/N of the total, where N represents... what? The total number of humans, and why? Of souls?

Things get simpler if you set N to 1. Then your anticipation would be to experience Eliezer+5, Britney+5 and Cliffdiver+5, as well as every other subjective experience available for experiencing; sidestepping the cliffdiver problem, and more importantly removing any need to explain the value of N.

There's still the alternate option of it being infinity. I feel relatively certain that this is not the case, but I'm not sure this isn't simply wishful thinking. Help?

Comment author: smoofra 28 September 2009 09:23:44PM 1 point [-]

I figured it out! Roger Penrose is right about the nature of the brain!

just kidding.

Comment author: rwallace 27 September 2009 02:53:50PM 1 point [-]

I don't know the answer either. My best guess is that the question turns out to involve comparing incommensurable things, but I haven't pinned down which things. (As I remarked previously, I think the best answer for policy purposes is to just optimize for total utility, but that doesn't answer the question about subjective experience.)

But one line of attack that occurs to me is the mysterious nature of the Born probabilities.

Suppose they are not fundamental, suppose the ultimate layer of physics -- maybe superstrings, maybe something else -- generates the various outcomes in its own way, like a computer (digital or analog) forking processes...

and we subjectively experience outcomes according to the Born probabilities because this is the correct answer to the question about subjective experience probability.

Is there a way to test that conjecture? Is there a way to figure out what the consequences would be if it were true, or if it were false?

Comment author: Eliezer_Yudkowsky 27 September 2009 04:58:29PM 1 point [-]

Suppose they are not fundamental, suppose the ultimate layer of physics -- maybe superstrings, maybe something else -- generates the various outcomes in its own way, like a computer (digital or analog) forking processes...

and we subjectively experience outcomes according to the Born probabilities because this is the correct answer to the question about subjective experience probability.

That's indeed what we should all be hoping for. But what possible set of "axioms" for subjective experience - never mind what possible underlying physics - could correspond to the Born probabilities, while solving the computer-processor trilemma as well?

Comment author: rwallace 27 September 2009 05:12:54PM 2 points [-]

Well... following this line of thought, we should expect that the underlying physics is not special, because any physics that satisfies certain generic properties will lead to subjective experience of the Born probabilities.

Suppose we can therefore without loss of generality take the underlying physics to be equivalent to a digital computer programmed in a straightforward way, so that the quantum and computer trilemmas are equivalent.

Is there any set of axioms that will lead (setting aside other intuitions for the moment) to subjective experience of the Born probabilities in the case where we are running on a computer and therefore do know the underlying physics? If there is, that would constitute evidence for the truth of those axioms even if they are otherwise counterintuitive; if we can somehow show that there is not, that would constitute evidence that this line of thought is barking up the wrong tree.

Comment author: Psy-Kosh 27 September 2009 11:20:45PM 1 point [-]

we should expect that the underlying physics is not special, because any physics that satisfies certain generic properties will lead to subjective experience of the Born probabilities.

Elaborate on that bit please? Thanks.

Comment author: rwallace 28 September 2009 12:45:07AM 0 points [-]

Well basically, we start off with the claim (which I can't confirm of my own knowledge, but have no reason to doubt) that the Born rule has certain special properties, as explained in the original post.

We observe that the Born rule seems to be empirically true in our universe.

We would like an explanation as to why our universe exhibits a rule with special properties.

Consider the form this explanation must take. It can't be because the Born rule is encoded into the ultimate laws of physics, because that would only push the mystery back a few steps. It should be a logical conclusion that we would observe the Born rule given any underlying physics (within reason).

Of course there is far too much armchair handwaving here to constitute proof, but I think it at least constitutes an interesting conjecture.

Comment author: Psy-Kosh 28 September 2009 01:06:28AM 0 points [-]

Well, even if it turns out that there're special properties of our physics that are required to produce the Born rule, I'd say that mystery would be a different, well, kind of mystery. Right now it's a bit of "wtf? where is this bizarro subjective nonlinearity etc coming from? and it seems like something 'extra' tacked onto the physics"

If we could reduce that to "these specific physical laws give rise to it", then even though we'd still have "why these laws and not others", it would, in my view, be an improvement over the situation in which we seem to have an additional law that seems almost impossible to even meaningfully phrase without invoking subjective experience.

I do agree though that given the special properties of the rule, any special properties in the underlying physics that are needed to give rise to the rule should be in some sense "non arbitrary"... that is, should look like, well, like a nonarbitrarily selected physical rule.

Comment author: Eliezer_Yudkowsky 27 September 2009 05:38:11PM 0 points [-]

Sounds like a right question to me. Got an answer?

A related problem: If we allow unbounded computations, then, when we try to add up copies, we can end up with different limiting proportions of copies depending on how we approach t -> infinity; and we can even have algorithms for creating copies such that their proportions fail to converge. (1 of A, 3 of B, 9 of A, 27 of B, etc.) So then either it is a metaphysical necessity that reality be finite, because otherwise our laws will fail to give correct answers; or the True Rules must be such as to give definitive answers in such a situation.
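
A toy Python sketch (mine, not from the comment; the batch sizes 3^k are just the "1 of A, 3 of B, 9 of A, 27 of B" pattern) shows concretely how the running proportion oscillates instead of converging:

```python
def proportion_of_a(rounds):
    """Running fraction of A-copies after creating batches of size 3**k,
    alternating A (even k) and B (odd k): 1 of A, 3 of B, 9 of A, ..."""
    a_count = total = 0
    history = []
    for k in range(rounds):
        batch = 3 ** k
        total += batch
        if k % 2 == 0:  # even-numbered rounds create copies of A
            a_count += batch
        history.append(a_count / total)
    return history

props = proportion_of_a(12)
# After each A-round the fraction sits near 3/4; after each B-round it
# drops back to 1/4, so the sequence of proportions never converges.
```
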

Comment author: rwallace 27 September 2009 06:05:52PM 1 point [-]

I'm afraid I'm not familiar enough with the Born probabilities to know how to approach an answer -- oh, I've been able to quote the definition about squared amplitudes since I was a wee lad, but I've never had occasion to actually work with them, so I don't have any intuitive feel about their implications.

As for the problem of infinity, you're right of course, though there are other ways for that to arise too -- for example, if the underlying physics is analog rather than digital. Which suggests it can't be fiated away. I don't know what the solution is, but it reminds me of the way cardinality says all shapes contain the same number of points, so it was necessary to invent measure to justify the ability to do geometry.

Comment author: Psy-Kosh 27 September 2009 11:25:56PM 0 points [-]

Deeply fundamentally analog physics, ie, infinite detail, would just be another form of infinity, wouldn't it? So it's a variation of the same problem of "what happens to all this when there's an infinity involved?"

Comment author: bogus 27 September 2009 11:38:23PM 1 point [-]

Deeply fundamentally analog physics, ie, infinite detail,

To the best of our understanding, there's no such thing as "infinite detail" in physics. Physical information is limited by the Bekenstein bound.

Comment author: Psy-Kosh 28 September 2009 12:03:02AM 0 points [-]

Sorry, I may have been unclear. I didn't mean to make a claim that physics actually does have this property, but rather I was saying that if physics did have this property, it would just be another instance of an infinity, rather than an entirely novel source for the problem mentioned.

(Also, I'm unclear on the BB, if it takes into account possible future tech that may be able to manipulate the geometry of spacetime to some extent. ie, if we can do GR hacking, would that affect the bound or are the limits of that effectively already precomputed into that?)

Comment author: rwallace 28 September 2009 12:39:14AM 0 points [-]

Yes, that is my position on it.

Comment author: MichaelHoward 27 September 2009 12:11:40PM 1 point [-]

In quantum copying and merging, every "branch" operation preserves the total measure of the original branch,

Maybe quantum branching operations don't make new copies, but instead represent already-existing, identical copies "becoming" no longer identical?

In the computer program analogy: instead of having one program at time t and n slightly different versions at time t+1, start out with n copies already existing (but identical) at time t, and have each one change in the branching. If you expect a t+2, you need to start with at least n^2 copies.

(That may mean a lot more copies of everything than would otherwise be expected even under many worlds, but even if it's enough to give this diabolical monster bed-wetting nightmares, by the Occam's razor that works for predicting physical laws, that's absolutely fine).

Come to think of it... if this interpretation isn't true...

or for that matter, even if this is true but it isn't true that someone who runs redundantly on three processors gets three times as much weight as someone who runs on one processor...

then wouldn't we be vastly likely to be experiencing the last instant of experienceable existence in the Universe, because that's where the vast majority of distinct observers would be? Omega-point simulation hypothesis anyone? :-)

Comment author: Psy-Kosh 27 September 2009 02:36:30PM 1 point [-]

But where does all the quantum interference stuff come from then?

Comment author: MichaelHoward 27 September 2009 04:37:36PM 0 points [-]

I'm not trying to resolve any quantum interference mysteries with the above, merely anthropic ones. I have absolutely no idea where the born probabilities come from.

Comment author: Psy-Kosh 27 September 2009 04:39:28PM 1 point [-]

Sorry, I was unclear. I meant "if what you say is the correct explanation, then near as I can tell, there shouldn't be anything resembling quantum interference. In your model, where is there room for things to 'cancel out' if copies just keep multiplying like that?"

Or did I misunderstand what you were saying?

Comment author: MichaelHoward 27 September 2009 09:06:35PM 2 points [-]

In your model, where is there room for things to 'cancel out' if copies just keep multiplying like that?

Ah, sorry if I wasn't clear. The copies wouldn't multiply. In the computer program analogy, you'd have the same number of programs at every time step. So instead of doing this...

Step1: "Program".

Step2: "Program0", "Program1".

Step3: "Program00", "Program01", "Program10", "Program11".

You do this...

Step1: "Program", "Program", "Program", "Program", ...

Step2: "Program0", "Program1", "Program0", "Program1", ...

Step3: "Program00", "Program01", "Program10", "Program11", ...

For the sake of simplicity this is the same algorithm, but with part of the list not used when working out the next step. If our universe did this, surely at any point in time it would produce exactly the same experimental results as if it didn't.

If we're not experiencing the last instant of experienceable existence, I think that may imply that the second model is closer to the truth, and also that someone who runs redundantly on three processors gets three times as much weight as someone who runs on one processor, for the reasons above.
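
For concreteness, here is a minimal Python sketch (mine, hypothetical) of the two schemes above: the branching model where copies multiply at each step, and the alternative where a fixed pool of pre-existing identical copies diverges in place. The set of distinct programs at each step comes out the same either way:

```python
def branching_model(steps):
    """Model 1: one program that splits into two variants at each step."""
    programs = ["Program"]
    for _ in range(steps):
        programs = [p + bit for p in programs for bit in "01"]
    return programs

def preexisting_model(steps, log2_pool):
    """Model 2: start with 2**log2_pool identical copies; at each step,
    half of the copies of each label append '0' and half append '1',
    so the total number of copies never changes (requires
    steps <= log2_pool so every group can split evenly)."""
    programs = ["Program"] * (2 ** log2_pool)
    for _ in range(steps):
        # Sorting groups identical labels into even-sized runs starting
        # at even indices, so i % 2 splits each run exactly in half.
        programs = [p + str(i % 2) for i, p in enumerate(sorted(programs))]
    return programs
```

Model 2 needs far more copies up front, but the multiset of distinct programs matches model 1 at every step, so any experiment run inside either universe would come out the same, which is the point above.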

Comment author: Psy-Kosh 27 September 2009 11:06:21PM 1 point [-]

Ah, okay. Sorry I misunderstood.

Comment author: casebash 16 April 2016 06:19:38AM *  0 points [-]

I will bite the first horn of the trilemma. I will argue that the increase in subjective probability results from losing information and that it is no different from other situations where you lose information in such a way as to make subjective probabilities seem higher. For example, if you watch the lotto draw, but then forget every number except those that match your ticket, your subjective probability that you won will be much higher than originally.

Let's imagine that if you win the lottery that a billion copies of you will be created.

t=0: The lottery is drawn.
t=1: If you won the lottery, then a billion clones are created. The original remembers that they are the original as they see the clones being created, but if clones were made, they don't know they are clones and don't know that the original knows that they are the original, so they can't figure it out that way.
t=2: You have a bad memory and so you forget whether you are an original or a clone.
t=3: If any clones exist, they are all killed off.
t=4: Everyone is informed about whether or not they won the lottery.

Let's suppose that you know you are the original and that you are at t=1. Your chances of winning the lottery are still 1 in a million as the creation of clones does not affect your probability of waking up to a win at t=4 if you know that you are not a clone.

Now let's consider the probability at t=2. Your subjective odds of winning the lottery have risen massively, since you most probably are a copy. Even though there is only a one in a million chance that copies will be made, the fact that a billion copies will be made more than cancels this out.

What we have identified is that it is the information loss that is the key feature. Of course you can increase your subjective probabilities by erasing any information that is contrary. What is interesting about cloning is that if we are able to create clones with the exact same information, we are able to effectively remove knowledge without touching your brain. That is, if you know that you are not a clone, after we have cloned you exactly, then you no longer know you are not a clone, unless someone tells you or you see it happen.

Now at t=3 we kill off/merge any remaining clones. If you are still alive, you've gained information when you learned that you weren't killed off. In fact, you've been retaught the same information you've forgotten.
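
The jump in subjective odds at t=2 can be checked with straightforward Bayesian arithmetic; here is a sketch in Python (mine, using the round numbers from the comment: one-in-a-million lottery, a billion clones on a win):

```python
p_win = 1e-6           # chance the ticket wins
clones_if_win = 1e9    # clones created at t=1 on a win

# At t=2 you have forgotten whether you are the original or a clone.
# Count observers: a win produces 1 original + 1e9 clones;
# a loss produces just the original.
mass_win = p_win * (1 + clones_if_win)
mass_lose = (1 - p_win) * 1
p_won_at_t2 = mass_win / (mass_win + mass_lose)
# about 0.999: at t=2 you "most probably are a copy", so the billion
# copies more than cancel the one-in-a-million odds
```
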

Comment author: tmx 07 February 2013 03:32:18AM 0 points [-]

1) the probability of ending up in the set of winners is

1/billion

2) the probability of being (a specific) one of the trillion is

1/(b * t)

the probability of being a 2) given you are awake is

p(2 | awake) = P(awake | 2) * p(2) / p(awake)
             = (1 * 1e-21) / 1
             = very small

Comment author: [deleted] 12 August 2012 03:21:00PM 0 points [-]

Buy a ticket. Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere. Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don't win the lottery, then just wake up automatically.

How would you do that???

Comment author: wedrifid 13 August 2012 01:27:31AM 0 points [-]

How would you do that???

Are you asking for a solution to the engineering problem of how to convert yourself into an Em? I can't help there. Once you have that, this part with the lottery seems simple. The 'merge them again' would be tricky both on the philosophy and the engineering side (perhaps harder than converting to an Em in the first place.)

Comment author: jacob_cannell 12 December 2011 09:36:00PM *  0 points [-]

The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion.

This seems perhaps too obvious, but how can branches multiply probability by anything greater than 1? Conditional branches follow the rules of conjunctive probability...

Probability in regards to the future is simply a matter of counting branches. The subset of branches in which you win is always only one in a billion of all branches - and any further events in a branch only create further sub-branches, so the probability of anything happening in that sub-branch can never be greater than 10^-9. The exact number of copies in this context is irrelevant - it could be infinite and it wouldn't matter.

Whether we accept identification with only one copy of ourself as in jschculter's result or we consider our 'self' to be all copies, the results still work out to 1 billion to 1 against winning.

Another way of looking at the matter: we should be wary of any non-objective decision process. If we substitute 'you' for 'person X' in the example, we wouldn't worry that person X splitting themselves into a trillion sub-copies only if they win the lottery would somehow increase their actual likelihood of winning.

Comment author: paulfchristiano 27 December 2010 07:33:08AM 0 points [-]

The flaw is that anticipation should not be treated as a brute thing. Anticipation should be a tool used in the service of your decision theory. Once you bring in some particular decision theory and utility function, the question is dissolved (if you use TDT and your utility function is just the total quality of simulated observer moments, then you can reverse engineer exactly Nick Bostrom's notion of "anticipate." So if I had to go with an answer, that would be mine.)

Two people disagreeing about what they should anticipate is like two people arguing about whether a tree falling in an empty forest makes a sound. They disagree about what they anticipate, yes, but they behave identically.

Comment author: Vladimir_Nesov 27 December 2010 10:58:53AM 1 point [-]

Anticipation should be a tool used in the service of your decision theory. Once you bring in some particular decision theory and utility function, the question is dissolved (if you use TDT and your utility function is just the total quality of simulated observer moments, then you can reverse engineer exactly Nick Bostrom's notion of "anticipate." So if I had to go with an answer, that would be mine.)

Do that. It isn't as straightforward as it perhaps looks; I still have no idea how to approach the problem of anticipation. (Also, "total quality of simulated observer moments"?)

Comment author: paulfchristiano 27 December 2010 07:57:25PM -1 points [-]

Do that

Do you mean try to reverse engineer a notion of anticipation, or try to dissolve the question?

For the first, I mean to define anticipation in terms of what wagers you would make. In this case, how you treat a wager depends on whether having a simulation win the wager causes something good to happen to your utility function in one simulated copy, or in a million of them. Is that fair enough? I don't see why we care about anticipation at all, except as it bears on our decision making.

I don't really understand how the second question is difficult. Whatever strategy you choose, you can predict exactly what will happen. So as long as you can compare the outcomes, you know what you should do. If you care about the number of simulated paperclips that are ever created, then you should take an even paperclip bet on whether you won the lottery if the paperclips would be created before the extra simulations are destroyed. Otherwise, you shouldn't.

(Also, "total quality of simulated observer moments"?)

How do you describe a utility function that cares twice as much what happens to a consciousness which is being simulated twice?

Comment author: SforSingularity 03 October 2009 12:59:10AM 0 points [-]

a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely

Which is why Roger Penrose is so keen to show that consciousness is a quantum phenomenon.

Comment author: orthonormal 28 September 2009 07:14:41PM *  0 points [-]

We have a strong subjective sense of personal experience which is optimized for passing on genes, and which thus coincides with the Born probabilities. In addition, it seems biased toward "only one of me" thinking (evidence: most people's intuitive rejection of MWI as absurd even before hearing any of the physics, and most people's intuitive sense that if duplicated, 'they' will be the original and 'someone else' will be the copy). The plausible ev-psych explanation for this, ISTM, is that you won't ever encounter another version of your actual self, and that it's very bad to be tricked into really loving your neighbor as yourself. Thus the rigid sense of continuity of personal identity.

Thus, when complications like quantum suicide or splitting or merging of minds are introduced, the basic intuitions become extremely muddled. In particular, Nick Bostrom's solution prompts the objection of absurdity, even though it is made up of ingredients that seem reasonable (to rationalist materialists, anyhow) taken separately. That makes me suspicious that Bostrom might in fact be right, and that our objections stem more from the ev-psych than anything else.

The following thought experiment pumps my intuition: what concept of subjective probability might Ebborian-like creatures evolve? Of course, they'd split more frequently when resources were plentiful. Imagine that a quantum random event X would double an Ebborian's resources if it happened, and that X happened in half the branches of the wavefunction. If it's assumed that the Ebborian would split if and only if X happened, what subjective probability would ve evolve to assign to X? Well, since what really counts for the evolutionary process is the total 'population of descendants' averaged across all branches, ve should in fact weight more heavily the futures in which ve splits: ve should evolve to assign X probability 2/3, even though the splitting happens after observing X. And there's really no inconsistency with that: the single copy in half the branches feels a bit unlucky while the two copies in the other branches rejoice that the odds were in their favor. Repeat this a great many times, and most of the descendants will feel pretty well calibrated in their probabilities.
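
The 2/3 figure falls out of weighting each future by its number of descendants; a quick sanity check (hypothetical Python, not part of the original comment):

```python
p_x = 0.5             # X happens in half the branches of the wavefunction
copies_if_x = 2       # the Ebborian splits if and only if X happened
copies_if_not_x = 1

# Evolution 'cares' about total descendants averaged across branches,
# so weight each outcome by its resulting population:
weight_x = p_x * copies_if_x                 # 1.0
weight_not_x = (1 - p_x) * copies_if_not_x   # 0.5
evolved_p_x = weight_x / (weight_x + weight_not_x)
# = 2/3: most descendants remember X happening, so the population as a
# whole feels well calibrated assigning X a probability of 2/3
```
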

So I think that our sense of subjective probability has to be an evolved aid to decision-making, rather than an inherent aspect of conscious experience; and I have to go with Nick Bostrom's probabilities, as strange as they sound. (Nods to Tyrrell and Wei Dai, whose comments greatly helped my thought process.)

ETA: I just realized an ingredient of "what it would feel like": Ebborians would evolve to give the same probabilities we would for events that don't affect their splitting times, but all events that would make them richer or poorer would have subjective probability skewed in this fashion. Basically, Ebborians evolve so that each one just feels that being a consistently lucky frood is the natural state of things, and without that necessarily giving them the ego it would give us.

Comment author: Johnicholas 28 September 2009 08:26:15PM 0 points [-]

Suppose that the Ebborians gamble. What odds would it give for event X?

Suppose ve gives odds of 2:1 (probability of 2/3). A bookie takes the bet, and in half of the branches, collects 2 (from the two "descendants"), and in half of the branches, pays out 1, for an average profit of 0.5.

I think your argument leads to the Ebborians being vulnerable to Dutch books.

Comment author: orthonormal 28 September 2009 08:45:14PM *  1 point [-]

Er, your math is the wrong way around, but your point at first seems right: the Ebborian sees 2/3 odds, so ve is willing to pay the bookie 2 if X doesn't happen, and get paid 1 (split between copies, as in correlated decision theory) if X does happen.

However, if instead the Ebborian insists on paying 2 for X not happening, but on each copy receiving 1 if X happens, the Dutch book goes away. Are there any inconsistencies that could arise from this sort of policy? Perhaps the (thus developed) correlated decision theory only works for the human form of subjective probability? Or more probably, I'm missing something.

Comment author: Johnicholas 29 September 2009 08:38:43PM 0 points [-]

From the bookie's perspective, the "each copy" deal corresponds to 1:1 odds, right?

Comment author: Stuart_Armstrong 28 September 2009 03:34:45PM 0 points [-]

The third horn is a fake, being capable of being defined in or out of existence at will. If I am indifferent to my state ten seconds from now, it is true; if my current utility function includes a term for my state ten seconds from now, it is false.

The 'thread of subjective experience' is not the issue; whether I throw myself off the cliff will depend on whether I am currently indifferent as to whether my future self will die.

Comment author: Tyrrell_McAllister 28 September 2009 06:45:46PM 0 points [-]

I don't follow you. You write

The third horn is a fake, being capable of being defined in or out of existence at will. If I am indifferent to my state ten seconds from now, it is true; if my current utility function includes a term for my state ten seconds from now, it is false.

What do you mean by calling the horn "fake"? What is the "it" that is true or false?

Comment author: AlanCrowe 28 September 2009 01:37:13PM 0 points [-]

Why is bullet three a fake reduction? I've bit it before and even see bullet three as something to aspire to.

Comment author: saturn 27 September 2009 11:36:54PM *  0 points [-]

Is there a contradiction in supposing that the total subjective weight increases as unconnected threads of subjective experience come into existence, but copies branching off of an existing thread of subjective experience divide up the weight of the parent thread?

Comment author: steven0461 27 September 2009 07:51:26PM 0 points [-]

For what it's worth, here is the latest attempt by philosophers of physics to prove the Born rule from decision theory.

Comment author: rwallace 28 September 2009 12:38:20AM 0 points [-]

Interesting paper, but from skimming it without grokking all the mathematics, it looks to me like it doesn't quite prove the Born rule from decision theory, only proves that given the empirical existence of the Born rule in our universe, a rational agent should abide by it. Am I understanding the paper correctly?

Comment author: steven0461 28 September 2009 03:32:50PM *  0 points [-]

My understanding is the proof doesn't use empirical frequencies -- though if we observed different frequencies, we'd have to start doubting either QM or the proof. The question is just whether the proof's assumptions are true rationality constraints or "wouldn't it be convenient if" constraints.

Everett and Evidence is another highly relevant paper.

Comment author: Johnicholas 28 September 2009 01:53:35AM 0 points [-]

I think the paper starts from the empirical existence of Born rule "weights" and attempts to explain in what sense they should be treated, decision-theoretically, as classical probabilities (since in the MWI sense, everything that might happen does happen) - but I admit I didn't grok the mathematics either.

Comment author: Stuart_Armstrong 27 September 2009 09:19:12AM 0 points [-]

Since I have a theory of Correlated decision making, let's use it! :-)

Let's look longer at the Nick Bostrom solution. How much contribution is there towards "feeling I will have won the lottery ten seconds from now" from "feeling I have currently won the lottery"? By the rules of this set-up, each of the happy copies contributes one trillionth towards that result.

(quick and dirty argument to convince you of that: replace the current rules by one saying "we will take the average feeling of victory across the trillion copies"; since all the feelings are exactly correlated, this rule gives the same ultimate result, while making it clear that each copy contributes one trillionth of the final result).

Thus Nick's position is, I believe, correct.

As for dealing with the fourth horn, I've already written on how to do that: here you have partially correlated experiences contributing to future feelings of victory, which you should split into correlated and anti-correlated parts. Since the divergence is low, the anti-correlated parts are of low probability, and the solution is approximately the same as before.

Comment author: Eliezer_Yudkowsky 27 September 2009 05:01:13PM 0 points [-]

So... what does it feel like to be merged into a trillion exact copies of yourself?

Answer: it feels like nothing, because you couldn't detect the event happening.

So in terms of what I expect to see happen next... if I've seen myself win the lottery, then in 10 seconds, I expect to still see evidence that I won the lottery. Even if, for some reason, I care about it less, that is still what I see... no?

Comment author: Stuart_Armstrong 27 September 2009 05:12:48PM *  1 point [-]

See my other reformulations; here there is no "feeling of victory", but instead, you have scenarios where only one of the trillion is spared and the others are killed. Then your expectation - if you didn't know what the other trillion had been told or shown - is that there is a 1/trillion chance that the you in 10 seconds will still remember evidence that he has won the lottery.

You can only say the you in 10 seconds will remember winning the lottery with certainty because you know that all the other copies also remember winning the lottery. Their contributions bump it up to unity.

Comment author: Psy-Kosh 27 September 2009 05:41:47AM *  0 points [-]

I've thought about this before and I think I'd have to take the second horn. Argument: assuming we can ignore quantum effects for the moment, imagine setting up a computer running one instance of some mind. There're no other instances anywhere. Shut the machine down. Assuming no dust theory style immortality (which, if there was such a thing, would seem to violate born stats, and given that we actually observe the validity of born stats...), total weight/measure/reality-fluid assigned to that mind goes from 1 to 0, so it looks reasonable that second horn type stuff is allowed to happen.

I'd say personal continuity is real, but is made up of stuff like memory, causality maybe, etc. I suspect those things explain it rather than explain it away.

However, given that in this instance it seems QM actually makes things behave in a saner way, there's one other option I think we ought to consider, though I'm hesitant to bring it up:

Horn 5b: consciousness may be inherently quantum. This is not, on its own, an explanation of consciousness, but maybe we ought to consider the possibility that the only types of physical processes that are "allowed" to be conscious are in some way tied to inherently quantum ones, and the only type of mind branching that's allowed is via quantum branching.

Given that, as you point out, it seems that the only form of branching that we experience (ie, quantum branching) is the one way that actually seems to (for some reason) automatically make it work out in a way that doesn't produce confusing weirdness, well...

(EDIT: main reason I'm bringing up this possibility is that it's an option that would actually help recover "it all adds up to normality")

(EDIT2: Ugh, I'm stupid: "normality" except for more or less allowing stuff that's pretty close to being p-zombies... so this doesn't actually improve the situation all that much as far as "normality" after all.)

Other than that, maybe when we explicitly solve the Born stats fully satisfactorily, when we see how nature is pulling the trick off, then we'll hopefully automatically see the consequences of this situation.

Comment author: red75 25 June 2010 11:23:07AM *  0 points [-]

I'm curious why no one mentioned the Solomonoff prior here. Anticipation of subjective experience can be expressed as: what is the probability of experiencing X, given my prior experiences? Thus we "swap" the ontological status of objective reality and subjective experience, and then we can use the Solomonoff prior to infer probabilities.

When one wakes up as a copy, one experiences instantaneous arbitrary space-time travel, thus the Solomonoff prior for this experience should be lower than that of the wake-up-as-original one (if the original can wake up at all).

Given that approach, it seems that our subjective experience will tend to be as "normal" as is allowed by the simplest computable laws of physics.

Comment author: red75 25 June 2010 05:45:55PM *  0 points [-]

It seems I've given too little information to make it worth thinking on it. Here's detailed explanation.

I'll abbreviate thread of subjective experience as TSE.

  1. If I make 10^6 copies of myself, then all 10^6+1 continuations of TSE are indistinguishable to an external observer. Thus all these continuations are invariant under change of TSE, and it seems that we can assign equal probability to them. Yes, we can, but:

  2. If TSE is not ontologically fundamental, then it is not bound by spacetime, laws of physics, the universe, the Everett multiverse, etc. There will be no logical contradiction if you find yourself next instant as a Boltzmann brain, or in one of the infinitely many universes of the level 4 multiverse, or outside your own lightcone. Thus:

  3. Every finite set of continuations of TSE has zero probability. And finally:

  4. We have no option but the Solomonoff prior to infer what we will experience next.