Strongly related: the Ebborians

Imagine mapping my brain into two interpenetrating networks. For each brain cell, half of it goes to one map and half to the other. For each connection between cells, half of each connection goes to one map and half to the other. We can call these two mapped-out halves Manfred One and Manfred Two. Because neurons are classical, both of these maps change together as I think. They contain the full pattern of my thoughts. (This situation is even clearer in the Ebborians, who can literally split down the middle.)

So how many people am I? Are Manfred One and Manfred Two both people? Of course, once we have two, why stop there - are there thousands of Manfreds in here, with "me" as only one of them? Put like that it sounds a little overwrought - what's really going on here is the question of what physical system corresponds to "I" in English statements like "I wake up." This may matter.

The impact on anthropic probabilities is somewhat straightforward. With everyday definitions of "I wake up," I wake up just once per day no matter how big my head is. But if the "I" in that sentence is some constant-size physical pattern, then "I wake up" is an event that happens more times if my head is bigger. And so using the variable people-number definition, I expect to wake up with a gigantic head.
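To make the contrast concrete, here is a minimal sketch in Python (my own toy numbers and world names, not anything from the original setup) of how the two readings of "I wake up" feed into an anthropic update over head size:

```python
# Toy sketch: compare the two readings of "I wake up" for an anthropic
# update over head size. Assume a uniform prior over two hypothetical
# worlds: a "small head" world with 1 unit of brain, a "big head" world
# with 1000 units.

priors = {"small head": 0.5, "big head": 0.5}

# Reading 1: "I" is the whole body, so "I wake up" happens once per day
# regardless of head size -- the observation favors neither world.
weights_per_body = {"small head": 1, "big head": 1}

# Reading 2: "I" is a constant-size physical pattern, so "I wake up"
# happens in proportion to how much brain there is.
weights_per_pattern = {"small head": 1, "big head": 1000}

def posterior(priors, weights):
    """Weight each world by prior * number of 'I wake up' events, then normalize."""
    unnorm = {w: priors[w] * weights[w] for w in priors}
    total = sum(unnorm.values())
    return {w: unnorm[w] / total for w in unnorm}

print(posterior(priors, weights_per_body))     # {'small head': 0.5, 'big head': 0.5}
print(posterior(priors, weights_per_pattern))  # {'small head': ~0.001, 'big head': ~0.999}
```

Under the per-body reading the observation is uninformative; under the per-pattern reading nearly all the posterior lands on the big-headed world, which is the "I expect to wake up with a gigantic head" conclusion.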

The impact on decisions is smaller. If I'm in this head with a bunch of other Manfreds, we're all on the same page - it's a non-anthropic problem of coordinated decision-making. For example, if I were to make monetary bets about my head size and donate the profits to charity, then no matter what definition I'm using, I should bet as if my head size didn't affect anthropic probabilities. So to some extent the real point of this effect is that it's a way anthropic probabilities can be ill-defined. On the other hand, what about preferences that depend directly on person-numbers, like how to value people with different head sizes? Or, for vegetarians, should we care more about cows than chickens, because each cow is more animals than a chicken is?
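A hedged sketch of the betting point, with made-up numbers: however many Manfreds we count inside the head, they all place the same single bet from the same bank account, so the expected charity donation depends only on the objective chance of each world, not on the person count.

```python
# Toy sketch: betting on head size, profits to charity. The bet pays out
# once per body, however many "Manfreds" we count inside the head, so the
# best bet depends only on the objective chance of each world.

p_big = 0.5   # objective chance that this is the big-head world
stake = 1.0   # amount wagered
odds = 1.0    # even-money payout

def expected_donation(bet_on_big: bool) -> float:
    """Expected charity donation; the payout happens once per body."""
    if bet_on_big:
        return p_big * stake * odds - (1 - p_big) * stake
    return (1 - p_big) * stake * odds - p_big * stake

# Nothing in this calculation mentions how many people the head contains,
# so every definition of "I" recommends the same bet.
print(expected_donation(True), expected_donation(False))  # 0.0 0.0
```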

 

According to my common sense, it seems like my body has just one person in it. Why does my common sense think that? I think there are two answers, one unhelpful and one helpful.

The first answer is evolution. Having kids is an action that's independent of what physical system we identify with "I," and so my ancestors never found it useful to model their bodies as containing multiple people.

The second answer is causality. Manfred One and Manfred Two are causally distinct from two copies of me in separate bodies receiving the same inputs and producing the same outputs. If a difference somehow arose between the two separated copies (reminiscent of Dennett's factual account), from then on the two bodies would do and say different things and have different brain states. But if some difference arises between Manfred One and Manfred Two, it is erased by diffusion.

Which is to say, the map that is Manfred One is statically the same pattern as my whole brain, but it's causally different. So is "I" the pattern, or is "I" the causal system? 

In this sort of situation I am happy to stick with common sense, and thus when I say "me," I think I'm referring to the causal system. But I'm not very sure.

 

Going back to the Ebborians, one interesting thing about that post is the conflict between common sense and common sense - it seems like common sense that each Ebborian is equally much one person, but it also seems like common sense that if you watched an Ebborian dividing, there would be no moment at which the amount of subjective experience should change, and so the amount of subjective experience should be proportional to thickness. But as it is said, just because there are two opposing ideas doesn't mean one of them is right.

On the questions of subjective experience raised in that post, I think this mostly gets cleared up by precise description and anthropic narrowness. I'm unsure of the relative sizes of this margin and the proof, but the sketch is to replace a mysterious "subjective experience" that spans copies with the individual experiences of people who are using a TDT-like theory to choose so that they individually achieve good outcomes given their existence.

Comments (27)
Dentin

You're applying a label, "I", to a complex system. None of your definitions for "I" correctly describe the system. The "conflict between common sense and common sense" that you describe appears because you have conflicting simplistic interpretations of a complex system, not because there's actually anything special going on with the complex system.

Multiple parallel brains with the same input and output are not currently covered by the standard concepts of "I" or "identity", and reasoning about that kind of parallel brain using such concepts is going to be problematic.

The map is not the territory. Nothing to see here.

Would you say that your probability that the sun rises tomorrow is ill-defined, the map is not the territory, nothing to see here?

If so, I commend you for your consistency. If not, well, then there's something going on here whether you like it or not.

EDIT: To clarify, this is not an argument from analogy; ill-definedness of probability spreads to other probabilities. So one can either use an implicit answer to the question of what "I exist" means physically, or an explicit one, but you cannot use no answer.

The reason for this unusual need to cross levels is that our introspective observations already start on the abstract level - they are substrate-independent.

The reason for this unusual need to cross levels is that our introspective observations already start on the abstract level - they are substrate-independent.

This looks like a good assumption to question. If we do attribute the thought "I need to bet on Heads" (sorry, but pun intended) to Manfred One, the "I" in that thought still refers to plain old Manfred, I'd say. Maybe I am not understanding what "substrate independent" is supposed to mean.

Suppose that my brain could be running in two different physical substrates. For example, suppose I could be either a human brain in a vat, or a simulation in a supercomputer. I have no way of knowing which, just from my thoughts. That's substrate-independence - pretty straightforward.

The relevant application happens when I try to do an anthropic update - suppose I wake up and say "I exist, so let me go through my world-model and assign all events where I don't exist probability 0, and redistribute the probability to the remaining events." This is certainly a thing I should do - otherwise I'd take bets that only paid out when I didn't exist :)

The trouble is, my observation ("I exist") is at a different level of abstraction from my world model, and so I need to use some rule to tell me which events are compatible with my observation that I exist. This is the focus of the post.

If I could introspect on the physical level, not just the thought level, this complicated step would be unnecessary: I'd just say "I am physical system so and so, and since I exist I'll update to only consider events consistent with that physical system existing." But that super-introspection would, among its other problems, not be substrate-independent.
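In code, that rule is literally a parameter you have to hand to the update before you can condition at all; here is a minimal sketch (my own illustration, with made-up world names):

```python
# Minimal sketch of the update described above: condition a world-model on
# "I exist", where some rule decides which physical events count as "I exist".

def update_on_existence(world_model, i_exist_in):
    """Zero out worlds where 'I' don't exist and renormalize the rest.

    world_model: dict mapping world -> prior probability
    i_exist_in:  the rule -- a function from world to True/False, which is
                 exactly the piece that is underdetermined
    """
    surviving = {w: p for w, p in world_model.items() if i_exist_in(w)}
    total = sum(surviving.values())
    return {w: p / total for w, p in surviving.items()}

# With made-up worlds: the answer depends entirely on which rule we pick.
worlds = {"no Manfred": 0.25, "one Manfred": 0.5, "two half-Manfreds": 0.25}
print(update_on_existence(worlds, lambda w: w != "no Manfred"))
```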

Oh, that kind of substrate independence. In Dennett's story, an elaborate thought experiment has been constructed to make substrate independence possible. In the real world, your use of "I" is heavily fraught with substrate implications, and you know pretty well which physical system you are. Your "I" got its sense from the self-locating behavior and experiences of that physical system, plus observations of similar systems, i.e. other English speakers.

If we do a Sleeping Beauty on you but take away a few neurons from some of your successors and add some to other successors, the sizes of their heads don't change the number of causal nexuses, which is the number of humans. Head size might matter insofar as it makes their experiences better or worse, richer or thinner. (Anthropic decision-making seems not to concern you here, but I like to keep it in mind, because some anthropic "puzzles" are helped by it.)

To clarify, I'm arguing that your post revolves entirely around your concepts of "I" and "people" (the map), and how those concepts fail to match up to a given thought experiment (the territory). Sometimes concepts are close matches to scenarios, and you can get insight from looking at them; sometimes concepts are poor matches and you get garbage instead. Your post is a good example of the garbage scenario, and it's not surprising that you have to put forth a lot of effort to pound your square pegs into non-square-shaped holes to make sense of it.

Did my last sentence in the edit make sense? We may have a misunderstanding.

No, your last sentence did not make sense, and neither does the rest of that comment, hence my attempt to clarify. My best attempt at interpreting what you're trying to say looks at this particular section:

'an implicit answer to the question of what "I exist" means physically'

Where I immediately find the same problem I see in the original post: "I exist" doesn't actually "mean" anything in this context, because you haven't defined "I" in a way that is meaningful for this scenario.

For me personally, the answer to the question is pretty trivially clear because my definition of identity covers these cases: I exist anywhere that a sufficiently good simulation of me exists. In my personal sense of identity, the simulation doesn't even have to be running, and there can be multiple copies of me which are all me and which all tag themselves with 'I exist'.

With that in mind, when I read your post, I see you making an issue out of a trivial non-issue for no reason other than you've got a different definition of "I" and "person" than I do. When this happens, it's a good sign that the issue is semantic, not conceptual.

Imagine mapping my brain into two interpenetrating networks. For each brain cell, half of it goes to one map and half to the other. For each connection between cells, half of each connection goes to one map and half to the other.

What would happen in this case is that there would be no Manfreds, because (even assuming the physical integrity of the neuron-halves was preserved) you can't activate a voltage-gated ion channel with half the potential you had before. You can't reason about the implications of the physical reality of brains while ignoring the physical reality of brains.

Or are you asserting no physical changes to the system, and just defining each neuron to be multiple entities? For the same reason I think the p-zombies argument is incoherent, I'm quite comfortable not assigning any moral weight to epiphenomenal 'people'.

Shmi

According to my common sense, it seems like my body has just one person in it.

How do you define the term "person" for the purposes of this statement?

The key property of me in this case is the anthropic one - 'my' existence allows me to infer things about causes of my existence.

Also, I just now noticed that you -did not answer the question-, and that it's a critically important question. How do you define the term 'person', as questioned above? That definition has nothing whatsoever to do with anthropic properties or inference.

I don't see that as being a valid property. Your existence purely in isolation does not allow you to infer anything. Did you mean something more along the lines of "'my' existence in addition to other information X allows me to infer things..." instead? If so, it would be helpful if you clarified exactly which other information is involved.

It does not, as you don't obtain any world properties that 'your' existence should reflect under such a definition.

How many people am I?

Does it make any difference?

Well, if we put our dual Manfreds in one trolley car, and one person in another, then the ethics might care.

More substantially, once uploads start being a thing, the ethics of these situations will matter.

The other contexts where these issues matter are anthropics, expectations, and trying to understand what the implications of Many-Worlds are. In this case, making the separation completely classical may be helpful: when one cannot understand a complicated situation, looking at a simpler one can help.

It does not, as the other person is parseable as multiple ones as well.

Uploading is not a thing atm, and once it is viable, the corresponding ethics will be constructed from special cases of the entity's behaviour, like it was done before.

I still don't get how the anthropic principle cares about the labels we assign to stuff.

It does not, as the other person is parseable as multiple ones as well.

That's not obvious. What if one entity is parseable in such a way and another one isn't?

the corresponding ethics will be constructed from special cases of the entity's behaviour, like it was done before.

Why?

I still don't get how the anthropic principle cares about the labels we assign to stuff.

Right. They shouldn't. So situations like this one may be useful intuition pumps.

That's not obvious. What if one entity is parseable in such a way and another one isn't?

Every human produces lots of different kinds of behaviour, so each can be modeled as a pack of specialized agents.

Why?

Because ethics is essentially simplified applied modeling of other beings.

Because ethics is essentially simplified applied modeling of other beings.

This seems like a very non-standard notion of what constitutes ethics. Can you expand on how this captures the usual intuitions about what the concerns of ethics are?

This seems like a very non-standard notion of what constitutes ethics. Can you expand on how this captures the usual intuitions about what the concerns of ethics are?

The concern of ethics for a given agent is to facilitate interacting with others effectively, no?

Not at all. If I do something that doesn't accomplish my goals, that's generally labeled as something like "stupid." If I decide that I want to kill lots of people, the problem with that is ethical even if my goals are fulfilled by it. Most intuitions don't see these as the same thing.

How does this contradict my notion of ethics? You will surely use what you know about the ethical properties of manslaughter to reach the goal and save yourself from trouble, like manipulating public opinion in your favor by, for instance, imitating the target people attacking you. Or even consider whether the goal is worthy at all.

Please explain how, say, a trolley problem fits into your framework.

Please explain how, say, a trolley problem fits into your framework.

The correct choice is to check whom you want killed and whom you want saved more, and what, for instance, the social consequences of your actions are. I don't understand your question, it seems.

Suppose you don't have any time to figure out which people would be better. And suppose no one else will know that you were able to pull a switch.

Honestly, it seems like your notion of ethics is borderline psychopathic.

Suppose you don't have any time to figure out which people would be better. And suppose no one else will know that you were able to pull a switch.

Then my current algorithms will do the habitual stuff I'm used to doing in similar situations, or randomly explore the possible outcomes (as in "play"), like in every other severely constrained situation.

Honestly, it seems like your notion of ethics is borderline psychopathic.

What does this mean?