torekp comments on How many people am I? - Less Wrong Discussion

Post author: Manfred 15 December 2014 06:11PM

Comment author: torekp 16 December 2014 11:58:52PM 0 points

The reason for this unusual need to cross levels is that our introspective observations already start on the abstract level - they are substrate-independent.

This looks like a good assumption to question. Even if we do attribute the thought "I need to bet on Heads" (sorry, pun intended) to Manfred One, the "I" in that thought still refers to plain old Manfred, I'd say. Maybe I am not understanding what "substrate-independent" is supposed to mean.

Comment author: Manfred 17 December 2014 01:25:58AM 0 points

Suppose that my brain could be running in two different physical substrates. For example, suppose I could be either a human brain in a vat, or a simulation in a supercomputer. I have no way of knowing which, just from my thoughts. That's substrate-independence - pretty straightforward.

The relevant application happens when I try to do an anthropic update - suppose I wake up and say "I exist, so let me go through my world-model and assign all events where I don't exist probability 0, and redistribute the probability to the remaining events." This is certainly a thing I should do - otherwise I'd take bets that only paid out when I didn't exist :)
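The update described above is ordinary Bayesian conditionalization on the observation "I exist": zero out the incompatible events and renormalize. A minimal sketch, where the world model and the existence predicate are hypothetical illustrations (nothing from the post itself):

```python
# Hedged sketch: the anthropic update as conditionalization on "I exist".
# The events and probabilities below are invented for illustration.

def anthropic_update(prior, i_exist_in):
    """Assign probability 0 to events where 'I' don't exist,
    then redistribute (renormalize) over the remaining events."""
    posterior = {event: (p if i_exist_in(event) else 0.0)
                 for event, p in prior.items()}
    total = sum(posterior.values())
    if total == 0:
        raise ValueError("'I exist' rules out every event in the model")
    return {event: p / total for event, p in posterior.items()}

# Toy world model: three mutually exclusive events.
prior = {"brain_in_vat": 0.25, "simulation": 0.25, "never_created": 0.5}
posterior = anthropic_update(prior, lambda e: e != "never_created")
# posterior: {"brain_in_vat": 0.5, "simulation": 0.5, "never_created": 0.0}
```

The hard part the post is about is hidden inside `i_exist_in`: deciding which events count as compatible with the observation is exactly the cross-level rule at issue.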

The trouble is, my observation ("I exist") is at a different level of abstraction from my world model, and so I need to use some rule to tell me which events are compatible with my observation that I exist. This is the focus of the post.

If I could introspect on the physical level, not just the thought level, this complicated step would be unnecessary: I'd just say "I am physical system so and so, and since I exist I'll update to only consider events consistent with that physical system existing." But that super-introspection would, among its other problems, not be substrate-independent.

Comment author: torekp 18 December 2014 12:54:42AM 0 points

Oh, that kind of substrate independence. In Dennett's story, an elaborate thought experiment has been constructed to make substrate independence possible. In the real world, your use of "I" is heavily fraught with substrate implications, and you know pretty well which physical system you are. Your "I" got its sense from the self-locating behavior and experiences of that physical system, plus observations of similar systems, i.e. other English speakers.

If we do a Sleeping Beauty on you, but take away a few neurons from some of your successors and add some to others, the sizes of their heads don't change the number of causal nexuses, which is the number of humans. Head size might matter insofar as it makes their experiences better or worse, richer or thinner. (Anthropic decision-making seems not to concern you here, but I like to keep it in mind, because some anthropic "puzzles" are helped by it.)