Patternism is the belief that the thing that makes 'you' 'you' can be described as a simple pattern: when there is a second being with the same mental pattern as you, that being is also you. This comes up mostly in the debate around mind-uploading, where, if you make a digital copy of yourself, that copy is just as much you as the current you.
Question: How do we quantify this pattern?
And I don't mean how we encode epigenetic information or a human mind into a string of binary numbers (even though that is a whole issue in and of itself); I'll assume in this post that that's all perfectly doable. I mean: how do we quantify when something is no longer part of the same general pattern?
I mean, when we have a string of binary numbers with a couple of ones switched to zeros, we can just tally up the errors and determine what percentage differs from the copy. But what if there is stuff missing or added? What if a big block of code is duplicated in the copy? Should we count that as one error or as dozens? What if big chunks of code appear in both copies but in different places? Or in a different order? Or interspersed into the rest of the code?
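To make the two cases concrete, here is a minimal sketch: Hamming distance is the "tally up the flipped bits" measure and only works on equal-length strings, while edit (Levenshtein) distance also counts insertions and deletions. Note that neither handles duplicated or reordered blocks gracefully, which is exactly the problem above.

```python
def hamming(a: str, b: str) -> int:
    """Count positions where two equal-length bit strings differ."""
    assert len(a) == len(b), "Hamming distance is undefined for unequal lengths"
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (standard DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # delete x
                            curr[j - 1] + 1,          # insert y
                            prev[j - 1] + (x != y)))  # substitute
        prev = curr
    return prev[-1]

# Two flipped bits: both measures agree.
print(hamming("10110", "10011"))      # 2
print(levenshtein("10110", "10011"))  # 2

# Duplicate the whole block: Hamming distance is no longer defined,
# and edit distance charges one error per inserted symbol - five here,
# even though 'one copy event' happened.
print(levenshtein("10110", "1011010110"))  # 5
```

The last line is the point: a single duplication shows up as five separate errors, so even this more flexible metric already forces a choice about what counts as "one" difference.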
But even the first example has problems: not all ones and zeros are created equal. We care a lot about whether certain features of our being are switched 'on' or 'off', but not so much about others. Do we have to compare someone's personal desires? How do we quantify that? Or should we quantify what most people would deem an important change? Why? How? I fear there is no real way to do this objectively, and since there will always be small mutations/errors in copying, you can never know which of the other "you's" is the most like you. Which I think is a pretty heavy blow for patternism.
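One way to phrase the "not all bits are equal" worry is as a weighted distance. In this sketch the per-position weights are entirely made up, and that is the point: any concrete weighting smuggles in exactly the subjective importance judgments the post is worried about.

```python
def weighted_hamming(a: str, b: str, weights: list[float]) -> float:
    """Sum the importance weight of every position where a and b differ.
    The weights are a hypothetical per-feature importance assignment;
    the hard philosophical problem is where these numbers come from."""
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

original = "10110"
copy_a   = "10111"  # differs only in a 'trivial' position
copy_b   = "00110"  # differs only in an 'important' position
weights  = [9.0, 1.0, 1.0, 1.0, 0.1]  # made-up importance scores

print(weighted_hamming(original, copy_a, weights))  # 0.1
print(weighted_hamming(original, copy_b, weights))  # 9.0
```

Both copies are one bit-flip away from the original, yet under this (arbitrary) weighting one is ninety times "less you" than the other. Change the weights and the ranking of copies can flip entirely.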
EDIT: Apparently that last sentence caused some confusion so let me clarify. I'm not saying it's a blow for the truthfulness of the theory, since that is just a matter of definition (and I'm not interested in disputing definitions). I'm saying it's a blow for the usefulness of the theory since it doesn't help us generate new insights by making new and accurate predictions.
I didn't follow this. You're saying for now you're leaning towards a subjective measuring viewpoint? Which one?
Depending on what you mean by "impartial", I might agree that that's the right move. But I think a good theory might end up looking more like special relativity, where time, speed, and simultaneity are observer-dependent (rather than universal), but in a well-defined way that we can speak precisely about.
I assume personal identity will be a little more complicated than that, since minds are more complicated than beams of light. But just wanted to highlight that as an example where we went from believing in a universal to one that was relative, but didn't have to totally throw up our hands and declare it all meaningless.
FWIW, if I were to spend some time on it, I'd maybe start by thinking through all the different ways that we use personal identity. Like, how the concept interacts with things. For example, partly it's about what I anticipate experiencing next. Partly it's about which beings' future experiences I value. Partly it's about how similar that entity is to me. Partly it's about how much I can control what happens to that future entity. Partly it's about what that entity's memories will be. Etc, etc.
Just keep making the list, and then analyze various scenarios and thought experiments and think through how each of the different forms of personal identity applies. Which are relevant, and which are most important?
Then maybe you have a big long list of attributes of identity, and a big long list of how decision-relevant they are in various scenarios. And then maybe you can do dimensionality reduction and cluster them into a few meaningful categories that are individually amenable to quantification (similar to how the Big 5 personality system was developed).
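The dimensionality-reduction step could look something like this toy PCA sketch. Everything here is invented for illustration: the "data" are fake scores for 50 hypothetical thought experiments on 6 made-up identity attributes, constructed so that two underlying factors secretly drive them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake dataset: 50 thought experiments scored on 6 hypothetical
# identity attributes (anticipated experience, memory continuity, ...).
# Attributes 0-2 and 3-5 covary by construction, so two latent
# dimensions should explain most of the variance.
latent = rng.normal(size=(50, 2))
mixing = np.array([[1.0, 0.9, 1.1, 0.0, 0.0, 0.1],
                   [0.0, 0.1, 0.0, 1.0, 1.1, 0.9]])
scores = latent @ mixing + rng.normal(scale=0.1, size=(50, 6))

# PCA via eigendecomposition of the covariance matrix.
centered = scores - scores.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, _ = np.linalg.eigh(cov)           # ascending order
explained = eigvals[::-1] / eigvals.sum()  # descending variance shares

print(explained.round(3))  # first two components dominate
```

If the recovered components line up with recognizable clusters of attributes, you'd have candidate "dimensions of identity" to quantify separately — which is roughly the Big 5 methodology applied to this problem.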
That doesn't sound so hard, does it? ;-)