Now imagine that I alter your entire brain. Now, the answer seems to be no.
Alter how? Do I still have memories of this argument? Do I share any memories with my past self? If I share all my memories, then it's probably still me. If they're all gone, then most likely it isn't. (Identifying the self with memories has its own problems, but let's gloss over them for now.) So I'm going to interpret your "remove a neuron" as "remove a memory", and then your question becomes "how many memories can I lose and still be me?" That's a difficult question to answer, so I'll give you the first thing I can think of: it's still me, just a lower percentage of me. I'm not confident it can be put on a linear scale, though.
Therefore, there must be some minimal change to your brain to ensure that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
This is a bit like the Sorites paradox. The answer is clearly to drop the binary same-consciousness-or-not dichotomy in favor of a graded notion. That doesn't mean I can't point to an exact clone and say it's me.
You don't assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?
Not sure what you mean. Some things change, so it won't be exactly the same. It's still close enough that I'd consider it "me".
It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.
Such analogies can help if they force you to explain the difference between computer and brain in this regard. Your model of the brain seems identical to my computer-based model of it; if the reasoning is illogical here, why isn't it illogical there?
That's a difficult question to answer, so I'll give you the first thing I can think of. It's still me, just a lower percentage of me. I'm not that confident that it can be put to a linear scale, though.
That is one of the reasons why I think binary-consciousness models are likely to be wrong.
There are many differences between brains and computers; they have different structure, different purpose, different properties; I'm pretty confident (>90%) that my computer isn't conscious now, and the consciousness phenomenon may have specific qualities which a...
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render observations like their own extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly a simulation.
Plausible?