It seems to me that most AI researchers on this site are patternists in the sense of believing that the anti-zombie principle necessarily implies:
1. That it will eventually become possible *in practice* to create uploads or sims close enough to our physical instantiations that their utility to us would be interchangeable with that of our physical instantiations.
2. That we know (or will know) enough about the brain to know when this threshold is reached.
But, like any rationalists extrapolating from unknown unknowns... or heck, extrapolating from anything... we must admit that one or both of the above statements could be wrong without also making friendly AI impossible. What would be the consequences of such error?
I submit that one such consequence could be an FAI that is wrong on these issues in the same way we are: not only do we fail to check for such a failure mode, but its conclusions actually look like what we would expect the right answer to look like, because we are making the same error.
If simulation/uploading really does preserve what we value about our lives, then the safest course of action is to encourage as many people as possible to upload. It would also imply that efforts to solve the problem of mortality by physical means will at best be given an even lower priority than they have now, or at worst cease altogether, because they would seem to be a waste of resources.
Result: people continue to die and nobody, including the AI, notices, except now they have no hope of reprieve because everyone thinks the problem is already solved.
Pessimistic Result: uploads become so widespread that humanity quietly goes extinct, cheering itself on the whole time.
Really Pessimistic Result: what replaces humanity are zombies, not in the qualia sense but in the real sense that there is some relevant chemical/physical process that is not being simulated because we didn't realize it was relevant or hadn't noticed it in the first place.
Possible Safeguards:
* Insist on quantum-level accuracy (yeah right)
* Take seriously the general scenario of your FAI going wrong because you are wrong in the same way and fail to notice the problem.
* Be as cautious about destructive uploads as you would be about, say, molecular nanotech.
* Make sure your knowledge of neuroscience is at least as good as your knowledge of computer science and decision theory before you advocate digital immortality as anything more than an intriguing idea that might not turn out to be impossible.
You have access to your future mind in the sense that it is an evolution of your current mind. Your copy's future mind is an evolution of your copy's current mind, not yours.
Perhaps this tight causal link is what makes me care more about the mes that will branch off in the future than I care about the past me of which I am a branch. Perhaps I would see a copy of myself as equivalent to me if we had at least sporadic direct access to each other's mind states. So my skepticism toward immortality-through-backup-copies is not unconditional.
You might not put much stock in that, and you might also be rationalizing away your basic will to live. What do you stand to lose?
My copy's future mind is an evolution of pre-copy me's current mind, and correlates overwhelmingly with mine for a fairly long time after the copy is made. That means that making the copy is good for all mes pre-copy, and to some (large) degree even post-copy. I'd certainly be more willing to take risks if I had a backup. After all, what do I stand to lose? A few days of memory?