It astonishes me how many otherwise skeptical people are happy to ascribe magical properties to their substrate.
The identity of an object is a choice, a way of looking at it. The "right" way of making this choice is the way that best achieves your values. When you ask yourself what object is really you, and therefore to be valued, you're engaged in a tail-biting exercise without a "rational" answer.
If you value the continuance of your thought patterns, you'll likely be happy to upload. If you value your biological substrate, you won't. In a world where some do and some don't, I don't see either as irrational - they just value different things, and take different actions accordingly. You're not "irrational" for picking Coke over Pepsi.
If the rules of this game allow one side to introduce a "small intrinsic philosophical risk" attached to mind-uploading, even though it's impossible in principle to detect whether someone has suffered 'arbitrary Searlean mind-annihilation', then surely the other side can postulate a risk of arbitrary mind-annihilation unless we upload ourselves. (Even ignoring the familiar non-Searlean mind-annihilation that awaits us in old age.)
Perhaps a newborn mind has a half-life of only three hours before spontaneously and undetectably annihilating itself.
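For scale, a quick back-of-the-envelope calculation of that reductio (the three-hour half-life is the commenter's arbitrary stipulation, and the exponential-decay model is my own gloss on "half-life"):

```python
# Survival probability of a mind that spontaneously annihilates with a
# fixed half-life: P(survive t hours) = 0.5 ** (t / half_life).
def survival_probability(hours: float, half_life: float = 3.0) -> float:
    return 0.5 ** (hours / half_life)

print(survival_probability(3))    # 0.5: one half-life
print(survival_probability(24))   # ~0.0039: after a day, odds are ~1 in 256
print(survival_probability(72))   # ~6e-8: after three days, effectively nil
```

Which is the point of the reductio: since the risk is stipulated to be undetectable, no observation could rule out even a risk this enormous, any more than it could rule out Searlean annihilation during uploading.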
First of all, thanks for sharing from my blog posts. Second, and perhaps unsurprisingly, I disagree with Hauskeller's interpretation of Agar's argument as being "curiously techno-optimistic" because of its appeal to LEV (longevity escape velocity). Agar isn't particularly optimistic about LEV's chances of success (as is shown by his comments in subsequent chapters of his book). He just thinks LEV is more likely than the combination of Strong AI and apparently successful mind-uploading.
I find it odd that the reviewer Hauskeller, who finds uploading not just absurd but obviously absurd, bothers to address this topic, though he seems fair. Reading the rest of the review, I find it similarly odd that, since Agar rejects longevity, he bothers to talk about uploading. Finally, the review claims that Agar says something simply stupid: he rejects Bostrom's complaints about status quo bias on the grounds that they are the endowment effect. But they are the same thing! How did Agar pick up the phrase "endowment effect" without noticing that people were condemning it? I'm not interested in what he might have meant, but in how he made the mistake of thinking that other people thought the phrase a good thing.
I think the question of assigning a probability to whether or not you'll 'really go on' or 'be replaced by something else' is fundamentally confused.
Gwern notes that when you create an upload of yourself, you risk that upload being abused. A sadist could copy your upload millions of times and torture you for subjective aeons.
A new paper has gone up in the November 2011 issue of JET (the Journal of Evolution and Technology): "Ray Kurzweil and Uploading: Just Say No!" (videos) by Nick Agar (Wikipedia); abstract:
The argument is a variant of Pascal's wager which he calls "Searle's wager". As far as I can tell, the paper contains mostly ideas he has already written on in his book; from Michael Hauskeller's review of Agar's Humanity's End: Why We Should Reject Radical Enhancement:
John Danaher (User:JohnD) further examines the wager, as expressed in the book, in 2 blog posts:
After laying out what seems to be Agar's argument, Danaher constructs the game-theoretic tree and continues the criticism above:
One point is worth noting: the asymmetry between uploading and cryonics is deliberate. There is nothing in cryonics which renders it different from Searle's wager with 'destructive uploading', because one can always commit suicide and then be cryopreserved (symmetrical with committing suicide and then being destructively scanned / committing suicide by being destructively scanned). The asymmetry exists as a matter of policy: the cryonics organizations refuse to take suicides.
Overall, I agree with the 2 quoted people: there is a small intrinsic philosophical risk to uploading, as well as the obvious practical risk that it won't work, and this means uploading does not strictly dominate life-extension or other actions. But this is not a controversial point, and it has already been embraced in practice by cryonicists in the analogous way (and we can expect any uploading to be either non-destructive or post-mortem); to the extent that Agar thinks this is a large or overwhelming disadvantage for uploading ("It is unlikely to be rational to make an electronic copy of yourself and destroy your original biological brain and body."), he is incorrect.
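To see why a small intrinsic risk defeats strict dominance without by itself settling the choice, here is a toy expected-utility sketch of the wager. All of the probabilities and payoffs below are illustrative assumptions of mine, not figures from Agar or Danaher:

```python
# Toy expected-utility version of Searle's wager. All numbers are
# illustrative assumptions, not figures from Agar or Danaher.

p_searle_right = 0.1   # chance Searle is right and an upload is a mindless copy
p_upload_works = 0.8   # chance destructive uploading succeeds technically
p_lev_works    = 0.3   # chance life-extension reaches longevity escape velocity

U_LONG_LIFE    = 100.0  # utility of radically extended survival
U_NORMAL_LIFE  = 1.0    # utility of an ordinary lifespan
U_ANNIHILATION = 0.0    # utility of dying (Searlean or otherwise)

# Destructive uploading pays off only if the scan works AND Searle is wrong;
# either failure mode leaves you annihilated.
p_success = p_upload_works * (1 - p_searle_right)
eu_upload = p_success * U_LONG_LIFE + (1 - p_success) * U_ANNIHILATION

# Life-extension carries no Searlean risk, but may simply not work.
eu_life_extension = p_lev_works * U_LONG_LIFE + (1 - p_lev_works) * U_NORMAL_LIFE

print(f"EU(destructive upload) = {eu_upload:.1f}")          # 72.0
print(f"EU(life-extension)     = {eu_life_extension:.1f}")  # 30.7
```

On these made-up numbers uploading still wins handily: a nonzero p_searle_right removes strict dominance (there is now a state of the world in which uploading does worse), but whether it actually flips the decision depends entirely on the probabilities one plugs in, which is where the substantive disagreement with Agar lies.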