"I can make a reasonable estimate of the risk of being kidnapped or arrested and being tortured.
"There's a lot less information about the risk of ems being tortured, and such information may never be available, since I think it's unlikely that computers can be monitored to that extend."
If we can't make a reasonable estimate, what estimate do we make? The discounted validity of the estimate is incorporated into the prior probability. (Actually, I'm not sure this always works, but a consistent Bayesianism must hold that it does; please correct me if this is wrong.)
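To make the "incorporated into the prior" move concrete, here is a minimal sketch--with purely illustrative numbers of my own, not anyone's actual estimates--of folding doubt about an estimate's validity into a single probability via the law of total probability:

```python
# Minimal sketch: fold doubts about an estimate's validity into a single
# prior probability via the law of total probability. All numbers below are
# illustrative assumptions, not anyone's actual risk estimates.

p_estimate_valid = 0.7   # assumed credence that the estimation procedure is sound
p_bad_if_valid   = 0.1   # assumed risk of the bad outcome if the estimate can be trusted
p_bad_if_invalid = 0.5   # maximum-entropy fallback when the estimate tells us nothing

# The "discounted" estimate is just the weighted mixture of the two cases.
p_bad = (p_estimate_valid * p_bad_if_valid
         + (1 - p_estimate_valid) * p_bad_if_invalid)

print(f"combined prior probability of the bad outcome: {p_bad:.2f}")  # 0.22
```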
My reaction to the most neutral form of the question about downloading--"If offered the certain opportunity of success at no cost, would I accept?"--is "No." The basis is my fear that I wouldn't like the result. I justify it--perhaps after the fact--by assigning equal a priori likelihoods to a good and a bad outcome. In Nancy's terms, I'm saying that we have no ability to make a reasonable estimate. The advantage of putting it my way is that it implies a conclusion, rather than resulting in agnosticism (but at the cost of a less certain justification).
In general, I think people over-value the continuation of life. One consequence is that people put too little effort into mitigating the circumstances of their death--which often involves inclining it to come sooner rather than later.
"If we can't make a reasonable estimate, what estimate do we make?"
What's the status of error bars in this sort of reasoning? It seems to me that a probability of .5 +/- epsilon (a coin you have very good reason to think is honest) is a very different thing from .5 +/- .3 (the outcome of an election in a country about which you know only that it holds elections and the names of the candidates).
I'm not sure +/- .3 is reasonable-- I think I'm using it to represent that people familiar with that country might have a good idea who'd win.
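One way to cash out the error-bar intuition is to put a distribution over the probability itself; below is a minimal sketch (using scipy, with Beta parameters I made up for illustration) of how a tight and a diffuse prior with the same mean of .5 behave very differently under a single new observation:

```python
# Minimal sketch of the ".5 +/- epsilon" vs ".5 +/- .3" distinction: two Beta
# distributions with the same mean (0.5) but very different spreads. The
# specific parameters are made up for illustration.
from scipy.stats import beta

coin     = beta(a=500, b=500)  # tight: a coin you have strong reason to think is honest
election = beta(a=2, b=2)      # diffuse: mean 0.5, but backed by almost no evidence

for name, dist in [("coin", coin), ("election", election)]:
    print(f"{name}: mean={dist.mean():.2f}, sd={dist.std():.2f}")
# coin: sd ~ 0.016; election: sd ~ 0.22 -- same point estimate, different error bars.

# Beta is conjugate to the Bernoulli, so one new "success" updates a -> a+1.
# The tight prior barely moves; the diffuse one shifts noticeably.
print(beta(a=501, b=500).mean())  # ~0.50
print(beta(a=3, b=2).mean())      # 0.60
```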
A new paper has gone up in the November 2011 issue of the Journal of Evolution and Technology (JET): "Ray Kurzweil and Uploading: Just Say No!" (videos) by Nick Agar (Wikipedia); abstract:
The argument is a variant of Pascal's wager which he calls Searle's wager. As far as I can tell, the paper mostly contains ideas he has already written about in his book; from Michael Hauskeller's review of Agar's Humanity's End: Why We Should Reject Radical Enhancement:
John Danaher (User:JohnD) further examines the wager, as expressed in the book, in 2 blog posts:
After laying out what seems to be Agar's argument, Danaher constructs the game-theoretic tree and continues the criticism above:
One point is worth noting: the asymmetry between uploading and cryonics is deliberate. Nothing intrinsic to cryonics renders it different from Searle's wager with 'destructive uploading', because one could always commit suicide and then be cryopreserved (symmetrical with committing suicide and then being destructively scanned, or committing suicide by being destructively scanned). The asymmetry exists as a matter of policy: the cryonics organizations refuse to take suicides.
Overall, I agree with the 2 quoted people: there is a small intrinsic philosophical risk to uploading, as well as the obvious practical risk that it won't work, and this means uploading does not strictly dominate life-extension or other actions. But this is not a controversial point, and it has already been embraced in practice by cryonicists in the analogous case (and we can expect any uploading to be either non-destructive or post-mortem); to the extent that Agar thinks this is a large or overwhelming disadvantage for uploading ("It is unlikely to be rational to make an electronic copy of yourself and destroy your original biological brain and body."), he is incorrect.
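As a toy illustration of the dominance point (the payoffs and probability below are my own assumptions, not figures from Agar or Danaher): once there is any state of the world in which uploading does worse, it cannot strictly dominate, even if it still wins in expectation.

```python
# Toy illustration of the dominance point; payoffs and probabilities are my
# own assumptions, not figures from Agar or Danaher.

p_searle_right   = 0.1   # assumed chance the upload does not preserve the person
u_upload_works   = 100   # assumed payoff of a successful upload
u_upload_fails   = 0     # assumed payoff if Searle is right and "you" are destroyed
u_life_extension = 60    # assumed payoff of conventional life-extension

eu_upload = (1 - p_searle_right) * u_upload_works + p_searle_right * u_upload_fails

# Uploading does not strictly dominate: in the "Searle is right" state it does
# worse (0 < 60), so the decision turns on probabilities and payoffs rather
# than being a free win -- but with these numbers it still wins in expectation.
print(eu_upload, u_life_extension)  # 90.0 60
```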