I was very put off by this piece--even though Jobs clearly had a pro-death stance, using his death as an opportunity to score points against transhumanists isn't particularly nice. (And I wonder how much of this article Jobs would have actually agreed with.)
I found this part of the article especially disturbing:
Indeed, [Jobs] might conclude that the whole flaw in the Kurzweil vision is that anybody anywhere would have any lasting use for us, that our sticking around in digital form would be welcome or valuable in any way.
Apparently the author thinks that human lives, once uploaded, would have neither moral value nor the ability to do useful work.
One funny thing is that "cryogenics" and the singularity are so often lumped together, even though for many people interest in them is inversely correlated: if you believe in a near singularity (and are relatively young), you don't need cryonics for personal survival, and if you don't believe in a near singularity, you do need cryonics.
If you believe that technology is going to get vastly better before you're in serious danger of death, you don't need cryonics. If you believe that technology isn't going to get vastly better at all, or at least not for hundreds of years, you probably shouldn't bother with cryonics. There's a region in the middle where it looks much more appealing: you expect huge progress, but fear that it'll happen after your death.
One outside-view reason for adjusting one's optimism about cryonics downwards would be that predictions of Big Dramatic Changes on a timescale of, say, a few decades, are very common and almost always wrong. AI and fusion power have been about 50 years away for at least 50 years. (Cheap space travel, too?) Looking outside the domain of science, this is also a common timescale for end-of-the-world cults.
(For the avoidance of doubt, I am not claiming that everyone who thinks cryonics worthwhile is overestimating the probability of huge technological progress in the 30-to-300-year range, nor that cryonics is not in fact worthwhile.)
he might conclude that the most important feature such a device could have is an off-switch—a permanent one.
The irony being that iPods (and iThink (too much?) iPads?) don't have "off switches" per se...
('Should it fit in a pocket or backpack?': Robot chassis, please. 'Who is the user?': Hopefully the consciousness itself. O.O)
What kind of device should our consciousness occupy? . . . And who is the user? . . . If the user is our "survivors"—i.e., our loved ones who still exist in physical form—he might conclude that the most important feature such a device could have is an off-switch—a permanent one.
He probably didn't intend to do it, but I think the author just claimed that uploads ought to become their relatives' property, and that said relatives should be easily able and permitted to kill them. Presumably, he doesn't believe that our consciousnesses would really be present in uploads, or he would have more respect for people's right to self-ownership.
The best-case scenario for transhumanists would have been if this piece had been written by Robert Bryce.
I downvoted this because (I thought it was too highly upvoted and) I think a purely negative comment claiming that such a bad article's flaws are typical should say explicitly what is wrong. Otherwise a reader might wrongly take it as strong evidence that some particular flaw of the article is common - I see many flaws in it, large and small, and I don't know which of them you perceive as common or rare.
What I meant was mostly the tendency to use news as a platform to "score points against the other team." That is at best uncorrelated with correctness, and it can do even worse if the "other team" in question actually bases its positions on the facts.
And One Last Thing: Digital immortality is an app that probably wouldn't have interested Steve Jobs
Excerpt: