The problem with the abstract seems different from what you describe (I only read the abstract). It looks like a kind of fallacy of gray, arguing for the irrelevance of (vast) quantitative improvements by pointing out the (supposed) absence of any corresponding absolute qualitative change. It's similar to a popular reaction to the idea of life extension: people point out that it's not possible to live "forever", even though this point doesn't make the improvement from 80 to 800 years any less significant. (It's misleading to bite the bullet and start defending the possibility of immortality, which is unnecessary for the original point.) This pattern matches most of the goals outlined in the abstract.
That's part of what's frustrating: there are many parts which do look exactly like the fallacy of grey (thanks for reminding me of the name; I simply couldn't remember it), and he seems to recognize this a bit in some of the later parts, such as where he describes how a defender of Bostrom might point out that the goal of the fable was to motivate us to eliminate one particularly bad dragon.
But he also took pains to explicitly state at one point that his concern is with fundamental limits, so anyone who looked at just the abstract, or just (all the many) parts that look like the fallacy of grey, could instantly be smacked down with 'you clearly did not read my paper carefully, because I am not concerned with the transhumanists' incremental improvements but with the final goal of perfection'.
The paper is muddled enough that I don't think this was deliberate, but it does impress me a little bit.
Annoyance was the feeling I got as well. It seems to me that in the places where he does not commit the fallacy of grey, he only restates limits that any LW-style transhumanist understands: i.e., in an EM scenario without a friendly singleton, there will still be disease, injuries, and death; even given a friendly singleton, with meaningful "continuous improvement" we only get about 28,000 subjective years until the heat death of the universe, etc.
Thanks for doing this; your criticism is precisely what I was thinking a few lines into the piece. To echo the other thing Douglas_Knight said, though, it's helpful to say something at the top that lets people know whether this is a worthwhile read for them. (For instance, the Scooby Doo post's title makes it pretty clear to most people whether or not it's the sort of thing they want to read right now.)
In this case, it would have been relevant to say that (in your analysis) the linked article isn't of interest for the quality of its insights, but mainly because it's been published in a prestigious journal and thus illustrates the (embarrassingly shallow) current level at which academics publicly engage with transhumanist ideas. (There are more deft/polite/high-status ways to briefly convey this information, of course.)
"Since that question is at the heart of transhumanism, his paper offers nothing of interest to us."
So why bother with it?
For the same reason you deal with any critic; in particular, this is published in one of the most relevant journals for LW topics. One may not like it or consider it a valuable contribution, but that doesn't mean it's not worth discussing, especially since, as far as I can tell, no one has discussed it yet.
(And what's with the high standards? This is in Discussion; this is more relevant than at least a quarter of the other Discussion posts like Scooby Doo or 'Semantic Over-achievers'.)
It seems to me that this standard would result in you writing hundreds of similar reviews with the same conclusion. Why did you choose this one? If you write more articles like this, please state the conclusion at the beginning so I can avoid reading it. I can filter other posts by their titles.
I'm not sure there are hundreds of such articles, but since you asked, I was thinking of doing the other 3 papers in this special JET issue (note the tag); then, if people seemed to find it valuable or it seemed to be leading to good discussions, I might sporadically do particularly good or interesting ones from previous issues of JET. Is this a problem?
While I'm asking your permission, perhaps you could tell me in advance what you would think of a chapter-by-chapter read of Good and Real, or of reading through the SL4 archive to produce 'greatest hits' pages of links to and excerpts from the best/most original SL4 emails. (After all, I wouldn't want to annoy you.)
"Particularly good or interesting" articles sound like great ones to write about. That's the opposite of "nothing of interest to us." If you can identify "particularly good or interesting" articles, why write about the current ones? They won't be current forever. If you conclude that a chapter of Good and Real is worthless, then I would like to know that at the start of the review. But surely the reason you chose Good and Real for this treatment is because you don't expect that conclusion.
Thank you for providing a digest of the article. After reading the abstract, I wanted to know the content of the argument, but I didn't want to read the whole thing. The digest is just perfect.
/me shrugs
Yeah, most proposed "immortality" methods probably wouldn't survive, say, the Earth falling into a black hole, a sufficiently close gamma ray burst, or the heat death of the universe, but, you know, I don't really care.
"Vulnerable Cyborgs: Learning to Live with our Dragons", Mark Coeckelbergh (university); abstract:
Breaking down the potential improvements:
Physical vulnerability
Material and immaterial vulnerability
Bodily vulnerability
Metaphysical vulnerability
Existential and psychological vulnerabilities
Social and emotional vulnerability
Ethical-axiological vulnerability
'Relational vulnerability'/'Conclusion: Heels and dragons'
Before criticizing it, I'd like to point to the introduction where the author lays out his mission: to discuss what problems cannot "in principle" be avoided, what vulnerabilities are "necessary". In other words, he thinks he is laying out fundamental limits, on some level as inexorable and universal as, say, Turing's Halting Theorem.
But he is manifestly doing no such thing! He lists countless 'vulnerabilities' which could easily be circumvented to arbitrary degrees. For example, the computer viruses he puts such stock in: there is no fundamental reason computer viruses must exist. There are many ways they could be eliminated, starting with formal static proofs of security and functionality; the only fundamental limit relevant here would be Turing/Rice's theorem, which applies only if we wanted to run all possible programs, which we manifestly cannot and do not. Similar points apply to the rest of his software vulnerabilities.
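(To be concrete about the theorem being invoked - this is standard computability theory, not anything supplied in the paper:)

$$\text{Rice's theorem: if } \mathcal{C} \text{ is a class of partial computable functions with } \emptyset \subsetneq \{\, e : \varphi_e \in \mathcal{C} \,\} \subsetneq \mathbb{N}, \text{ then } \{\, e : \varphi_e \in \mathcal{C} \,\} \text{ is undecidable.}$$

The theorem only bites if we demand a single decision procedure that classifies arbitrary programs. A system which refuses to run any program not accompanied by a machine-checkable proof of the desired security property never has to make that decision: the proof checker verifies the supplied proof, at the price of rejecting some safe-but-unproven programs. That is an engineering restriction we can choose, not a fundamental limit.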
I would also like to single out his 'Metaphysical vulnerability': physicists, SF authors, and transhumanists have been outlining, for decades, a multitude of models and possibilities for true immortality, ranging from Dyson's eternal intelligences to Tipler's Omega Point collapse to baby black-hole universes. To appeal to atomism is to already beg the question (why not run intelligence on waves or on more exotic forms of existence; why this particle-chauvinism?).
This applies again and again - the author supplies no solid proofs from any field, and apparently lacks the imagination or background to conceive of ways to circumvent or dissolve his suggested limits. The methods may be exotic, but they exist; were the author to reply that employing such methods would result in intelligences so alien as to no longer be human, then I should accuse him of begging the question on an even larger scale - of defining the human as desirable and, essentially, as that which is compatible with his chosen limits.
Since that question is at the heart of transhumanism, his paper offers nothing of interest to us.