Annoyance was the feeling I got, as well. It seems to me that in the places he does not commit the fallacy of grey, he only restates limits that any LW-style transhumanist already understands -- i.e., in an em scenario without a friendly singleton, there will still be disease, injuries, and death; even given a friendly singleton, with meaningful "continuous improvement" we only get about 28,000 subjective years until the heat death of the universe; etc.
"Vulnerable Cyborgs: Learning to Live with our Dragons", Mark Coeckelbergh (university); abstract:
Breaking down the vulnerabilities he covers:
Physical vulnerability
Material and immaterial vulnerability
Bodily vulnerability
Metaphysical vulnerability
Existential and psychological vulnerabilities
Social and emotional vulnerability
Ethical-axiological vulnerability
Relational vulnerability
Conclusion: Heels and dragons
Before criticizing it, I'd like to point to the introduction where the author lays out his mission: to discuss what problems cannot "in principle" be avoided, what vulnerabilities are "necessary". In other words, he thinks he is laying out fundamental limits, on some level as inexorable and universal as, say, Turing's Halting Theorem.
But he is manifestly doing no such thing! He lists countless 'vulnerabilities' which could easily be circumvented to arbitrary degrees. Take the computer viruses he puts such stock in: there is no fundamental reason computer viruses must exist. There are many ways they could be eliminated, starting with formal static proofs of security and functionality; the only fundamental limit relevant here is Turing/Rice's theorem, which applies only if we want to run all possible programs -- which we manifestly cannot and do not. Similar points apply to the rest of his software vulnerabilities.
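To make the Rice's-theorem point concrete: the theorem blocks deciding nontrivial semantic properties of *arbitrary* programs, but says nothing about restricted program classes. Here is a toy sketch of my own (not anything from the paper), using Python's standard `ast` module: a checker that admits only pure arithmetic expressions, a subset for which "cannot do anything virus-like" is trivially decidable, since such expressions provably cannot perform I/O or self-replicate. The function names and the particular safe subset are illustrative choices, not an established API.

```python
import ast

# Whitelist of AST node types for pure arithmetic. Anything outside this
# set (function calls, attribute access, imports, names) is rejected, so
# the admitted programs cannot touch the filesystem, network, or runtime.
SAFE_NODES = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
              ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow,
              ast.USub, ast.UAdd)

def is_provably_safe(src: str) -> bool:
    """Statically decide membership in the safe subset -- no Rice's-theorem
    obstacle, because we are not analyzing arbitrary programs."""
    try:
        tree = ast.parse(src, mode="eval")
    except SyntaxError:
        return False
    return all(isinstance(node, SAFE_NODES) for node in ast.walk(tree))

def run_if_safe(src: str):
    """Execute only code that passed the static check."""
    if not is_provably_safe(src):
        raise ValueError("rejected: not in the verified-safe subset")
    return eval(compile(ast.parse(src, mode="eval"), "<safe>", "eval"))
```

So `run_if_safe("2 * (3 + 4)")` returns 14, while `is_provably_safe("__import__('os')")` is False and the expression is never run. Real verified systems (proof-carrying code, seL4-style kernels) use the same move at far greater scale: restrict what you accept, then the "impossible" analysis becomes routine.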
I would also like to single out his 'Metaphysical vulnerability'; physicists, SF authors, and transhumanists have for decades been outlining a multitude of models and possibilities for true immortality, ranging from Dyson's eternal intelligences to Tipler's Omega Point collapse to baby black-hole universes. To appeal to atomism is already to beg the question (why not run intelligence on waves or more exotic forms of existence -- why this particle-chauvinism?).
This applies again and again -- the author supplies no solid proofs from any field, and apparently lacks the imagination or background to imagine ways to circumvent or dissolve his suggested limits. The methods may be exotic, but they exist; were the author to reply that employing such methods would result in intelligences so alien as to no longer be human, then I should accuse him of begging the question on an even larger scale -- of defining the human as desirable and, essentially, as that which is compatible with his chosen limits.
Since that question is at the heart of transhumanism, his paper offers nothing of interest to us.