I'm looking both for ... objections
As "V_V" implies, the existence of other forms of life and other forms of intelligence does not imply the possibility of radical life extension or of superintelligence.
It is easy enough to imagine a future in which biotechnology permits all sorts of altered lives and altered states without going much beyond the lifespan or intelligence of anything already in the animal kingdom, and in which computers, robots, and computer programs continue to be as brittle as they are now. So history continues and becomes posthuman, but not transhuman.
Eliezer sometimes talks about how animals on earth are but a tiny dot in the "mind design space." For example, in "Artificial Intelligence as a Positive and Negative Factor in Global Risk," he writes:
Though Eliezer doesn't stress this point, this argument applies as much to biotechnology as to Artificial Intelligence. You could say, paralleling Eliezer, that when we talk about "biotechnology" we are really talking about living things in general, because life on Earth represents just a tiny subset of all the life that could have evolved anywhere in the universe. Biotechnology may allow us to create some of that life that could have evolved but didn't. Extending the point, there's probably an even vaster space of life that's recognizably life but couldn't have evolved, because it occupies a tiny island of possibility not connected to other possible life by a chain of small, beneficial mutations, and is therefore effectively impossible to reach without the conscious planning of a bioengineer.
The argument can be extended further to nanotechnology. Nanotechnology is like life in that both involve doing interesting things with complex arrangements of matter on a very small scale; it's just that visions of nanotechnology tend to involve things which otherwise don't look much like life at all. So we've got this huge space of "doing interesting things with complex arrangements of matter on a very small scale," of which existing life on Earth is a tiny, tiny fraction, and in which "Artificial Intelligence," "biotechnology," and so on represent much larger subsets.
Generalized in this way, this argument seems to me to be an extremely important one, enough to make it a serious contender for the title of "the basic argument for the feasibility* of transhumanism." It suggests a vast space of unexplored possibilities, some of which would involve life on Earth being very different from what it is right now. Short of some catastrophe putting a halt to scientific progress, it seems hard to imagine how we could avoid having some significant changes of this sort take place, even without considering specifics involving superhuman AI, mind uploading, and so on.
On Star Trek, this outcome is avoided because a war with genetically enhanced supermen led to the banning of genetic enhancement, but in the real world such regulation is likely to be far from totally effective, any more than current bans on recreational drugs, performance enhancers, or copyright violation are totally effective. Of course, the real reason for the genetic engineering ban on Star Trek is that stories about people fundamentally like us are easier for writers to write and for viewers to relate to.
I could ramble on about this for some time, but my reason for writing this post is to bounce ideas off people. In particular:
(*I don't call it an argument for transhumanism, because transhumanism is often defined to involve claims about the desirability of certain developments, which this argument doesn't show anything about one way or the other.)