Eliezer sometimes talks about how animals on earth are but a tiny dot in the "mind design space." For example, in "Artificial Intelligence as a Positive and Negative Factor in Global Risk," he writes:
> The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes. Natural selection creates complex functional machinery without mindfulness; evolution lies inside the space of optimization processes but outside the circle of minds.
Though Eliezer doesn't stress this point, the argument applies as much to biotechnology as to Artificial Intelligence. You could say, paralleling Eliezer, that when we talk about "biotechnology" we are really talking about living things in general, because life on Earth represents just a tiny subset of all the life that could have evolved anywhere in the universe. Biotechnology may allow us to create some of the life that could have evolved but didn't. Extending the point, there's probably an even vaster space of life that's recognizably life but couldn't have evolved, because it sits on a tiny island not connected to other possible life by a chain of small, beneficial mutations, and is therefore effectively impossible to reach without the conscious planning of a bioengineer.
The argument can be extended further to nanotechnology. Nanotechnology is like life in that both involve doing interesting things with complex arrangements of matter on a very small scale; it's just that visions of nanotechnology tend to involve things which otherwise don't look much like life at all. So we've got this huge space of "doing interesting things with complex arrangements of matter on a very small scale," of which existing life on Earth is a tiny, tiny fraction, and in which "Artificial Intelligence," "biotechnology," and so on represent much larger subsets.
Generalized in this way, the argument seems to me an extremely important one, enough to make it a serious contender for the title "the basic argument for the feasibility* of transhumanism." It suggests a vast space of unexplored possibilities, some of which would involve life on Earth being very different than it is right now. Short of some catastrophe putting a halt to scientific progress, it seems hard to imagine how we could avoid some significant changes of this sort taking place, even without considering specifics involving superhuman AI, mind uploading, and so on.
On Star Trek, this outcome is avoided because a war with genetically enhanced supermen led to the banning of genetic enhancement, but in the real world such regulation is likely to be far from totally effective, no more than current bans on recreational drugs, performance enhancers, or copyright violation are totally effective. Of course, the real reason for the genetic engineering ban on Star Trek is that stories about people fundamentally like us are easier for writers to write and viewers to relate to.
I could ramble on about this for some time, but my reason for writing this post is to bounce ideas off people. In particular:
- Is there a better candidate for the title "the basic argument for the feasibility of transhumanism"?
- What objections can be raised against this argument? I'm looking both for good objections and objections that many people are likely to raise, even if they aren't really any good.
*I don't call it an argument for transhumanism, because transhumanism is often defined to involve claims about the desirability of certain developments, which this argument shows nothing about one way or the other.
I'm not sure if this is an objection many people are likely to raise, or a good one, but in any case, here are my initial thoughts:
Transhumanism is just a set of values, exactly like humanism is a set of values. The feasibility of transhumanism can be shown by compiling a list of the values said to qualify someone as a transhumanist, and then observing the existence of people with those values, on whom we slap a label and say: here is a transhumanist!
Half an hour on Google should probably suffice to persuade the sceptic that transhumanists do in fact exist, and therefore that transhumanism is feasible. And so we're done.
I realize that this is not what you mean when you refer to the feasibility of transhumanism. You want to make an argument for the possibility of "actual transhumans", something along the lines of: "It is feasible that humans with quantitatively or qualitatively superior abilities, in some domain, relative to some baseline (such as the best or the average performance of some collection of humans, perhaps all humans), can exist." Which seems trivially true, for the reasons you mention.
Where are the boundaries of human design space? Who do we decide to put in the plain old human category? Who do we put in the transhuman category — and who is just another human with some novel bonus attribute?
If one goes for such a definition of a transhuman as the one I propose above, are world record holding athletes then weakly transhuman, since they go beyond the previously recorded bounds of human capability in strength, or speed, or endurance?
I'd say yes, but justifying that would require a longer reply. One question one would have to answer is: who is a human? (The answers one would get to this question have likely changed quite a bit since the label "human" was first invented.)
If one allows the category of things that receive a "yes" in reply to the question "is this one a human?" to change at all, to expand over time, perhaps by an arbitrary amount (which is exactly what seems, to me at least, to have happened, and to continue happening), then, perhaps, there will never be a transhuman: only a growing category of things which one considers "human," including some humans who are happier, better, stronger, and faster than any current or previously recorded human.
In order to say "this one is a transhuman," one needs first to decide upon some limits to what one will call "human," and then to decide, arbitrarily, that whoever goes beyond those limits will be put into this new category, instead of continuing to relax the boundaries of humanity to include the new cases, as is usual.
Wikipedia defines transhumanism as a movement that "affirms the possibility and desirability of fundamentally improving the human condition" through technology.
So what I mean by "the feasibility of transhumanism" is just the "possibility" half of that definition, setting aside the desirability.