What objections can be raised against this argument? I'm looking both for good objections and objections that many people are likely to raise, even if they aren't really any good.
I'm not sure if this is an objection many people are likely to raise, or a good one, but in any case, here are my initial thoughts:
Transhumanism is just a set of values, exactly like humanism is a set of values. The feasibility of transhumanism can be shown by compiling a list of the values said to qualify someone as a transhumanist, and then observing the existence of people with those values, whom we slap a label on and say: here is a transhumanist!
Half an hour on Google should probably suffice to persuade the sceptic that transhumanists do in fact exist, and therefore that transhumanism is feasible. And so we're done.
I realize that this is not what you mean when you refer to the feasibility of transhumanism. You want to make an argument for the possibility of "actual transhumans". Something along the lines of: "It is feasible that humans with quantitatively or qualitatively superior abilities, in some domain, relative to some baseline (such as the best, or the average, performance of some collection of humans, perhaps all humans) can exist." Which seems trivially true, for the reasons you mention.
Where are the boundaries of human design space? Who do we decide to put in the plain old human category? Who do we put in the transhuman category — and who is just another human with some novel bonus attribute?
If one adopts a definition of a transhuman like the one I propose above, are world-record-holding athletes then weakly transhuman, since they go beyond the previously recorded bounds of human capability in strength, speed, or endurance?
I'd say yes, but justifying that would require a longer reply. One question one would have to answer is: who is a human? (The answers one gets to this question have likely changed quite a bit since the label "human" was first invented.)
If one allows the category of things that receive a "yes" in reply to the question "is this one a human?" to change at all, to expand over time, perhaps by an arbitrary amount (which is exactly what seems, to me at least, to have happened, and to continue to happen), then perhaps there will never be a transhuman. There will only be a growing category of things one considers "human", including some humans who are happier, better, stronger, and faster than any current or previously recorded human.
In order to say "this one is a transhuman", one first needs to decide upon some limits to what one will call "human", and then decide, arbitrarily, to put whoever goes beyond those limits into this new category, instead of continuing to relax the boundaries of humanity to include the new cases, as is usual.
Wikipedia defines transhumanism as:
Transhumanism, abbreviated as H+ or h+, is an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.
So what I mean by "the feasibility of transhumanism" is just the "possibility" half of that definition, setting aside the desirability.
Even granting all that, I suppose you can still quibble about semantics, but I ran through several possible labels for what I had in mind and that seemed the best choice.
An idea at least good enough for SF: biotech is illegal but available. People improve (or "improve") their children, and eventually what's considered normal drifts pretty far from what we'd consider baseline human.
Here's an interesting problem: why do we live in this era? Imagine the people who lived before we migrated out of Africa, when the human population was less than 10,000. What were the odds of being one of those people? Lower, at least, than the odds of winning the lottery. So we can conclude that the likelihood of existing in a specific era is proportional to the amount of consciousness in existence during that time period.
This presents a major problem for a technological singularity: the odds of living before a singularity that turned all matter in the universe into consciousness are virtually nil. So there will be no singularity, and it's almost frightening to imagine what we can conclude from this about our future.
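A toy self-sampling calculation in Python makes the proportionality claim concrete (the population figures are rough, illustrative assumptions, not precise data):

```python
# Toy self-sampling calculation; all population figures are rough guesses.
# Under the self-sampling assumption, P(you exist in era E) is roughly the
# fraction of all observers ever who live in era E.

total_humans_ever = 100e9        # ~100 billion humans ever born (rough estimate)
pre_exodus_population = 10_000   # population before the out-of-Africa migration

p_pre_exodus = pre_exodus_population / total_humans_ever
print(f"P(being one of the pre-exodus humans) ~ {p_pre_exodus:.0e}")  # ~1e-07

# If a singularity converts most of the universe's matter into observers,
# post-singularity observers vastly outnumber pre-singularity ones:
post_singularity_observers = 1e40  # arbitrary illustrative figure
p_pre = total_humans_ever / (total_humans_ever + post_singularity_observers)
print(f"P(living before such a singularity) ~ {p_pre:.0e}")  # virtually nil
```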
This argument is called the Doomsday Argument. It has been discussed several times around these parts (e.g., here).
In a technical sense, the issue revolves around how you think self-sampling should be understood. You might consider looking up the "Sleeping Beauty" problem for more discussion of that point.
In a non-technical sense, there's what might be called the "reference-class problem." On the one hand, the number of people in existence has constantly been increasing over time. On the other, the number of interconnected civilizations seems to be dropping (after the widespread adoption of the internet, one could argue that the number of distinct civilizations currently in existence can be counted on one's fingers and toes). Figuring out the correct reference class has profound effects on the conclusions one reaches using this kind of reasoning.
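To see concretely how much the reference class matters, consider Gott's delta-t formulation of this style of reasoning (a sketch, not anyone's endorsed model here): if your vantage point is a uniform random draw from a phenomenon's total lifetime, then with 95% confidence its future duration lies between 1/39 and 39 times its past duration, and everything hinges on which "phenomenon" you pick:

```python
# Gott's delta-t argument: if the time you observe a phenomenon is a
# uniform random draw from its total lifetime, then with confidence c
# its future duration t_f satisfies:
#   t_past * (1 - c) / (1 + c)  <  t_f  <  t_past * (1 + c) / (1 - c)

def gott_bounds(t_past: float, confidence: float = 0.95) -> tuple[float, float]:
    lo = t_past * (1 - confidence) / (1 + confidence)
    hi = t_past * (1 + confidence) / (1 - confidence)
    return lo, hi

# Reference class "Homo sapiens so far" (~200,000 years, a rough figure):
print(gott_bounds(200_000))   # ~ (5128 years, 7.8 million years)
# Reference class "the internet era" (~40 years): wildly different bounds.
print(gott_bounds(40))        # ~ (1 year, 1560 years)
```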
There are two arguments at the heart of criticism of the feasibility of transhumanism. One is skeptical about whether we can develop the science to achieve this aim; the other asserts that, whilst the tech may be possible, human beings will use it to kill each other in vast quantities.
The latter seems a more fundamental problem with human nature. You want personalized medicine? That requires the wide distribution of biotech printers: printers that would be just as happy to print out a lethal, tailor-made virus.
This argument is as old as the hills. But other than totalitarian snooping on EVERYBODY, how do you prevent widely distributed uber-tech from being abused?
I'm looking both for ... objections
As "V_V" implies, the existence of other forms of life and other forms of intelligence does not imply the possibility of radical life extension or of superintelligence.
It is easy enough to imagine a future in which biotechnology permits all sorts of altered lives and altered states without going much beyond the lifespan or intelligence of anything already in the animal kingdom, and in which computers, robots, and computer programs continue to be as brittle as they are now. So history continues and becomes posthuman, but not transhuman.
As has been raised by others, the fact that the design space is large does not imply that the possibilities have a high probability of being actualized.
Your argument shows that there is possibility and, I think, nothing more. But yes, excepting existential catastrophe, I don't see how transhumanism is avoidable.
I don't really see how the argument for the feasibility of H+ has much to do with the size of the design space for life (and AI, and nanotech, ...) as long as it's non-empty. After all, there's a huge design space for impossibilities as well. Or am I misunderstanding the argument?
There are some rather mundane improvements (at least compared to the design space) that would be enough, if realized, to show feasibility: say, intelligence augmentation or brain-computer hybrids.
Would an EMP effectively disable any implanted nanotechnology? If so, how can nanotechnology be made EMP-proof?
EMP destroys equipment by inducing high voltage and current in unshielded conductors, which act as antennas. The amount of energy picked up is related to the length of the conductor, with shorter conductors picking up less energy. Anything small enough to be described as "nanotechnology" would probably be unaffected, as long as it's not connected to unshielded external wiring. (An unmodified human touching a conductor would also experience an electric shock during an EMP.)
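A back-of-the-envelope sketch of that scaling (the 50 kV/m figure is an often-quoted peak field for a high-altitude nuclear E1 pulse; the conductor lengths are illustrative guesses):

```python
# Back-of-the-envelope: the open-circuit voltage induced on a short,
# unshielded conductor is on the order of (peak field) x (conductor length).

E_PEAK = 50e3  # V/m, often-quoted peak E1 field for a high-altitude burst

conductors = {
    "street-light feed (100 m)": 100.0,   # meters
    "cell-phone antenna (10 cm)": 0.1,
    "nanobot (1 micron)": 1e-6,
}

for name, length_m in conductors.items():
    volts = E_PEAK * length_m
    print(f"{name}: ~{volts:.3g} V induced")

# The micron-scale device sees ~0.05 V, far below damage thresholds,
# which is why isolated nanotech would probably ride out the pulse.
```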
I was worried that human augmentation might come at the cost of susceptibility to EMPs: tricksters finding it humorous to walk around with controlled-radius EMP devices, troubling augmented humans.
As opposed to beating random people with a stick, for instance? Try not to worry about unlikely things.
Point taken. But the repercussions of EMP disruption of augmented humans aren't akin to playful beatings with sticks: an augmented eye short-circuiting, an augmented arm rendered so much dead weight, an artificial heart stopped.
Unless of course you meant stick-beatings of a fatal or maiming nature, in which case I would not call them tricksters but thugs. Sorry if my diction misled.
to playful beatings with sticks
'playful' beatings with sticks?
an artificial heart stopped
Indeed. Messing with people's implanted devices, whether it is a standard pacemaker or sci-fiesque medical nanotech, would be a severe act of assault, not a prank.
What puzzles me is why you care about this particular type of assault, since other types seem much more likely.
'playful' beatings with sticks?
That was the only way I could reconcile 'beating random people with a stick' with 'tricksters'; 16th-century vagabonds in London, for instance, may have found it an amusing pastime.
What puzzles me is why you care about this particular type of assault, since other types seem much more likely.
EMP assaults come across (or came across, if they would indeed not prove problematic) as the largest obstacle to ensuring augmentation's safety from malicious attacks: it would be difficult to identify a guilty party in a crowd, and attacks might, if the technology develops to allow for it, be relatively easy to carry out.
What types of assault do you consider to have a higher probability?
EMPs affect only circuitry which can be broken by high voltage. The intersection of this with nanotechnology is not empty, but it is also not all of nanotechnology.
A Faraday cage should be enough to block the effects of an EMP (full-body chainmail, anyone?).
I really wasn't presenting it as an argument, but more as a request for information. I view technology of the sort I imagine nanotech requires as necessitating electricity, which I had thought susceptible to EMPs; I know brains aren't affected, but I could not fully explain why.
I would be surprised if a human were completely unaffected by an EMP that trashed the electronics around them.
Nuclear EMP effects had real-world impact damaging electronics, but I never saw any mention of human health damage from the EMP (as opposed to the fallout).
Street lights are an extreme case: hooked up directly to a very long baseline with no real protection to speak of. Anything capable of taking out, say, a cell phone would have to be several orders of magnitude stronger.
The link mentioned that if the detonation had been over the US, the effect itself would have been 6x stronger, quite aside from being closer than 1500 kilometers to the places that mattered. And that wasn't even designed to maximize EMP effects in any way.
Besides that, when someone says 'the electronics around them', I think that covers a lot more, and more important, stuff than one's cell phone.
The context here was an EMP deployed against nanobots, not power grids. The source will thus be optimized to produce EMP and to minimize collateral damage to general infrastructure, perhaps by producing smaller pulses closer to the target rather than enormous pulses further away.
In particular, the ability to affect microelectronics is paramount. The ability to take down the grid is irrelevant.
(replying here because the karma system apparently doesn't allow me to reply in the original subthread)
Standard evolutionary theory. Evolution did not have to take the path it took by any stretch of the imagination.
It could have created a zillion other varieties of bacteria, or rodents, or something else, but I wouldn't call those things transhumans.
While it is certainly plausible that humans can be improved to some extent using genetic engineering, given the state of the evidence there is no reason to believe that typical transhumanist fantasies such as extreme lifespans or extreme intelligence will be feasible with this approach.
AI and non-biochemical nanotech are even more speculative technologies.
EDIT:
We can't really say whether the space of all possible minds includes something that is substantially different from a human mind and yet as intelligent as, or even more intelligent than, a human at the things humans do (and is also manufacturable by humans, and not so alien that humans can't interact with it in any meaningful way).
Similarly, we can't say whether the space of all possible chemistries includes something substantially different from biochemistry-as-we-know-it, yet still capable of sustaining processes and forming structures of complexity and efficiency comparable to our biochemistry, while remaining compatible with the physical and chemical properties of our environment.
Eliezer sometimes talks about how animals on Earth are but a tiny dot in the "mind design space"; he makes this point, for example, in "Artificial Intelligence as a Positive and Negative Factor in Global Risk."
Though Eliezer doesn't stress this point, the argument applies as much to biotechnology as to Artificial Intelligence. You could say, paralleling Eliezer, that when we talk about "biotechnology" we are really talking about living things in general, because life on Earth represents just a tiny subset of all the life that could have evolved anywhere in the universe. Biotechnology may allow us to create some of the life that could have evolved but didn't. Extending the point, there's probably an even vaster space of life that's recognizably life but couldn't have evolved, because it exists on a tiny island of life not connected to other possible life by a chain of small, beneficial mutations, and is therefore effectively impossible to reach without the conscious planning of a bioengineer.
The argument can be extended further to nanotechnology. Nanotechnology is like life in that both involve doing interesting things with complex arrangements of matter on a very small scale; it's just that visions of nanotechnology tend to involve things which don't otherwise look very much like life at all. So we've got this huge space of "doing interesting things with complex arrangements of matter on a very small scale," of which existing life on Earth is a tiny, tiny fraction, and in which "Artificial Intelligence," "biotechnology," and so on represent much larger subsets.
Generalized in this way, this argument seems to me extremely important, enough to make it a serious contender for the title of "the basic argument for the feasibility* of transhumanism." It suggests a vast space of unexplored possibilities, some of which would involve life on Earth being very different than it is right now. Short of some catastrophe putting a halt to scientific progress, it seems hard to imagine how we could avoid some significant changes of this sort taking place, even without considering specifics involving superhuman AI, mind uploading, and so on.
On Star Trek, this outcome is avoided because a war with genetically enhanced supermen led to the banning of genetic enhancement, but in the real world such regulation is likely to be far from totally effective, no more than current bans on recreational drugs, performance enhancers, or copyright violation are totally effective. Of course, the real reason for the genetic engineering ban on Star Trek is that stories about people fundamentally like us are easier for writers to write and viewers to relate to.
I could ramble on about this for some time, but my reason for writing this post is to bounce ideas off people. In particular:
*I don't call it an argument for transhumanism, because transhumanism is often defined to involve claims about the desirability of certain developments, which this argument says nothing about one way or the other.