I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.
I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?
Suppose human society has some hope of designing FAI. Then I strongly suspect that a community of uploads has at least as good a chance of designing FAI. If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI. Moreover, if emulated brains eventually outproduce us significantly, then they have a higher chance of designing an FAI before something else kills them. The main remaining question is how safe an upload would be, and how well an upload-initiated singularity is likely to proceed.
There are two factors suggesting the safety of an upload-initiated singularity. First, an upload runs only as fast as the available computing substrate, so from the upload's subjective perspective computers never get faster. This makes it less likely for an upload to accidentally stumble upon (rather than deliberately design) AI. Second, there is hope of controlling the nature of uploads: if rational, intelligent uploads can be responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.
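To make the "subjectively faster" point concrete, here is a toy model with invented numbers (mine, not the original argument's): if the risk of stumbling onto AI scales with the computing power available per subjective hour of the researcher's thought, that ratio explodes for biological humans but stays flat for uploads.

```python
# Toy model with invented numbers: compute available per subjective
# hour of thought, for a biological researcher vs. an upload.

hardware_speed = [2 ** n for n in range(10)]   # hardware doubles each period

human_thinking = [1] * 10                      # biology does not speed up
upload_thinking = hardware_speed               # an upload runs on the substrate

human_ratio = [h / t for h, t in zip(hardware_speed, human_thinking)]
upload_ratio = [h / t for h, t in zip(hardware_speed, upload_thinking)]

print(human_ratio)   # grows from 1.0 to 512.0: ever more compute per thought
print(upload_ratio)  # stays at 1.0: subjectively, computers never get faster
```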
The main factor contributing to the risk of an upload-initiated singularity is that an upload already has access to a working human-level intelligence in software form: itself. Uploads may self-modify unsafely, and doing so may be easier (even in relative terms) than it is for biological humans to develop AI from scratch. Is this the crux of the argument against uploads? If so, could someone who has thought the argument through please spell it out in much more detail, or point me to such a spelling out?
This resembles the "Luddite fallacy", which was debunked by experience. Had the Luddites been right that the majority of the workforce would be replaced by a much more productive minority working the labor-saving machines (compare: humans replaced by much more productive uploads), we would already be living in something like a Hansonian upload dystopia. We are not.
What instead happened was that the labor force stayed the same and production greatly expanded, and labor reaped a large part of the benefit of the expansion.
Extending what actually happened in the Luddite scenario to the upload scenario, we might expect the amount of work to be done to expand to fully accommodate the number of people (human and upload) available to do it.
What of the fact that uploads need much less to survive? Won't this mean that they will be willing to work for much less, and therefore drive their human competitors out of business? Well, we in the US already earn far more than the bare minimum we need to survive, which is very little. So in a sense we are already living out the upload scenario with respect to low survival requirements, and it has not driven wages to subsistence. So it is not obvious that uploads will end up working at upload-subsistence level.
But demand will also increase. If you increase the total number of people, it's true that you are increasing the total number of potential producers (sellers), but you are also increasing the total number of consumers (buyers) by the exact same amount. You are simply expanding the population. And we have already seen the effects of this. The population of the US expanded enormously over the past 200 years, and wages have not gone down.
Am I forgetting the low subsistence level of uploads? Won't the heavy competition for jobs reduce salaries to subsistence level? Well, the same reasoning would have predicted exactly that in the Luddite scenario, and it turned out not to be the case. Here is an alternative suggestion: if the number of workers increases, say, by a factor of 1000, so that the effective population balloons from 5 billion to 5 trillion, then the work will also increase by the same factor, so that the amount of work done by each person remains the same and the standard of living enjoyed by each person remains the same.
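The arithmetic behind that suggestion is simple; here is a sketch with invented numbers (the 40-hour week is mine):

```python
# Sketch of the constant-per-capita claim, with invented numbers.

population = 5_000_000_000        # pre-upload workers
work_per_person = 40              # hours per week, say
total_work = population * work_per_person

scale = 1000                      # uploads multiply the workforce
new_population = population * scale
new_total_work = total_work * scale   # the suggestion: work scales with workers

# Per-capita work, and hence standard of living, is unchanged.
print(new_total_work / new_population == work_per_person)  # True
```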
But let's assume the pessimistic prediction is true and uploads can outcompete humans across the board.
Indulge me and suppose a billion (non-upload) humans agree to trade only with each other and not with uploads. (Don't worry, I know about the instability of such arrangements; I'll relax the restriction soon enough. I just want to set up the scenario.) In that case, they can continue surviving with an economy just like the pre-upload economy, in which people were, after all, surviving and doing quite well. Now let's relax the restriction: the humans start trading freely with uploads. We are now in the familiar "free trade" versus "protectionism" debate, and economists have plenty to say about that, mostly in favor of free trade as being to the mutual benefit of both populations. Vladimir has repeatedly made the point that comparative advantage is not all it's cracked up to be - but neither is it nothing. Free trade will probably put some sectors of the human economy largely out of business (though couldn't the displaced workers move to a different sector?), and economists arguing for free trade never denied this: shrinking certain sectors is, after all, exactly what is entailed by a population focusing on the areas where it has a comparative advantage. But free trade does not lead to the entire economy going out of business and everybody starving to death.
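Since comparative advantage keeps coming up, here is the standard toy calculation with invented production rates (the numbers are mine): even when the upload is better at everything in absolute terms, specialization and trade raise total output of both goods, which is what makes mutually beneficial exchange possible.

```python
# Toy comparative-advantage calculation with invented rates. The upload
# has an absolute advantage in both goods, yet specialization still
# raises total output of both.

upload_rates = (10, 10)   # (widgets/hour, poems/hour)
human_rates = (1, 5)

def output(rates, hours_widgets, hours_poems):
    return (rates[0] * hours_widgets, rates[1] * hours_poems)

# Each party works 2 hours. Without trade, each splits its time evenly.
no_trade = [output(upload_rates, 1, 1), output(human_rates, 1, 1)]

# With trade, the human specializes in poems, where its opportunity
# cost is lower (0.2 widgets per poem, versus 1 widget for the upload).
trade = [output(upload_rates, 1.2, 0.8), output(human_rates, 0, 2)]

for label, alloc in [("no trade", no_trade), ("trade", trade)]:
    widgets = sum(a[0] for a in alloc)
    poems = sum(a[1] for a in alloc)
    print(label, widgets, poems)
# no trade: 11 widgets, 15 poems
# trade:    12 widgets, 18 poems -- more of both, to be divided by exchange
```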
By the way, if it is possible, and if it is necessary to survive, I intend to become an upload. Here is my plan for increasing my productive power beyond that of a single upload, thus increasing my standard of living. I duplicate myself many times, say a thousandfold, but with constraints: the thousand copies remain in existence for, say, an hour of subjective time, during which they work, and then all are deleted except for one randomly selected copy. This will not be much like death for the 999 deleted copies; it will be much more like losing memories - memories which were likely to be lost anyway through normal forgetfulness. (See Part 3 of Derek Parfit's Reasons and Persons for a full discussion of personal identity, with which I essentially agree.) If I repeat this a few times, I will build up an intuitive expectation of survival and a willingness to keep doing it. Moreover, it might be possible to merge some of the copies' memories, minimizing the loss of significant ones.
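The accounting of the scheme is straightforward; here is a sketch using the numbers above (it tracks hours, of course, not minds):

```python
import random

# Sketch of the copy-work-delete scheme with the numbers given above:
# fork 1000 copies, each works one subjective hour, keep one at random.

def copy_work_delete(n_copies=1000, subjective_hours=1):
    copies = list(range(n_copies))                      # fork identical copies
    work_done = n_copies * subjective_hours             # all work in parallel
    survivor = random.choice(copies)                    # keep one at random
    memories_lost = (n_copies - 1) * subjective_hours   # hours the survivor won't remember
    return work_done, memories_lost, survivor

work, lost, _ = copy_work_delete()
print(work)  # 1000 subjective work-hours produced per round
print(lost)  # 999 hours of memories lost -- the cost the argument compares
             # to ordinary forgetting
```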
I am not saying that Hanson is wrong; I am just pointing out places where the argument seems to me incomplete. By the way, as I understand it, Hanson's dystopia is independent of whether we upload or not: it is in the very nature of life to expand, expand, expand, until a Malthusian limit is reached. That argument is quite a bit stronger, but I am here dealing specifically with the upload scenario.
Real wages for most Americans have gone down over the past few decades.
Admittedly, this is speculated to be due to a return to a more "natural" two-class society rather than to population changes.