I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.
I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?
Suppose human society has some hope of designing FAI. Then I strongly suspect that a community of uploads has at least as good a chance of designing FAI. If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI. Moreover, if emulated brains eventually outproduce us significantly, then they have a higher chance of designing an FAI before something else kills them. The main remaining questions are how safe an upload would be, and how well an upload-initiated singularity is likely to proceed.
There are two factors suggesting the safety of an upload-initiated singularity. First, uploads always run as fast as the available computing substrate, so from an upload's perspective computers never get subjectively faster. This makes it less likely that an upload will accidentally stumble upon (rather than deliberately design) AI, since there is no growing hardware overhang relative to the uploads' own thinking speed. Second, there is hope of controlling the nature of uploads: if rational, intelligent uploads are responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.
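A toy way to see the first point (a minimal sketch of my own; the speeds and growth rate are made-up assumptions, not anything from the argument itself):

```python
# Illustrative sketch: the compute available per subjective hour of an upload
# stays flat even as hardware improves, because the upload speeds up in
# lockstep with the substrate. All numbers are illustrative assumptions.

hardware_speed = 1.0    # relative compute per wall-clock hour
annual_growth = 1.5     # assumed hardware improvement per year

for year in range(5):
    upload_speed = hardware_speed              # upload runs as fast as the substrate
    subjective_compute = hardware_speed / upload_speed
    print(f"year {year}: hardware x{hardware_speed:.2f}, "
          f"compute per subjective hour x{subjective_compute:.2f}")
    hardware_speed *= annual_growth

# 'compute per subjective hour' prints x1.00 every year: from the upload's
# perspective, computers never get faster, so there is no accumulating
# hardware surplus to stumble into AI with.
```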
The main factor contributing to the risk of an upload-initiated singularity is that uploads, by definition, already have access to working uploads. It is possible that uploads will self-modify unsafely, and that unsafe self-modification may be easier (even in relative terms) than it is for existing humans to develop AI. Is this the crux of the argument against uploads? If so, could someone who has thought through the argument please spell it out in much more detail, or point me to such a spelling out?
Constant:
This is not a correct comparison. None of the technological advances in human history so far have produced machines capable of replacing human labor across the board at much lower cost. Uploads would be a totally unprecedented development in this regard.
The closest historical analogy is what happened to draft horses after motor transport was invented. The amount of work involved in pulling things has indeed expanded, but it is no longer possible for a draft horse to earn its subsistence, since machines do the work orders of magnitude more cheaply and better.
The economist Nick Rowe wrote an excellent analysis along these lines (see also the very good comment thread):
http://worthwhile.typepad.com/worthwhile_canadian_initi/2011/01/robots-slaves-horses-and-malthus.html
That’s because economic growth and technical progress have been too fast for slow and fickle human reproduction to catch up. With uploads, in contrast, the population growth necessary to hit the Malthusian limit can happen practically instantaneously -- and there will be incentives in place to make it happen.
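A back-of-the-envelope sketch of why the Malthusian limit binds so much faster for uploads (the doubling times and the 1000x limit are assumptions I'm supplying for illustration, not figures from Rowe's post):

```python
import math

# Compare how long it takes each population to grow 1000-fold, i.e., to
# plausibly hit a Malthusian resource limit. Doubling times are illustrative:
# roughly one human generation vs. roughly a day to copy a running emulation.

human_doubling_years = 25.0
upload_doubling_days = 1.0

growth_factor = 1000.0
doublings = math.log2(growth_factor)   # ~10 doublings

print(f"humans:  {doublings * human_doubling_years:,.0f} years")
print(f"uploads: {doublings * upload_doubling_days:,.0f} days")

# humans: ~249 years; uploads: ~10 days. Under these assumptions, wages for
# copyable workers fall toward the marginal cost of running one more copy
# almost immediately, rather than over centuries.
```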
As for the remainder of your post, rather than criticizing your reasoning point by point, let me ask you: why didn’t the draft horses benefit from trading with motor transport, ending up in slaughterhouses instead? Your entire argument could be reworded as telling an American draft horse circa 1920 that it has no reason to fear displacement by motor vehicles. What is the essential difference supposed to be when it comes to human labor versus uploads?
You seem to think that the Luddite fallacy depends on the possibility of substitution not being across the board. I've already answered a similar point by jimrandomh, but I will answer again. Suppose that we have a series of revolutions in one sector after another, in which labor-saving machines greatly increase the productivity of workers within that sector. So, what will happen? First, let's see what will...