I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.
I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?
Suppose human society has some hope of designing FAI. Then I strongly suspect that a community of uploads has at least as good a chance of designing FAI. If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI. Moreover, if emulated brains eventually outproduce us significantly, then they have a higher chance of designing an FAI before something else kills them. The main remaining questions are how safe an upload would be, and how well an upload-initiated singularity is likely to proceed.
There are two factors suggesting the safety of an upload-initiated singularity. First, uploads always run as fast as the available computing substrate, so from an upload's perspective computers never get subjectively faster; this makes it less likely that an upload will accidentally stumble upon (rather than deliberately design) AI. Second, there is hope of controlling the nature of uploads: if rational, intelligent uploads are responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.
The main factor contributing to the risk of an upload-initiated singularity is that uploads already have access to emulated minds. It is possible that uploads will self-modify unsafely, and that this may be easier (even in relative terms) than it is for existing humans to develop AI. Is this the crux of the argument against uploads? If so, could someone who has thought through the argument please spell it out in much more detail, or point me to such a spelling out?
I'm not very familiar with Robin Hanson's arguments, but I think I disagree with his assumption/conclusion that a singleton is unlikely. The balance of power he presumes seems very unlikely to persist for long.
Moreover, the question is whether we can engineer a future in which uploads design a friendly singularity. To me this (naively) seems easier than solving friendliness directly. Hanson's writings don't really speak to this question.
I'm going to try to summarize them (feedback on how well this works, please). One relates to how uploads come to dominate humanity, and the other to how ruthless resource exploiters come to dominate their section of the universe.
Uploads beating Humans
An upload will be as mentally capable as a human, but faster.
An upload will be easy to copy.
Uploads will require far fewer resources to survive.
Basically, the argument goes that uploads will be able to replace large swathes of normal humans who work on cognitive ...