My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?
You're probably familiar with Robin Hanson's writings on the economics of uploads. If you accept his arguments -- and I do find them very convincing -- then uploads will lead quickly and directly to an extremely grim Malthusian equilibrium. (Though Hanson himself, who accepts the Repugnant Conclusion but sees nothing repugnant about it, wouldn't characterize it as grim. Most people, however -- including, I think, most people on LW -- would find it rather horrible, assuming they really understand the implications.)
I'm not at all optimistic about what awaits us if any sort of machine intelligence gets developed, but the upload scenario strikes me as especially dismal.
Robin's vision is actually far, far worse than the ordinary Repugnant Conclusion, because it doesn't necessarily preserve or even increase total net utility. You still end up with huge numbers of people whose lives are barely worth living, but they are only a tiny fraction of the number you'd get under the ordinary Repugnant Conclusion from the same starting point (which presupposes free resources sufficient to bring every newly created person up to the barely-worth-living level).
I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.
I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred.
Suppose human society has some hope of designing FAI. Then I strongly suspect that a community of uploads has at least as good a chance of designing FAI. If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI. Moreover, if emulated brains eventually outproduce us significantly, then they have a higher chance of designing an FAI before something else kills them. The main remaining question is how safe an upload would be, and how well an upload-initiated singularity is likely to proceed.
There are two factors suggesting the safety of an upload-initiated singularity. First, uploads run as fast as the available computing substrate allows, so from an upload's perspective computers never get subjectively faster; this makes it less likely for an upload to accidentally stumble upon (rather than deliberately design) AI. Second, there is hope of controlling the nature of uploads; if rational, intelligent uploads can be responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.
The main factor increasing the risk of an upload-initiated singularity is that uploads have direct access to upload minds as software. It is possible that uploads will self-modify unsafely, and that this may be easier (even in relative terms) than it is for existing humans to develop AI. Is this the crux of the argument against uploads? If so, could someone who has thought through the argument please spell it out in much more detail, or point me to such a spelling out?