However, if you kept adding an unlimited number of immigrants to a country at arbitrarily fast rates, including an unlimited number of immigrants skilled at each imaginable profession, the wages of all kinds of labor would indeed plummet....
That is what uploads mean, and there's no way you can extrapolate the comparably infinitesimal trends from ordinary human societies to such extremes.
On the one hand you express near certainty about what would happen (wages "would indeed plummet"), and on the other hand you caution about extrapolating from the known to the unknown.
My position, as you will recall, is not that Hanson is wrong, but that his argument is incomplete. My position is skeptical, in the sense that I see important gaps in the argument (at least as reproduced here). You are defending Hanson's prediction - the prediction about which I am expressing skepticism. Warnings about extrapolating from the known to the unknown work in favor of skepticism about predictions and against confidence in predictions, and therefore they work in my favor.
Uploads still require non-zero resources to subsist
Indeed they do, but my point is that if you look at the two ends of this spectrum - one end at which they take up the same amount of resources as humans, and the other end at which they take up nothing - at both ends there is no clear reason to believe that humans will die off. Now, this does not necessarily mean that something funny won't happen in between. But it is very common that if A causes B, then more of A causes more of B; so the fact that taking A to an extreme does not obviously cause any more B should at least make a person who reasoned that A causes B start to suspect that maybe they missed something.
Imagining that the uploads take zero resources and charge zero for their services is unrealistic, granted - about as unrealistic as imagining that you are traveling along at the speed of light and trying to imagine what you observe. Unrealistic, yes, but not necessarily useless. It's inherently hard to think about most things, and so as an assist - a dangerous assist, granted - it is useful to consider cases which are simpler to think about, as extremes often are.
You are tremendously confident in a certain prediction. I am not confident. I am objecting, pointing out why certain supposed extrapolations do not really follow, because the larger picture matters - the larger picture being what you call "all kinds of complex and non-obvious effects", which you continue to neglect and which you argue does not matter if the increase is sufficiently fast - as if increasing the speed of the transition would by magic somehow enhance the effects that you happen to have considered while negating the effects that I have pointed out. Which is not the case. If an upload replaces a human at some task because the upload does it better for less, then the customer immediately benefits. So the speed of the neglected effect (benefit to customer) is precisely as fast as the speed of the considered effect (harm to competitor). Speed up one by a million times, and the other also speeds up by a million times, because they are flip sides of precisely the same occurrence.
But the rent of land must be at least as high as the opportunity cost of filling it up with swarms of slaving uploads and reaping the profits, which will be many orders of magnitude above what a human can earn. It would be as if presently there existed a creature large enough to fill a whole state and requiring its entire agricultural output to subsist, but incapable of doing more productive work than a single human.
To say that one quantity would be much larger than another does not mean that the second quantity would be absolutely low. The first quantity could be absolutely very high.
We already have a kind of land use similar to what you are describing: skyscrapers. These allow an enormous number of people to occupy a minuscule square footage. So, where is the mass starvation? Do you think that the American economy would be enhanced by blowing up skyscrapers full of people? Or do you think that the American economy would be harmed? I think the latter.
But rent would definitely be lowered in NYC if all of its buildings were blown up. So, yes, rent is high because of the high concentration of minds. But lowering the rent would not be accompanied by a net benefit to humanity: I don't think we would be benefited by lowering rents in NYC by means of blowing up the buildings with the people in them. So, why would we necessarily be benefited by blowing up a square yard of land with trillions of minds on it? And if we would not be benefited by their destruction, then we would not be harmed by their introduction.
How could such a creature support itself?
That scenario imagines a creature with a certain absolute size and a certain absolute productivity. Given that absolute size and that absolute productivity, the creature cannot support itself. But given only that a human is much less productive than a trillion minds in a box, we cannot draw any conclusions about how well the human can support themselves.
On the one hand you express near certainty about what would happen (wages "would indeed plummet"), and on the other hand you caution about extrapolating from the known to the unknown.
I don't caution about extrapolating from the known to the unknown in this case -- on the contrary. The economic effects of the (relatively) low rates of migration and population growth in today's world are unclear, complicated, and controversial, since these phenomena are intertwined with many others of similar magnitudes. In contrast, the economic effects of the...
I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.
I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?
Suppose human society has some hope of designing FAI. Then I strongly suspect that a community of uploads has at least as good a chance of designing FAI. If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI. Moreover, if emulated brains eventually outproduce us significantly, then they have a higher chance of designing an FAI before something else kills them. The main remaining question is how safe an upload would be, and how well an upload-initiated singularity is likely to proceed.
There are two factors suggesting the safety of an upload-initiated singularity. First, uploads always run as fast as the available computing substrate, so computers never get subjectively faster, and it is therefore less likely for an upload to accidentally stumble upon (rather than design) AI. Second, there is hope of controlling the nature of uploads; if rational, intelligent uploads can be responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.
The main factor contributing to the risk of an upload-initiated singularity is that uploads already have access to uploads - minds that exist as modifiable software. It is possible that uploads will self-modify unsafely, and that this may be (even relatively) easier than it is for existing humans to develop AI. Is this the crux of the argument against uploads? If so, could someone who has thought through the argument please spell it out in much more detail, or point me to such a spelling out?