ISTM that the major flaw in Hanson's logic is the assumption that uploads won't replace themselves with simpler nonsentients built from their expertise. The real evolutionary pressure wouldn't be towards optimal levels of pain and pleasure, but towards replacing motivation with automation: it takes less power, computing time, and storage space.
Stuart, it sounds like you think that the lives of the typical animal, and of the typical human in history, were not worth living -- you'd prefer that they had never existed. Since you seem to think your own life worth living, you must see people like yourself as rare exceptions, and may be unsure whether your existence justifies all the suffering your ancestors went through to produce you. And you'd naturally be wary of a future of descendants with lives more like your ancestors' than like your own. What you'd most want from the future is to stop change enough to ensure that people very much like you continue to dominate.
If we conceive of "death" broadly, then pretty much any competitive scenario will have lots of "death", if we look at it on a large enough scale. But this hardly implies that individuals will often feel the emotional terror of an impending death - that depends far more on framing and psychology.
the lives of the typical animal, and of the typical human in history, were not worth living -- you'd prefer that they had never existed.
When I read this, a part of my brain figuratively started jumping up and down and screaming "False Dichotomy! False Dichotomy!"
Even if you and I might disagree on trading the number and length of lives against some measure of their quality, I hope you see that my analysis can help you identify policies that might push the future in your favored direction. I'm first and foremost trying to predict the outcomes of a low-regulation scenario. That is the standard basis for analyzing the consequences of possible regulations.
'Pain asymbolia' is a condition in which people feel pain but it isn't painful: they are aware of the damage, but it causes no suffering. (As opposed to conditions like leprosy or diabetes, where the pain nerves are dead and report nothing, causing endless health problems.)
We already find it very useful to override pain in the interests of long-term gain or optimization (e.g. surgery). Why should we not expect uploads to quickly be engineered for pain asymbolia? Pain that is more like a clock ticking away in the corner of one's eye than a needle through the eye doesn't seem like that bad a problem.
Far more efficiently dealt with by a simple cognitive prosthesis like RescueTime... What's better: a few machine instructions matching a blocked Web address, or reengineering the architecture of the brain with, at a minimum, operant conditioning? This is actually a good example of how a crude ancestral mechanism like pain is not very adaptive or applicable to upload circumstances!
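For concreteness, here is a minimal sketch of how cheap such a prosthesis is (the hostnames and function name are my own hypothetical examples): the entire intervention is a set-membership test.

```python
# Hypothetical distraction filter: the whole intervention is a set lookup,
# a handful of machine instructions, rather than any rewiring of the mind.
BLOCKED_HOSTS = {"example-timesink.com", "example-distraction.net"}

def allow_request(host: str) -> bool:
    """Return False for hosts the upload has chosen to block."""
    return host.lower() not in BLOCKED_HOSTS

print(allow_request("example-timesink.com"))  # False
print(allow_request("wikipedia.org"))         # True
```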
I've never been able to figure out what sort of work ems would do once everything available has been turned into computronium. A few of them would do maintenance on the physical substrate, but all I can imagine for the rest is finding ways to steal computational resources from each other.
What are humans doing now that we need only ~2% of the workforce to grow food and ~15% to design and make stuff?
I didn't say the rest weren't doing useful tasks. On the contrary, I meant to imply that if only a fraction of the workforce works on providing subsistence directly and obviously, it doesn't mean that the rest are useless rent-seekers.
(That said, I probably do have a more pessimistic view than you about the amount of rent-seeking and makework that takes place presently.)
This article has given me an idea for a new worst-case scenario for preference utilitarianism: a lot of computing power and an algorithm that makes different minds pop in and out of existence. Each time, the mind has a different combination of preferences drawn from some vast template space of possible minds. And each time, the mind is turned off (forever) after a very brief period of existence. How much computing power and time would it need to force a preference utilitarian to torture every human being on earth, if that were the only way to prevent the simulation?
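A toy rendering of the setup, just to make its structure explicit - nothing here is remotely a mind, and the "template space" is just random preference vectors:

```python
import random

PREFERENCE_DIMENSIONS = 32  # 2**32 combinations stand in for a "vast template space"

def run_nightmare(steps: int = 10) -> None:
    """Spawn a random preference vector, let it 'exist' briefly, then discard it."""
    for i in range(steps):
        mind = [random.choice((-1, 1)) for _ in range(PREFERENCE_DIMENSIONS)]
        print(f"mind {i}: preferences {mind[:4]}... created and destroyed")
        # The mind is turned off (forever) after a very brief period of existence.

run_nightmare()
```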
Malthusian cases arise mainly when reproduction is involuntary or impulsive, as it is with humans. It seems highly unlikely that ems will have the same mechanisms in place for this.
Plus, a 'merge' function would solve the 'fork and die' problem.
Instead of the deletion or killing of uploads that want to live but can't cut it economically, why not slow them down? (Perhaps to the point where they are only as "quick" and "clever" as an average human being is today.) Given that the cost of computation keeps decreasing, this should impose a minimal burden on society going forward. This could also be an inducement to find better employment, especially if employers can temporarily grant increased computation resources for the purposes of the job.
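A minimal sketch of what such a policy could look like, assuming (purely hypothetically) that an upload's clock speed is allocated as a guaranteed floor plus a component bought with earnings:

```python
def allotted_speed(earnings: float,
                   floor: float = 1.0,
                   speed_per_credit: float = 0.01) -> float:
    """Compute an upload's clock speed, in multiples of a baseline
    human-equivalent speed, under a "throttle, never delete" policy.

    Every upload runs at least at `floor` (here, one human-equivalent),
    however badly it does economically; extra earnings buy extra speed.
    All names and values here are illustrative, not from the post.
    """
    return floor + speed_per_credit * max(earnings, 0.0)

# An unemployable upload still runs at human speed; a well-paid one runs faster.
print(allotted_speed(0.0))    # 1.0
print(allotted_speed(500.0))  # 6.0
```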
Nonsentient AI doing all the necessary work would be a far better option. A protocol regulating the uploading and copying of minds should be put in place in time.
An upload might then be only a recipient of pleasure, nothing else.
The virtualization of conflict neatly solves this. Nature makes conflict virtual as part of its drive towards efficiency; the results are conflicts between companies and between sports teams. When these "die" it is sometimes sad, but no human beings are seriously harmed in the process. It's Darwinian evolution that has lost its sting: evolution via differential reproductive success is largely an alternative to evolution via death.
Robin Hanson has done a great job of describing the future world and economy under the assumptions that easily copied "uploads" (whole brain emulations) become feasible and that the standard laws of economics continue to apply. To oversimplify the conclusion:
Competition will be driven not so much by variation as by selection: uploads with the required characteristics can be copied again and again, undercutting and literally crowding out any uploads wanting higher wages.
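As a toy illustration of that selection dynamic (my own sketch, not Hanson's model; all numbers invented): if employers can copy the cheapest willing upload at will, the market wage ratchets down toward the lowest reservation wage, with copying noise as the only variation left.

```python
import random

random.seed(0)

# Toy labour market: 100 uploads, each with its own reservation wage.
wages = [random.uniform(5.0, 10.0) for _ in range(100)]

for gen in range(8):
    cheapest = min(wages)
    print(f"generation {gen}: lowest acceptable wage = {cheapest:.2f}")
    # Employers copy the cheapest upload; a little copying noise stands in
    # for what remains of variation once selection dominates.
    wages = [cheapest * random.uniform(0.97, 1.03) for _ in wages]
```

Each generation, the population collapses onto copies of the cheapest agent, so the prevailing wage only ever ratchets downwards.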
Megadeaths
Some have focused on the possibly troubling aspects of voluntary or semi-voluntary death: some uploads would be willing to make copies of themselves for specific tasks, which would then be deleted or killed at the end of the process. This can pose problems, especially if the copy changes its mind about deletion. But much more troubling is the mass death among uploads that always wanted to live.
What the selection process will favour is agents that want to live (if they didn't, they'd die out) and that are willing to work for subsistence-level wages in expectation. But now add a little risk to the process: not all jobs pay exactly the expected amount; some pay slightly more, some slightly less. That means that roughly half of all jobs will pay below subsistence, and a life-loving upload will die (charging extra to pay for insurance would squeeze that upload out of the market). Iterating the process means that the vast majority of uploads will end up being killed - if not initially, then at some point later - as the sketch below illustrates. The picture changes somewhat if you consider "super-organisms" of uploads and their copies, but then the issue simply shifts to wage competition between the super-organisms.
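That iteration is essentially a gambler's ruin. Here is a minimal simulation of the argument (all parameters are illustrative assumptions, not figures from the post): wages fluctuate symmetrically around subsistence, and an upload is deleted when its savings run out.

```python
import random

def survives(periods: int = 10_000, buffer: float = 5.0) -> bool:
    """One upload's career: each period, wages minus subsistence costs come
    out as a symmetric random shock; savings hitting zero means deletion."""
    savings = buffer
    for _ in range(periods):
        savings += random.uniform(-1.0, 1.0)
        if savings <= 0:
            return False
    return True

runs = 2_000
alive = sum(survives() for _ in range(runs))
print(f"{alive / runs:.1%} of life-loving uploads survive 10,000 pay periods")
```

With zero drift, the walk crosses any finite buffer almost surely, so the surviving fraction keeps falling as the horizon grows; only persistently above-subsistence wages, exactly what competition eliminates, would change that.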
The only way this could be considered acceptable is if the killing of a (potentially unique) agent that doesn't want to die is exactly compensated by the copying of another, already existing agent. I don't find myself in the camp arguing that that would be a morally neutral or positive action.
Pain and unhappiness
The preceding would be mitigated to some extent if the uploads were happy. It's quite easy to come up with mental pictures of potential uploads living happy and worthwhile lives. But evolution/selection is the true determinant of the personality traits of uploads: successful uploads would have precisely the amounts of pain and happiness that motivate them to work at their maximum possible efficiency.
Can we estimate what this pain/happiness balance would be? It's really tricky; we don't know exactly what work the uploads would be doing ("office work" is a good guess, but that can be extraordinarily broad). Since we are in extreme evolutionary disequilibrium ourselves, we don't have a clear picture of the best pain/happiness wiring for doing our current jobs today - or whether other motivational methods could be used.
But if we take the outside view, and note that this is an evolutionary process operating on agents at the edge of starvation, we can compare it with standard Darwinian evolution. And there the picture is clear: the disequilibrium between happiness and pain in the lives of evolved beings is tremendous, and all in the direction of pain. It's far too easy to cause pain to mammals, far too hard to cause happiness. If upload selection follows broadly similar processes, their lives will be filled with pain far more than they will be filled with happiness.
All of which doesn't strike me as a good outcome, in total.