I need help getting out of a logical trap I've found myself in after reading The Age of Em.
Some statements needed to set the trap:
If mind-uploading is possible, then a mind can theoretically exist for an arbitrary length of time.
If a mind is contained in software, it can be copied, and therefore can be stolen.
An uploaded mind can retain human attributes indefinitely.
Some subset of humans are sadistic jerks; many of these humans have temporal power.
All humans, under certain circumstances, can behave like sadistic jerks.
Human power relationships will not simply disappear with the advent of mind uploading.
Some minor negative implications:
Torture becomes embarrassingly parallel.
US states with the death penalty may adopt death plus simulation as a penalty for some offenses.
The trap:
Over a long enough timeline, the probability of a copy of any given uploaded mind falling into the power of a sadistic jerk approaches unity. Once an uploaded mind has fallen under the power of a sadistic jerk, there is no guarantee that it will ever be 'free', and the quantity of experienced suffering could be arbitrarily large, due in part to the embarrassingly parallel nature of torture enabled by running multiple copies of a captive mind.
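The "approaches unity" step can be made explicit with a sketch. Assume each period of existence carries some small, independent probability p > 0 of capture (both the independence assumption and any particular value of p are my additions, not established by the premises above):

```latex
\Pr[\text{captured within } n \text{ periods}] \;=\; 1 - (1 - p)^n \;\longrightarrow\; 1
\quad \text{as } n \to \infty.
```

Even for a tiny per-period hazard, say p = 10^-6 per year, the survival probability (1 - p)^n drops below 1/2 after roughly ln(2)/p, about 693,000 years, which is brief on the "arbitrary length of time" timescale of premise 1. The conclusion is sensitive to the independence assumption: if capture risk declines fast enough over time (e.g. as security institutions mature), the infinite product of survival probabilities can converge to a value above zero, and the trap does not close.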
Therefore! If you believe that mind uploading will become possible within a given individual's lifetime, the most ethical thing you can do, from the utilitarian standpoint of minimizing aggregate suffering, is to ensure that the person's mind is securely deleted before it can be uploaded.
Imagine the heroism of a soldier who, faced with capture by an enemy capable of uploading minds and willing to parallelize torture, spends his time ensuring that his buddies' brains are unrecoverable at the cost of his own capture.
I believe that mind uploading will become possible in my lifetime; please convince me that running through the streets with a blender, screaming for brains, is not an example of effective altruism.
On a more serious note, can anyone else think of examples of really terrible human decisions that would be incentivised by the development of AGI or mind uploading? This problem appears related to AI safety.
Thank you for your reply to this thought experiment, professor!
I accept your assertion that the ratio of aggregate suffering to aggregate felicity has been trending in the right direction, and that this trend is likely to continue, even into the Age of Em. That said, the core argument here is that, as humans convert into Ems, every present-day human who becomes an Em has a high probability of eventually, subjectively, experiencing hell. The fact that other versions of the self, or other Ems, are experiencing euphoria will be cold comfort to the one so confined.
Under this argument, the suffering of people in the world today could be effectively counterbalanced by offering wireheading to Americans with a lot of disposable income--it doesn't matter if people are starving, because the number of wireheaded Americans is trending upwards!
An Age of Em is probably, on balance, a good thing. Even though I see the possibility of an intense devaluation of human life, and of some pretty horrific scenarios, I think that mitigating the latter is important, even if the mechanism proposed above is (controversially!) inappropriate.
After all, if we didn't use cars, nobody would be harmed in car accidents.