I need help getting out of a logical trap I've found myself in after reading The Age of Em.
Some statements needed to set the trap:
If mind uploading is possible, then a mind can theoretically exist for an arbitrary length of time.
If a mind is contained in software, it can be copied, and therefore can be stolen.
An uploaded mind can retain human attributes indefinitely.
Some subset of humans are sadistic jerks, and many of these humans have temporal power.
All humans, under certain circumstances, can behave like sadistic jerks.
Human power relationships will not simply disappear with the advent of mind uploading.
Some minor negative implications:
Torture becomes embarrassingly parallel.
US states with the death penalty may adopt death of the body plus punitive simulation as a penalty for some offenses.
The trap:
Over a long enough timeline, the probability of a copy of any given uploaded mind falling into the power of a sadistic jerk approaches unity. Once an uploaded mind has fallen under the power of a sadistic jerk, there is no guarantee that it will ever be 'free', and the quantity of experienced suffering could be arbitrarily large, due in part to the embarrassingly parallel nature of torture: a captor can multiply the suffering simply by running additional copies of the captive mind.
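To make the first claim concrete, here is a minimal sketch; the constant hazard rate p is my simplifying assumption, not something from The Age of Em. Suppose each unit of time carries an independent probability p > 0 that a given copy is captured by such a jerk. The probability of remaining free after t units of time is (1 - p)^t, so the probability of capture is 1 - (1 - p)^t, which tends to 1 as t grows without bound. The argument needs only that the per-period risk never falls all the way to zero; any nonzero floor on it produces the same limit.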
Therefore! If you believe that mind uploading will become possible in a given individual's lifetime, the most ethical thing you can do, from the utilitarian standpoint of minimizing aggregate suffering, is to ensure that the person's mind is securely deleted before it can be uploaded.
Imagine the heroism of a soldier who, faced with capture by an enemy capable of uploading minds and willing to parallelize torture, spends his time ensuring that his buddies' brains are unrecoverable, at the cost of his own capture.
I believe that mind uploading will become possible in my lifetime, so please convince me that running through the streets with a blender, screaming for brains, is not an example of effective altruism.
On a more serious note, can anyone else think of examples of really terrible human decisions that would be incentivized by the development of AGI or mind uploading? This problem appears related to AI safety.
A suicide ban in a world of immortals is an extreme case of a policy of force-feeding hunger-striking prisoners. The latter is normal in the modern United States, so it is safe to assume that if the Age of Em begins in the United States, secure deletion of an Em would likely be difficult, and abetting it, especially for prisoners, may be illegal.
I assert that the addition of potential immortality, and the abandonment of human-scale timespans for brains built to care about human timescales, creates a special case. Furthermore, a living human has, by virtue of the frailty of the human body, a limit on the amount of suffering that person can endure. An Em does not. So preventing an Em, or a potential Em, from being trapped in a torture-sim and tossed into the event horizon of a black hole to wait out the heat death of the universe is preventing something that is simply a different class of harm than the privations humans endure today.