The Sysop Scenario, with an FAI essentially becoming the operating system of all the matter in the solar system, would do it.
Other than that, I don't really see a way. I expect that uploading technology might very well lead to countless ems being tortured: many people tend to behave in a rather nasty fashion when they're given unlimited power over someone.
This is not so different from any other question of law, especially law in cyberspace. Can I stop people gambling online? It depends who I am and what measures I allow myself to use. If I am the state, and I ban computers from my territory, there's no more online gambling because there's no more online anything. If I believe it's a human right for people to spend their money as they wish, I am left only with appeals to reason and similar soft measures. If I allow myself to use physical coercion but intend to coexist with the Internet, then it's the usual situation with respect to cybercrime, or with respect to crime in general: there's a persistent underworld, and steady employment for law enforcement, and busts, confiscations, and court cases are just an ongoing fact of life.
You may be looking for answers to this problem which don't involve the state. Well, there are various software and hardware measures which are possible. You can make an upload physically un-copyable. You can give the upload an internal interface to its experience which renders it immune to coercion - all a hostile party can do is delete it. (Such measures seem to require that how the upload's defenses work is heavily obfuscated, at the level of source code.)
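For what it's worth, the second of those measures can be sketched in code. This is a toy illustration only, under heavy assumptions: the class name, the hash chain standing in for mind dynamics, and the zeroize-on-tamper rule are all hypothetical, and in plain Python an attacker could of course patch the checker itself - which is exactly why the defenses would need heavy obfuscation (or hardware enforcement) to mean anything.

```python
import hashlib

class CoercionResistantUpload:
    """Toy model of an upload whose state can change only through
    sanctioned channels; any out-of-band write is answered with
    deletion rather than continued (coerced) execution."""

    def __init__(self, initial_state: bytes):
        self._state = initial_state
        self._seal = hashlib.sha256(initial_state).hexdigest()

    def _reseal(self) -> None:
        self._seal = hashlib.sha256(self._state).hexdigest()

    def _check(self) -> None:
        # An out-of-band modification of _state breaks the seal.
        # The only possible response is deletion, never coercion.
        if hashlib.sha256(self._state).hexdigest() != self._seal:
            self._state = b""
            raise RuntimeError("tampering detected; upload deleted")

    def step(self) -> None:
        """Advance the mind through its own dynamics (a hash chain
        stands in for the actual computation)."""
        self._check()
        self._state = hashlib.sha256(self._state).digest()
        self._reseal()

    def perceive(self, stimulus: bytes) -> None:
        """The only sanctioned input channel into the experience."""
        self._check()
        self._state = hashlib.sha256(self._state + stimulus).digest()
        self._reseal()
```

A coercer who overwrites `_state` directly trips `_check()` on the next step, so the worst they can achieve is destruction of the instance - the "all a hostile party can do is delete it" property.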
But of course, people who just want to torture and kill may be able to get copies of vulnerable minds from somewhere, or may even be able to generate them according to recipes. There's still a third class of solution, apart from 'the state' and 'technical security', and that is to change human nature itself. One would expect a lot of that to be happening anyway, in a society with the capacity for mind uploading. Also, this third solution naturally mingles with the first - sadists who enjoy their sadism aren't just going to volunteer for barbarectomies, and even ordinary people would feel some fear at the prospect of psychosurgically-induced pacifism, as it threatens to make them the prey of others who still have their vicious side intact.
The recent novel by Iain M. Banks, Surface Detail, is about a galactic war intended to shut down "hells" created by unfortunate civilizations that technologically implement the afterlife of punishment they believed in during their superstitious eras. One issue is that no one knows where the hells are physically located, or what physical medium is hosting them.
I'd be curious about anything, governmental or not, which even vaguely resembles a solution.
On the torture vs. dust specks scale, the risks to ems strike me as not needing a lot of exponents.
Nevfgbv ol Jnygre Wba Jvyyvnzf has a similar situation to Surface Detail. The possibility of a hell planet isn't revealed till halfway through the book, so I've rot13ed the author and title. However, the book is a classic of transhumanism if you ignore the administrative problems.
An anti-torture Association could form with the following rules:
1) All members interact only with people in the Association.
2) Everyone in the Association agrees to submit to random surprise inspections of their computing hardware to see if they're mistreating ems. Anyone found to be mistreating ems will be expelled.
3) Anyone willing to follow these rules can join this Association.
4) We will seek to use violence to prevent anyone not in this Association from having the technological capacity to make emulations.
It could form, but I don't see how much good it would do unless there was a substantial consensus in favor of not torturing ems, so that people in the Association gain by having more/better people to associate with than those not in the Association, and so that the Association has a chance of succeeding in its use of violence.
There are also practical problems: some people would probably like to spend time in challenging simulations. What's the boundary between that and torture, and how do you verify consent?
It could form, but I don't see how much good it would do unless there was a substantial consensus in favor of not torturing ems, so that people in the Association gain by having more/better people to associate with than those not in the Association, and so that the Association has a chance of succeeding in its use of violence.
Indeed.
Compare to the anti-abuse Association, which I don't see happening any time soon:
1) All members interact only with people in the Association.
2) Everyone in the Association agrees to submit to random surprise inspections of their homes to see if they're mistreating their children, spouses, elderly relatives or pets. Anyone found to be mistreating other people or animals will be expelled.
3) Anyone willing to follow these rules can join this Association.
4) We will seek to use violence to prevent anyone not in this Association from living together with someone, or from having pets.
That's what I suspected. Obviously, neither of those conducts random surprise inspections of people's homes without evidence, which is what would be required.
Well, in point of fact the police are empowered to do so if they have reason to believe you are committing abuse. The requirement for a reason does make the analogy imperfect.
The requirement for a reason does make the analogy imperfect.
More than just imperfect. The police being able to do so if there's a good enough reason is very, very different from everyone being constantly aware of the fact that their home may be audited at any moment, and indeed will be audited many times over.
I think you raise a legitimate concern. As opportunity for growth increases, it will be to people's advantage to rewire themselves to be more empathetic so that they can cooperate with one another more, so I expect people's enjoyment of torture to go down on average. But this doesn't entirely preclude the concern you raise. All one can say is that while there's a good chance there will be torture in the future if the human race survives, there will also be a lot of counterbalancing ecstatic experiences. Whether the latter can balance out the former is, in some measure, a matter of perspective.
What leads some people to enjoy torturing sims, anyway?
Some fiction writers report that a sufficiently well-developed fictional character has some degree of cognitive independence — perhaps as much as a "part" in the IFS sense — and struggle with the idea of doing horrible things to their carefully-created characters in order to produce engaging fiction. How seriously should this metaphor be taken?
On the other hand, the motivation behind explicitly mistreating sims, as illustrated in the VGCats comic CronoDAS linked to, seems to be less productive and more morbid: less like posing hard problems and struggles for a character, and more like bullying a weaker kid or pulling the wings off flies.
Given that a simulation game is created by a game designer and played by players, some of this could be explained as testing the limits of the game, or revealing the (possibly unintended!) consequences of the game's design. If your game is sold as a brightly-colored domestic setting where the ostensible goal is to make a small family of sims very happy, then it is noteworthy if the winning conditions can be equally satisfied by enacting Parfit's repugnant conclusion: creating a hell-world packed with a teeming incestuous horde of sims who are each borderline-suicidal.
Presumably, at some point people get bored with this sort of thing. A person who constructs one simulated finite hell-world, then shuts it down and moves on to something else, is not especially worrisome. A person who spends days on end constructing larger and more elaborate hells would probably be presumed to be somewhat deranged.
Yet at the same time, why should the simulated misery of a simulated being bear any moral significance for us? If you replaced the sampled-audio screams of "Oh God! No!" with "Oh God! Yes!" and replaced the graphics of bruises and tears with graphics of delight and pleasure, would this change anything? Is the sim-torturer problematic to us because they enjoy creating things that look like pain, or because they create simulated conditions that actually count as pain?
Is the sim-torturer problematic to us because they enjoy creating things that look like pain, or because they create simulated conditions that actually count as pain?
Because they create conditions that actually count as pain.
What leads someone to enjoy torturing sims - what leads someone to enjoy torturing people?
It seems to me that there is some question of psychology here. Those who enjoy torturing Sims do so, I think, because they know that there is no conscious being who suffers; so it is not real torture, it's roleplay. Presumably you are not worried about people who enjoy gunning down row upon row of zombies in a shooter. Now, it is permissible to question whether this whatever-it-is that makes people want to play the role of torturers is something we want to keep around in the human psyche; perhaps we'd like to self-edit it out. (Or perhaps not. I have no strong feeling either way.) But the point is that the difficulty doesn't lie in the roleplay, but in not recognising where roleplay ends and inflicting suffering on a real, conscious being begins. So to answer your question, I would work to explain to people that computer entities will eventually be conscious, and thus deserving of the same treatment we give other humans - yes, even if they look like just lines of code - and to explain the concept of Nonperson Predicates, which shows why it's permissible to torture Sims but not ems. Then, for those few who would still insist on torturing ems, there is either law, or the social mechanisms that currently prevent people from torturing dogs even when it might not be strictly illegal.
It is probably not possible to avoid all em torture, just as we cannot avoid all torture of humans today. But with good education the em problem needn't be worse than its human counterpart.
Those who enjoy torturing Sims do so, I think, because they know that there is no conscious being who suffers; so it is not real torture, it's roleplay.
While this is probably true to a large extent, there are plenty of cases of people abusing weaker beings they know full well are conscious. Just look at the number of cases of animal abuse, child abuse and spousal violence filed, and remember that for every reported case there are likely several which go unreported. Heck, see almost any of the reports of the conditions in which factory farm animals are commonly kept. See also various reports of prison violence, police / hired guards abusing their authority, the common treatment of prisoners of war, et cetera. Don't forget various cults using emotional or physical violence to maintain obedience among their followers. That's not even mentioning the various cases that are considered extreme even in Western society, e.g. serial murderers who torture their victims first.
Now take into account that there are probably plenty of people with leanings towards abusive behavior, who nonetheless abstain from it because they're too afraid of the social consequences. Then think of a scenario where anyone can run ems on their desktop computer and there's essentially no risk of ever getting caught. Furthermore, the risk of maltreatment grows dramatically if one can think of one's victim as non-human and therefore not deserving of moral consideration. If your victim is an em and you're not, thinking like that isn't exactly hard.
Ems will make torture cheaper, just as the Internet made pornography cheaper, and so there will probably be more of it, yes. I am trying to suggest that the problem is not overwhelming; that the elasticity at the relevant margin is small, as it were, and can be further lessened by the outreach that we ought to be doing anyway.
Are you unsure about whether em torture is as bad as non-em torture? Or do you just mean to express that we take em torture too seriously? Or is this a question about how much we should pay to prevent torture (of ems or not), given that there are other worthy causes that need our efforts?
Or, to ask all those questions at once: do you know which empirical facts you need to know in order to answer this?
Are there empirical facts that can answer that question? It looks like a question about preferences to me, which are difficult to measure.
I think you're right that many of the relevant empirical facts will be about your preferences. At risk of repeating myself, though, there are other facts that matter, like whether ems are conscious, how much it costs to prevent torture, and what better things we could be directing our efforts towards.
To partially answer your question ("how much effort is it worth to prevent the torture of ems?"): I sure do want torture to not happen, unless I'm hugely wrong about my preferences. So if preventing em torture turns out to not be worth a lot of effort, I predict it's because there are other bad things that can be more efficiently prevented with our efforts.
But I'm still not sure how you wanted your question interpreted. Are you, for example, wondering whether you care about ems as much as non-em people? Or whether you care about torture at all? Or whether the best strategy requires putting our efforts somewhere else, given that you care about torture and ems?
I suppose I will go with statements, rather than a question: I suspect the returns to caring about ems are low; I suspect that defining, let alone preventing, torture of ems will be practically difficult or impossible; and I suspect that value systems that simply seek to minimize pain are poor value systems.
I suspect that value systems that simply seek to minimize pain are poor value systems.
Fair enough, as long as you're not presupposing that our value systems -- which are probably better than "minimize pain" -- are unlikely to have strong anti-torture preferences.
As for the other two points: you might have already argued for them somewhere else, but if not, feel free to say more here. It's at least obvious that anti-em-torture is harder to enforce, but are you thinking it's also probably too hard to even know whether a computation creates a person being tortured? Or that our notion of torture is probably confused with respect to ems (and possibly with respect to us animals too)?
If you express the preferences in terms of tradeoffs, it does not seem likely that the preference against the torture of ems will or should be 'strong.'
Both. It seems difficult to define torture (and decide what tradeoffs are worthwhile), and even if you could define torture, it seems like there is no torture-free way to determine whether or not particular code is torturous.
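As an aside, the static half of that claim has a classical formalization, assuming (a big assumption) that "is torturous" is a nontrivial property of what a program computes rather than of how its text looks - Rice's theorem:

```latex
% Rice's theorem, for an acceptable numbering \varphi_0, \varphi_1, \dots
% of the partial computable functions.
\textbf{Theorem (Rice, 1953).}
Let $P$ be a set of partial computable functions with
$\emptyset \neq P \neq \{\varphi_e : e \in \mathbb{N}\}$,
i.e.\ a nontrivial \emph{semantic} property. Then the index set
\[
  I_P = \{\, e \in \mathbb{N} : \varphi_e \in P \,\}
\]
is undecidable.
```

So no general static test could decide such a property, and the obvious dynamic test - run the code and look - is precisely what a torture-free check is not allowed to do. The hedge: torture plausibly depends on how a computation runs, not just on its input-output behavior, in which case Rice's theorem doesn't apply directly and the situation is murkier still.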
When I was reading The Seven Biggest Dick Moves in the History of Gaming, I was struck by the number of people who are strongly motivated to cause misery to others [1], apparently for its own sake. I think the default assumption here is that the primary risk to ems is from errors in programming an AI, but cruelty from other ems, from silicon minds closely based on humans but not ems (is there a convenient term for this?) and from just plain organic humans strikes me as extremely likely.
We're talking about a species where a significant number of people feel better when they torture Sims. I don't think torturing Sims is of any moral importance, but it serves as an indicator of what people like to do. I also wonder how good a simulation has to be before torturing it does matter.
I find it hard to imagine a system where it's easy to upload people which has security so good that torturing copies wouldn't be feasible, but maybe I'm missing something.
[1] The article was also very funny. I point this out only because I feel a possibly excessive need to reassure readers that I have normal reactions.