We recently saw a post in Discussion by ChrisHallquist, asking to be talked out of cryonics. It so happened that I'd just read a new short story by Greg Egan which gave me the inspiration to write the following:


It is likely that you would not wish for your brain-state to be available to all and sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.


I had little expectation of this actually convincing anyone, but thought it was a fairly novel contribution. When jowen's plea for a refutation went unanswered, I began attempting one myself. What I ended up with closes the door on the scenario I outlined, but opens one I find rather more disturbing.

I think I'd better start by explaining why I wrote my comment the way that I did.

Normally, when being simulated is raised as a negative possibility (referred to in SF as being 'deathcubed', and carrying the implication not so much of torture as of arbitrariness), it's in the context of an AI doing so. Now there's a pretty good argument against being deathcubed by an AI, as follows:

Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. Humans are just matter that can be better used for something else; likewise simulations use computational resources better used for something else. Therefore there is little to fear in the way of being tortured by an AI.

I sidestepped that entire argument (but I'll return to it in a minute) by referring to "the ethics of the society that will exist". In other words, without making it explicit, I created the image of a community of agents, probably human-like agents, each with ownership of resources that they could use according to a moral code and subject to some degree of enforcement. I assumed a naturalistic polity, rather than a hegemony.

With the assumptions behind my scenario laid bare, it should now be apparent that it is no more stable than a world in which everyone owns nuclear weapons. If the resources to run these simulations are dispersed amongst so many people, someone will use them to brute-force an AI, which will then hegemonize the universe.

If you accept the above, then you need only worry about a hegemon that permits such a society to exist. Such a hegemon would probably be classified as uFAI, and so we go back to the already refuted situation of a perversely evil AI.


Thus far I believe myself to have argued according to what passes for orthodoxy on LW. Note, though, that everything hinges on predictions about uFAI. These tend to be based on Steve Omohundro's The Basic AI Drives, which, if taken seriously, implies that an uFAI would convert the universe to utilons and not give a fig for human beings.

One of the drives AIs are predicted to have is the desire to be rational. I claim that a key behaviour that is eminently rational in humans has been neglected in considering AIs, particularly uFAIs. Namely, play. We humans take pleasure in play, quite aside from whatever productive gains we get out of it. Nevertheless, intellectual play has contributed to innumerable advances in science, mathematics, philosophy, etc. Dour, incurious people demonstrably fail to satisfy their values in dimensions that ostensibly are completely unrelated to leisure. An AI, you may say, need not play in order to create utility; it can structure its thoughts more rationally than humans, without resorting to undirected intellectual activity. I grant that a hyperrational agent will not allocate as great a proportion of resources to such undirected activity - that follows from holding more accurate beliefs about what it is productive to do. But no agent can have perfect knowledge of this nature. Thus all sufficiently rational agents will devote some proportion of their resources, no matter how small, to play.

What is the nature of play for a superintelligence? For a Friendly AI, by definition it will not involve deathcubing. For an unFriendly AI, this is not the case. We then need to assess how likely it is that play for such an AI would involve such atrocities. First, consider the sum of resources available to a hegemonic AI. Computationally speaking, running sims of humans, indeed of whole human civilizations, would be a relative drop in the ocean. In a universe where a single galaxy contains billions of stars, how many tonnes of computronium would it take? Not many, I'd wager. Yet, no matter how easy it is for an AI to deathcube, perhaps deathcubing is simply so irrelevant, or so uninteresting, an activity that it would not come up even in undirected intellectual activity.

Perhaps. Yet it is rather easy to devise side-projects for an uFAI that are very simple to describe, take minimal resources, and include untold human suffering. It begins to strain belief that, of all the compactly specified tasks that refer to the universe in which an AI finds itself, it would try none of those that fall into this category.

An example seems in order. One would be simply to run every thought experiment ever devised by mankind. Some of these would be computationally intractable even for a superintelligence, but those tend to be limited to computer science and number theory. Let's assume that 10 billion human beings record an average of a thousand unique and tractable ideas of note, each requiring about 10^30 operations - numbers deliberately on the high side of plausible. Then it would take 10^43 operations to run them all. A maximally efficient computer weighing one kilogram and occupying one litre appears capable of ~10^50 operations a second. Thus, a superintelligence whose hardware falls short of that ideal by a factor of ten million in each of these dimensions - speed as well as density - would take one second, on a computer weighing 10,000 tonnes and occupying 10,000 m^3 (imagine a zeppelin filled with water rather than hydrogen gas), to finish the job. It is at least plausible for an uFAI to do such a thing even before beginning wholesale matter-conversion. In such a context, where humans actually exist, play involving humans looks very natural.
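For concreteness, here is a minimal sketch of that arithmetic, using only the figures quoted above plus one interpretive assumption: that "one ten-millionth in all these dimensions" means the machine falls ten-million-fold short of the ideal in speed and another ten-million-fold short in density. The variable names are illustrative only.

```python
# Back-of-envelope check of the "run every thought experiment" estimate.
# All constants come from the paragraph above; nothing here is meant to be
# precise beyond order of magnitude.

people = 1e10                 # 10 billion human beings
ideas_per_person = 1e3        # a thousand tractable ideas of note each
ops_per_idea = 1e30           # operations to run one thought experiment

total_ops = people * ideas_per_person * ops_per_idea   # ~1e43 operations

ideal_ops_per_sec_per_kg = 1e50   # rough theoretical limit for a 1 kg / 1 litre computer
speed_shortfall = 1e-7            # assumed: achieves one ten-millionth of the ideal speed
density_shortfall = 1e-7          # assumed: needs ten million times the mass and volume

mass_kg = 1.0 / density_shortfall          # 1e7 kg = 10,000 tonnes
volume_m3 = mass_kg / 1000.0               # at roughly water density, 10,000 m^3

# The whole 10,000-tonne machine therefore matches one ideal kilogram,
# slowed down by the speed shortfall.
throughput = ideal_ops_per_sec_per_kg * speed_shortfall   # ~1e43 ops/s

print(f"total operations : {total_ops:.1e}")
print(f"machine mass     : {mass_kg:.1e} kg")
print(f"machine volume   : {volume_m3:.1e} m^3")
print(f"time to finish   : {total_ops / throughput:.1f} s")
```

The point is not the particular constants, which are deliberately pessimistic, but that the whole exercise is a drop in the ocean next to the resources available to a hegemonic AI.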

It hardly needs mentioning that almost everything ever discussed on this site would be included in the example above.

Comments

> fairly novel contribution

Eh? People have been discussing this point for at least a decade, and I previously gave it as the main reason I'm not signed up for cryonics.

Then perhaps my assessment was mistaken! But in any case, I wasn't referring to the broad idea of cryonics patients ending up in deathcubes, but to their becoming open-access in an exploitative society - cf. the Egan short.

lmm

Why play with humans? For an AI I would expect forks of itself to be vastly more engaging. If such an AI did want to play with much lesser minds, there's no reason for it to be particularly interested in the human region of mindspace (and creating minds from nothing will be trivial enough that I would expect it to be easier than all the messy matter-manipulation required for cryonic revival), so I don't see why it would bother simulating a particular human unless and until it got around to simulating all beings of complexity <n (and reaching an n high enough for humans seems implausible even for an AI).

> Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. [...] Therefore there is little to fear in the way of being tortured by an AI.

That makes no sense. The uFAIs most likely to be created are not drawn uniformly from the space of possible uFAIs. You need to argue that none of the uFAIs which are likely to be created will be interested in humans, not that few of all possible uFAIs will.

> It hardly needs mentioning that almost everything ever discussed on this site would be included in the example above.

The distinction between this and "almost everything on this site" is relevant, and it is what makes this important to address.

I don't think 'play' is a good term here. "Play" is something humans do to avoid boredom. It is instrumentally useful, but only because of other flaws in our mind design; specifically, intellectual play is useful to break us out of status quo bias, or something like it. Undirected intellectual exercises get us to explore varying sets of assumptions and habits of thought, which can lead to new conclusions, in a way that we're not good at doing when we sit and think through things as thought experiments.

Or in short, without a proposed mechanism I don't see how you can justify the UFAI's desire to experiment with arbitrary activities rather than simulate them internally.

I call this the Social Anxiety Objection. Imagine a child who fears going to a new school: "What if the kids at the new school don't like me? What if they won't let me play with them? What if they make fun of me?"

Now, from a child's perspective, these don't sound like irrational objections. Some children do experience bullying, humiliation and rejection in school.

But then most children survive this period of their lives and manage to grow up as functional adults.