Today's post, The Design Space of Minds-In-General, was originally published on 25 June 2008. A summary (taken from the LW wiki):

 

When people talk about "AI", they're talking about an incredibly wide range of possibilities. Having a word like "AI" is like having a word for everything which isn't a duck.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Psychological Unity of Humankind, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

18 comments

I've asked this elsewhere. Here goes again with some refinement:

So, {Possible AI} > {Evolved Intelligence} > {Human Intelligence}. What about {AI practically discoverable/inventable by near-term human civilization}? While possible AIs are practically boundless, the set of AIs that a few tens of thousands of researchers and supporting colleagues are likely to produce in our near-term reality is distinctly finite. (Though the "hull" of those points might subtend a large region of design space indeed.)

For clarity, when you say "evolved intelligence" vs "human intelligence", what are you referring to?

I'm talking about the design space of all intelligences that could evolve through the process of natural selection. "Human intelligence" is just the subset that has manifested, or could manifest, as Homo sapiens and its descendants through natural selection.

Ah. That makes more sense than I originally thought.

Maelin

I'd say it is a superset of {Human Intelligence} (since humans can easily fabricate more human intelligences - it's so easy, we often start the process entirely by accident) and a subset of {Possible AI} (since there are almost certainly mind designs that are too complex, too alien, or too something-else to be near-term feasible).

Whether {Near-Term Human-Inventable AIs} has a large vs small overlap with {Evolved Intelligences} is an interesting question, but not one for which I can think of any compelling arguments offhand.

Shmi

My primary moral is to resist the temptation to generalize over all of mind design space

If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.
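A quick sanity check on that number (a rough sketch, counting each candidate mind as a distinct bit-string): the number of bit-strings of length at most \(N\) bits is

\[ \sum_{k=0}^{N} 2^{k} = 2^{N+1} - 1 \approx 2^{N}, \]

so with \(N = 10^{12}\) the subspace contains on the order of \(2^{10^{12}}\) candidate minds, each a potential counterexample to any universal generalization about it.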

That's what I think every time someone brings up the idea of tortured sims. What are the odds of it happening?

I'd say somewhere in the vicinity of 100%.

I'm just hoping euphoric sims will be more common.

I'm just hoping euphoric sims will be more common.

What if all sims are tortured sims, with the difference being in the degree of torture?

That would suck. Why?

Well, you could consider our reality to be a devious torture sim where we are allowed moments of happiness, joy, and even euphoria, but those are interspersed with boredom and pain and everything is doomed to eventual decrepitude and death.

I suspect sims where things sometimes suck, and suck a little more than they're awesome on balance, predominate in the set of conceivable self-consistent universes.

[anonymous]

Do you have a particular formal specification of a tortured mind in mind?

Shmi

Not sure what you are asking. My point was that the human notion of torture is a priori a tiny speck in the ocean of possible Turing machines. We don't know nearly enough at this point to worry about accidental or intentional sim torture, so we shouldn't, at least until a ballpark estimate with a few-sigma confidence interval can be computed. This is a standard cognitive bias people fall into here and elsewhere: a failure of imagination. In this particular case someone brings up the horrifying possibility of 3^^^3 sims being tortured and killed, and the emotional response to it is strong enough to block any rational analysis of the odds of it happening. It also reminds me of people conjuring aliens as human-looking, human-thinking, and, of course, emotionally and sexually compatible. EY wrote at some length about how silly this is.

[anonymous]

Not sure what you are asking.

Is it not clear that in order to calculate the probability of any proposition, you need an actual definition of the proposition at hand?

My point was that the human notion of torture is a priori a tiny speck in the ocean of possible Turing machines. We don't know nearly enough at this point to worry about accidental or intentional sim torture, so we shouldn't, at least until a ballpark estimate with a few-sigma confidence interval can be computed.

I think we agree that the only currently feasible arguments for any given value of P(a mind in such and such a mindspace is being tortured) are those based on heuristics.

However, you say these minds constitute "a priori a tiny speck", and I do not endorse such a statement (given any reasonable definition of torture), unless you have some unstated, reasonable, heuristic reason for believing so. Ironically, "failure of imagination" is frequently a counterargument to people arguing that a certain reference class is a priori very small.

Shmi

Is it not clear that in order to calculate the probability of any proposition, you need an actual definition of the proposition at hand?

My only reason is of the Pascal's wager type: you pick one possibility (tortured sims) out of unimaginably many, without providing any estimate of its abundance in the sea of all possibilities; why privilege it?

I don't think most people talking about torture vs dust specks actually expect it to happen. And even if it actually could happen, it might be a smart idea to precommit to refuse to play any crazy games with an intelligence that wants to torture people. The point of the discussion is ethics. It's a thought experiment. It's not actually going to happen.

Shmi

Not sure why you are bringing up specks vs torture; there must be some misunderstanding.

It was the line about torturing and killing 3^^^3 sims. It seemed like you were referencing all of the various thought experiments people have discussed here involving that number. I only mentioned torture vs specks, but the point is the same. I don't think anyone ever actually expects something to happen in real life that involves the number 3^^^3.

My primary moral is to resist the temptation to generalize over all of mind design space

That's what I think every time someone brings up the idea of tortured sims. What are the odds of it happening?

The odds of that happening are almost entirely unrelated to the proportion of mind design space such torture machines make up. That kind of 'torture AI' is something that would be created - either chosen out of mind design space deliberately, or arrived at by making a comparatively tiny error when aiming for a Friendly AI. It isn't the sort of thing that just gets randomly selected.