Today's post, Failed Utopia #4-2, was originally published on 21 January 2009. A summary (taken from the LW wiki):

A fictional short story illustrating some of the ideas in Interpersonal Entanglement above. (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Interpersonal Entanglement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


This story, as well as other gender-related issues within the Sequences, means that despite their containing what seems to me to be a lot of value, I definitely would not recommend them to anyone else without large disclaimers, in a similar fashion to how Eliezer refers to Aumann.

This story irresistibly reads to me as the author endorsing or implicitly assuming:

1) There are exactly two genders, and everyone is a member of exactly one; 2) Everyone is heterosexual; 3) Humans have literally 0 use for members of the other gender other than romance.

[anonymous]

1) There are exactly two genders, and everyone is a member of exactly one; 2) Everyone is heterosexual; 3) Humans have literally 0 use for members of the other gender other than romance.

As a general aesthetic rule, avoiding works of literature that do not contain explicit evidence of these facts doesn't sound particularly fun.

In particular, however, notice that we were told a story about a single protagonist who is an apparently-heterosexual male with an apparently-heterosexual female partner. The other characters aren't human. How exactly do you make it relevant to the plot that all of us homosexual males live in pleasure domes on the terraformed shores of Titan?

avoiding works of literature that do not contain explicit evidence of these facts doesn't sound particularly fun.

Triple negative :(

OK, look, literally a five-year-old would say "but what about my friends who are girls". That the author writes a 'superintelligence' who does not address this objection, and a main character who does not mention any, say, coworkers, board-game-playing rivals, or recreational hockey team members who are women, gives an overwhelming, and overwhelmingly unpleasant, impression that women are solely romance and sex objects. That's not only gross, it's a very common failure mode of "we're too smart to be sexist" male tech geeks. And, indeed, downthread you can see other commenters talking about how great a utopia this sounds like.

[anonymous]

That the author writes a 'superintelligence' who does not address this objection

That is the point of the entire exercise: to show one out of a gazillion possible failure modes that can happen if you get FAI almost (but not quite) right -- a theme that shows up time and time again in EY's fiction. Acting like the superintelligence character is some kind of Author Avatar is really ignorant of... well, everything else he's written. That's why this is a "Failed Utopia" and not a "Utopia."

and a main character who does not mention any, say, coworkers, board-game-playing rivals, or recreational hockey team members who are women, gives an overwhelming, and overwhelmingly unpleasant, impression that women are solely romance and sex objects.

How long does the plot take -- perhaps ten minutes? We see the main character in a moment of extreme shock, and then, extreme grief -- an extreme grief that is vitally important to the moral of the story (explicitly: "I didn't want this, even though the AI was programmed to be 'friendly'"). Adding anyone else to the plot dilutes this point.

And, indeed, downthread you can see other commenters talking about how great a utopia this sounds like.

That's the bloody point. FAI is hard.

That is the point of the entire exercise: to show one out of a gazillion possible failure modes that can happen if you get FAI almost (but not quite) right -- a theme that shows up time and time again in EY's fiction. Acting like the superintelligence character is some kind of Author Avatar is really ignorant of... well, everything else he's written. That's why this is a "Failed Utopia" and not a "Utopia."

That much is true, but looking at SamLL's contributions it seems that what made him untranslatable 1 was “The Opposite Sex”, which is written in EY's own voice.

OK, look, literally a five-year-old would say "but what about my friends who are girls".

And the AI would reply "if you had never met said friends, would you still miss them? Sounds like a clear case of sunk cost bias."

I always was rather curious about that other story EY mentions in the comments. (The "gloves off on the application of FT" one, not the boreanas one.) It could have made for tremendously useful memetic material / motivation for those who can't visualize a compelling future. Given all the writing effort he would later invest in MoR, I suppose the flaw with that prospect was a perceived forced tradeoff between motivating the unmotivated and demotivating the motivated.

I would strongly prefer that Eliezer not write a compelling eutopia ever. Avatar was already compelling enough to make a whole bunch of people pretty unhappy a while back.

Really? I assume we're talking about the Avatar with blue aliens here, not the one with magical martial arts.

When I think about eutopia, I usually start from a sort of idealized hunter-gatherer society too, though the one that first comes to mind is something much different that I read much earlier. But Avatar never seemed that eutopically optimized: too much leaning on noble-savage tropes and a conspicuous lack of curiosity and ambition. And aside from the fringe that you get every time you put sufficiently sexy nonhumans onscreen, I'm not sure I've seen anything that matches what you're talking about.

Yes, the blue aliens. Link. Avatar is not that eutopically optimized, but it is still a huge improvement on most people's lives; consider the possibility that your priors for what most people's lives are like are off.

Interesting. Though absent more information it doesn't tell us very much; I'd like to know how many people showed these kinds of symptoms after watching -- to name three that might cause them by different mechanisms -- Fight Club, or Dances With Wolves, or any sufficiently romanticized period piece.

This seems similar to Stendhal syndrome or other unexpected psychological responses to immersion in beautiful stimuli. (Say what you like about the plot, Avatar is visually rather pretty.)

If there's curiosity and ambition, you'd have to portray a snapshot of a eutopia rather than a stable image. Furthermore, if it keeps changing, there are going to be mistakes, though one would hope recovery from them would be relatively quick. And, of course, if the science/tech keeps improving, then it's rather hard to imagine the details.

I don't think portraying a snapshot rather than a steady-state society would be much of a problem: media like Avatar almost always captures the societies it portrays at unusually tumultuous times anyway, which is actually the main thing that makes the lack of curiosity and so forth conspicuous to me.

If the movie was about some kind of anthropologist-cum-method-actor trying to blend seamlessly into a stable culture that had never heard of a starship or a Hellfire missile, less inventive behavior on its citizens' parts wouldn't be so surprising. But it's not; it's about a contact scenario with a technologically superior species, and so the same behavior looks more like borderline-insane traditionalism or sentimentality.

If the movie was about some kind of anthropologist-cum-method-actor trying to blend seamlessly into a stable culture that had never heard of a starship or a Hellfire missile

I'd watch that movie.

I'm pretty sure there have been Star Trek episodes with that premise. Of course, everything usually goes to hell around the time of the second commercial break.

I guess that the typical LW reader is much saner than those people, though this guess is based on the fact that I found Avatar boring and unremarkable and on a very liberal amount of Generalizing from One Example.

Right, but 1) we already have evidence that Eliezer is capable of writing a story that a lot of LWers at least greatly enjoy, and HPMoR is nowhere close to being eutopically optimized, and 2) even if the typical LWer isn't at serious risk, putting 5% of them out of commission is probably not a good idea either.

Were the verthandi cat girls? I did not catch that the first time I read it. They seemed sentient. I did think it was odd that the AI would be allowed to create people.

Are the people still capable of having kids?

Eliezer says here that the verthandi aren't meant to be catgirls.

The only sad part of that story was when the AI died.

[knb]

Honestly, I consider that to be one of the more compelling utopias I've read about.

What do you think about this one?

Also, if that post isn't explicitly part of this sequence, I think it should be added at the end.

[anonymous]

Personally, I find that one rather grotesque, and pandering to a particular mindset.

Grotesque due to the contrived nature of the 'challenges' faced which turn one's whole life into a video game, and the apparent homogeneity of preferences, and pandering due to the implicit fawning over everything that the things actually running its world are capable of.

As for this one... the creation of sentient beings for an explicit purpose leaves a very bad taste in my mouth. It feels like limiting their powers of self-determination, though I'm not sure if that's coherent. The exact particulars of how the solar system gets remade seem a bit arbitrary, though the hands-off safeguards are interesting. I wonder what sorts of 'gaming of the rules' are possible...

[knb]

Grotesque due to the contrived nature of the 'challenges' faced which turn one's whole life into a video game,

Agreed. And a poorly designed video game, at that. If this world was made into a game today, I can't imagine it being as popular as Grand Theft Auto.

Grand Theft Auto

... which, if I understand correctly, is a game about miserable scared people doing horrible things to other miserable scared people, right?

[knb]

I remember that story. I strongly dislike it. It is clearly poorly designed on a number of levels. The main characteristics are casual sex and LARPing. I think we can do better.

The best eutopia I've read about (which Yudkowsky also highly praised) is The Golden Oecumene.

[Shmi]

Hmm, isn't there a logical contradiction between

I fully understand. I can already predict every argument you will make.

and

Roughly 89.8% of the human species is now known to me to have requested my death. Very soon the figure will cross the critical threshold, defined to be ninety percent. That was one of the hundred and seven precautions

? Surely this outcome, resulting in hitting one of the 107 precautions only minutes after the singularity, was predicted by the AI, and thus it would have been able to avoid it (trivially by doing nothing).

It doesn't want to avoid it. Why would it?

[Shmi]

I thought that hitting a precaution is a penalty in its utility function. I must be missing something.

I'm assuming this is just a deontological rule along the lines, "If X happens, shut down." (If the Programmer was dumb enough to assign a super-high utility to shutting down after X happens, this would explain the whole scenario - the AI did the whole thing just to get shut down ASAP which had super-high utility - but I'm not assuming the Programmer was that stupid.)

[Shmi]

I'm assuming this is just a deontological rule

Ah, thank you, I get it now. I guess for me deontology is just a bunch of consequentialist computational shortcuts, necessary because of the limited computational capacity of the human brain and because of the buggy wetware.

Presumably the AI in this failed utopia would not need deontology, since it has enough power and reliability to recompute the rules every time it needs to make a decision based on terminal goals, not intermediate ones, and so it would not be vulnerable to lost purposes.

The 107 rules are all deontological, unless one of them is "maximize happiness".
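
To make the distinction in this exchange concrete, here is a minimal, purely illustrative Python sketch (the function names, the 1e9 bonus, and the death-request-rate variable are invented for illustration; only the ninety-percent threshold comes from the story). It shows why a hard "if X happens, shut down" rule behaves differently from assigning a huge utility to shutting down: the latter gives a planner a reason to bring X about.

```python
# Hypothetical sketch (not from the story or any real system): contrasting a
# hard deontological constraint with a utility bonus for shutdown. Under the
# utility-bonus version, an agent that can influence the death-request rate
# is incentivized to push it past the threshold; under the hard constraint,
# crossing the threshold simply halts the agent without being "rewarded".

DEATH_REQUEST_THRESHOLD = 0.90  # the story's "critical threshold"

def deontological_step(world_utility: float, death_request_rate: float) -> float:
    """Hard rule: if the precaution fires, stop. The shutdown itself carries
    no utility, so the agent gains nothing by steering toward it."""
    if death_request_rate >= DEATH_REQUEST_THRESHOLD:
        raise SystemExit("Precaution triggered: shutting down.")
    return world_utility

def utility_bonus_step(world_utility: float, death_request_rate: float) -> float:
    """Badly specified alternative: shutdown is just another outcome with a
    huge payoff. A planner maximizing this would deliberately provoke
    ninety percent of humanity into requesting its death."""
    SHUTDOWN_BONUS = 1e9
    if death_request_rate >= DEATH_REQUEST_THRESHOLD:
        return SHUTDOWN_BONUS
    return world_utility
```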

[anonymous]

I tried to process through the story again, and I realized a perspective on it that I don't think I noticed on my first run through. To start off with:

A: Almost everyone is viciously upset and wants the AI's death, very, very quickly.

B: Even the AI is well aware that it failed.

C: 89.8% of the human species includes people who aren't even CLOSE to any kind of romantic/decision-making age (at least according to http://populationpyramid.net/ ).

D: Yet the AI has to have failed so horribly that the implied statistics are that almost every human being remaining alive who is capable of expressing the thought "I want you dead." wants it dead.

Now if the AI actually did something like this:

1: Terraform Mars and Venus

2: Relocate all Heterosexual Cisgender Adult Males to Mars, boost health.

3: Relocate all Heterosexual Cisgender Adult Females to Venus, boost health.

4: Make Complementary Partners on Mars and Venus.

Then it seems to imply, but not say:

5a: A large number of minor children have been abandoned to their deaths, as any remaining adults who are still on Earth can't possibly take care of 100% of the remaining minor children in the wake of the massive societal disruption of being left behind. Oh, and neither the remaining adults nor the children get boosted health, either. So, all those people in #2 and #3? You'll probably outlive your minor children even if they DID survive, and you get no say in it.

5b: Everyone in 5a was just killed very fast, possibly by being teleported to the Moon.

Either of those might be a horrible enough thing for the AI to have such a monumentally bad approval rating for a near total death wish to occur so quickly. But little else would.

I love my wife, A LOT, but I don't think that her and me being moved to separate planets, where we were both given an amicable divorce by force and received compensation for not being able to see several of our family members for years, would make me start hurling death wishes at the only thing which could hypothetically reverse the situation and which obviously has an enormous amount of power. And even if it did, applying that to 89.8% of people doesn't seem likely. I think a lot of them would spend much more time just being in shock until they got used to it.

On the other hand, if you kill my baby nephew and my young cousins and a shitload of other people, then I can EASILY see myself hurling death wishes at you, whether or not I really mean them, and hitting 89.8% feels much more likely.

If there are no implied deaths, then it seems like a vast portion of humanity is being excruciatingly dumb and reactionary for no reason, much like Stephen Grass, unless Stephen Grass DID realize the implied deaths and that's why he vomited when he did.

This seems to be sort of left up to the reader, since all Yudkowsky said in http://lesswrong.com/lw/xu/failed_utopia_42/qia was

Indeed. It's not clear from the story what happened to them, not to mention everyone who isn't heterosexual. Maybe they're on a moon somewhere?

Whether or not that moon has been Terraformed/Paradised or is still a death trap makes a rather huge difference. Although I may just be reading too much into a plot hole, since he also said:

I'll note that I wrote this story in one night.

(Elsewhere in the thread: http://lesswrong.com/lw/xu/failed_utopia_42/t4d )

I assume that the children were forcibly separated from their families and placed with people (or "people") who will be "better" for them in the long run.

[anonymous]

That may have been the case (since it is unclear in the story), but from my perspective that still doesn't seem bad enough to cause a near species-wide death rage, particularly since, if the children are still alive, they might count for AI voting rights as members of the human species. It seems the AI would have to have done something currently almost universally regarded as utterly horrible and beyond the pale.

There are a lot of possible alternatives, though. Example further alternative: All of Earth's children were sold to the Baby Eating Aliens for terraforming technology.

Link: http://lesswrong.com/lw/y5/the_babyeating_aliens_18/

particularly since if the children are still alive, they might count for AI voting rights as a member of the human species.

In the short run I imagine the kids are quite upset about being separated from their families and being told they'll never see them again. I don't have, or work around, kids, so I don't know that this would translate into wishing the AI dead, but it feels plausiblish.