Followup to: Anthropomorphic Optimism

If you've watched Hollywood sci-fi involving supposed robots, androids, or AIs, then you've seen AIs that are depicted as "emotionless".  In the olden days this was done by having the AI speak in a monotone - while perfectly stressing the syllables, of course.  (I could similarly go on about how AIs that disastrously misinterpret their mission instructions never seem to need help parsing spoken English.)  You can also show that an AI is "emotionless" by having it notice an emotion with a blatant somatic effect, like tears or laughter, and ask what it means (though of course the AI never asks about sweat or coughing).

If you watch enough Hollywood sci-fi, you'll run into all of the following situations occurring with supposedly "emotionless" AIs:

  1. An AI that malfunctions or otherwise turns evil instantly acquires all of the negative human emotions - it hates, it wants revenge, and feels the need to make self-justifying speeches.
  2. Conversely, an AI that turns to the Light Side gradually acquires a full complement of human emotions.
  3. An "emotionless" AI suddenly exhibits human emotion when under exceptional stress; e.g. an AI that displays no reaction to thousands of deaths, suddenly showing remorse upon killing its creator.
  4. An AI begins to exhibit signs of human emotion, and refuses to admit it.

Now, why might a Hollywood scriptwriter make those particular mistakes?

These mistakes seem to me to bear the signature of modeling an Artificial Intelligence as an emotionally repressed human.

At least, I can't seem to think of any other simple hypothesis that explains behaviors 1-4 above.  The AI that turns evil has lost its negative-emotion-suppressor, so the negative emotions suddenly switch on.  The AI that turns from mechanical agent to good agent gradually loses the emotion-suppressor keeping it mechanical, so the good emotions rise to the surface.  Under exceptional stress, of course, the emotional repression that keeps the AI "mechanical" will immediately break down and let the emotions out.  But if the stress isn't so exceptional, the firmly repressed AI will deny any hint of the emotions leaking out - that would conflict with the AI's image of itself as emotionless.

It's not that the Hollywood scriptwriters are explicitly reasoning "An AI will be like an emotionally repressed human", of course; but rather that when they imagine an "emotionless AI", this is the intuitive model that forms in the background - a Standard mind (which is to say a human mind) plus an extra Emotion Suppressor.

Which all goes to illustrate yet another fallacy of anthropomorphism - treating humans as your point of departure, modeling a mind as a human plus a set of differences.

This is a logical fallacy because it warps Occam's Razor.  A mind that entirely lacks the chunks of brainware that implement "hate" or "kindness" is simpler - in a computational complexity sense - than a mind that has "hate" plus a "hate-suppressor", or "kindness" plus a "kindness-suppressor".  But if you start out with a human mind, then adding an activity-suppressor is a smaller alteration than deleting the whole chunk of brain.

It's also easier for human scriptwriters to imagine themselves repressing an emotion, pushing it back, crushing it down, than it is for them to imagine deleting an emotion outright so that it never comes back.  The former is a mode that human minds can operate in; the latter would take neurosurgery.

But that's just a kind of anthropomorphism previously covered - the plain old ordinary fallacy of using your brain as a black box to predict something that doesn't work like it does.  Here, I want to talk about the formally different fallacy of measuring simplicity in terms of the shortest diff from "normality", i.e., what your brain says a "mind" does in the absence of specific instruction otherwise, i.e., humanness.  Even if you can grasp that something doesn't have to work just like a human, thinking of it as a human+diff will distort your intuitions of simplicity - your Occam-sense.
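
To make that warped Occam-sense concrete, here is a minimal toy sketch (with made-up emotion-module names and arbitrary edit costs standing in for intuitions - not a model of any real cognitive architecture).  It shows how ranking designs by "shortest diff from a human mind" can reverse the ranking you get by counting the parts a design actually contains:

```python
# Toy illustration only: module names and edit costs are invented for this sketch.
HUMAN_BASELINE = {"hate", "kindness", "fear", "joy"}   # stand-in emotion modules

# Design A: a mind that simply never had the emotion modules.
truly_emotionless = set()

# Design B: the Hollywood AI - every human module, plus a suppressor bolted onto each.
repressed = HUMAN_BASELINE | {m + "-suppressor" for m in HUMAN_BASELINE}

def absolute_complexity(mind):
    """Crude proxy for description length: how many parts you must specify."""
    return len(mind)

def diff_from_human(mind, delete_cost=3, add_cost=1):
    """Simplicity measured as the shortest imagined edit from a human mind.

    Deleting a whole chunk of brainware feels like a big edit (delete_cost);
    bolting on a suppressor feels like a small one (add_cost).  Both costs
    are arbitrary stand-ins for an intuition, not measured quantities.
    """
    deleted = HUMAN_BASELINE - mind
    added = mind - HUMAN_BASELINE
    return delete_cost * len(deleted) + add_cost * len(added)

for name, mind in (("truly emotionless", truly_emotionless),
                   ("repressed human", repressed)):
    print(f"{name:>18}  absolute={absolute_complexity(mind)}  "
          f"diff-from-human={diff_from_human(mind)}")

# absolute parts:   emotionless = 0,  repressed = 8   (the emotionless design is simpler)
# diff-from-human:  emotionless = 12, repressed = 4   (the ordering reverses)
```

Counting parts, the design with nothing to suppress wins easily; measuring edits from a human baseline, the "repressed human" design looks simpler - which is exactly the intuition the scriptwriters seem to be running on.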

38 comments:
jack:

I think you're missing the key reason, which is that Hollywood has basically no incentive to accurately portray an AI. Why would they? I suspect most people would find a truly emotionless AI exceptionally boring. Repressed emotions are a source of drama - if that's what sells movies, no one in Hollywood will give a damn about being anthropocentric.

Jack, the hell with incentives, Hollywood doesn't have the ability.

It would be interesting to apply the same style of analysis to Hollywood examples of humans who recommend good or bad policies. Is the idea of someone thinking through a policy question in great detail for years and then making a recommendation that either works or doesn't work so alien that they must instead posit some simple emotional variation on an ordinary person who hasn't studied a problem in detail for years?

Re: why might a Hollywood scriptwriter make those particular mistakes?

Script writers often aim for things like dramatic tension over realism - and in that context, those are probably not mistakes.

Most sci-fi is just garbage - if read as attempted futurism. Unmodified humans in the far future? Humanoid aliens all over the place? It's really not worth criticising most of it from a futurist perspective.

"Jack, the hell with incentives, Hollywood doesn't have the ability".

Even if they had the ability, it would mostly be economically stupid for them to use it. That said, I don't agree with your claim that Hollywood doesn't have the ability. (Terminator 2 is sort of an exception, I think, in that (1)-(4) (almost) never happen. But then again, the point of the movie is not 'the' AI/robots, but how good Mr. Schwarzenegger is at blowing things up ...)

Perhaps this is how we generally explain the actions of others. The notion of a libertarian economist who wants to deregulate industry because he has thought about it and decided it is good for everyone in the long run would be about as alien to most people as an AI. They find it much more believable that he is a tool of corporate oppression.

Whether this heuristic reduction to the simplest explanation is wrong more often than it is right, is another question.

I suggest that if the incentive existed, eventually the ability would also exist.

While I agree with the comments about Hollywood simply optimizing for a different utility (money by way of human drama), you're all missing the point. All of that stuff is explanation for human brains. The key idea in this post is the last paragraph. Everything before that is Eliezer correctly anthropomorphizing humans -- putting the idea in terms we will understand.

I don't think you need repression. How about this simple explanation:

Everybody knows that machines have no emotions, and thus the AI starts off this way. However, after a while totally emotionless characters become really boring...

Ok, time for the writer to give the AI some emotions! Good AIs feel happiness and fall in love (awww... so sweet), and bad AIs get angry and mad (grrrr... kick butt!).

Good guys win, bad guys lose... and the audience leaves happy with the story.

I think it's as simple as that. Reality? Ha! Screw reality.

(If it's not obvious from the above, I almost never like science fiction. I think the original three Star Wars films, Terminator 2, 2001: A Space Odyssey, and Mary Shelley's Frankenstein are the only works of science fiction I've ever really liked. I've pretty much given up on the genre.)

Robin, if you extend the idea, it becomes "I measure other people as simplest departures from myself or other people I know and understand". Thus, "rationalists refuse to get excited over UFOs, which make me feel warm and fuzzy" becomes "rationalists enjoy feeling cold and cynical, which is why they refuse to get excited over UFOs". The thought of complex laws of uncertain reasoning that cause rationalists to get excited over certain things, but not others, is not available as a hypothesis; but a compensating force of enjoying cynicism, just as they enjoy faith, is available.

This is indeed a general principle for understanding a broad class of misunderstandings, and I'm sure you can think of many examples in economics. I'd planned to post on all that later, though.

Sounds like Typical Psyche Fallacy, which is the most upvoted post on LW, but I'd like to see your take on it as well. Did you end up talking about this concept in other later posts? Robin and Vassar both expressed desire to read them...

I look forward to those future posts! :)

"Hollywood Economics" by Arthur De Vany. He's a retired professor of economics and shows how box office returns are Pareto distributed. Fat tail, Black Swans. Not predictable, "sure things" that they so often want us to believe. The only thing Hollywood is good and predicable at is blowing investor cash.

@Shane: Science fiction movies and science fiction novels are pretty different beasts... I wouldn't even classify Star Wars as science fiction; it's a fantasy story that happens to have spaceships and robots.

Thinking about the future as today + diff is another serious problem with similar roots.

Robin: Great Point! Eliezer: I'm awaiting that too.

Shane Legg: I don't generally like sf, film or otherwise, but try "Primer". Best movie ever made for <$6000 AND arguably best sf movie. The Truman Show was good too in the last decade or so. That's probably it though. Minority Report was OK.

Totally off-topic...

Eliezer, have you written (or is there) a thesis on the validity of using, as an a priori, poor old Ockham's beard trimmer as the starting point for any and all and every thing?

The future is today + diff (+diff +diff +diff ...).

Robin, almost certainly yes on that one. In the same way as policymakers, filmmakers have a vast inferential gap to cross. Your idea suggests that it is the limits of the audiences' intellects that define the watered-down content of the film/policy, which makes far more sense.

"Jack, the hell with incentives, Hollywood doesn't have the ability."

This is pretty much meaningless, isn't it? Hollywood makes what sells. If it was misguided about what might sell - say, aiming too low - it would quickly be gazumped. It rarely is. This tells us something about the audience, not the industry.

Why is one not allowed to generalise from fictional evidence, but Eliezer allowed to seriously criticise the supposedly limited mindsets of those who think up entertaining plotlines? People like being entertained by wrong stuff! Well...yeah. That's what we have science for. Movies are for the end of the day when we actually want to suspend our disbelief. I'm not saying that most people don't use their own minds as points of departure for thinking about other things - you've long since convinced me this is the case. The fiction people enjoy is just not good evidence.

If you asked a scriptwriter for the original Star Trek series whether he really, truly believed that an AI would turn good at the last minute and develop emotions, the answer wouldn't be 'yes' or 'no'. It would be 'don't know, don't care'.

Lake:

Eliezer: presumably there's an amount of money sufficient to induce (for example) you to bash out a three-act movie script about AI. So if demand is predicted to cover your fee plus the rest of the movie budget, Hollywood has the ability.

"I don't generally like sf, film or otherwise, but try 'Primer'. Best movie ever made for <$6000 AND arguably best sf movie."

I find it difficult to approve of a time-travel movie whose basic premise is causality violation.

In that sense, Bill and Ted's Excellent Adventure was a better sci-fi movie than Primer. Far less thought-provoking, true, but making a simple-but-unparadoxical plot is a greater accomplishment than a complex-but-paradoxical one.

Addendum:

Have those of you with some psychological knowledge ever watched old movies and TV shows from that perspective? It is remarkable how often they had characters acting (and explaining the actions of others) according to Freudian psychoanalytic principles that we know are totally unrealistic. People don't actually act that way - that was obvious even at the time. But when constructing fictional examples of human behavior, writers found it more useful to ape popular conceptions of what human behavior was supposed to be like rather than realistically represent such behavior.

And really, what would be the point of realism?

Hollywood refuses to do things like keep noises out of space scenes. It's not clear that people really find noiseless space disconcerting, but it is clear that executives believe they will - and they're the ones who usually determine what's shown. So we have space with noises that couldn't possibly exist in real space.

This is all trending into a completely different topic, namely agent failures and market failures in Hollywood. Movies routinely fail to make money due to lousy scripts, but this doesn't cause Hollywood to routinely pay more for better scripts. Locks on distribution channels present huge barriers to new competition entering; Hollywood executives have no taste, so they can't hire people with taste (Paul Graham's "design paradox"), but it makes them feel good to think they're above their audience; presumably there are some standard agent failures in the boardroom that prevent these guys from getting fired; etc., etc.

Claim to anyone in Hollywood who knows what the phrase means that there's an efficient market in getting movies made, and they'll laugh like hell.

I wouldn't claim any exact or perfect efficiency in movie markets, but I would say they are efficient enough to make customer preferences a stronger explanation here than screenwriter abilities.

Of course the audience also figures in. Scriptwriters try to give the people what they want, and I'd imagine more people are interested in humanized AI with emotions. Spock is a cult-figure, but Shatner's had a more successful career.

I think the simplest explanation is that movie consumers like stories involving emotion, whether it's humanized cars (Herbie the Love Bug), cleaning robots (Wall-E), Antz, or AIs.

@phil: Agree. Bad guys are portrayed as emotionless or evil tools, good guys get humanized. The question is why economists, who generally dedicate themselves to improving the lives of strangers, are demonized. We're the Mother Teresas of the academy :)

Robin, this is the same Hollywood that had the Matrix running on human body heat. I see no reason to suppose cunning where ignorance will serve.

Mous: "Eliezer, have you written (or is there) a thesis on the validity of using, as an a priori, poor old Ockham's beard trimmer as the starting point for any and all and every thing?"

See "Occam's Razor," "A Priori," and "My Kind of Reflection." For future reference, this sort of thing should really go in the monthly Open Thread (which does have an unfortunate tendency to drop off the front page).

Most fictional and nonfictional treatments reflect the mistaken belief that the superintelligence has no purpose more important than to provide emergency first-aid for the humans, provide a nice place to live for the humans and to carry out the volition of the humans.

It seems to me that screenwriters are missing a trick. That is, a more real-ish AI would actually be a better villain. To a human, an unFriendly but only slightly superhuman AI would feel... evil. Amoral, manipulative, a liar, a backstabber, a blackmailer, an offerer of devil's bargains, a plotter of deeply laid plots, willing to sink to any cruelty for purely instrumental aims. Congenial, many-faced, hypocritical, completely untrustworthy by friend and foe alike. Of course the AI would see it as "playing humans like chess", but it's not going to say so. At least, not unless it's using that fear to manipulate someone, too.

Ben:

Eliezer,

"This is all trending into a completely different topic, namely agent failures and market failures in Hollywood. Movies routinely fail to make money due to lousy scripts, but this doesn't cause Hollywood to routinely pay more for better scripts."

Your analysis here is completely off. What you say about locks on distribution and taste is probably true, but it's somewhat irrelevant, or at least not what causes the continued use of bad scripts.

The real reason is that the market is working perfectly; it's just that it is and always will be more economically efficient to get and use bad scripts than to use good ones. First, people see movies in droves whether the scripts are good or bad. I'd even go so far as to posit that script quality has close to zero effect on sales, assuming other elements ("buzz", marketing, star power, etc.) are present. Studios can and have, countless times, made terrible movies that feature some big names and had a marketing juggernaut behind them, and been ridiculously successful. So one premise of your analysis - that script quality has any correlation with box office success - is flawed.

Second, and perhaps more importantly, bad scripts are cheaper and easier to get. There are far more terrible writers than good ones, and far more bad scripts than good. It's easier, then, to get bad scripts, and since it doesn't seem to matter what kind of script you use, why wait around for brilliant scripts? Just put your money into actors and CG and marketing and you'll likely make money anyway, and those variables are much easier to control than script quality.

Hollywood script writers write for actors who are trained to - and indeed sometimes do - employ the facial expressions, postures, and gestures associated with human emotions to portray roles compellingly, or not. Viewers of those productions depend on such cues in order to follow the sequence of illusions in something approximating the narrative mode. HAL is your AI, and is successful in the film because it is not physically realized in a form that can be anthropomorphized. If you did write a role for that emotionless AI, would you really want to put it in a physical form approximating a human body? Once we want our AI to interact humanly - with facial expressions, gestures, and subtle, dynamic verbal and nonverbal language - and design it a body, does it not stand to reason that we will also want it to use that human body in the manner, and with the mannerisms, of humans? A "realistic" "hard" AI would be nothing like a well-socialized human, for it would have very constrained, and fundamentally different, input and output functions than a human, and likely entirely unrelated reasoning, modeling, and motivating mechanisms. I suppose. Two cents. Interesting article. Thanks for the ideas and forum.

That was a story - told by Morpheus. He was a superstitious character - and there was no good reason to think that he knew what was actually going on:

That Morpheus has misunderstood what is going on is underscored by his mention in the same speech of the machines' discovery of a new form of nuclear fusion. Evidently, the fusion is the real source of energy that the machines use. So what are humans doing in the power plant? Controlled fusion is a subtle and complex process, requiring constant monitoring and micromanaging. The human brain, on the other hand, is a superb parallel computer. Most likely, the machines are harnessing the spare brainpower of the human race as a colossal distributed processor for controlling the nuclear fusion reactions.

Eliezer, did Morpheus' allusion to human body heat spoil the movie for you? If so, you're doing it wrong.

Where does your pedantry end and your suspension of disbelief take over? After or before the antigravity ships? How about the genocidal robots? What's your rationale for watching the movie in the first place?

Tim, the guy you quote is making the same mistake. If you need to postulate some nonsense about Morpheus' superstition for the plot of a movie like The Matrix to 'sit right' with you, then I say again, doing it wrong!

My response ended up being over a thousand words long. Since I'm a science fiction writer, I asked myself where Eliezer's insight led me, and I realized this:

In my long-running space opera, I have a world where two kinds of minds (human and AI) interact. One must by design be deferent to the other. The non-deferent minds are, by designoid, inclined to see all minds, including the deferent, as like their own. This leads to the non-deferent having a tragic habit of thought: there are minds like mine with inescapable deference. Human beings with this habit of thought become monsters. If the AIs are earnest in their desire to avoid human tragedy, what behaviors must they exhibit that signal non-deference without violating the core moral foundations that allow humans to persist? (As Eliezer points out, those core moral foundations must exist or there would be no more humans.)

Damn, feels like another novel coming on, if Peter Watts hasn't written it already.

The original script of The Matrix had the Machines using human brains to perform computations - both out of a sense of vengeful justice, and because it was convenient to use human neural nets for certain kinds of tasks.

There may even have been a suggestion that the physics of the world permitted humans to do things that the Machines couldn't - something along the lines of the AIs being implemented in digital hardware instead of analog.

This was changed in the script because it was feared most viewers would not be able to understand the idea. "Humans as energy resource" was substituted for "humans as processing resource".

There are aspects of this problem, and a possible explanation of the apparent over-complexity, running through the Freefall Web Comic.

I think you have to go back at least a year to pick up the beginnings of the relevant story thread. Florence Ambrose, anthropomorphic wolf and artificial person, has discovered similarities between the engineering of her brain and that of most of the robots on the planet.

(I'm going to have to check the in-story implications of that.)

Anyway, the point is that the humans can't design a brain from a blank slate. The robot brains are copies of something else, with certain parts removed or suppressed.

(Checks index...)

March '07, such hardship. Mostly, people forget Florence is a form of robot. The people who treat her like one are a bit creepy.

They aren't mistakes. A story shouldn't be realistic; it should be entertaining.