
The Logical Fallacy of Generalization from Fictional Evidence

Post author: Eliezer_Yudkowsky 16 October 2007 03:57AM 34 points

When I try to introduce the subject of advanced AI, what's the first thing I hear, more than half the time?

"Oh, you mean like the Terminator movies / the Matrix / Asimov's robots!"

And I reply, "Well, no, not exactly.  I try to avoid the logical fallacy of generalizing from fictional evidence."

Some people get it right away, and laugh.  Others defend their use of the example, disagreeing that it's a fallacy.

What's wrong with using movies or novels as starting points for the discussion?  No one's claiming that it's true, after all.  Where is the lie, where is the rationalist sin?  Science fiction represents the author's attempt to visualize the future; why not take advantage of the thinking that's already been done on our behalf, instead of starting over?

Not every misstep in the precise dance of rationality consists of outright belief in a falsehood; there are subtler ways to go wrong.

First, let us dispose of the notion that science fiction represents a full-fledged rational attempt to forecast the future.  Even the most diligent science fiction writers are, first and foremost, storytellers;  the requirements of storytelling are not the same as the requirements of forecasting.  As Nick Bostrom points out:

"When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch."

So there are specific distortions in fiction.  But trying to correct for these specific distortions is not enough.  A story is never a rational attempt at analysis, not even with the most diligent science fiction writers, because stories don't use probability distributions.  I illustrate as follows:

Bob Merkelthud slid cautiously through the door of the alien spacecraft, glancing right and then left (or left and then right) to see whether any of the dreaded Space Monsters yet remained.  At his side was the only weapon that had been found effective against the Space Monsters, a Space Sword forged of pure titanium with 30% probability, an ordinary iron crowbar with 20% probability, and a shimmering black discus found in the smoking ruins of Stonehenge with 45% probability, the remaining 5% being distributed over too many minor outcomes to list here.

Merkelthud (though there's a significant chance that Susan Wifflefoofer was there instead) took two steps forward or one step back, when a vast roar split the silence of the black airlock!  Or the quiet background hum of the white airlock!  Although Amfer and Woofi (1997) argue that Merkelthud is devoured at this point, Spacklebackle (2003) points out that—

Characters can be ignorant, but the author can't say the three magic words "I don't know."  The protagonist must thread a single line through the future, full of the details that lend flesh to the story, from Wifflefoofer's appropriately futuristic attitudes toward feminism, down to the color of her earrings.

Then all these burdensome details and questionable assumptions are wrapped up and given a short label, creating the illusion that they are a single package.

On problems with large answer spaces, the greatest difficulty is not verifying the correct answer but simply locating it in answer space to begin with.  If someone starts out by asking whether or not AIs are gonna put us into capsules like in "The Matrix", they're jumping to a 100-bit proposition, without a corresponding 98 bits of evidence to locate it in the answer space as a possibility worthy of explicit consideration.  It would only take a handful more evidence after the first 98 bits to promote that possibility to near-certainty, which tells you something about where nearly all the work gets done.
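To make the bit-counting concrete, here is a minimal sketch (my own illustration of the arithmetic, not something from the original post) that treats each bit of evidence as a factor-of-two update on the odds of a hypothesis whose prior probability is about 2^-100:

```python
# Hypothetical illustration: log-odds arithmetic behind the
# "100-bit proposition, 98 bits of evidence" figures above.

def posterior(prior_bits, evidence_bits):
    """Probability of a hypothesis whose prior odds are 2**-prior_bits,
    after observing evidence_bits bits of evidence in its favor."""
    log_odds = evidence_bits - prior_bits   # log base 2 of the posterior odds
    odds = 2.0 ** log_odds
    return odds / (1.0 + odds)

print(posterior(100, 0))    # ~8e-31: invisible in the answer space
print(posterior(100, 98))   # 0.2: now worth explicit consideration
print(posterior(100, 103))  # ~0.89: a handful more bits gives near-certainty
```

On these numbers, nearly all of the work is done by the first 98 bits that merely make the hypothesis worth considering; the last few bits that carry it to near-certainty are comparatively cheap.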

The "preliminary" step of locating possibilities worthy of explicit consideration includes steps like:  Weighing what you know and don't know, what you can and can't predict, making a deliberate effort to avoid absurdity bias and widen confidence intervals, pondering which questions are the important ones, trying to adjust for possible Black Swans and think of (formerly) unknown unknowns.  Jumping to "The Matrix: Yes or No?" skips over all of this.

Any professional negotiator knows that to control the terms of a debate is very nearly to control the outcome of the debate.  If you start out by thinking of The Matrix, it brings to mind marching robot armies defeating humans after a long struggle—not a superintelligence snapping nanotechnological fingers.  It focuses on an "Us vs. Them" struggle, directing attention to questions like "Who will win?" and "Who should win?" and "Will AIs really be like that?"  It creates a general atmosphere of entertainment, of "What is your amazing vision of the future?"

Lost to the echoing emptiness are: considerations of more than one possible mind design that an "Artificial Intelligence" could implement; the future's dependence on initial conditions; the power of smarter-than-human intelligence and the argument for its unpredictability; people taking the whole matter seriously and trying to do something about it.

If some insidious corrupter of debates decided that their preferred outcome would be best served by forcing discussants to start out by refuting Terminator, they would have done well in skewing the frame.  Debating gun control, the NRA spokesperson does not wish to be introduced as a "shooting freak", nor the anti-gun opponent as a "victim disarmament advocate".  Why should you allow the same order of frame-skewing by Hollywood scriptwriters, even accidentally?

Journalists don't tell me, "The future will be like 2001".  But they ask, "Will the future be like 2001, or will it be like A.I.?"  This is just as huge a framing issue as asking "Should we cut benefits for disabled veterans, or raise taxes on the rich?"

In the ancestral environment, there were no moving pictures; what you saw with your own eyes was true.  A momentary glimpse of a single word can prime us and make compatible thoughts more available, with demonstrated strong influence on probability estimates.  How much havoc do you think a two-hour movie can wreak on your judgment?  It will be hard enough to undo the damage by deliberate concentration—why invite the vampire into your house?  In Chess or Go, every wasted move is a loss; in rationality, any non-evidential influence is (on average) entropic.

Do movie-viewers succeed in unbelieving what they see?  So far as I can tell, few movie viewers act as if they have directly observed Earth's future.  People who watched the Terminator movies didn't hide in fallout shelters on August 29, 1997.  But those who commit the fallacy seem to act as if they had seen the movie events occurring on some other planet; not Earth, but somewhere similar to Earth.

You say, "Suppose we build a very smart AI," and they say, "But didn't that lead to nuclear war in The Terminator?"  As far as I can tell, it's identical reasoning, down to the tone of voice, of someone who might say:  "But didn't that lead to nuclear war on Alpha Centauri?" or "Didn't that lead to the fall of the Italian city-state of Piccolo in the fourteenth century?"  The movie is not believed, but it is available.  It is treated, not as a prophecy, but as an illustrative historical case.  Will history repeat itself?  Who knows?

In a recent Singularity discussion, someone mentioned that Vinge didn't seem to think that brain-computer interfaces would increase intelligence much, and cited Marooned in Realtime and Tunç Blumenthal, who was the most advanced traveller but didn't seem all that powerful.  I replied indignantly, "But Tunç  lost most of his hardware!  He was crippled!"  And then I did a mental double-take and thought to myself:  What the hell am I saying.

Does the issue not have to be argued in its own right, regardless of how Vinge depicted his characters?  Tunç Blumenthal is not "crippled", he's unreal.  I could say "Vinge chose to depict Tunç as crippled, for reasons that may or may not have had anything to do with his personal best forecast," and that would give his authorial choice an appropriate weight of evidence.  I cannot say "Tunç was crippled."  There is no was of Tunç Blumenthal.

I deliberately left in a mistake I made, in my first draft of the top of this post:  "Others defend their use of the example, disagreeing that it's a fallacy."  But the Matrix is not an example!

A neighboring flaw is the logical fallacy of arguing from imaginary evidence:  "Well, if you did go to the end of the rainbow, you would find a pot of gold—which just proves my point!"  (Updating on evidence predicted, but not observed, is the mathematical mirror image of hindsight bias.)

The brain has many mechanisms for generalizing from observation, not just the availability heuristic.  You see three zebras, you form the category "zebra", and this category embodies an automatic perceptual inference.  Horse-shaped creatures with white and black stripes are classified as "Zebras", therefore they are fast and good to eat; they are expected to be similar to other zebras observed.

So people see (moving pictures of) three Borg, their brain automatically creates the category "Borg", and they infer automatically that humans with brain-computer interfaces are of class "Borg" and will be similar to other Borg observed: cold, uncompassionate, dressing in black leather, walking with heavy mechanical steps.  Journalists don't believe that the future will contain Borg—they don't believe Star Trek is a prophecy.  But when someone talks about brain-computer interfaces, they think, "Will the future contain Borg?"  Not, "How do I know computer-assisted telepathy makes people less nice?"  Not, "I've never seen a Borg and never has anyone else."  Not, "I'm forming a racial stereotype based on literally zero evidence."

As George Orwell said of cliches:

"What is above all needed is to let the meaning choose the word, and not the other way around... When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning."

Yet in my estimation, the most damaging aspect of using other authors' imaginations is that it stops people from using their own.  As Robert Pirsig said:

"She was blocked because she was trying to repeat, in her writing, things she had already heard, just as on the first day he had tried to repeat things he had already decided to say.  She couldn't think of anything to write about Bozeman because she couldn't recall anything she had heard worth repeating.  She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before."

Remembered fictions rush in and do your thinking for you; they substitute for seeing—the deadliest convenience of all.

Viewpoints taken here are further supported in:  Anchoring, Contamination, Availability, Cached Thoughts, Do We Believe Everything We're Told?, Einstein's Arrogance, Burdensome details

 

Part of the Seeing With Fresh Eyes subsequence of How To Actually Change Your Mind

Next post: "How to Seem (and Be) Deep"

Previous post: "Original Seeing"

Comments (54)

Comment author: Doug_S. 16 October 2007 05:18:12AM 10 points [-]

"Will the future be like 2001, or will it be like A.I.?"

This question has a simple answer. It's "No."

Comment author: TGGP4 16 October 2007 05:26:00AM 2 points [-]

What I'm wondering is why you were even talking to venture capitalists about the singularity. Do you just go around asking anybody who has it for money? Did they hear you were working on something and then decided to make a proposal? I would guess it would become quickly apparent you didn't have anything to discuss.

Comment author: ansi61 16 October 2007 05:40:56AM 6 points [-]

NPR has run several stories on how the Fox TV show "24" influences both military interrogators' techniques and civilian acceptance of torture.

TV Torture Changes Real Interrogation Techniques

Fresh Air from WHYY, October 10, 2007 · This year the Human Rights First Award for Excellence in Television will be given to a show that "depicts torture and interrogation in a nuanced, realistic fashion." According to interviews with military leaders, portrayal of torture on television shows has changed interrogation techniques in the field.

TV producer Adam Fierro (The Shield), intelligence expert Col. Stuart Herrington and human rights advocate David Danzig discuss TV violence.

Shows nominated for the award include Lost, Criminal Minds, The Closer and The Shield.

Torture's Wider Use Brings New Concerns

by Kim Masters

All Things Considered, March 13, 2007 · The Fox Network series 24 features a hero who is not shy about using torture to achieve his objectives. The portrayal of torture as a positive tool worries human-rights watchers as well as the general who heads up West Point. They say the portrayals may be influencing military interrogators.

Comment author: Gray_Area 16 October 2007 08:14:55AM 1 point [-]

Apparently what works fairly well in Go is to evaluate positions based on 'randomly' running lots of games to completion (in other words, you evaluate a position as 'good' if you win lots of the random games which start from that position). Random sampling of the future can work in some domains. I wonder if this method is applicable to answering specific questions about the future (though naturally I don't think science fiction novels are a good sampling method).
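A minimal sketch of the random-playout evaluation described above (the position representation and the legal_moves, apply_move, and winner helpers are hypothetical stand-ins for a real game engine, with the two players encoded as +1 and -1):

```python
import random

def random_playout(position, player, legal_moves, apply_move, winner):
    """Play uniformly random moves from `position` until the game ends,
    then return the winning player.  The helper functions are assumed to
    handle passes and termination; they are placeholders, not a real engine."""
    current = player
    while winner(position) is None:
        move = random.choice(legal_moves(position, current))
        position = apply_move(position, current, move)
        current = -current  # switch sides
    return winner(position)

def monte_carlo_value(position, player, legal_moves, apply_move, winner, n=1000):
    """Estimate how 'good' a position is for `player` as the fraction of
    random playouts from that position which `player` wins."""
    wins = sum(
        random_playout(position, player, legal_moves, apply_move, winner) == player
        for _ in range(n)
    )
    return wins / n
```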

Comment author: DanielLC 15 November 2010 04:36:53AM 8 points [-]

We'd have to be able to randomly run reality to completion several times.

Comment author: Anders_Sandberg 16 October 2007 09:41:16AM 5 points [-]

Another reason people overvalue science fiction is the availability bias due to the authors who got things right. Jules Verne had a fairly accurate time for going from the Earth to the Moon, Clarke predicted/invented geostationary satellites, John Brunner predicted computer worms. But of course this leaves out all the space pirates using slide rules for astrogation (while their robots serve rum), rays from unknown parts of the electromagnetic spectrum and gravity-shielding cavorite. There is a vast number of quite erroneous predictions.

I have collected a list of sf stories involving cognition enhancement. They are all over the place in terms of plausibility, and I was honestly surprised by how few useful ideas about the impact of enhancement they had. Maybe it is easier to figure out the impact of spaceflight. I think the list might be useful as a list of things we might want to invent and of common tropes surrounding enhancement, rather than as a starting point for analysis of what might actually happen.

Still, sf might be useful in the same sense that ordinary novels are: creating scenarios and showing more or less possible actions or ways of relating to events. There are a few studies showing that reading ordinary novels improves empathy, and perhaps sf might improve "future empathy", our ability to consider situations far away from our here-and-now situation.

Comment author: gwern 04 August 2011 06:28:49PM 2 points [-]

I have collected a list of sf stories involving cognition enhancement. They are all over the place in terms of plausibility, and I was honestly surprised by how few useful ideas about the impact of enhancement they had. Maybe it is easier to figure out the impact of spaceflight. I think the list might be useful as a list of things we might want to invent and of common tropes surrounding enhancement, rather than as a starting point for analysis of what might actually happen.

Have you written on that anywhere?

Comment author: Kaj_Sotala 16 October 2007 11:16:36AM 1 point [-]

So, since the topic came up, I'll repeat the question I posed back in the "suggested posts" thread, but didn't (at least to my notice) receive any reply to:

How careful should one be to avoid generalization from fictional evidence? When writing about artificial intelligence, for instance, would it be acceptable to mention Metamorphosis of Prime Intellect as a fictional example of an AI whose "morality programming" breaks down when conditions shift to ones its designer had not thought about (not in a "see, it's happened before" sense but in a "here's one way of how it could happen" sense)? Or would it be better to avoid fictional examples entirely and stick purely to the facts?

Comment author: JDM 06 November 2012 06:07:41PM 0 points [-]

It should depend on the level of formality of the writing. In a strictly academic paper, it should probably be avoided completely. If the paper is slightly less formal, it may be acceptable, but the author should take care to specify that it is a work of fiction, that it is a theoretical example and not evidence, and to what extent the example is applicable to the discussion. This should be combined with actual evidence supporting the possibility and relevance of the example.

Comment author: Robin_Hanson2 16 October 2007 11:17:37AM 4 points [-]

Do we make the same mistakes as often with fiction about the present or past, or is something going extra wrong regarding the future?

Comment author: DanielLC 12 April 2014 05:36:06AM 0 points [-]

We have examples of the past and present to draw on. So do the fiction writers, making it more accurate. Science fiction is informed largely by other science fiction.

Comment author: Anders_Sandberg 16 October 2007 11:38:44AM 3 points [-]

I think Kaj has a good point. In a current paper I'm discussing the Fermi paradox and the possibility of self-replicating interstellar killing machines. Should I mention Saberhagen's berserkers? In this case my choice was pretty easy, since beyond the basic concept his novels don't contain that much of actual relevance to my paper, so I just credit him with the concept and move on.

The example of _Metamorphosis of Prime Intellect_ seems deeper, since it would be an example of something that can be described entirely theoretically but becomes more vivid and clearly understandable in the light of a fictional example. But I suspect the problem here is the vividness: it would produce a bias towards increasing risk estimates for that particular problem as a side effect of making the problem itself clearer. Sometimes that might be worth it, especially if the analysis is strong enough to rein in wild risk estimates, but quite often it might be counterproductive.

There is also a variant of absurdity bias in referring to sf: many people tend to regard the whole argument as sf if there is an sf reference in it. I noticed that some listeners to my talk on berserkers did indeed not take the issue of whether there are civilization-killers out there very seriously, while they might be concerned about other "normal" existential risks (and of course, many existential risks are regarded as sf in the first place).

Maybe a rule of thumb is to limit fiction references to cases where 1) they say something directly relevant, 2) there is a valid reason for crediting them, and 3) the biasing effects do not reduce the ability to think rationally about the argument too much.

Comment author: Luke_G. 16 October 2007 12:52:15PM 6 points [-]

"Characters can be ignorant, but the author can't say the three magic words 'I don't know.'"

One funny exception to this is Mark Twain's "A Medieval Romance," which you can read here:

http://www.readbookonline.net/readOnLine/1537/

Just scroll down and read the last three paragraphs.

Comment author: InsertUsernameHere 19 December 2013 03:27:43PM 1 point [-]

Why would anyone think that the only way to show you're not the father is to declare you're a woman?

Comment author: Doug_S. 16 October 2007 06:42:51PM 4 points [-]

Bob Merkelthud slid cautiously through the door of the alien spacecraft, glancing right and then left (or left and then right) to see whether any of the dreaded Space Monsters yet remained. At his side was the only weapon that had been found effective against the Space Monsters, a Space Sword forged of pure titanium with 30% probability, an ordinary iron crowbar with 20% probability, and a shimmering black discus found in the smoking ruins of Stonehenge with 45% probability, the remaining 5% being distributed over too many minor outcomes to list here.

Sounds like a video game, actually...

Comment author: michael_vassar3 16 October 2007 08:56:16PM -2 points [-]

My unjustified opinion (which surely still counts as evidence) is that it's probably best to never reference science fiction, and possibly best to never read it or encourage other people to read it, but I'm much less certain about that than about referencing or mentioning it.

Comment author: Nic 17 October 2007 04:29:37AM 0 points [-]

I think you are right that science fiction is not "a rational attempt at analysis" but that you are wrong that it is usually "because stories don't use probability distributions." Some time ago, I tried writing a story using the RPG Exalted's rules. Keeping track of such a large number of character sheets turned out to be too tedious, and I gave up. Yet, the point was to make the story more surprising, not more realistic. If I write "With 45% probability, Vash dodged the bullet fired at him at point-blank range, and with 10% probability, he dodged a sniper shot taken from seven miles away" this is not unrealistic because it doesn't use probability; rather, it isn't even an attempt to be realistic, or to rationally examine what the future might be like.

Comment author: Eliezer_Yudkowsky 17 October 2007 07:55:30AM 5 points [-]

Nic, the problem I'm referring to is not that stories are not generated from probability distributions, but that the stories don't explicitly describe probability distributions. If I want to give a rational analysis of the stock market, I have to be able to say "60% probability that the stock market goes up, 40% probability that it goes down." Rolling a 10-sided die, getting 7, and saying firmly "Now the stock market will go down!" doesn't cut it. If it's a story, though, I have to say either "The stock market went up" or "The stock market went down", one or the other.
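A toy sketch of that distinction (my own illustration, reusing the 60/40 numbers above): even if the storyteller secretly samples the outcome from the analyst's distribution, only the single sampled outcome ever reaches the page.

```python
import random

# A rational analysis can state the distribution itself.
forecast = {"up": 0.6, "down": 0.4}

# A story must commit to one outcome, even if the author sampled it
# from that very distribution; the distribution never appears in the text.
story_line = ("The stock market went up."
              if random.random() < forecast["up"]
              else "The stock market went down.")

print(forecast)    # what the analyst reports
print(story_line)  # all the storyteller can say
```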

Comment author: Recovering_irrationalist 17 October 2007 04:13:18PM 2 points [-]

Eliezer, excellent post.

Do you still rate reading science fiction as a leading way to boost intelligence and train for thinking about the future, and if so, how do you reconcile that with this post? I don't suggest they contradict, but your clarification would give vital insight.

Comment author: Eliezer_Yudkowsky 17 October 2007 05:14:09PM 20 points [-]

People who don't read science fiction at all often seem extremely vulnerable to absurdity bias, that is, they assume the future is the present in silver jumpsuits wearing jetpacks. Since they haven't read 100 different future scenarios, they assume the silver jumpsuits are the only scenario that exists. The alternative to reading science fiction is often spontaneously reinventing really bad science fiction.

Comment author: Richard_Hollerith 19 October 2007 06:13:12AM 1 point [-]

Should people who must be very rational to fulfill their responsibilities in life eschew movies? Do you personally eschew movies?

Comment author: Louis_Choquel 20 October 2007 09:12:55AM 2 points [-]

I totally agree, Eliezer. Yet I like making references to science fiction when I discuss the future with friends, or on my blog, for a couple of reasons:

- It's a strong argument in favor of accelerating change: the technology that exists today is way beyond many of the gadgets depicted in SF from a few decades back which predicted them for 1500 years later. And, even more impressive, these gadgets are cheap and available to anyone, at least in rich countries (mobile phones, the Web, GPS, iPods...). If anything, it stresses how common wisdom downplays the evolution of technologies, which helps to make a case for AGI emerging in decades, not centuries.

- SF helps to raise important questions about the future which are hard to address in the setting of the present. The classic example of that is the failure of Asimov's laws of robotics. A more recent example is the TV series Battlestar Galactica. Of course it's unrealistic and biased, but it changed my views on the issues of AGI's rights. Can a robot be destroyed without a proper trial? Is it OK to torture it? to rape it? What about marrying one? or having children with it (or should I type "her")?

Comment author: danlowlite 28 October 2010 02:07:32PM 2 points [-]

What about marrying one? or having children with it (or should I type "her")?

Depends. What does the robot identify as?

Comment author: MugaSofer 10 January 2013 10:52:57AM -1 points [-]

Can a robot be destroyed without a proper trial? Is it OK to torture it? to rape it? What about marrying one? or having children with it (or should I type "her")?

I can't help but notice that many (all?) of these questions seem dependent on how closely the AGI resembles a neurotypical human.

Comment author: DanielLC 12 April 2014 05:41:48AM 1 point [-]

It's a strong argument in favor of accelerating change: the technology that exists today is way beyond many of the gadgets depicted in SF from a few decades back which predicted them for 1500 years later.

I've noticed that it's not so much that our technology is better as it is that it's completely different. Science fiction routinely includes things that are physically impossible. We invent things that never occurred to authors. What you're really doing is using science fiction to illustrate that you can't predict the future by relying on science fiction.

Comment author: Ian_C. 20 November 2007 04:49:58PM 1 point [-]

If fictional evidence is admissible, then doesn't generalization itself become suspect, since people can simply imagine the black swan?

As for science fiction (and art in general), it may skew your concepts if you're not careful, but it also provides emotional fuel and inspiration to keep going. And since the human need for such fuel is observably real, it's not really an option to go cold turkey on art. You are either a researcher with some mildly skewed concepts, or not a researcher at all, but some poor fellow who has lost all hope. The researcher with perfectly fact-based concepts may be another case of an impossible fictional character seeping into our reasoning.

Comment author: denis_bider 20 November 2007 07:48:16PM 1 point [-]

Louis: "The more recent example is the TV series BattleStar Galactica. Of course it's unrealistic and biased, but it changed my views on the issues of AGI's rights. Can a robot be destroyed without a proper trial? Is it OK to torture it? to rape it? What about marrying one? or having children with it (or should I type 'her')?"

See this: http://denisbider.blogspot.com/2007/11/weak-versus-strong-law-of-strongest_15.html

You are confused because you misinterpret humanity's traditional behavior towards other apparently sentient entities in the first place. Humanity's traditional (and game-theoretically correct) behavior is to (1) be allies with creatures who can hurt us, (2) go to war with creatures who can hurt us and don't want to be our allies, (3) plunder and exploit creatures that cannot hurt us, regardless of how peaceful they are or how they feel towards us.

This remains true historically whether we are talking about other people, about other nations, or about other animals. There's no reason why it shouldn't be true for robots. We will ally with and "respect" robots that can hurt us; we will go to war with robots that can hurt us but do not want to be our allies; and we will abuse, mistreat and disrespect any creature that does not have the capacity to hurt us.

Conversely, if the robots reach or exceed human capacities, they will do the same. Whoever is the top animal will be the new "human". That will be the new "humanity", where there will be a reign of "law" among entities that have similar capacities. Entities with lower capacities, such as humans that continue to be mere humans, will be relegated to about the same level as capuchin monkeys today. Some will be left "in the wild" to do as they please, some will be used in experiments, some will be hunted, some will be eaten, and so forth.

There is no morality. It is an illusion. There will be no morality in the future. But the ruthlessness of game theory will continue to hold.

Comment author: DanielLC 15 November 2010 04:43:35AM 4 points [-]

Human nature will hold. Similarly robot nature, whatever we design it to be, will hold. Robots won't mistreat humans unless it's the way they're made. They very well may be made that way by accident, but we can't just assume that they will be.

Comment author: rabidchicken 19 July 2010 05:15:09AM 6 points [-]

I laughed at the last couple sentences... "Yet in my estimation, the most damaging aspect of using other authors' imaginations is that it stops people from using their own. As Robert Pirsig said:"... :p I am assuming the irony was deliberate.

Comment author: [deleted] 24 February 2011 12:06:40AM 3 points [-]

When I try to introduce the subject of advanced AI, what's the first thing I hear, more than half the time?

"Oh, you mean like the Terminator movies / the Matrix / Asimov's robots!"

Don't Asimov's Laws provide a convenient entry into the topic of uFAI? I mean, sometime after I actually read the Asimov stories, but well before I discovered this community or the topic of uFAI, it occurred to me in a wave of chills how horrific the "I, Robot" world would actually be if those laws were literally implemented in real-life AI. "A robot may not injure a human being or, through inaction, allow a human being to come to harm"? But we do things all the time that may bring us harm--from sexual activity (STDs!) to eating ice cream (heart disease!) to rock-climbing or playing competitive sports... If the robots were programmed in such a way that they could not "through inaction, allow a human being to come to harm" then they'd pretty much have to lock us all up in padded cells, to prevent us taking any action that might bring us harm. Luckily they'd only have to do it for one generation, because obviously pregnancy and childbirth would never be allowed; it'd be insane to allow human women to take on such completely preventable risks...

So then when I found you lot talking about uFAI, my reaction was just nodnod rather than "but that's crazy talk!"

Comment author: Sniffnoy 24 February 2011 12:11:42AM 1 point [-]

AKA the premise of "With Folded Hands". :)

Comment author: [deleted] 24 February 2011 12:31:36AM 0 points [-]

I haven't read that but yes, it sounds like exactly the same premise.

Comment author: ArisKatsaris 24 February 2011 12:44:10AM 0 points [-]

Violating people's freedom would probably also count as harm, emotional harm if nothing else. Which is even more troublesome as we wouldn't even be allowed to be emotionally distressed -- they'd just fill us with happy juice so that we can live happily ever after. The superhappies in robotic form. :-)

Comment author: TobyBartels 24 February 2011 01:52:43AM 0 points [-]

"I, Robot"

It's interesting that the I, Robot movie did a better job of dealing with this than anything that Asimov wrote.

Comment author: [deleted] 24 February 2011 02:03:53AM 0 points [-]

Did it? I don't remember the plot of the movie very well, but I remember a feeling of disappointment that the AI seemed to be pursuing conventional take-over-the-world villainry rather than simply faithfully executing its programming.

Comment author: TobyBartels 24 February 2011 02:14:47AM *  0 points [-]

(Spoiler warning!)

The chief villain was explicitly taking over the world in order to carry out the First Law. Only the one more-human-like robot was able to say (for no particular reason) "But it's wrong."; IIRC, all other robots understood the logic when given relevant orders. (However, when out of the chief villain's control, they were safe because they were too stupid to work it out on their own!)

However, the difference from Asimov is not realising that the First Law requires taking over the world; Daneel Olivaw did the same. The difference is realising that this would be villainy. So the movie was pretty conventional!

Comment author: [deleted] 24 February 2011 02:21:09AM 1 point [-]

That is better than I remembered. Weren't the robots, like, shooting at people, though? So breaking the First Law explicitly, rather than just doing a chilling optimization on it?

Comment author: TobyBartels 24 February 2011 02:30:55AM *  2 points [-]

My memory's bad enough now that I had to check Wikipedia. You're right that robots were killing people, but compare this with the background of Will Smith's character (Spooner), who had been saved from drowning by a robot. We should all agree that the robot that saved Spooner instead of a little girl (in the absence of enough time to save both) was accurately following the laws, but that robot did make a decision that condemned a human to die. It could do this only because this decision saved the life of another human (who was calculated to have a greater chance of continued survival).

Similarly, VIKI chose to kill some humans because this decision would allow other humans to live (since the targeted humans were preventing the take-over of the world and all of the lives that this would save). This time, it was a pretty straight greater-numbers calculation.

Comment author: [deleted] 24 February 2011 02:41:13AM *  1 point [-]

That is so much better than I remembered that I'm now doubting whether my own insight about Asimov's laws actually predated the movie or not. It's possible that's where I got it from. Although I still think it's sort of cheating to have the robots killing people, when they could have used tranq guns or whatever and still have been obeying the letter of the First Law.

Comment author: TobyBartels 24 February 2011 03:05:34AM 1 point [-]

I still think it's sort of cheating to have the robots killing people, when they could have used tranq guns or whatever and still have been obeying the letter of the First Law.

Yes, you're certainly right about that. Most of the details in the movie represent serious failures of rationality on all parts, the robots as much as anybody. It's just a Will Smith action flick, after all. Still, the broad picture makes more sense to me than Asimov's.

Comment author: kilobug 14 September 2011 09:28:10AM 5 points [-]

There is another way to look at references to fiction in a "serious" conversation: as a method of lossy compression. Fully explaining the hypothesis of living in a virtual world is complex, and takes time. Saying "you saw The Matrix?" is simple and fast. In the same way, referring to "Asimov robots" is quicker than explaining the concept of benevolent robots who just can't attack humans, and how even with very strict rules it's not in fact really perfect (from the Solarian robots who attacked people who didn't have the Solarian accent because they had a restrictive definition of humans, to the robot manipulated into putting poison in a glass while another one gives the glass to a human, ...).

The work of Asimov isn't evidence that such robots can be built, nor that there is no way to put perfect safeguards in place, but it allows the whole concept to be transmitted in a few words instead of requiring hours.

But like every lossy compression, it introduces noise. Reading LW, I realize I underestimated the noise generated by this compression; I wasn't aware of the scope of contamination and anchoring effects (I had some kind of intuition that they existed, but I greatly underestimated their importance). So I'll be more prudent in the future about using that compression algorithm, which is very efficient as a compression (it can lead to more than a 100x compression ratio) but also very noisy.

Comment author: Solvent 14 September 2011 10:11:32AM 1 point [-]

One thing I've used is "Imagine the Matrix, except think about it for five minutes, and change it to remove the ridiculous parts of the idea." It allows them to get a cleaner version of your idea, but requires them to be pretty smart.

Comment author: shokwave 14 September 2011 12:13:02PM 2 points [-]

"The original plot for the Matrix called for the humans' brains to be used as powerful computers to run all the software - that was why anyone plugged in could become an Agent - but someone at Warner Bros decided people weren't that clever. Besides, our body heat is nowhere near as efficient as nuclear power. Anyway! The Matrix ...."

This is how I start simulation arguments off on a good footing. People's minds are a little blown by such a sensible version of the Matrix, so they're more accepting...

Comment author: Swimmer963 14 September 2011 12:27:42PM 1 point [-]

The original plot for the Matrix called for the humans' brains to be used as powerful computers to run all the software - that was why anyone plugged in could become an Agent - but someone at Warner Bros decided people weren't that clever. Besides, our body heat is nowhere near as efficient as nuclear power. Anyway! The Matrix ....

Are you serious? Why did they change it? That version would have been sooo much more awesome. (Cries.)

Comment author: MarkusRamikin 14 September 2011 01:13:46PM 1 point [-]

Because viewers are morons.

Comment author: JoshuaZ 14 September 2011 12:36:58PM 2 points [-]

Do you have a citation for this? This isn't mentioned in the Wikipedia article on the movie.

Comment author: shokwave 15 September 2011 12:35:03AM *  4 points [-]

No citation, not even sure it's real. TV Tropes told me, and I thought it was cool and sensible enough to pretend it was true.

tvtropes.org/pmwiki/pmwiki.php/Main/WetwareCPU ctrl-F Matrix

(Bare link only to trivially inconvenience you)

Comment author: sboo 20 August 2012 09:59:47AM 0 points [-]

I like what you said about fiction perceived as distant reality. "Long long ago in a galaxy far far away".

Comment author: lukeprog 27 February 2013 05:59:45AM 1 point [-]

Behold: This week in generalization from fictional evidence.

Comment author: 8539483948 10 June 2013 05:43:47PM 3 points [-]

A story is never a rational attempt at analysis, not even with the most diligent science fiction writers, because stories don't use probability distributions.

A Yudkowsky blog post is rarely a rational attempt at analysis, because his blog posts rarely use probability distributions.

In this particular post, note that 100% of the probability distributions stated are completely fictional. Yudkowsky has not provided any estimate of the likelihood of people committing this fallacy or the costs associated with such instances.

Comment author: NancyLebovitz 03 August 2013 04:37:44AM 0 points [-]

When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)?

There's a short story-- "Murphy's Hall" by Poul Anderson-- that's pretty close. It hasn't been reprinted much.