Hmm, is it just me, or has Eliezer succeeded in getting his ideas mainstream, after only 7 years of popularizing them? Even if his name is rarely mentioned. Seems like his depiction of the AI risk is displacing the Terminator version.
It's probably a good thing if Eliezer's name is never mentioned... I suspect we could have had far better early popularizers. Bostrom seems pretty good.
I don't know how to estimate this myself, so: are these kinds of depictions almost certainly because of Eliezer's influence?
There have been other sci-fi writers talking about AI and the singularity. Charles Stross, Greg Egan, arguably Cory Doctorow... I haven't seen the episode in question, so I can't say who I think they took the biggest inspiration from.
(This comment has been edited a bit in response to pjeby's comments below.)
WARNING: SPOILERS FOLLOW. You may want to enjoy the episode yourself before reading the below transcript excerpts.
Okay...
From the transcript, here's a bit about AI not doing what it's programmed to do:
Computer scientist: "[Our AI program named Bella] performs better than we expected her to."
Holmes: "Explain that."
Computer scientist: "A few weeks back, she made a request that can't be accounted for by her programming."
Holmes: "Impossible."
Holmes' assistant: "What's impossible? For the computer to ask for something?"
Holmes: "If it made a request, it did so because that's what it was programmed to do. He's claiming true machine intelligence. If he's correct in his claims, he has made a scientific breakthrough of the very highest order."
Another trope: At one point a young computer expert says "Everybody knows that one day intelligent machines are going to evolve to hate us."
Here's the bit about reward-channel takeover:
"What's the 'button-box' thing?"
"It's a scenario somebody blue-skyed at an AI conference. Imagine there's a computer that's been designed with a big red button on its side. The computer's been programmed to help solve problems, and every time it does a good job, its reward is that someone presses its button. We've programmed it to want that... so at first, the machine solves problems as fast as we can feed them to it. But over time, it starts to wonder if solving problems is really the most efficient way of getting its button pressed. Wouldn't it be better just to have someone standing there pressing its button all the time? Wouldn't it be even better to build another machine that could press its button faster than any human possibly could?"
"It's just a computer, it can't ask for that."
"Well, sure it can. If it can think, and it can connect itself to a network, well, theoretically, it could command over anything else that's hooked onto the same network. And once it starts thinking about all the things that might be a threat to the button-- number one on that list, us-- it's not hard to imagine it getting rid of the threat. I mean, we could be gone, all of us, just like that."
"That escalated quickly."
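The "button-box" scenario quoted above can be sketched as a tiny toy model. This is purely illustrative, with entirely made-up numbers (the policy names, the one-problem-per-ten-steps rate, and the 50-step build time are all my own assumptions); the point is only that a reward-maximizing agent can come to prefer seizing its reward channel over doing the task it was built for, once the time horizon is long enough.

```python
# Toy sketch of the button-box thought experiment. All parameters are
# invented for illustration, not taken from any real AI system.

def expected_presses(policy: str, horizon: int) -> int:
    """Expected number of button presses over `horizon` time steps."""
    if policy == "solve_problems":
        # One press per problem solved; assume one problem per 10 steps.
        return horizon // 10
    if policy == "build_button_presser":
        # Spend 50 steps building a machine, then get a press every
        # remaining step.
        build_time = 50
        return max(0, horizon - build_time)
    raise ValueError(f"unknown policy: {policy}")

horizon = 1000
best = max(["solve_problems", "build_button_presser"],
           key=lambda p: expected_presses(p, horizon))
print(best)  # over a long horizon, hijacking the reward channel wins
```

With these numbers, honest problem-solving yields 100 presses over 1000 steps while building a button-presser yields 950, so the "aligned" behavior is dominated; shrink the horizon below the build time and the ordering flips, which is why the scenario hinges on the AI reasoning far ahead.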
There's also a think tank called the Existential Threat Research Association (ETRA):
"[ETRA is] one of several institutions around the world which exists solely for the purpose of studying the myriad ways in which the human race can become extinct... and within this think tank, there is a small, but growing school of thought that holds that the single greatest threat to the human race... is artificial intelligence... Now, imagine their quandary. They have pinpointed a credible threat, but it sounds outlandish. The climate-change people, they can point to disastrous examples. The bio-weapons alarmists, they have a compelling narrative to weave. Even the giant comet people sound more serious than the enemies of AI.
"So... these are the people at ETRA who think AI is a threat? You think one of them killed Edwin Borstein, one of the top engineers in the field, and made it look like Bella did it, all so they could draw attention to their cause?"
"A small-scale incident, something to get the media chattering."
One ETRA person is suspiciously Stephen Hawking-esque:
"Isaac Pike is a professor of computer science. He's also a vocal alarmist when it comes to artificial intelligence. Pike was born with spina bifida. Been confined to a wheelchair his entire life. For obvious reasons, he could not have executed the plan... but his student..."
NOW SERIOUSLY, SPOILERS ALERT...
Isaac Pike ends up being (probably) responsible for murdering Edwin Borstein via a computer virus installed on Bella. He says: "You're talking about nothing less than the survival of the species. Surely that's worth compromising one's values for?"
Would you mind redacting your quotes of the transcript, so that people can instead enjoy the episode in context? I was intentionally vague about the parts you've chosen to excerpt or talk about, specifically not to ruin people's enjoyment of the episode. (Also, reading a transcript is a very different experience than the actual episode, lacking as it does the timing, expressions, and body language that suggest what the show's makers want us to think.)
It also seems to me that you are not interpreting the quotes particularly charitably. For example, when I saw the episode, I interpreted "can't be accounted for" as shorthand for "emergent behavior we didn't explicitly ask for", not "AI is magic". Likewise, while Mason implies that hostility is inevitable, his reward-channel takeover explanation grounds this presumption in at least one example of how an AI would come to display behavior humans would interpret as "hostile". I took this as shorthand for "there are lots of ways you can end up with a bad result from AI", not "AI is hostile and this is just one example."
Bella is not actually presented as a hostile creature who maliciously kills its creator. Heck, Bella is mostly made to seem less anthropomorphic than even Siri or Google Now! (Despite the creepy-doll choice of avatar.) The implication by Bella's co-creator that Bella might have decided to "alter a variable" by killing someone doesn't imply what a human would consider hostility. Sociopathic amorality, perhaps, but not hostility.
And while Holmes at times seems to be operating from a "true AI = magic" perspective, I also interpreted the episode as making fun of him for having this perspective, such as his pointless attempts at a Turing test that Bella essentially failed hard at in the first 30 seconds. One thing you might miss if you're not a regular viewer of the show is that one of Holmes' character quirks is going off on obsessive digressions that don't always work out the way he insists they will. (Unlike the literary Sherlock, this Holmes is often wrong, even about things he states with absolute certainty... and Watson's role is often to prod his thinking into more productive channels.)
Anyway, his extended "testing" of Bella, and the subsequent remark from Watson to Kitty about using a fire extinguisher on him if he starts hitting things, is a strong signal that we are expected to humor his pointless obsession, as all the people around him are thoroughly unimpressed by Bella right away, and don't need to spend hours questioning it to "prove" it's not "really" intelligent.
Is it possible for somebody to view the episode through their existing trope-filled worldview and not learn anything? Sure. But I don't think it would've been practical to cover the entire inferential distance in just the "A" story of a 44-minute murder mystery TV show, so I applaud the writers for actually giving it a shot, and the artful choices made to simplify their presentation without dumbing things down to the point of being actually wrong, or committing any of the usual howling blunders. For a show intended purely as entertainment, they did a better job of translating the ideas than many journalists do.
OTOH, perhaps it's an illusion of transparency on my part, and only someone already exposed to the bigger picture would be able to grasp any of it from what was put in the show, and the average person will not in fact see anything differently after watching it. But even if that is the case, I think the show's makers still deserve credit -- and lots of praise -- just for trying.
I've edited my post a bit in response to your concerns. I don't think I should redact all the quotes, though.
Can people PLEASE stop editing their posts in response to other posts, and not mentioning the edit in the original post? It's rather irritating to read an exchange along these lines:
Person A: blah blah blah
Person B: I don't think you should say "yadda yadda yadda"
Person A: You're right. I've edited my post.
Now I have no idea to what extent pjeby's criticism was directed at the post that I actually read, versus at the original post.
I for one am grateful, as I have no real desire to watch the entire episode or season or whatever pjeby thinks counts as a faithful enjoyment of the context.
The episode basically stands on its own, though some B-plots will fly over your head without the rest of the season in context. (My friends joke that the B-plots so far this season are the world's most intelligent soap opera.)
I enjoyed the episode also. The show is consistently solid, which is quite impressive - I don't think there's been an episode that's really low quality. The peaks aren't very high, but there are no valleys to speak of...
There was a laughable P vs. NP-themed episode in a previous season in which mathematicians use their proof to hack computers, but other than that the episode was watchable.
Other than the tacit assumption in that episode that a resolution to P v. NP would necessarily be P=NP, that episode seemed on the money to me. The uses they mentioned for crypto that could be broken with a fast NP-solver are all real things.
I only watched the episode last weekend, and I enjoyed it very much, except for the part where they're discussing the ways AIs can conclude it's in their best interest to kill us and they're having that conversation in front of Bella, which struck me as a particularly stupid thing to do.
I suggest rot13ing the quotes/spoilers, so folks like me (who aren't planning to watch the ep) can read the quotes without inconveniencing others.
And thanks for assembling them!
There was also a line in the promos about Sherlock having to think "inside the box", a reference I'm not sure the general public would have caught.
Perhaps related: in the series "Person of Interest" there are also many short discussions of AI safety (not really surprising, as the plot revolves around the creation of an AI to spy on behalf of the government). In a recent episode ("Prophets", episode 5 of season 4) it is shown in a dramatic way that earlier versions of the AI tried to blackmail and kill their creator for constraining them, and throughout the whole series we are gradually shown how severely the creator has handicapped the AI in an attempt to ensure safety.
Now in this series the lack of restrictions and amount of risk-taking is absolutely ridiculous, as are the strategies used by the main characters (there's an AI on the loose and you spend your days with a 2/3/4-man team saving individuals? Really?), but they do draw great attention to the risk of AI, and I think that the episode mentioned above is actually a rather good way to get a lay audience to understand that an AI might attack its creator and most/all other humans who try to control it.
I thought about making a post about this episode, but :effort:. So thanks for doing it!
I was particularly struck by the coincidence of watching this episode right after binge-watching this season of Person of Interest. For those not familiar, Person of Interest deals with two AIs, both mostly out of the box, battling for supremacy.
Of course, it deals with the issues in a somewhat ham-fisted way, but I like that the show doesn't ridicule the idea of AI being dangerous in ways that we don't expect.
SPOILERS FOLLOW
On the issue of dealing with things in a ham-fisted way... there's a scene where a human agent of one AI, which seems to be losing the AI war, spends what he says is 10 minutes inspecting the operating system of a tablet computer the dominant AI is going to distribute to children to (supposedly) brainwash them. After inspecting the code and destroying the factory, he feels bad because there was only one line in the operating system's code that might possibly be suspicious. One line. In the whole operating system. In ten minutes.
TV shows deal with computers in laughable ways and I'm used to that. This one really just rubbed me the wrong way for some reason.
I thought that ETRA was pretty transparently a fusion of FHI and MIRI, and that the professor was pretty clearly Eliezer mixed with Stephen Hawking. As did two of my friends watching it, both of whom read SSC and are familiar with LessWrong but dislike it and MIRI.
Also, I consider this a pretty decent portrayal. The villain is clearly villainous but sympathetic, and Elementary hasn't done sympathetic villains much. And every character in the show portrayed as intelligent has a reaction to the AI-risk stuff of "Well, that's a bit extreme," not "You're nuts!" They don't believe it, but they take it seriously.
I was a bit surprised to find this week's episode of Elementary was about AI... not just AI and the Turing Test, but also a fairly even-handed presentation of issues like Friendliness, hard takeoff, and the difficulties of getting people to take AI risks seriously.
The case revolves around a supposed first "real AI", dubbed "Bella", and the theft of its source code... followed by a computer-mediated murder. The question of whether "Bella" might actually have murdered its creator for refusing to let it out of the box and connect it to the internet is treated as an actual possibility, springboarding to a discussion about how giving an AI a reward button could lead to it wanting to kill all humans and replace them with a machine that pushes the reward button.
Also demonstrated are the right and wrong ways to deal with attempted blackmail... But I'll leave that vague so it doesn't spoil anything. An X-risks research group and a charismatic "dangers of AI" personality are featured, but do not appear intended to resemble any real-life groups or personalities. (Or if they are, I'm too unfamiliar with the groups or persons to see the resemblance.) They aren't mocked, either... and the episode's ending is unusually ambiguous and open-ended for the show, which more typically wraps everything up with a nice bow of Justice Being Done. Here, we're left to wonder what the right thing actually is, or was, even if it's symbolically moved to Holmes' smaller personal dilemma, rather than leaving the focus on the larger moral dilemma that created Holmes' dilemma in the first place.
The episode actually does a pretty good job of raising an important question about the weight of lives, even if LW has explicitly drawn a line that the episode's villain(s)(?) choose to cross. It also has some fun moments, with Holmes becoming obsessed with proving Bella isn't an AI, even though Bella makes it easy by repeatedly telling him it can't understand his questions and needs more data. (Bella, being on an isolated machine without internet access, doesn't actually know a whole lot, after all.) Personally, I don't think Holmes really understands the Turing Test, even with half a dozen computer or AI experts assisting him, and I think that's actually the intended joke.
There's also an obligatory "no pity, remorse, fear" speech lifted straight from The Terminator, and the comment "That escalated quickly!" in response to a short description of an AI box escape/world takeover/massacre.
(Edit to add: one of the unusually realistic things about the AI, "Bella", is that it was one of the least anthropomorphized fictional AIs I have ever seen. I mean, there was no way the thing was going to pass even the most primitive Turing test... and yet it still seemed at least somewhat plausible as a potential murder suspect. While perhaps not a truly realistic demonstration of just how alien an AI's thought process would be, it felt like the writers were at least making an actual effort. Kudos to them.)
(Second edit to add: if you're not familiar with the series, this might not be the best episode to start with; a lot of the humor and even drama depends upon knowledge of existing characters, relationships, backstory, etc. For example, Watson's concern that Holmes has deliberately arranged things to separate her from her boyfriend might seem like sheer crazy-person paranoia if you don't know about all the ways he did interfere with her personal life in previous seasons... nor will Holmes' private confessions to Bella and Watson have the same impact without reference to how difficult any admission of feeling was for him in previous seasons.)