You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

palladias comments on TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers) - Less Wrong Discussion

26 Post author: pjeby 22 November 2014 07:51PM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comments (18)

You are viewing a single comment's thread. Show more comments above.

Comment author: lukeprog 23 November 2014 01:28:15AM *  7 points [-]

(This comment has been edited a bit in response to pjeby's comments below.)

WARNING: SPOILERS FOLLOW. You may want to enjoy the episode yourself before reading the below transcript excerpts.

Okay...

From the transcript, here's a bit about AI not doing what it's programmed to do:

Computer scientist: "[Our AI program named Bella] performs better than we expected her to."

Holmes: "Explain that."

Computer scientist: "A few weeks back, she made a request that can't be accounted for by her programming."

Holmes: "Impossible."

Holmes' assistant: "What's impossible? For the computer to ask for something?"

Holmes: "If it made a request, it did so because that's what it was programmed to do. He's claiming true machine intelligence. If he's correct in his claims, he has made a scientific breakthrough of the very highest order."

Another trope: At one point a young computer expert says "Everybody knows that one day intelligent machines are going to evolve to hate us."

Here's the bit about reward-channel takeover:

"What's the 'button-box' thing?"

"It's a scenario somebody blue-skyed at an AI conference. Imagine there's a computer that's been designed with a big red button on its side. The computer's been programmed to help solve problems, and every time it does a good job, its reward is that someone presses its button. We've programmed it to want that... so at first, the machine solves problems as fast as we can feed them to it. But over time, it starts to wonder if solving problems is really the most efficient way of getting its button pressed. Wouldn't it be better just to have someone standing there pressing its button all the time? Wouldn't it be even better to build another machine that could press its button faster than any human possibly could?"

"It's just a computer, it can't ask for that."

"Well, sure it can. If it can think, and it can connect itself to a network, well, theoretically, it could command over anything else that's hooked onto the same network. And once it starts thinking about all the things that might be a threat to the button-- number one on that list, us-- it's not hard to imagine it getting rid of the threat. I mean, we could be gone, all of us, just like that."

"That escalated quickly."

There's also a think tank called the Existential Threat Research Association (ETRA):

"[ETRA is] one of several institutions around the world which exists solely for the purpose of studying the myriad ways in which the human race can become extinct... and within this think tank, there is a small, but growing school of thought that holds that the single greatest threat to the human race... is artificial intelligence... Now, imagine their quandary. They have pinpointed a credible threat, but it sounds outlandish. The climate-change people, they can point to disastrous examples. The bio-weapons alarmists, they have a compelling narrative to weave. Even the giant comet people sound more serious than the enemies of AI.

"So... these are the people at ETRA who think AI is a threat? You think one of them killed Edwin Borstein, one of the top engineers in the field, and made it look like Bella did it, all so they could draw attention to their cause?

"A small-scale incident, something to get the media chattering."

One ETRA person is suspiciously Stephen Hawking-esque:

"Isaac Pike is a professor of computer science. He's also a vocal alarmist when it comes to artificial intelligence. Pike was born with spina bifida. Been confined to a wheelchair his entire life. For obvious reasons, he could not have executed the plan... but his student..."

NOW SERIOUSLY, SPOILERS ALERT...

Isaac Pike ends up being (probably) responsible for murdering Edwin Borstein via a computer virus installed on Bella. He says: "You're talking about nothing less than the survival of the species. Surely that's worth compromising one's values for?"

Comment author: palladias 24 November 2014 04:27:18PM 0 points [-]

I suggest rot13ing the quotes/spoilers, so folks like me (who aren't planning to watch the ep) can read the quotes without inconveniencing others.

And thanks for assembling them!