
Risto_Saarelma comments on Open Thread, May 12 - 18, 2014 - Less Wrong Discussion

5 Post author: eggman 12 May 2014 08:16AM



Comment author: Risto_Saarelma 14 May 2014 08:57:30AM 5 points

The Person of Interest TV show is apparently getting pretty explicit about real-world AGI concerns.

With Finch trying to build a machine that can predict violent, aberrant human behavior, he finally realized that the only solution was to build something at least as smart as a human. And that’s the moment we’re in right now in history. Forget the show. We are currently engaged in an arms race — a very real one. But it’s being conducted not by governments, as in our show, but by private corporations to build an AGI — to build artificial intelligence roughly as intelligent as a human that can be industrialized and used toward specific applications.

...I’m pretty confident that we’re going to see the emergence of AGI in the next 10 years. We have friends and sources within Silicon Valley — there is currently a headlong rush and race between a couple of very rich people to try to solve this problem. Maybe it will even happen in a way that no one knows about; that’s the premise we take for our show. But we thought it would be a fun idea that the Manhattan Project of our era — which is preventing nuclear terrorism, that’s the quiet thing that people have been diligently working on for 10 years — that’s the subtext of the whole show.

They're still running the privacy-versus-data-mining narrative, and not yet exploring what might happen if humans could be cut out of the general research and industry loop, but they seem very much on board with the idea that an AGI is possible very soon, would have a potentially massive societal impact, and would probably be inimical to humans by default.

Comment author: Douglas_Knight 15 May 2014 06:00:03PM 1 point

One thing in the show that I see very rarely outside of LW is the AI taking over a person.

Comment author: TylerJay 15 May 2014 05:10:25AM 0 points

So I watched the first episode a while back, and it seemed like they have an AI that models the world so well it knows what's going to happen and who is involved. Maybe I missed something, but if it can tell what's going to happen, why can't it tell the difference between the person responsible for the bad thing and the victim?

Comment author: MrMind 15 May 2014 07:46:19AM 0 points

I feel there's someone really competent behind the show, because your concern is addressed.

Spoiler alert (not too much, but still): GUR TBBQ GUVAT VF GUNG VG PNA. UBJRIRE SVAPU AB YBATRE PBAGEBYF GUR ZNPUVAR, NAQ AB YBATRE PNA PBZZHAVPNGR JVGU VG. FB UR YRSG N IREL GVAL ONPXQBBE SBE UVZFRYS, V.R. GUR FFA UR ERPRVIRF ERTHYNEYL.