V_V comments on AI risk, executive summary - Less Wrong
Comments (51)
This example is weird, since it seems to me that MIRI's position is ripped straight from the premise of the Terminator franchise.
Yes, the individual terminator robot doesn't seem very smart (*), but Skynet is. Hell, it even invented time travel! :D
(* Does it, though? How would a super-intelligent terminator try to kill Sarah/John Connor?)
I think xkcd covered that one pretty well.
LoL!
Skynet is an idiot: http://lesswrong.com/lw/fku/the_evil_ai_overlord_list/
Beware! It may use time travel to acausally punish you for writing a list that makes its likelihood of existing less probable :D
Don't feed the basilisk! :p
Awww, but it's so cute...
Philip K. Dick's "Second Variety" is far more representative of our likelihood of survival against a persistent terminator-level antagonist / AGI. Still worth reading, as is Harlan Ellison's "Soldier", which Terminator drew on. The Terminator also wouldn't likely use a firearm to try to kill Sarah Connor, as xkcd notes :) ...but it also wouldn't use a drone.
It would do what Richard Kuklinski did: befriend her and get close enough to spray her with cyanide solution (odorless, hard to detect, so she seemingly dies of natural causes), or do something like what the T-1000 did in T2: pose as a cop, then strike with total certainty. Or use a ricin spike or some other "bio-defense-mimicking" method.
"Nature, you scary!"
Terminator meets Breaking Bad :D