derekz comments on Changing accepted public opinion and Skynet - Less Wrong

15 [deleted] 22 May 2009 11:05AM


Comment author: derekz 22 May 2009 04:19:16PM 1 point [-]

If dark arts are allowed, it certainly seems like hundreds of millions of dollars spent on AI-horror movies like Terminator are a pretty good start. Barring an actual demonstration of progress toward AI, I wonder what could actually be more effective...

Sometime reasonably soon, getting real actual physical robots into the uncanny valley could start to help. Letting imagination run free, I imagine a stage show with some kind of spookily-competent robot... something as simple as competent control of real (not CGI) articulated robots would be rather scary... for example, suppose that this robot does something shocking like physically taking a human confederate and nailing him to a cross, blood and all. Or something less gross, heh.

Comment deleted 22 May 2009 06:00:16PM *  [-]
Comment author: Z_M_Davis 22 May 2009 10:47:25PM *  2 points [-]

Interesting. I wouldn't want to rule out the "dark arts", i.e. highly non-rational methods of persuasion.

...

"Needless to say, those who come to me and offer their unsolicited advice {to lie} do not appear to be expert liars. For one thing, a majority of them don't seem to find anything odd about floating their proposals in publicly archived, Google-indexed mailing lists." ---Eliezer Yudkowsky

Comment deleted 23 May 2009 01:19:49AM [-]
Comment author: Vladimir_Nesov 23 May 2009 11:08:36AM *  0 points [-]

What's "rational persuasion", anyway? Is a person supposed to already possess an ability to change their mind according to an agreed-to-be-safe protocol? Teaching rationality and then giving your complex case would be more natural, but isn't necessarily an option.

The problem is that it's possible to persuade that person of many wrong things — that the person isn't safe from falsity. But if whatever action you are performing causes them to get closer to the truth, it's a positive thing to do in their situation, one selected among the many negative things that could be done and that happen habitually.

Comment author: orthonormal 25 May 2009 04:35:50PM 0 points [-]

You know, sci-fi that took the realities of mindspace somewhat seriously could be helpful in raising the sanity waterline on AGI; a well-imagined clash between a Friendly AI and a Paperclipper-type optimizer (or just a short story about a Paperclipper taking over) might at least cause readers to rethink the Mind Projection Fallacy.

Comment author: Vladimir_Nesov 25 May 2009 04:44:15PM 1 point [-]

Won't work; the clash will only happen in their minds (you don't fight a war if you know you'll lose; you can just proceed directly to the final truce agreement). Eliezer's Three Worlds Collide is a good middle ground, with non-anthropomorphic aliens of human-level intelligence allowing it to describe a familiar kind of action.

Comment author: orthonormal 26 May 2009 01:26:42AM 1 point [-]

IAWYC, but one ingredient of sci-fi is the willingness to sacrifice some true implications if it makes for a better story. It would be highly unlikely for a FAI and a Paperclipper to FOOM at the same moment with comparable optimization powers such that each thinks it gains by battling the other, and downright implausible for a battle between them to occur in a manner and at a pace comprehensible to the human onlookers; but you could make some compelling and enlightening rationalist fiction with those two implausibilities granted.

Of course, other scenarios can come into play. Has anyone even done a good Paperclipper-takeover story? I know there's sci-fi on 'grey goo', but that doesn't serve this purpose: readers have an easy time imagining such a calamity being caused by virus-like unintelligent nanotech, but often don't think a superhuman intelligence could be so devoted to something of "no real value".

Comment deleted 01 June 2009 11:59:14AM [-]
Comment author: orthonormal 01 June 2009 03:58:50PM 0 points [-]

That's... the opposite of what I was looking for. It's pretty bad writing, and it's got the Mind Projection Fallacy written all over it. (Skynet is unhappy and worrying about the meaning of good and evil?)

Comment deleted 01 June 2009 04:03:59PM *  [-]
Comment author: orthonormal 01 June 2009 07:03:15PM 1 point [-]

Ironically, a line from the original Terminator movie is a pretty good intuition pump for Powerful Optimization Processes:

It can't be bargained with. It can't be 'reasoned' with. It doesn't feel pity or remorse or fear and it absolutely will not stop, ever, until [it achieves its goal].

Comment author: glenra 25 May 2009 02:17:43AM 0 points [-]

Robotics is not yet advanced enough for a robot to look scary, though military robotics is getting there fast.

Shakey the Robot was funded by DARPA; according to my dad, the grant proposals were usually written in such a way as to imply robot soldiers were right around the corner...in 1967. So it only took about 40 years.