Nick_Tarleton comments on My true rejection - Less Wrong

-16 Post author: dripgrind 14 July 2011 10:04PM


Comment author: Nick_Tarleton 14 July 2011 11:15:49PM 9 points

assassinate any researchers who look like they're on track to deploying an unFriendly AI, then destroy their labs and backups.

You need to think much more carefully about (a) the likely consequences of doing this (b) the likely consequences of appearing to be a person or organization that would do this.

See also.

Comment author: dripgrind 14 July 2011 11:34:58PM 0 points

Oh, I'm not saying that SIAI should do it openly. Just that, according to their belief system, they should sponsor false-flag cells who would do it (perhaps without knowing the master they truly serve). The absence of such false-flag cells indicates that SIAI aren't doing it - although their presence wouldn't prove that they were. That's the whole idea of a "false flag".

If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions. You know, "shut up and multiply", trillion specks, and all that.

Comment author: Nick_Tarleton 15 July 2011 12:12:07AM 6 points

Just that, according to their belief system

It seems to you that according to their belief system.

they should sponsor false-flag cells who would do it (perhaps without knowing the master they truly serve).

Given how obvious the motivation is, and the high frequency with which people independently conclude that SIAI should kill AI researchers, think about the consequences of anyone doing this for anyone actively worried about UFAI.

If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions.

Ethical injunctions are not separate values to be traded off against saving the world; they're policies you follow because it appears, all things considered, that following them has highest expected utility, even if in a single case you fallibly perceive that violating them would be good.

(If you didn't read the posts linked from that wiki page, you should.)