roystgnr comments on How can I reduce existential risk from AI? - Less Wrong

46 Post author: lukeprog 13 November 2012 09:56PM


Comment author: roystgnr 13 November 2012 11:35:58PM 8 points [-]

You're probably right about safety promotion, but calling it "clear" may be an overstatement. A possible counterargument:

Existing AI researchers are probably predisposed to think that their AGI will naturally be both safe and powerful. If they are exposed to arguments that it will instead naturally be both dangerous and very powerful (the latter half of the argument can't easily be omitted; the potential danger stems in part from the high potential power), wouldn't confirmation bias naturally lead them to disbelieve the preconception-contradicting "dangerous" half of the argument while believing the preconception-confirming "very powerful" half?

Half of the AI researcher interviews posted to LessWrong appear to be with people who believe that "Garbage In, Garbage Out" only applies to arithmetic, not to morality. If the end result of persuasion is that as many as half of them have that mistake corrected while the remainder are merely convinced that they should work even harder, that may not be a net win.

Comment author: danieldewey 16 November 2012 09:48:06AM 3 points [-]

believe that "Garbage In, Garbage Out" only applies to arithmetic, not to morality

Catchy! Mind if I steal a derivative of this?

Comment author: roystgnr 19 November 2012 11:22:31PM 7 points [-]

I've lost all disrespect for the "stealing" of generic ideas, and roughly 25% of the intended purpose of my personal quotes file is so that I can "rob everyone blind" if I ever try writing fiction again. Any aphorisms I come up with myself are free to be folded, spindled, and mutilated. I try to cite originators when format and memory permit, and receiving the same favor would be nice, but I certainly wouldn't mind seeing my ideas spread completely unattributed either.

Comment author: [deleted] 20 November 2012 06:29:43PM 0 points [-]

I've lost all disrespect for the "stealing" of generic ideas

Relevant TED talk

Comment author: danieldewey 20 November 2012 11:58:01AM 0 points [-]

Noted; thanks.

Comment author: Alex_Altair 14 November 2012 12:01:38AM 1 point [-]

Yeah, quite possibly. But I wouldn't want people to run into analysis paralysis; I still think safety promotion is very likely to be a great way to reduce x-risk.

Comment author: [deleted] 20 November 2012 06:38:18PM *  0 points [-]

Half of the AI researcher interviews posted to LessWrong appear to be with people who believe that "Garbage In, Garbage Out" only applies to arithmetic, not to morality.

Does 'garbage in, garbage out' apply to morality, or not?

Comment author: MugaSofer 16 November 2012 01:39:15PM 0 points [-]

Upvoted for the "Garbage in, Garbage Out" line.