eli_sennesh comments on MIRI strategy - Less Wrong

5 Post author: ColonelMustard 28 October 2013 03:33PM




Comment author: [deleted] 10 November 2013 02:46:26PM 1 point [-]

A nastier issue: the harder argument is convincing people that UFAI is an avoidable risk. If you can't convince people they have a realistic chance of winning on this issue (i.e., one they would gamble on, given the possible benefits of FAI), then it doesn't matter how informed they are.

See: Juergen Schmidhuber's interview on this very website, where he basically says, "We're damn near AI in my lab, and yes, it is a rational optimization process," followed by, "We see no way to prevent the paper-clipping of humanity whatsoever, so we stopped giving a damn and just focus on doing our research."