XiXiDu comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu, 12 August 2010 02:33PM (23 points)




Comment author: XiXiDu, 30 October 2010 09:09:14AM (4 points)

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) (Thanks Kevin)

SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that: progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race.

Of course, it's rarely clarified what "provably" really means. A mathematical proof can only be applied to the real world in the context of some assumptions, so maybe "provably non-dangerous AGI" means "an AGI whose safety is implied by mathematical arguments together with assumptions that are believed reasonable by some responsible party"? (Where the responsible party is perhaps "the overwhelming majority of scientists" … or SIAI itself?)

Please note that, although I don't agree with the Scary Idea, I do agree that the development of advanced AGI has significant risks associated with it.

Comment author: ciphergoth, 30 October 2010 09:40:15AM (1 point)

I have turned this into a top-level article, many thanks for the pointer!