FrankAdamek comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM




Comment author: FrankAdamek 30 October 2010 01:38:59PM 1 point

I agree that that risk exists as well, but much of SIAI's efforts revolve around increasing discussion of the risks of AGI, not just holding back their own efforts. Slowing down other efforts through awareness of the dangers is a factor that should be considered.

Also, discussions of caution may increase the number of "desirable organizations" working to develop AI. In terms of your model, such discussion could turn a black-hat organization into a smiley-faced one. No one is going to release an AI that they actually think is going to wipe out humanity. What's more, not every well-intentioned organization is one we would want building AGI. While certain organizations are more likely to be scrupulous in their development, the risk of well-intentioned error is probably the largest one.

In addition, one should consider the extent to which Friendliness can be developed in parallel with AGI, not just something added on at the end of the process. If we assume that no one is currently close to AGI (a fair belief, I think), then now is a fantastic time to help support the development of that theory. If FAI can be developed before anyone can implement AGI, then humanity is in good shape. If it's easy to add FAI to a project, or if knowing about workable FAI would not help a group with the problem of AGI, then the solution can be released widely for anyone to incorporate into their project. SIAI's goal is not to be the ones to implement the first superintelligence, but just to make sure that the first one is Friendly.

Comment author: timtyler 31 October 2010 10:16:24PM 3 points

Also, discussions of caution may increase the number of "desirable organizations" working to develop AI. In terms of your model, such discussion could turn a black-hat organization into a smiley-faced one.

That seems like the (dubious) "engineers are incompetent and a bug takes over the world" scenario.

I think a much more obvious concern is the "engineers successfully build the machine to do what it is told" scenario - where the machine helps its builders and sponsors, but all the other humans in the world, not so much.

Comment author: timtyler 30 October 2010 02:40:29PM 5 points

SIAI's goal is not to be the ones to implement the first superintelligence, but just to make sure that the first one is Friendly.

Not terribly long ago, that wasn't true:

"The Singularity Institute was founded on the theory that in order to get a Friendly artificial intelligence, someone has got to build one. So, we’re just going to have an organization whose mission is: build a Friendly AI. That’s us."

Has there been a memo?