Jordan comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: Jordan 01 November 2010 09:02:18PM 3 points [-]

> There is a large, continuous spectrum between making an AI and hoping it works out okay, and waiting for a formal proof of friendliness.

Exactly this!

I think risk has a U-shaped response to rigor: too little rigor ensures disaster, but too much rigor ensures that a low-rigor alternative is completed first.

When discussing the correct course of action, I think it is critical to consider not just the probability of success but also the time to success. So far as I've seen, arguments in favor of SIAI's course of action have completely ignored this essential aspect of the decision problem.
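The U-shaped tradeoff above can be sketched with a toy model. This is purely illustrative: the exponential curves, the rate parameters, and the `total_risk` function are my own assumptions, not anything drawn from SIAI's or Goertzel's analysis. It just shows how a falling "failure from haste" risk and a rising "preempted by a low-rigor competitor" risk combine into a U-shape with an interior minimum.

```python
import math

def total_risk(rigor, hazard=0.5, race_rate=0.3):
    """Toy total-risk curve as a function of rigor invested.

    Assumptions (illustrative only):
    - risk of your own project failing falls exponentially with rigor;
    - risk of a low-rigor competitor finishing first rises with the
      extra time that rigor costs you.
    """
    p_own_failure = math.exp(-hazard * rigor)        # too little rigor -> disaster
    p_preempted = 1 - math.exp(-race_rate * rigor)   # too much rigor -> scooped
    return p_own_failure + p_preempted

# Scan integer rigor levels and find the interior minimum of the U-curve.
risks = [(r, total_risk(r)) for r in range(0, 11)]
best = min(risks, key=lambda t: t[1])
for r, v in risks:
    print(f"rigor={r:2d}  risk={v:.3f}")
print("minimum at rigor =", best[0])
```

With these (arbitrary) rates the curve bottoms out at an intermediate rigor level: both endpoints, rushing ahead and holding out for a proof, carry more total risk than the middle.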