khafra comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: timtyler 30 October 2010 11:11:43AM 3 points

Some more grandmas dying would be "acceptable" damage. However, that isn't the problem.

The problem is this: The risks of caution.

1-line summary: if the good guys delay their projects to make them safer, the bad guys are more likely to win.

The video's "abstract":

It is commonly thought that caution in the initial development of machine intelligence is associated with better outcomes - and that practices like extensive testing, sandboxes, and provable correctness will help to produce safe and beneficial synthetic intelligent agents.

In this video, I cast doubt on that idea, by exhibiting a model in which delays caused by caution can lead to much poorer outcomes.

Comment author: khafra 30 October 2010 01:30:43PM 4 points

LW's own rwallace wrote on the subject a while back.