jimrandomh comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM




Comment author: timtyler 01 April 2011 09:22:40AM *  1 point [-]

Compare that with this Yudkowsky quote from 2005:

And if Novamente should ever cross the finish line, we all die.

This looks like a rather different probability estimate. It seems to me to be a highly overconfident one.

I think the best way to model this is as FUD: Not Invented Here, a primate ego battle.

If this is how researchers deal with each other at this early stage, perhaps rough times lie ahead.

Comment author: jimrandomh 01 April 2011 02:39:43PM *  4 points [-]

A poll at the 2008 global catastrophic risks conference put the existential risk of machine intelligence at 5%

Compare that with this Yudkowsky quote from 2005: And if Novamente should ever cross the finish line, we all die

This looks like a rather different probability estimate. It seems to me to be a highly overconfident one.

They're probabilities for two different things. The 5% estimate is for P(AIisCreated&AIisUnfriendly), while Yudkowsky's estimate is for P(AIisUnfriendly|AIisCreated&NovamenteFinishesFirst).
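The distinction can be made concrete with a minimal numerical sketch. The probabilities below are made-up illustrative values, not figures from the thread: they show that a small joint probability (the 5% conference estimate) is arithmetically consistent with a near-certain conditional probability (Yudkowsky's claim about the specific Novamente scenario).

```python
# P(AI is created) -- hypothetical value for illustration
p_created = 0.5

# P(unfriendly | created), averaged over all possible AI projects
# -- hypothetical value chosen so the joint works out to 5%
p_unfriendly_given_created = 0.1

# The conference poll's quantity: the joint probability
# P(AIisCreated & AIisUnfriendly)
p_joint = p_created * p_unfriendly_given_created  # 0.05, i.e. 5%

# Yudkowsky's quantity conditions on one specific project winning:
# P(AIisUnfriendly | AIisCreated & NovamenteFinishesFirst)
# Nothing stops this from being near 1 while the joint stays at 5%,
# because it ranges over a much narrower set of outcomes.
p_unfriendly_given_novamente_first = 0.99  # hypothetical

print(p_joint)                             # joint estimate
print(p_unfriendly_given_novamente_first)  # conditional estimate
```

So the two numbers can both be held without contradiction; they answer different questions.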