timtyler comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: timtyler 02 January 2011 10:20:48AM 0 points

On Ben's blog post, I noted that a poll at the 2008 Global Catastrophic Risks conference put the existential risk from machine intelligence at 5% - and that the attendees probably held some of the highest risk estimates of anyone on the planet, since they were a self-selected group attending a conference on the topic.

"Molecular nanotech weapons" also get 5%. Presumably those two figures overlap substantially - even though in the paper they seem to be added together!

Comment author: timtyler 01 April 2011 09:22:40AM 1 point

Compare that with this Yudkowsky quote from 2005:

And if Novamente should ever cross the finish line, we all die.

This looks like a rather different probability estimate. It seems to me to be a highly overconfident one.

I think the best way to model this is as FUD. Not Invented Here. A primate ego battle.

If this is how researchers deal with each other at this early stage, perhaps rough times lie ahead.

Comment author: jimrandomh 01 April 2011 02:39:43PM 4 points

A poll at the 2008 global catastrophic risks conference put the existential risk of machine intelligence at 5%

Compare this with this Yudkowsky quote from 2005: And if Novamente should ever cross the finish line, we all die

This looks like a rather different probability estimate. It seems to me to be a highly overconfident one.

They're probabilities for two different things. The 5% estimate is for P(AIisCreated&AIisUnfriendly), while Yudkowsky's estimate is for P(AIisUnfriendly|AIisCreated&NovamenteFinishesFirst).
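The distinction can be made concrete with the chain rule: a joint probability equals the probability of the condition times the conditional probability. The sketch below uses purely illustrative numbers (none of them are anyone's actual estimates) to show that a small joint figure like 5% is arithmetically compatible with a much larger conditional one:

```python
# Toy numbers, chosen only for illustration - not anyone's real estimates.
# The conference poll's 5% is a joint probability; Yudkowsky's claim is a
# conditional one, so the two need not conflict.

p_created = 0.5                    # assumed P(AI is created)
p_unfriendly_given_created = 0.1   # assumed P(unfriendly | created)

# Chain rule: P(created & unfriendly) = P(created) * P(unfriendly | created)
p_joint = p_created * p_unfriendly_given_created
print(p_joint)  # 0.05, i.e. a "5%" joint estimate

# Meanwhile P(unfriendly | created & NovamenteFinishesFirst) could be near 1
# without contradicting the joint figure, so long as the conditioning event
# (Novamente finishing first) is itself assigned low probability.
```

In other words, the two statements condition on different events, so comparing them directly says nothing about overconfidence either way.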

Comment author: TheOtherDave 01 April 2011 03:01:25PM 1 point

"perhaps"?

Comment author: timtyler 02 April 2011 07:44:02AM 0 points

Well, a tendency towards mud-slinging might be counter-balanced by a desire to appear moral. Using FUD against competitors is usually regarded as a pretty low marketing strategy. Perhaps most of the mud-slinging can be delegated to anonymous minions, though.

Comment author: TheOtherDave 02 April 2011 03:33:31PM 1 point

There's going to be a lot of mud-slinging in this space.

More generally, there's going to be a lot of primate tribal politics in this space. After all, not only does it have all the usual trappings of academic argument, it is also predicated on some pretty fundamental challenges to where power comes from and how it propagates.