RomeoStevens comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM




Comment author: RomeoStevens 15 November 2011 02:40:45AM 1 point [-]

Have you read That Alien Message? http://lesswrong.com/lw/qk/that_alien_message/

Comment author: TimS 15 November 2011 04:07:05AM 0 points [-]

TheOtherDave showed that I mis-estimated the critical number. That said, there are several differences between my hypothetical and the story.

1) Most importantly, the difference between average human and Newton is smaller than the difference portrayed between aliens and humans.

2) There is a huge population of humans in the story, and I expressly limited my non-concern to much smaller populations.

3) The super-intelligences in the story do not appear to be known about by the relevant policy-makers (i.e. senior military officials). Not that it would matter in the story, but it seems likely to matter if the population of supers were much smaller.

Comment author: RomeoStevens 15 November 2011 04:42:56AM 0 points [-]

I'm not sure I see the point of the details you mention. The main thrust is that humans within the normal range, given a million-fold speedup (as silicon allows) and unlimited collaboration, would be a de facto superintelligence.

Comment author: TimS 15 November 2011 02:06:20PM *  0 points [-]

The humans were not within the current normal range. The average was explicitly higher. And I think that the aliens' average intelligence was lower than the current human average, although the story is not explicit on that point. And there were billions of super-humans.

Let me put it this way: Google is smarter, wealthier, and more knowledgeable than I. But even if everyone at Google thought millions of times faster than everyone else, I still wouldn't worry about them taking over the world. Unless nobody else important knew about this capacity.

AI is a serious risk, but let's not underestimate how hard it is to be as capable as a Straumli Perversion.

Comment author: RomeoStevens 15 November 2011 08:38:36PM 0 points [-]

The higher average does not mean that they were not within the normal range. They are not individually superhuman.