RomeoStevens comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong
Could you expand this a little further? I'm not afraid of amoral, fast-thinking, miniature Isaac Newtons unless they are a substantial EDIT: number (>1000 at the very least) or are not known about by the relevant human policy-makers.
ETA: what it used to say at the edit was "fraction of the human population (>1% at the very least)". TheOtherDave corrected my mis-estimate.
Have you read That Alien Message? http://lesswrong.com/lw/qk/that_alien_message/
TheOtherDave showed that I mis-estimated the critical number. That said, there are several differences between my hypothetical and the story.
1) Most importantly, the difference between the average human and Newton is smaller than the difference portrayed between the aliens and the humans.
2) There is a huge population of humans in the story, and I expressly limited my non-concern to much smaller populations.
3) The super-intelligences in the story do not appear to be known about by the relevant policy-makers (i.e. the senior military officials). Not that it would matter in the story, but it seems likely to matter if the population of supers were much smaller.
I'm not sure I see the point of the details you mention. The main thrust is that humans within the normal range, given a million-fold speedup (as silicon allows) and unlimited collaboration, would constitute a de facto superintelligence.
The humans were not within the current normal range; the average was explicitly higher. And I think that the aliens' average intelligence was lower than the current human average, although the story is not explicit on that point. And there were billions of super-humans.
Let me put it this way: Google is smarter, wealthier, and more knowledgeable than I am. But even if everyone at Google thought millions of times faster than everyone else, I still wouldn't worry about them taking over the world, unless nobody else important knew about this capacity.
AI is a serious risk, but let's not underestimate how hard it is to be as capable as a Straumli Perversion.
The higher average does not mean that they were not within the normal range. They are not individually superhuman.