Cthulhoo comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM




Comment author: Logos01 14 November 2011 01:42:59PM

There's no need for infinite recursion.

Or, frankly, recursion at all. Say we can't make anything smarter than humans... but we can make AGIs that are reliably that smart, and smaller than humans. AGI bots as smart as our average "brilliant" guy, with no morals and the ability to accelerate as only solid-state equipment can, are pretty damned scary all on their own.

(You could also, in some senses, read "intelligence explosion" as meaning "an explosion in the number of intelligences". Imagine if, for every human being, there were 10,000 AGI minds. What impact would the average human's mental contributions have then? What becomes of 'intellectual labor'? Or of manual labor?)

Comment author: Cthulhoo 14 November 2011 01:59:32PM

Good point.

In addition, supposing the AI is slightly smarter than humans and can easily replicate itself, Black Team effects could become relevant (just a hypothesis, really, but still interesting to consider).