Cthulhoo comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Or, frankly, recursion at all. Say we can't make anything smarter than humans... but we can make them reliably smart, and smaller than humans. AGI bots as smart as our average "brilliant" guy with no morals and the ability to accelerate as only solid-state equipment can... is frankly pretty damned scary all on its own.
(You could also count, under some auspices, "intelligence explosion" as meaning "an explosion in the number of intelligences". Imagine if for every human being the AGIs had 10,000 minds. Exactly what impact would the average human's mental contributions have? What, then, of 'intellectual labor'? Or manual labor?)
Good point.
In addition, supposing the AI is slightly smarter than humans and can easily replicate itself, Black Team effects could be relevant (just a hypothesis, really, but still interesting to consider).