HalMorris comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong

13 Post author: sbenthall 27 December 2012 04:32AM


Comment author: HalMorris 27 December 2012 03:34:40PM 3 points [-]

Deep Blue is far, far from being AGI, and is not a conceivable threat to the future of humanity, but its success suggests that implementation of combat strategy within a domain of imaginable possibilities is a far easier problem than AGI.

In combat, speed may well be the most important advantage of all: speed in getting a projectile or an attacking column to its destination, and speed in sizing up a situation so that a strategy can be chosen. And speed is the most trivial thing for AI to deliver.

In general, it is far easier to destroy than to create.

So I wouldn't dismiss an A-(not-so)G-I as a threat just because it is poor at music composition, or at true deep empathy(!), or even at something potentially useful like biology or chemistry. That is, it could be quite specialized, achieving only a tiny fraction of full AGI, and still be quite a competent threat, capable of causing a singularity that is (merely) destructive.