
KatjaGrace comments on Superintelligence 17: Multipolar scenarios - Less Wrong Discussion

4 Post author: KatjaGrace 06 January 2015 06:44AM


Comments (38)

Comment author: KatjaGrace 06 January 2015 06:46:57AM 3 points

Do you think a multipolar outcome is more or less likely than a singleton scenario?

Comment author: Alex123 06 January 2015 08:47:46PM 2 points

Unless somebody specifically pushes for a multipolar scenario, it is unlikely to arise spontaneously. Given our military-oriented psychology, any SI will first be considered for military purposes, including preventing others from achieving SI. However, a smart group of people or organizations might deliberately multiply instances of near-ready SIs in order to create competition, which could increase our chances of survival. Embedding SIs in a social structure might make them socially aware and tolerant, including tolerant of people.

Comment author: Sebastian_Hagen 08 January 2015 01:23:40AM *  2 points

Note that multipolar scenarios can arise well before we have the capability to implement an SI.

The standard Hansonian scenario starts with human-level "ems" (emulations). If from-scratch AI development turns out to be difficult, we may develop partial-uploading technology first, and a highly multipolar em scenario would be likely at that point. Of course, AI research would still be on the table in such a scenario, so it wouldn't necessarily be multipolar for very long.

Comment author: KatjaGrace 09 January 2015 02:48:41AM 1 point

Why would military purposes preclude multiple parties having artificial intelligence? You seem to be assuming that whoever achieves superintelligent machines first will have a decisive enough advantage to prevent anyone else from obtaining the technology. But if such machines are developed incrementally, that need not be so.