SodaPopinski comments on Superintelligence 12: Malignant failure modes

Post author: KatjaGrace 02 December 2014 02:02AM

Comment author: SodaPopinski 05 December 2014 07:18:00PM

On one hand, I think the world is already somewhat close to a singleton with regard to AI (obviously it is nowhere near a singleton with regard to most other things). Google has a huge fraction of the AI talent, and the US government has a huge fraction of the mathematics talent. Then there are Microsoft, FB, Baidu, and a few other big tech companies. But every time an independent AI company gains some traction, it seems to get bought out by the big guys. I think this is a good thing, as I believe the big guys will act in their own best interest, including their interest in preserving their own lives (i.e., not ending the world). Of course, if it is easy to make an AGI, then there is no hope anyway. But if it requires companies of Google's scale, then there is hope they will choose to avoid it.

Comment author: TRIZ-Ingenieur 07 December 2014 10:55:15AM

The "own best interest" in a winner- takes-all scenario is to create an eternal monopoly on everything. All levels of Maslow's pyramide of human needs will be served by goods and services supplied by this singleton.