TRIZ-Ingenieur comments on Superintelligence Reading Group 2: Forecasting AI

Post author: KatjaGrace 23 September 2014 01:00AM 10 points

Comment author: TRIZ-Ingenieur 25 September 2014 12:32:31AM 1 point

If a child does not receive love, is not allowed to play, gets only instructions, and is beaten, then a few years later you will have a traumatized, paranoid human being: unable to love, nihilistic, and dangerous. A socialization like this could be the outcome of a "successful" self-improving AI project. If humanity ends up developing such an antagonistic AI, it could end in a final world war. The nihilistic, paranoid AI might find a lose-lose strategy favorable and destroy our world.

That we have not received any sign of extraterrestrial intelligence suggests that no other intelligent civilization has managed to survive for a million years. Why they collapsed is pure speculation, but an evil AI could speed things up.

Comment author: Liso 25 September 2014 03:47:51AM 0 points

But why would the evil AI collapse after the apocalypse?

Comment author: TRIZ-Ingenieur 25 September 2014 06:42:41AM 0 points

It would collapse within the apocalypse itself. It might trigger aggressive actions knowing that it will be eradicated too; it wants to see the other side lose. Dying is not connected with fear for it. If it can prevent the galaxy from being colonised by a good AI, it prefers a perfect apocalypse.

Debating the aftermath of an apocalypse gets too speculative for me. I wanted to point out that current projects have no intention of creating a balanced, good AI character. Projects are looking for fast success, and an evil, paranoid AI might be the end result.