eli_sennesh comments on FAI Research Constraints and AGI Side Effects - Less Wrong

Post author: JustinShovelain 03 June 2015 07:25PM

Comment author: [deleted] 04 June 2015 03:18:03PM

This is basically the AI Foom scenario, where the moment an AGI is created, it will either kill us all or bring about utopia (or both).

The question is not "if". The questions are "how quickly" and "to what height". An AI capable of self-improving to world-destroying levels within moments is plainly unrealistic. An AI capable of self-improving to dangerous levels (viz., levels where it can start making humans do the dangerous work for it) in the weeks, months, or even years it would take a team of human operators to cross-examine its formally unspecified motivation engines for Friendliness is dangerously realistic.