
Viliam_Bur comments on Thinking soberly about the context and consequences of Friendly AI - Less Wrong Discussion

Post author: Mitchell_Porter 16 October 2012 04:33AM




Comment author: Viliam_Bur 16 October 2012 06:37:11PM 1 point

Whether recursive self-improvement is possible is a property of the universe, as are related details, such as how much additional time and energy each further increase in intelligence requires.

The answer to this question can make some outcomes more likely than others. For example, if recursive self-improvement is possible, and at some level a huge increase in intelligence can be had quickly and relatively cheaply, then one center of power could easily overpower the others. This might hold even in a situation where every super-agent continuously reads and analyzes the source code of all the other super-agents: the increased intelligence could let one of them make changes that seem harmless to the rest.

On the other hand, the multiple-centers-of-power scenario is more likely if humankind spreads to many planets and there is some natural limit on how high an intelligence can rise before it collapses or starts requiring insane amounts of energy; in that case no single super-agent could become smart enough to conquer the rest of the world.
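The hinge of both scenarios, how much additional effort each further increase in intelligence costs, can be made concrete with a toy model. This is my own illustrative sketch, not anything from the comment: the update rule and the exponent `p` are assumptions standing in for "how hard the universe makes self-improvement".

```python
def grow(p, steps=30, start=1.0, rate=0.1):
    """Intelligence after `steps` rounds of self-improvement.

    Each round adds rate * intelligence**p. With p > 1, every gain makes
    the next gain cheaper (runaway takeoff); with p < 1, each gain is
    relatively harder to buy (growth that levels off).
    The functional form is an assumption for illustration only.
    """
    x = start
    for _ in range(steps):
        x += rate * x ** p
    return x

# Compounding returns: one agent can pull arbitrarily far ahead.
runaway = grow(p=1.5)
# Diminishing returns: a natural ceiling, leaving room for many centers of power.
leveling = grow(p=0.5)
```

Under these assumptions, the `p = 1.5` run ends up many orders of magnitude above the `p = 0.5` run after the same thirty steps, which is roughly the difference between the single-winner scenario and the multipolar one.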