Capla comments on [Link] If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid them?

Post author: the-citizen 23 November 2014 10:30AM

Comment author: Capla 25 November 2014 03:33:09PM

This is something I think is neglected in thinking about friendly AI (in part because it isn't the relevant problem yet). Even if we had solved all of the problems of stable goal systems, there could still be trouble, depending on whose goals are implemented. If it's a fast take-off, whoever cracks recursive self-improvement first basically gets godlike powers (in the form of a genie that reshapes the world according to its owner's wishes), and gets to define the whole future of the expanding visible universe. There are a lot of institutions I do not trust to have the foresight to think "we can create utopia beyond anyone's wildest dreams" rather than defaulting to "we'll skewer the competition in the next quarter."
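Here's a toy numeric sketch of why fast take-off looks winner-take-all (the growth law, the parameters, and the 5% head start below are all my own illustrative assumptions, not anything established): if the rate of self-improvement scales superlinearly with current capability, the dynamics blow up in finite time, so whichever project starts even slightly ahead pulls away without bound.

```python
# Toy model of fast take-off dynamics. Everything here -- the growth law,
# the parameters, the 5% head start -- is an illustrative assumption,
# not a claim about how real AI development works.
#
# Assume an AI's rate of self-improvement scales superlinearly with its
# current capability: dC/dt = k * C**p with p > 1. Such an ODE blows up
# in finite time, so a small initial lead compounds into an enormous one.

def capability(c0: float, k: float, p: float, dt: float, steps: int) -> list[float]:
    """Euler-integrate dC/dt = k * C**p and return the trajectory."""
    c, trajectory = c0, []
    for _ in range(steps):
        c += k * (c ** p) * dt
        trajectory.append(c)
    return trajectory

if __name__ == "__main__":
    # Two otherwise identical projects; the leader starts 5% ahead.
    leader = capability(c0=1.05, k=0.1, p=2.0, dt=0.1, steps=95)
    rival  = capability(c0=1.00, k=0.1, p=2.0, dt=0.1, steps=95)
    for step in (0, 30, 60, 94):
        ratio = leader[step] / rival[step]
        print(f"step {step:2d}: leader/rival capability ratio = {ratio:.2f}")
```

Running it, the capability ratio starts at about 1.05 and climbs steeply as both trajectories approach the finite-time singularity of the ODE. The exact numbers are meaningless; the point is that under these assumptions a small lead compounds rather than staying constant.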

However, there are unsubstantiated rumors that Google has hired some ex-MIRI people to work on a project of some kind.