earthwormchuck163 comments on Imposing FAI - Less Wrong Discussion

3 Post author: asparisi 17 May 2012 09:24PM

Comment author: earthwormchuck163 17 May 2012 09:52:32PM 6 points [-]

The standard answer is that there is such a strong "first mover advantage" for self-improving AIs that it only matters which comes first: if an FAI comes first, it would be enough to stop the creation of uFAIs (and vice versa). This is addressed at some length in Eliezer's paper Artificial Intelligence as a Positive and Negative Factor in Global Risk.

I don't find this answer totally satisfying. It seems like an awfully detailed prediction to make in the absence of a technical theory of AGI.