earthwormchuck163 comments on Imposing FAI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The standard answer is that there is such a strong "first mover advantage" for self-improving AIs that it only matters which comes first: if an FAI comes first, it will be powerful enough to stop the creation of uFAIs (and vice versa). This is addressed at some length in Eliezer's paper Artificial Intelligence as a Positive and Negative Factor in Global Risk.
I don't find this answer totally satisfying. It seems like an awfully detailed prediction to make in the absence of a technical theory of AGI.