asparisi comments on Imposing FAI - Less Wrong Discussion

3 Post author: asparisi 17 May 2012 09:24PM

Comments (8)

Comment author: asparisi 17 May 2012 10:05:01PM 2 points

Huh. Seeing this answer twice, I can't help but think that the standard strategy for any UFAI, then, is first to convince you that it is an FAI, and then to convince you that there is another UFAI "almost ready" somewhere.

Heck, if it can do #2, it might be able to skip #1 entirely, so long as it can argue that it is the less dangerous of the two.

Comment author: shminux 17 May 2012 10:13:27PM 9 points

That's probably why EY is so cautious about it and does not want any meaningful AGI research progress to happen until a "provably friendly AI" theory is developed. An admirable goal, though many remain skeptical of such an approach's odds of success, or even of the rationale behind it.