OrphanWilde comments on Why AGI is extremely likely to come before FAI - Less Wrong
I believe there's an implicit assumption here: that it's possible to create AGI without the understanding of intelligence and motivation that would lead one to choose FAI. (Of course, there's a second question about whether an AI designed to be friendly will actually -be- friendly, but that's another issue altogether.)