timtyler comments on asking an AI to make itself friendly - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
So: a sufficiently intelligent agent would be able to figure out what humans wanted. We still have to make it care about what we want - and also tell it how to peacefully resolve our differences when our wishes conflict.
Uh huh. So: it sounds as though you have your work cut out for you.