Eliezer_Yudkowsky comments on Friendly AI Research and Taskification - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Nesov's reply sounds right to me. It doesn't handle goal stability automatically; it sweeps an issue that you confess you don't understand under the carpet and hopes the AI handles it, in a case where you haven't described an algorithm that you know will handle it, or why it would.
Thanks. I don't understand your reply yet (and about half of Nesov's points are also unparseable to me, as usual), but I will think about it more.