cousin_it comments on Friendly AI Research and Taskification - Less Wrong

Post author: multifoliaterose 14 December 2010 06:30AM




Comment author: cousin_it 14 December 2010 10:25:39AM, 4 points

The beginning of your post My Kind of Reflection seems to talk about that. I couldn't find anything more direct.

FWIW, dealing with self-modification isn't very high on my to-do list, because for now I've shifted to thinking of AI as a one-action construction. This approach handles goal stability pretty much automatically, but I'm not sure if it satisfies your needs.

Comment author: Eliezer_Yudkowsky 14 December 2010 10:31:42AM, 5 points

Nesov's reply sounds right to me. It doesn't handle goal stability automatically; it sweeps an issue you confess you don't understand under the carpet and hopes the AI handles it, in a case where you haven't described an algorithm that you know will handle it, or why.

Comment author: cousin_it 14 December 2010 10:35:36AM, 3 points

Thanks. I don't understand your reply yet (and about half of Nesov's points are also unparseable to me, as usual), but I will think more.