Gust comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: Gust 16 July 2015 07:21:28PM 0 points

You have to hardcode something, don't you?

Comment author: TheAncientGeek 16 July 2015 07:24:44PM -1 points

I meant not hardcoding values or ethics.

Comment author: Gust 16 July 2015 07:50:00PM 0 points

Well, you'd have to hardcode at least a learning algorithm for values if you expect any real chance that the AI behaves like a useful agent, and that falls within the category of important functionalities. But I guess you'd agree with that.

Comment author: hairyfigment 16 July 2015 08:15:01PM 0 points

Don't feed the troll. "Not hardcoding values or ethics" is the idea behind CEV, which seems frequently "explored round here." Though I admit I do see some bizarre misunderstandings.