Gust comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (343)
You have to hardcode something, don't you?
I meant not hardcoding values or ethics.
Well, you'd have to hardcode at least a learning algorithm for values if you expect the AI to have any real chance of behaving like a useful agent, and that falls within the category of important functionalities. But then I guess you'd agree with that.
Don't feed the troll. "Not hardcoding values or ethics" is the idea behind CEV, which is frequently "explored round here." Though I admit I do see some bizarre misunderstandings.