cousin_it comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong

33 Post author: lukeprog 29 January 2011 07:58PM




Comment author: cousin_it 03 February 2011 12:16:36PM, 4 points

> are right

Huh?

I'd be okay with a strong AI that correctly followed my values, regardless of whether they're "right" by any other criterion.

If you think you wouldn't be okay with such an AI, I suspect the most likely explanation is that you're confused about the concept of "your values". Namely, if you yearn to discover some simple external formula, like the categorical imperative, and then enact the outcomes it prescribes, then that yearning is itself just another fact about your personal makeup that the AI has to take into account.

And if you agree that you would be okay with such an AI, that means Eliezer's metaethics is adequate for its stated goal (creating Friendly AI), whatever other theoretical drawbacks it might have.