TheOtherDave comments on Why didn't people (apparently?) understand the metaethics sequence? - Less Wrong Discussion

12 Post author: ChrisHallquist 29 October 2013 11:04PM

Comment author: TheOtherDave 30 October 2013 02:23:04PM 1 point

Well... according to the SEP, metaethics encompasses the attempt to understand the presuppositions and commitments of moral practice.

If I'm trying to engineer a system that behaves morally (which is what FAI design is, right?), it makes some sense that I'd want to understand that stuff, just as if I'm trying to engineer a system that excavates tunnels I'd want to understand the presuppositions and commitments of tunnel excavation.

That said, from what I've seen it's not clear to me that the actual work that's been done in this area (e.g., in the Metaethics Sequence) actually serves any purpose other than rhetorical.

Comment author: V_V 31 October 2013 12:47:09PM 0 points

I think that framing the issue of AI safety in terms of "morality" or "friendliness" is a form of misleading anthropomorphization. Morality and friendliness are specific traits of human psychology which won't necessarily generalize well to artificial agents (even attempts to generalize them to non-human animals are often far-fetched).

I think that AI safety would probably best be dealt with in the framework of safety engineering.

Comment author: TheOtherDave 31 October 2013 01:40:32PM 0 points

All right. I certainly agree with you that talking about "morality" or "friendliness" without additional clarification leads most people to conclusions that have very little to do with safe AI design. Then again, if we're talking about self-improving AIs with superhuman intelligence (as many people on this site are), I think the same is true of talking about "safety."