lukeprog comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

45 Post author: lukeprog 01 February 2011 02:15PM

Comment author: lukeprog 06 February 2011 07:18:19AM 2 points

Wei_Dai,

Alonzo Fyfe and I are currently researching and writing a podcast on desirism, and we'll eventually cover this topic. The most important thing to note right now is that desirism is set up as a theory that explains very specific things: human moral concepts like negligence, excuse, mens rea, and a dozen others. You can still take the foundational meta-ethical principles of desirism - which are certainly not unique to desirism - and come up with implications for FAI. But they may have little in common with the bulk of desirism that Alonzo usually talks about.

But I'm not trying to avoid your question. These days, I'm inclined to do meta-ethics without using moral terms at all. Moral terms are so confused, and carry such heavy connotational weights, that using moral terms is probably the worst way to talk about morality. I would rather just talk about reasons and motives and counterfactuals and utility functions and so on.

Leaving out ethical terms, what implications do my own meta-ethical views have for Friendly AI? I don't know. I'm still catching up with the existing literature on Friendly AI.

Comment author: Wei_Dai 07 February 2011 03:29:34AM 2 points

What are the foundational meta-ethical principles of desirism? Do you have a link?

Comment author: lukeprog 07 February 2011 04:39:05AM 1 point

Hard to explain. Alonzo Fyfe and I are currently developing a structured and technical presentation of the theory, so what you're asking for is coming, but it may not be ready for many months. It's a reasons-internalist view, and I'm actually not sure how much of the rest of it would be relevant to FAI.