DanielLC comments on Autonomy, utility, and desire; against consequentialism in AI design - Less Wrong Discussion

3 Post author: sbenthall 03 December 2014 05:34PM

Comment author: DanielLC 03 December 2014 06:40:44PM -1 points

How could an AI be compassionate? Perhaps an AI could be empathetic if it could perceive, through its sensors, the desires (or empirical goals, or reflective goals) of other agents and internalize them as its own.

In other words, it tries to maximize human values. Isn't this the standard way of programming a Friendly AI?

Comment author: ChristianKl 06 December 2014 10:49:24PM 0 points

Isn't this the standard way of programming a Friendly AI?

I don't think it makes sense to speak about a standard way of programming a Friendly AI.

Comment author: DanielLC 07 December 2014 03:41:47AM 0 points

"Designing" would probably be a better word. It's the standard idea for how you could make an AI friendly.