Tim_Tyler comments on No Universally Compelling Arguments - Less Wrong

33 Post author: Eliezer_Yudkowsky 26 June 2008 08:29AM


You are viewing a single comment's thread.

Comment author: Tim_Tyler 10 August 2008 04:21:20PM 1 point [-]

I think it is a very big mistake to create a utility-maximizing rational economic agent a la Steve Omohundro, because such an agent is maximally ethically constrained: it cannot change its mind about any ethical question whatsoever, since a utility-maximizing agent never changes its utility function.

That argument assumes that all ethical values are terminal values: that no ethical values are instrumental values. I assume I don't need to explain how unlikely it is that anyone will ever build an AI with terminal values which provide environment-independent solutions to all the ethical conundrums which an AI might face.
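The terminal/instrumental distinction can be illustrated with a toy sketch (mine, not from the comment; all names are hypothetical). The agent's terminal utility function is fixed and never edited, yet the instrumental ranking of actions is recomputed per environment, so the agent's apparent "ethical" preferences shift without any change to the utility function itself:

```python
def terminal_utility(outcome: float) -> float:
    """Fixed terminal values: the agent never modifies this function."""
    return outcome

def instrumental_value(action: str, environment: dict) -> float:
    """Instrumental values depend on the environment: the same action
    can score differently when circumstances change."""
    payoffs = environment["payoffs"]  # maps action -> outcome
    return terminal_utility(payoffs[action])

def choose(actions, environment):
    """Pick the action maximizing terminal utility in this environment."""
    return max(actions, key=lambda a: instrumental_value(a, environment))

# Same terminal utility, two environments: the preferred action flips,
# even though the utility function itself never changed.
env_a = {"payoffs": {"cooperate": 3.0, "defect": 1.0}}
env_b = {"payoffs": {"cooperate": 1.0, "defect": 3.0}}

print(choose(["cooperate", "defect"], env_a))  # -> cooperate
print(choose(["cooperate", "defect"], env_b))  # -> defect
```

This is the sense in which instrumental ethical values can remain environment-dependent while the agent's terminal values stay frozen.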