Unknowns comments on VNM expected utility theory: uses, abuses, and interpretation - Less Wrong

Post author: Academian 17 April 2010 08:23PM

Comment author: Academian 18 April 2010 12:06:34PM 2 points

Indeed, the question is whether non-Archimedean values should be normatively outlawed for rational agents.

Do you think they should? Consider, for example, an AI whose primary goal is to maximize the brightness of a certain light, and whose secondary goal is to adjust its color closer to red. If it is (say, by design) totally unwilling to sacrifice any of the former for the latter, would you say it is necessarily irrational? How about a human with the same preferences? (A minimal sketch of this preference structure follows below.)
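
To make the structure concrete, here is a minimal sketch in Python of such a lexicographic, non-Archimedean ordering; the names (Outcome, lex_better) are hypothetical illustrations of my own choosing, not anyone's actual code. Brightness decides first, and redness only breaks exact ties, so no amount of redness compensates for any loss of brightness:

```python
from typing import NamedTuple

class Outcome(NamedTuple):
    brightness: float  # primary goal: maximized first, at any cost in redness
    redness: float     # secondary goal: only breaks exact ties in brightness

def lex_better(a: Outcome, b: Outcome) -> bool:
    """True iff a is lexicographically preferred to b: brightness decides
    first; redness matters only when brightness is exactly tied."""
    if a.brightness != b.brightness:
        return a.brightness > b.brightness
    return a.redness > b.redness

# An arbitrarily small gain in brightness outweighs any loss in redness:
assert lex_better(Outcome(brightness=1.000001, redness=0.0),
                  Outcome(brightness=1.0, redness=1e9))
```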

Comment author: Unknowns 19 April 2010 06:22:23PM 0 points

I wasn't claiming that they should be normatively outlawed, just that in practice, in human beings, they lead to logical inconsistency. In a perfect AI, on the other hand, they wouldn't necessarily lead to inconsistency, but the less important goal would be completely ignored, as Liron says, and so effectively you would still have Archimedean values.
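
As a toy illustration of that last point (the setup and numbers are my own assumption, not Liron's argument): when the primary quantity varies continuously, two options almost never tie on it exactly, so the tie-breaking secondary goal almost never gets to decide a choice, and the agent behaves indistinguishably from one with Archimedean values over the primary goal alone:

```python
import random

random.seed(0)
trials = 100_000
ties = 0
for _ in range(trials):
    a_brightness = random.random()  # primary value of option A
    b_brightness = random.random()  # primary value of option B
    if a_brightness == b_brightness:  # the only case where the secondary goal decides
        ties += 1
print(f"choices decided by the secondary goal: {ties} / {trials}")  # almost surely 0
```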