Matt_Simpson comments on VNM expected utility theory: uses, abuses, and interpretation - Less Wrong

Post author: Academian 17 April 2010 08:23PM




Comment author: Academian 18 April 2010 12:06:34PM 2 points

Indeed, the question is whether non-Archimedean values should be normatively outlawed for rational agents.

Do you think they should be? Consider, for example, an AI whose primary goal is to maximize the brightness of a certain light, and whose secondary goal is to adjust its color closer to red. If it is (say, by design) totally unwilling to sacrifice any of the former for the latter, would you say it is necessarily irrational? How about a human with the same preferences?

Comment author: Matt_Simpson 19 April 2010 02:02:10AM 0 points

Secondary preferences may as well not exist: they affect behavior only when two or more available actions have exactly identical expected utilities in terms of the primary good. How likely is that?

ETA: Douglas Knight said it first.
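The tie-breaking point above can be sketched concretely. Below is a minimal illustration (the names `lex_choose` and the example actions are hypothetical, not from the discussion) of lexicographic choice over (primary, secondary) expected utilities: the secondary value only matters when the primary values tie exactly.

```python
# Sketch of lexicographic (non-Archimedean) choice: actions are compared
# first on primary expected utility; the secondary value is consulted
# only to break exact ties on the primary.

def lex_choose(actions):
    """Pick the action maximizing (primary, secondary) lexicographically.

    `actions` maps action names to (primary_eu, secondary_eu) pairs;
    Python's built-in tuple comparison is itself lexicographic.
    """
    return max(actions, key=lambda a: actions[a])

# The secondary utility decides only because the primaries tie at 5.0.
acts = {
    "dim_red":     (1.0, 9.0),
    "bright_red":  (5.0, 9.0),
    "bright_blue": (5.0, 1.0),
}
print(lex_choose(acts))  # -> bright_red
```

With real-valued, noisy estimates of the primary utility, exact ties have essentially zero probability, which is the point of the comment: the secondary preference is then never consulted.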

Comment author: Academian 19 April 2010 06:14:25AM 0 points

Yes (see above). It seems well agreed upon; I think I'll ETA a note about this to the post proper.

ETA: "noise" in your expected utility calculations might constitute positive-probability occasions for treating two expectations as equal...