
evec comments on Against utility functions - Less Wrong Discussion

40 points. Post author: Qiaochu_Yuan 19 June 2014 05:56AM


Comments (87)


Comment author: evec 19 June 2014 09:29:05PM 4 points [-]

I think your original post would have been better if it had included some of the arguments against utility functions, such as those you mention under "e.g." here.

Besides making for a more meaningful post, that would also let us discuss your arguments. For example, without more detail, I can't tell whether your last comment is addressed sufficiently by the standard equivalence of normal-form and extensive-form games.

Comment author: Qiaochu_Yuan 20 June 2014 03:17:26AM *  12 points [-]

Essentially every post would have been better if it had included some additional thing. Based on various recent comments, I was under the impression that people want more posts in Discussion, so I've been experimenting with that, and I'm keeping the bar for quality deliberately low so that I'll post at all.

Comment author: asr 20 June 2014 03:58:28PM *  5 points [-]

I appreciate you writing this way -- speaking for myself, I'm perfectly happy with a short opening claim, letting the subtleties and evidence emerge in the following comments. A dialogue can be a better way to illuminate a topic than a long comprehensive essay.

Comment author: evec 20 June 2014 10:16:24PM 3 points [-]

Let me rephrase: would you like to describe your arguments against utility functions in more detail?

For example, as I mentioned, there's an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive-form to normal-form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the setup of vNM.

The standard response to the concerns about knowing probabilities exactly and about computational complexity is (in essence) that we may want to set aside epistemic concerns and simply learn what we can from a theory that is not troubled by them (à la ignoring air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges? And if so, is the objection to the normal-form assumption essentially the same?

Comment author: Qiaochu_Yuan 22 June 2014 06:32:07PM 1 point [-]

For example, as I mentioned, there's an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive-form to normal-form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the setup of vNM.

Can you give more details here? I'm not familiar with extensive-form vs. normal-form games.

The standard response to the concerns about knowing probabilities exactly and about computational complexity is (in essence) that we may want to set aside epistemic concerns and simply learn what we can from a theory that is not troubled by them (à la ignoring air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges?

Something like that. It seems like the computational concerns are extremely important: after all, a theory of morality should ultimately output actions, and to output actions in the context of a utility-function-based model you need to be able to actually calculate probabilities and utilities.
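To make the computational demand concrete, here is a minimal sketch of what "output actions from a utility function" requires. All the names, probabilities, and utilities below are invented for illustration; the point is only that every one of these numbers must actually be supplied before the model can recommend anything:

```python
# Toy expected-utility chooser: each action maps to a list of
# (probability, utility) pairs over possible outcomes.
def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical numbers: outcomes are (rain, no rain) with P = 0.3 / 0.7.
actions = {
    "umbrella":    [(0.3, 5), (0.7, 4)],   # EU = 1.5 + 2.8 = 4.3
    "no_umbrella": [(0.3, 0), (0.7, 6)],   # EU = 0.0 + 4.2 = 4.2
}
print(best_action(actions))  # -> umbrella
```

The hard part isn't the two-line maximization; it's filling in the probability and utility tables, which is exactly the computational burden under discussion.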

Comment author: evec 22 June 2014 09:03:03PM 2 points [-]

Sure. Say you have to make some decision now, and you will be asked to make a decision later about something else. Your decision later may depend on your decision now as well as on parts of the world that you don't control, and you may learn new information from the world in the meantime. The usual way of rolling all of that up into a single decision now is to choose, at once, both your current decision and a contingent plan specifying how you would act in the future for every possible change in the world and every possible piece of information gained.
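This rolling-up can be sketched concretely. Everything below (the action sets, the observation set) is invented for illustration: a one-shot "normal-form" strategy is a pair consisting of the decision now plus a function from each possible observation to the decision later.

```python
from itertools import product

decisions_now = ["A", "B"]
observations = ["sunny", "rainy"]   # what the world might reveal in between
decisions_later = ["X", "Y"]

# A normal-form strategy = (choice now, {observation: choice later}).
def all_strategies():
    for now in decisions_now:
        for later in product(decisions_later, repeat=len(observations)):
            yield now, dict(zip(observations, later))

strategies = list(all_strategies())
# 2 choices now * 2^2 contingent plans = 8 one-shot strategies.
print(len(strategies))  # -> 8
```

Choosing one element of `strategies` up front is mathematically the same as deciding now and then deciding again after observing the world, which is the extensive-form to normal-form conversion.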

This is vaguely analogous to how you can curry a function of multiple arguments: taking one argument X and returning (a function of one argument Y that returns Z) is equivalent to taking two arguments X and Y and returning Z.

There's potentially a huge computational complexity blowup here, which is why I stressed mathematical equivalence in my posts.
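The blowup is easy to quantify: with N later actions and K possible observations, the normal form contains N^K contingent plans, which grows exponentially in K (the numbers below are illustrative):

```python
# Number of contingent plans: one later-action choice per observation.
def num_plans(num_actions, num_observations):
    return num_actions ** num_observations

print(num_plans(2, 10))   # -> 1024
print(num_plans(10, 10))  # -> 10000000000
```

So the equivalence holds mathematically, but enumerating the normal form quickly becomes infeasible in practice.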

Comment author: Qiaochu_Yuan 22 June 2014 09:12:08PM 2 points [-]

Thanks for the explanation! It seems pretty clear to me that humans don't even approximately do this, though.

Comment author: jsteinhardt 24 June 2014 03:57:15PM 1 point [-]

The usual way of rolling all of that up into a single decision now is to choose, at once, both your current decision and a contingent plan specifying how you would act in the future for every possible change in the world and every possible piece of information gained.

Sounds not very feasible...