
Wei_Dai comments on Against utility functions - Less Wrong Discussion

40 Post author: Qiaochu_Yuan 19 June 2014 05:56AM


Comment author: Wei_Dai 19 June 2014 09:24:50PM 10 points [-]

It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior.

I think part of Eliezer's point was also to introduce decision theory as an ideal for human rationality. (See http://lesswrong.com/lw/my/the_allais_paradox/ for example.) Without talking about utility functions, we can't talk about expected utility maximization, so we can't define what it means to be ideally rational in the instrumental sense (and we also can't justify Bayesian epistemology based on decision theory).
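The Allais pattern linked above can be checked numerically. For any utility assignment whatsoever, the expected-utility gap between the first pair of gambles equals the gap between the second pair, so the common preference pattern (1A over 1B, but 2B over 2A) cannot come from maximizing any expected utility. A minimal sketch with illustrative (made-up) utility values:

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[x] for x, p in lottery.items())

# The four Allais gambles (outcomes in millions of dollars).
g1a = {1: 1.0}
g1b = {1: 0.89, 5: 0.10, 0: 0.01}
g2a = {1: 0.11, 0: 0.89}
g2b = {5: 0.10, 0: 0.90}

# EU(1A) - EU(1B) == EU(2A) - EU(2B) for ANY utility function u,
# since both differences reduce to 0.11*u(1) - 0.10*u(5) - 0.01*u(0).
# So strictly preferring 1A AND 2B is inconsistent with EU maximization.
for u in [{0: 0, 1: 1, 5: 2}, {0: 0, 1: 10, 5: 11}, {0: 0, 1: 0.5, 5: 5}]:
    d1 = expected_utility(g1a, u) - expected_utility(g1b, u)
    d2 = expected_utility(g2a, u) - expected_utility(g2b, u)
    assert abs(d1 - d2) < 1e-9
```

The specific utility dictionaries are arbitrary; the identity they demonstrate is algebraic and holds for every choice of u.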

So I agree with the problem stated here, but "let's stop talking about utility functions" can't be the right solution. Instead we need to emphasize more that having the wrong values is often worse than being irrational, so until we know how to obtain or derive utility functions that aren't wrong, we shouldn't try to act as if we have utility functions.

Comment author: asr 20 June 2014 04:28:36PM *  2 points [-]

Without talking about utility functions, we can't talk about expected utility maximization, so we can't define what it means to be ideally rational in the instrumental sense

I like this explanation of why utility-maximization matters for Eliezer's overarching argument. I hadn't noticed that before.

But it seems like utility functions are an unnecessarily strong assumption here. If I understand right, the expected-utility theorems imply that if you have a complete preference ordering over outcomes, and probabilities that tell you how decisions influence outcomes, then you have implicit preferences over the decisions themselves.

But even if you have only partial information about outcomes and only partial preferences, you still have some induced partial ordering of the possible actions. We lose the ability to show that there is always a unique optimal 'rational' decision, but we can still identify instances of irrational decision-making.
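One way to sketch this "partial preferences still rule out some choices" point is Pareto dominance: score actions only on the criteria you do have clear preferences about, and flag any action beaten on every criterion. The actions and scores below are hypothetical, purely for illustration:

```python
def dominates(a, b):
    """Pareto dominance: a is at least as good as b on every criterion
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical actions scored only on the two criteria we have clear
# preferences about; everything else is unknown, so the order is partial.
actions = {"A": (3, 5), "B": (2, 4), "C": (4, 1)}

dominated = {b for a in actions for b in actions
             if a != b and dominates(actions[a], actions[b])}

# A and C are incomparable, so there is no unique "optimal" choice --
# but picking B is identifiably irrational: A beats it on both criteria.
print(dominated)  # -> {'B'}
```

This matches the comment's claim: no optimum is guaranteed to exist under partial preferences, yet some decisions can still be shown irrational.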

Comment author: David_Gerard 21 June 2014 09:38:42PM 3 points [-]

The trouble is the people who read the Sequences and went "EY said it, it's probably right, I'll internalise it." This is an actual hazard around here. (Even Eliezer can't make people think, rather than just believe in thinking.)

Comment author: torekp 22 June 2014 09:13:16PM *  1 point [-]

Yes, decision theory has been floated as a normative standard for human rationality. The trouble is that the standard is bogus. Conformity to the full set of axioms is not a rational requirement. The Allais Paradox and the Ellsberg Paradox are cases in point. Plenty of apparently very intelligent and rational people make decisions that violate the axioms, even when shown how their decisions violate the VNM axioms. I tentatively conclude that the problem lies in the axioms, rather than with these decision makers. In particular, the Independence of "Irrelevant" Alternatives and some strong ordering assumptions both look problematic. Teddy Seidenfeld has a good paper on the ordering assumptions.

Comment author: jsteinhardt 20 June 2014 09:06:17AM 1 point [-]

It's not obvious to me that Qiaochu would endorse utility functions as a standard for "ideal rationality". I, for one, do not.

Comment author: Wei_Dai 21 June 2014 06:20:10PM 4 points [-]

It's not obvious to me that Qiaochu would endorse utility functions as a standard for "ideal rationality". I, for one, do not.

Talking about utility functions can be useful if one believes any of the following about ideal rationality, as a concrete example of what one means if nothing else.

  1. An ideally rational agent uses one of the standard decision theories (vNM, EDT, CDT, etc.)
  2. An ideally rational agent does EU maximization.
  3. An ideally rational agent is consequentialist.
  4. An ideally rational agent, when evaluating the consequences of its actions, divides up the domain of evaluation into two or more parts, evaluates them separately, and then adds their values together. (For example, for an EU maximizer, the "parts" are possible outcomes or possible world-histories. For a utilitarian, the "parts" are individual persons within each world.)
  5. An ideally rational agent has values/preferences that are (or can be) represented by a clearly defined mathematical object.
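The additive structure described in item 4 can be made concrete with a small sketch (the numbers are made up, purely illustrative): an EU maximizer sums utility over possible outcomes weighted by probability, while a utilitarian sums welfare over the persons within a single world.

```python
def eu_value(outcomes):
    """EU-style evaluation: sum over possible outcomes, each a
    (probability, utility) pair -- the 'parts' are outcomes."""
    return sum(p * u for p, u in outcomes)

def utilitarian_value(world):
    """Utilitarian-style evaluation: sum welfare over the persons
    in one world -- the 'parts' are individual people."""
    return sum(world.values())

# Both evaluators share the item-4 shape: split the domain into parts,
# evaluate each part separately, then add the values together.
print(eu_value([(0.5, 10), (0.5, 0)]))           # -> 5.0
print(utilitarian_value({"alice": 3, "bob": 4}))  # -> 7
```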

I guess when you say you don't "endorse utility functions" you mean that you don't endorse 1 or 2. Do you endorse any of the others, and if so what would you use instead of utility functions to illustrate what you mean?

Comment author: jsteinhardt 22 June 2014 09:22:38AM 3 points [-]

It's hard for me to know what 4 and 5 really mean since they are so abstract. I definitely don't endorse 1 or 2, and I'm pretty sure I don't endorse 4 either (integrating over uncertainty in what you meant). I'm uncertain about 3; it seems plausible but far from clear. I'm certainly not consequentialist and don't want to be, but maybe I would want to be in some utopian future. Again, I'm not really sure what you mean by 5; it seems almost tautological, since everything is a mathematical object.

Comment author: jsalvatier 20 June 2014 07:06:15PM 1 point [-]

Even if you don't think it's the ideal, utility-based decision theory does give us insights that I don't think you can naturally pick up from anything else we've discovered yet.