Comment author: Eliezer_Yudkowsky 19 June 2014 11:43:53PM 4 points [-]

Both your examples are actually just about diminishing marginal penalties as you add more attention demands, moving away from 1, or as you add more defections, moving away from 0. The real question is whether there's a resource with no natural maximum that increases in marginal utility; and this shall perhaps be difficult to find.
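
In standard terms (a gloss, not part of the original claim): diminishing marginal utility in a resource x means the utility function u is concave in x, i.e. u''(x) < 0, so each additional unit adds less than the last; a resource with increasing marginal utility and no natural maximum would need u convex with no upper bound on x, e.g. u(x) = x^2.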

Comment author: Qiaochu_Yuan 20 June 2014 03:15:14AM 0 points [-]

That's a good way of putting it. I had a vague thought pointing in this direction but wasn't able to verbalize it.

Comment author: AlexMennen 19 June 2014 08:22:27PM 2 points [-]

Can you give some specific examples of people misusing utility functions? Or if you don't want to point fingers, can you construct examples similar to those you've seen people use?

Comment author: Qiaochu_Yuan 20 June 2014 03:14:18AM 2 points [-]

This thread was prompted by this comment in the Open Thread.

Comment author: Vladimir_Nesov 19 June 2014 08:11:38AM *  3 points [-]

It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior.

For the value extrapolation problem, you need to consider both what an AI could do with a goal (how to use it, what kind of thing it is) and which goal represents humane values (how to define it).

Comment author: Qiaochu_Yuan 19 June 2014 04:55:25PM 5 points [-]

I still think there's too much confusion between ethics-for-AI and ethics-for-humans discussions here. There's no particular reason that a conceptual apparatus suited for the former discussion should also be suited for the latter discussion.

Comment author: redlizard 19 June 2014 08:13:23AM 12 points [-]

It's more than a metaphor; a utility function is the structure any consistent preference ordering that respects probability must have. It may or may not be a useful conceptual tool for practical human ethical reasoning, but "just a metaphor" is too strong a judgment.
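
For reference, the standard statement of the theorem: if a preference ordering over lotteries satisfies the vNM axioms (completeness, transitivity, continuity, and independence), then there exists a utility function u, unique up to positive affine transformation, such that lottery L is preferred to lottery M exactly when E_L[u] ≥ E_M[u].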

Comment author: Qiaochu_Yuan 19 June 2014 04:52:34PM 27 points [-]

a utility function is the structure any consistent preference ordering that respects probability must have.

This is the sort of thing I mean when I say that people take utility functions too seriously. I think the von Neumann-Morgenstern theorem is much weaker than it initially appears. It's full of hidden assumptions that are constantly violated in practice: that an agent can know probabilities to arbitrary precision, can know utilities to arbitrary precision, can compute utilities in time to make decisions, makes a single plan at the beginning of time about how it will behave for eternity (or else you need to take into account factors like how the agent should behave in order to acquire more information in the future, and that just isn't modeled by the vNM setup at all), and so on.
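
Even the axioms themselves fail descriptively. In the classic Allais paradox, most people prefer a sure $1M over a gamble of 10% at $5M, 89% at $1M, and 1% at nothing, but also prefer 10% at $5M over 11% at $1M. The first choice requires 0.11 u($1M) > 0.10 u($5M) + 0.01 u($0); the second requires the reverse inequality, so no single utility function rationalizes both.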

Against utility functions

40 Qiaochu_Yuan 19 June 2014 05:56AM

I think we should stop talking about utility functions.

In the context of ethics for humans, anyway. In practice I find utility functions to be, at best, an occasionally useful metaphor for discussions about ethics but, at worst, an idea that some people start taking too seriously and which actively makes them worse at reasoning about ethics. To the extent that we care about causing people to become better at reasoning about ethics, it seems like we ought to be able to do better than this.

The funny part is that the failure mode I worry the most about is already an entrenched part of the Sequences: it's fake utility functions. The soft failure is people who think they know what their utility function is and say bizarre things about what this implies that they, or perhaps all people, ought to do. The hard failure is people who think they know what their utility function is and then do bizarre things. I hope the hard failure is not very common. 

It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior. 

Comment author: Gavin 19 June 2014 12:47:46AM 5 points [-]

I'm pretty confident that I have a strong terminal goal of "have the physiological experience of eating delicious barbecue." I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it's unlikely to be a function of peer pressure.

That said, my longer-term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals while also having instrumental value. Sex is a good in itself but also feeds other big-picture psychological and social needs.

Comment author: Qiaochu_Yuan 19 June 2014 05:30:11AM 1 point [-]

Hmm. I guess I would describe that as more of an urge than as a terminal goal. (I think "terminal goal" is supposed to activate a certain concept of deliberate, goal-directed behavior, and what I'm mostly skeptical of is whether that concept is an accurate model of human preferences.) Do you, for example, make long-term plans based on calculations about which of various life options will cause you to eat the most delicious barbecue?

In response to The Power of Noise
Comment author: Qiaochu_Yuan 17 June 2014 04:11:54AM 4 points [-]

Thanks for writing this! This debate was bugging me too; I don't have the background to dive into it in as much detail as this post does, but the fact that Eliezer was implicitly taking a pretty strong stance on P vs. BPP has bothered me ever since I read the original LW post. This is also a great example of why I think LW needs more discussion of topics like computational complexity.

Comment author: Qiaochu_Yuan 17 June 2014 04:00:49AM *  10 points [-]

+1! I too am skeptical about whether I or most of the people I know really have terminal goals (or, even if they really have them, whether they're right about what they are). One of the many virtues (!) of a virtue ethics-based approach is that you can cultivate "convergent instrumental virtues" even in the face of a lot of uncertainty about what you'll end up doing, if anything, with them.

Comment author: Multiheaded 16 June 2014 09:10:27AM 3 points [-]

But cheating on spouses in general undermines the trust that spouses should have in each other, and the cumulative impact of even 1% of spouses cheating on the institution of marriage as a whole could be quite negative.

In the comments on Scott's blog, I've recently seen the claim that this is the opposite of how traditional marriage actually worked: there used to be a lot more adultery in older times, and it acted as a pressure valve for people who nowadays would have divorced, but naturally it was all swept under the rug.

Comment author: Qiaochu_Yuan 17 June 2014 03:57:37AM 0 points [-]

Interesting. Link?

Comment author: Qiaochu_Yuan 16 June 2014 04:23:23AM 6 points [-]

Thanks for writing a post about this!
