
eli_sennesh comments on Against utility functions - Less Wrong Discussion

40 Post author: Qiaochu_Yuan 19 June 2014 05:56AM


Comments (87)


Comment author: [deleted] 20 June 2014 02:59:03PM 1 point [-]

On the one hand, you are correct regarding philosophy for humans: we do ethics and meta-ethics to reduce our uncertainty about our utility functions, not as a kind of game-tree planning based on already knowing those functions.

On the other hand, the von Neumann-Morgenstern theorem says blah blah blah blah.

On the third hand, if you have a mathematical structure we can use to make no-Dutch-book decisions that better models the kinds of uncertainty we deal with as embodied human beings in real life, I'm all ears.

Comment author: Qiaochu_Yuan 20 June 2014 05:18:18PM 7 points [-]

I don't think Dutch book arguments matter in practice. An easy way to avoid being Dutch booked is to refuse bets being offered to you by people you don't trust.

Comment author: drethelin 20 June 2014 06:05:53PM 3 points [-]

Not that I fully support utility functions as a useful concept, but having a consistent one also keeps you from Dutch-booking yourself. You can interpret any decision as a bet using utility, and people often make decisions that cost them effort and energy but leave them in the same place where they started. So it's possible that trying to figure out one's utility function can help prevent, e.g., anxious looping behavior.
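The claim that a consistent utility function protects you from Dutch-booking yourself can be made concrete. A minimal sketch (the options and utility values are my own toy example, not from the thread): any ranking induced by a single real-valued utility function is automatically transitive and acyclic, so no sequence of "upgrades" can lead you in a circle.

```python
# Toy model: a strict preference relation induced by a real-valued
# utility function. Because it inherits the ordering of the reals,
# it can never contain a cycle a > b > c > a.

utility = {"stay_home": 2.0, "go_to_gym": 5.0, "doomscroll": 1.0}

def prefers(a, b):
    """Strict preference induced by the utility function."""
    return utility[a] > utility[b]

options = list(utility)

# Verify transitivity/acyclicity: no a > b > c > a cycle exists.
for a in options:
    for b in options:
        for c in options:
            assert not (prefers(a, b) and prefers(b, c) and prefers(c, a))

print(max(options, key=utility.get))  # → go_to_gym
```

The exhaustive check is redundant mathematically (real numbers are totally ordered), but it shows what "consistency" buys: a stable top choice instead of a loop.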

Comment author: Qiaochu_Yuan 22 June 2014 06:25:05PM 2 points [-]

Sure, if you're right about your utility function. The failure mode I'm worried about is people believing they know what their utility function is and being wrong, maybe disastrously wrong. Consistency is not a virtue if, in reaching for consistency, you make yourself consistent in the wrong direction. Inconsistency can be a hedge against making extremely bad decisions.

Comment author: [deleted] 20 June 2014 07:23:34PM *  2 points [-]

You are of course correct about the concrete scenario of being Dutch-booked in a hypothetical gamble (and I am not a gambler for reasons similar to this: we all know the house always wins!). However, if we're going to discard the Dutch Book criterion, then we need to replace it with some other desideratum for preventing self-contradictory preferences that cause no-win scenarios.

Even if your own mind comes preprogrammed with decision-making algorithms that can land in no-win scenarios under some conditions, as a conscious, self-patching human being you should recognize those and consciously employ other algorithms that won't hurt themselves.

I mean, let me put it this way: probabilities aside, if you make decisions that form a cyclic preference ordering rather than even a partial ordering, isn't there something severely bad about that?
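The "something severely bad" is concretely exploitable: a cyclic preference ordering is exactly what a money pump needs. A minimal sketch (the item names and fee are my own illustration, not from the thread): an agent that prefers A to B, B to C, and C to A will pay a small fee for each trade it regards as an upgrade, and a bookie can walk it around the cycle indefinitely.

```python
# Money pump against cyclic strict preferences A > B > C > A.
# Each trade the agent accepts costs it a small fee; after any number
# of full laps it holds what it started with, minus the fees.

def preferred(agent_prefers, current, offered):
    """True if the agent strictly prefers `offered` to `current`."""
    return (offered, current) in agent_prefers

# Cyclic strict preferences: A over B, B over C, C over A.
cyclic = {("A", "B"), ("B", "C"), ("C", "A")}

holding = "A"
fee = 1.0          # price the agent pays per trade it sees as an upgrade
extracted = 0.0

# The bookie repeatedly offers something the agent prefers to its holding.
for _ in range(9):  # nine trades = three full laps around the cycle
    for offer in ("A", "B", "C"):
        if preferred(cyclic, holding, offer):
            holding = offer
            extracted += fee
            break

print(holding, extracted)  # back to "A", 9.0 in fees extracted
```

With a partial ordering (let alone a total one) the inner loop eventually finds no acceptable offer and the pumping stops; with a cycle it never does.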

Comment author: Qiaochu_Yuan 22 June 2014 06:26:33PM 1 point [-]

we need to replace it with some other desideratum for preventing self-contradictory preferences that cause no-win scenarios.

Why?

Comment author: [deleted] 23 June 2014 05:11:36AM 0 points [-]

Do you want to program an agent to put you in a no-win scenario? Do you want to put yourself in a no-win scenario?

Comment author: David_Gerard 21 June 2014 09:31:24PM *  1 point [-]

The idea is that the universe offers you Dutch-book situations and you make and take bets on uncertain outcomes implicitly.

That said, I concur with your basic point: universal, overarching utility functions - not just small ones for a given situation, but a single large one for you as a human - are something humans don't have and, I think, can't have. Realising how mathematically helpful it would be if they did still doesn't mean they can, and trying to turn oneself into an expected utility maximiser is unlikely to work.

(And, I suspect, will merely leave you vulnerable to everyday human-level exploits - remember that the actual threat model we evolved in is beating other humans, and as long as we're dealing with humans we need to deal with humans.)

Comment author: Qiaochu_Yuan 22 June 2014 06:18:54PM *  3 points [-]

The idea is that the universe offers you Dutch-book situations

But does it in fact do that? To the extent that you believe that humans are bad Bayesians, you believe either that the environment in which humans evolved wasn't constantly Dutch-booking them, or that, if it was, humans evolved some defense against this other than becoming perfect Bayesians.

Comment author: David_Gerard 23 June 2014 07:19:15AM *  0 points [-]

I do suspect that our thousand shards of desire being contradictory and not resolving is selected for, in that we are thus money-pumped into propagating our genes.

Comment author: jsteinhardt 20 June 2014 06:38:52PM 2 points [-]

Why do you care so much about Dutch booking relative to the myriad other considerations one might care about?

Comment author: [deleted] 20 June 2014 07:20:00PM *  0 points [-]

Because it's a desideratum indicating whether or not my preferences contain an unconditional, internal contradiction - something that would screw me over eventually no matter what possible world I land in.

Comment author: lmm 20 June 2014 10:55:45PM 0 points [-]

ITYM a desideratum.

Comment author: TheAncientGeek 20 June 2014 06:53:08PM 0 points [-]

On the fourth hand, we do ethics and metaethics to extrapolate better ethics.

Comment author: [deleted] 20 June 2014 07:32:29PM *  0 points [-]

Yes, that's right. We lack knowledge of the total set of concerns which move us, and of the ordering among those which move us more. Had we total knowledge of this, we would have no need for any such thing as "ethics" or "meta-ethics"; we would simply view our preferences and decision concerns in their full form, use our reason to transform them into a coherent ordering over possible worlds, and act according to that ordering. This sounds strange and alien because I'm using meta-language rather than object-language, but in real life it would mostly mean just having a perfectly noncontradictory way of weighing things like love or roller-skating or reading that would always output a definite way to end up happy and satisfied.

However, we were built by evolution rather than a benevolent mathematician-god, so instead we have various modes of thought-experiment and intuition-pump designed to help us reduce our uncertainty about our own nature.