pnrjulius comments on Polyhacking - Less Wrong

75 Post author: Alicorn 28 August 2011 08:35AM


Comment author: pnrjulius 08 June 2012 03:01:10AM 1 point [-]

Yet it described human behaviour accurately. People take a significant risk of losing decades of beta to get 5 minutes of alpha.

I hope you're not assuming that all human behavior is rational...

Comment author: CharlieSheen 08 June 2012 06:58:34AM *  0 points [-]

I'm not assuming it is. The maxim does, however, encapsulate the revealed preferences of women. It would be irrational of men to pretend those preferences don't exist.

Edit: I don't agree with the statement below any more. It is a misuse of the word rational.

In any case I would argue that this behaviour happens to be rational when women don't need men to provide materially for their offspring.

Comment author: pnrjulius 08 June 2012 10:46:37PM 1 point [-]

But if someone's revealed preferences are irrational (as revealed human preferences often, nay typically are), then it doesn't serve anyone to follow them. So contrary to your assertion, you are assuming that these preferences are rational, or else you wouldn't be encouraging people to follow them.

So my question is this: Is a woman who has sex with Brad Pitt once and remains alone for the rest of her life actually happier than a woman who is comfortably married to an ordinary guy for several years?

If the answer is no---and I think it's pretty obvious that the answer is, in fact, no---then your maxim fails, and any woman who follows it is being irrational and self-destructive. She's following her genes right off a cliff.

Comment author: [deleted] 09 June 2012 03:37:31PM -1 points [-]

preferences are irrational

The utility function is not up for grabs.

Comment author: pnrjulius 11 June 2012 01:16:54AM -1 points [-]

Yes it is, if your "utility function" doesn't obey the axioms of Von Neumann-Morgenstern utility, which it doesn't if you are at all a normal human.

Prospect theory? Allais paradox?
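(The Allais pattern can be checked mechanically. Below is a minimal sketch, not from the original comment: it takes the standard Allais lotteries and searches a grid of candidate monotone utility assignments, showing that none of them reproduces the preference pattern most people report, i.e., preferring 1A to 1B while also preferring 2B to 2A.)

```python
from itertools import product

def expected_utility(lottery, u):
    """Expected utility of a lottery: sum of p * u(prize) over its branches."""
    return sum(p * u[prize] for p, prize in lottery)

# The four standard Allais lotteries, as (probability, prize in $M) pairs.
g1a = [(1.00, 1)]
g1b = [(0.10, 5), (0.89, 1), (0.01, 0)]
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

# Search for any increasing utility assignment (u(0) = 0 by normalization)
# that reproduces the common human pattern: 1A over 1B AND 2B over 2A.
consistent = []
for u1, u5 in product(range(1, 101), range(1, 101)):
    if u5 <= u1:
        continue  # utility should increase with money
    u = {0: 0.0, 1: float(u1), 5: float(u5)}
    if (expected_utility(g1a, u) > expected_utility(g1b, u)
            and expected_utility(g2b, u) > expected_utility(g2a, u)):
        consistent.append((u1, u5))

print(consistent)  # [] -- no utility function rationalizes both choices
```

The emptiness is forced algebraically: 1A over 1B requires 0.11·u(1M) > 0.10·u(5M), while 2B over 2A requires the reverse, so the common pattern violates the independence axiom for every possible u.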

Seriously, what are we even doing on Less Wrong, if you think that the decisions people make are automatically rational just because people made them?

Comment author: [deleted] 11 June 2012 01:22:00PM 1 point [-]

Actually, if your "utility function" doesn't obey the axioms of Von Neumann-Morgenstern utility, it's not a utility function in the normal sense of the word.

Comment author: smk 11 June 2012 01:27:14PM 1 point [-]

I suppose that's why pnrjulius put "utility function" in quotes.

Comment author: nshepperd 11 June 2012 04:28:55AM *  1 point [-]

Downvoted for trying to argue against a principle that is actually irrelevant to your claims. ("The utility function is not up for grabs" doesn't mean that decisions are always rational, and is actually inapplicable here.)

Comment author: [deleted] 11 June 2012 01:29:52PM 1 point [-]

I didn't mean decisions are always rational. I meant that it makes no sense for preferences to be rational or irrational: they just are. Rationality is a property of decisions, not of preferences: if a decision maximizes the expectation of your preferences it's rational and if it doesn't it isn't.
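(That division of labor between preferences and decisions can be made concrete. A minimal sketch, with an assumed illustrative utility function not taken from the thread: the agent's preferences are encoded by u and are simply given; rationality is then just picking whichever option has higher expected utility relative to that u.)

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given a utility function u over outcomes."""
    return sum(p * u(outcome) for p, outcome in lottery)

# Hypothetical agent whose preferences happen to be risk-averse over money.
# The square-root shape is an assumption for illustration, not an argument.
u = lambda x: x ** 0.5

safe  = [(1.0, 100)]             # $100 for sure
risky = [(0.5, 0), (0.5, 250)]   # coin flip for $250 or nothing

# On this view, the rational decision is whichever option maximizes
# expected utility under the preferences the agent actually has.
best = max([safe, risky], key=lambda lottery: expected_utility(lottery, u))
print(best is safe)  # True: sqrt(100) = 10 > 0.5 * sqrt(250) ~ 7.9
```

A different u (say, risk-loving) would make the risky option rational instead; the choice of u itself is never graded by this criterion, which is the point being made above.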

Comment author: TheOtherDave 11 June 2012 02:36:12PM 1 point [-]

Preferences can, however, be inconsistent.
And rational decision-making across inconsistent preferences is sometimes difficult to distinguish from irrational decision-making.

Comment author: pnrjulius 11 June 2012 01:22:25AM -1 points [-]

In fact, it's worse than that. Utility is still up for grabs, even if it does obey the axioms---because we will soon be in the condition of being able to modify our own utility functions! (If we aren't already: Addictive drugs alter your ability to experience non-drug pleasure; and could psychotherapy change my level of narcissism, or my level of empathy?)

Indeed, the entire project of Friendly AI can be taken to be the project of specifying the right utility function for a superintelligent AI. If any utility that follows the axioms would qualify, then a paperclipper would be just fine.

So not only does "the utility function is not up for grabs" not work in this situation (because I'm saying precisely that women who behave this way are denying themselves happiness); I'm not sure it works in any situation. Even if you are sufficiently rational that you really do obey a consistent utility function in everything you do, that could still be a bad utility function (you could be a psychopath, or a paperclipper).

Comment author: nshepperd 11 June 2012 04:12:24AM 0 points [-]

You mean, if their revealed preferences are not their actual preferences, which is often the case, because of irrationality?

Comment author: CharlieSheen 09 June 2012 10:54:24AM 0 points [-]

So my question is this: Is a woman who has sex with Brad Pitt once and remains alone for the rest of her life actually happier than a woman who is comfortably married to an ordinary guy for several years?

You make a compelling argument. I clearly misused the word rational when I was just looking at what the genes "want". I thus retract that part of the statement.

I do wish to emphasise that "5 minutes of alpha is worth 5 years of beta", while mostly hyperbole, is something people should keep in mind when trying to predict the sexual and romantic behaviour of women.