Douglas_Knight comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comment author: Douglas_Knight 03 October 2010 04:42:01AM *  0 points [-]

I think there is no values-preserving representation of any human's approximation of a utility function according to which risk neutrality is unambiguously rational.

Could you clarify this?
I think you are saying that human values are not well-described by a utility function (and stressing certain details of the failure), but you seem to explicitly assume a good approximation by a utility function, which makes me uncertain.

Risk neutrality is often used with respect to a resource. But if you just want to say that humans are not risk-neutral about money, there's no need to mention representations - you can just talk about preferences.
So I think you're talking about risk neutrality with respect to putative utiles. But to be a utility function, i.e. to satisfy the vNM axioms, just is to be risk-neutral about utiles. If one satisfies the axioms, the way one reconstructs the utility function is precisely by risk neutrality with respect to a reference utile.
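
The distinction can be sketched concretely: a vNM agent with concave utility over money is risk-averse about money yet, by construction, risk-neutral about its own utiles. (An illustrative sketch, not from the thread; the square-root utility function is an arbitrary choice of concave function.)

```python
import math

# A vNM agent with concave utility over money: u(x) = sqrt(x).
def u(x):
    return math.sqrt(x)

# Gamble: 50/50 between $0 and $100, versus a sure $50.
expected_utiles = 0.5 * u(0) + 0.5 * u(100)   # 5.0

# Risk-averse in money: the sure $50 beats the gamble...
print(u(50) > expected_utiles)   # True (sqrt(50) ~ 7.07 > 5)

# ...but risk-neutral in utiles by construction: indifferent between
# 5 sure utiles (a sure $25, since u(25) = 5) and a gamble worth
# 5 expected utiles.
print(u(25) == expected_utiles)  # True
```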

I propose:

I think there is no numeric representation of any human's values according to which risk neutrality is unambiguously rational.

Am I missing the point?

Comment author: Alicorn 03 October 2010 04:52:03AM 2 points [-]

I don't think that human values are well described by a utility function if, by "utility function", we mean "a function which an optimizing agent will behave risk-neutrally towards". If we mean something more general by "utility function", then I am less confident that human values don't fit into one.

Comment author: Eugine_Nier 03 October 2010 05:01:36AM 1 point [-]

Can you give an example of a non-risk-neutral utility function that can't be converted into a standard utility function by rescaling?

Bonus points if it doesn't make you into a money pump.
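
For reference, the standard money-pump argument can be sketched as follows: an agent with cyclic preferences A > B > C > A will pay a small fee for each "upgrade" and can be cycled back to its starting holding at a loss. (An illustrative sketch; the items and fee are made up.)

```python
# Cyclic preferences: (x, y) in prefers means the agent prefers x to y.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # violates transitivity

def pump(holding, wealth, fee=1, rounds=3):
    """Repeatedly sell the agent whatever it prefers to its current holding."""
    upgrade = {"B": "A", "C": "B", "A": "C"}
    for _ in range(rounds):
        better = upgrade[holding]
        assert (better, holding) in prefers  # the agent accepts each trade
        holding, wealth = better, wealth - fee
    return holding, wealth

print(pump("A", wealth=10))  # ('A', 7): back where it started, $3 poorer
```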

Comment author: Alicorn 03 October 2010 02:15:31PM 0 points [-]

No, because I don't have a good handle on what magic can and cannot be done with math; when I have tried to do this in the past, it looks like this.

Me: But thus and so and thresholds and ambivalence without indifference and stuff.

Mathemagician: POOF! Look, this thing you don't understand satisfies your every need.

Comment author: timtyler 03 October 2010 12:11:09PM 0 points [-]

It seems challenging to understand you. What does it mean to behave risk-neutrally towards a function? To behave risk-neutrally, there has to be an environment with some potential risks in it.

Comment author: Alicorn 03 October 2010 02:18:02PM 0 points [-]

...It seems challenging to understand you, too. Everything that optimizes for a function needs an environment to do it in. Indeed, any utility function extracted from a human's values would make sense only relative to an environment with risks in it, whether the agent trying to optimize that function is a human or not, risk-neutral or not. So what are you asking?

Comment author: timtyler 03 October 2010 02:37:09PM *  1 point [-]

I was trying to get you to clarify what you meant.

As far as I can tell, your reply makes no attempt to clarify :-(

"Utility function" does not normally mean:

"a function which an optimizing agent will behave risk-neutrally towards".

It means the function which, when maximised, explains an agent's goal-directed actions.

Apart from the issue of "why-redefine", the proposed redefinition appears incomprehensible - at least to me.
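
The usual sense above can be sketched as a choice rule: the utility function is whatever, when its expectation is maximised, explains the agent's choices among lotteries. (Illustrative only; the actions, outcomes, probabilities, and utilities are made up.)

```python
# Each action yields a lottery over outcomes: {outcome: probability}.
lotteries = {
    "safe":  {"small_win": 1.0},
    "risky": {"big_win": 0.5, "loss": 0.5},
}
utility = {"small_win": 4, "big_win": 10, "loss": -5}

def expected_utility(action):
    return sum(p * utility[o] for o, p in lotteries[action].items())

# The agent's behaviour is explained by maximising expected utility.
best = max(lotteries, key=expected_utility)
print(best)  # 'safe': EU 4 beats risky's 0.5*10 + 0.5*(-5) = 2.5
```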

Comment author: Alicorn 03 October 2010 02:52:25PM 1 point [-]

I have concluded to my satisfaction that it would not be an efficient expenditure of our time to continue attempting to understand each other in this matter.

Comment author: magfrump 03 October 2010 06:32:49PM 1 point [-]

My guess would be that she meant that there is no physical event corresponding to a utile toward which humans would want to behave risk-neutrally, and/or that if you abstracted human values enough to construct such a utile, it would be unrecognizable and unFriendly.

Comment author: Alicorn 03 October 2010 06:40:04PM 0 points [-]

This is at least close, if I understand what you're saying.