Alicorn comments on The Irrationality Game - Less Wrong

Post author: Will_Newsome 03 October 2010 02:43AM

Comment author: Alicorn 03 October 2010 04:52:03AM 2 points

I don't think that human values are well described by a utility function if, by "utility function", we mean "a function which an optimizing agent will behave risk-neutrally towards". If we mean something more general by "utility function", then I am less confident that human values don't fit into one.

Comment author: Eugine_Nier 03 October 2010 05:01:36AM 1 point

Can you give an example of a non-risk-neutral utility function that can't be converted into a standard utility function by rescaling?

Bonus points if it doesn't make you into a money pump.
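
(For concreteness, a minimal sketch of the "rescaling" move this question gestures at, under the standard expected-utility assumptions; the example is illustrative and not from the thread. A preference that is risk-averse over money can be described as risk-neutral over utility by passing money through a concave function such as u(x) = sqrt(x).)

    # Illustrative sketch (standard expected-utility assumption, not from the thread):
    # an agent who is risk-averse over money looks risk-neutral over *utility*
    # once money is rescaled by a concave function, e.g. u(x) = sqrt(x).

    def u(x):
        return x ** 0.5  # concave rescaling of money into utility

    def expected_utility(lottery):
        """lottery: list of (probability, money) pairs."""
        return sum(p * u(x) for p, x in lottery)

    sure_thing = [(1.0, 50)]         # $50 for certain
    gamble = [(0.5, 0), (0.5, 100)]  # 50/50 chance of $0 or $100

    # Both options have the same expected money ($50), so a money-risk-neutral
    # agent is indifferent; maximizing expected *utility* prefers the sure thing.
    print(expected_utility(sure_thing))  # ~7.07
    print(expected_utility(gamble))      # 5.0

Whether any such rescaling exists for the preferences Alicorn has in mind is the question being asked here.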

Comment author: Alicorn 03 October 2010 02:15:31PM 0 points

No, because I don't have a good handle on what magic can and cannot be done with math; when I have tried to do this in the past, it has looked like this:

Me: But thus and so and thresholds and ambivalence without indifference and stuff.

Mathemagician: POOF! Look, this thing you don't understand satisfies your every need.

Comment author: timtyler 03 October 2010 12:11:09PM 0 points

It seems challenging to understand you. What does it mean to behave risk-neutrally towards a function? To behave risk-neutrally, an agent needs an environment with some potential risks in it.

Comment author: Alicorn 03 October 2010 02:18:02PM 0 points

...It seems challenging to understand you, too. Everything that optimizes for a function needs an environment to do it in. Indeed, any utility function extracted from a human's values would make sense only relative to an environment with risks in it, whether the agent trying to optimize that function is a human or not, risk-neutral or not. So what are you asking?

Comment author: timtyler 03 October 2010 02:37:09PM * 1 point

I was trying to get you to clarify what you meant.

As far as I can tell, your reply makes no attempt to clarify :-(

"Utility function" does not normally mean:

"a function which an optimizing agent will behave risk-neutrally towards".

It means the function which, when maximised, explains an agent's goal-directed actions.

Apart from the issue of "why-redefine", the proposed redefinition appears incomprehensible - at least to me.

Comment author: Alicorn 03 October 2010 02:52:25PM 1 point

I have concluded to my satisfaction that it would not be an efficient expenditure of our time to continue attempting to understand each other in this matter.