Alicorn comments on The Irrationality Game - Less Wrong
...It seems challenging to understand you, too. Everything that optimizes for a function needs an environment to do it in. Indeed, any utility function extracted from a human's values would make sense only relative to an environment with risks in it, whether the agent trying to optimize that function is a human or not, risk-neutral or not. So what are you asking?
I was trying to get you to clarify what you meant.
As far as I can tell, your reply makes no attempt to clarify :-(
"Utility function" does not normally mean:
"a function which an optimizing agent will behave risk-neutrally towards".
It means the function which, when maximised, explains an agent's goal-directed actions.
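To spell out that textbook picture (a sketch in standard notation, not something taken from this thread): an agent with utility function u and beliefs P picks the action a that maximises expected utility,

    E[u \mid a] = \sum_{o} P(o \mid a)\, u(o)

and its risk attitude is then just a property of the shape of u - a concave u over money gives risk aversion, a linear u gives risk neutrality - rather than a separate stance the agent takes "towards" the function.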
Apart from the question of why one would redefine the term in the first place, the proposed redefinition appears incomprehensible - at least to me.
I have concluded to my satisfaction that it would not be an efficient expenditure of our time to continue attempting to understand each other in this matter.