conchis comments on Utilons vs. Hedons - Less Wrong

Post author: Psychohistorian 10 August 2009 07:20PM




Comment author: DanArmak 13 August 2009 04:35:10PM 0 points

I guess what I'm suggesting, in part, is that the actual problem at hand isn't well-defined, unless you specify what you mean by utility in advance.

Utility means "the function f, whose expectation I am in fact maximizing". The discussion then indeed becomes whether f exists and whether it can be doubled.

My point is that you can't learn anything interesting from the thought experiment if Omega is offering to double f(x), so we shouldn't set it up that way.

That was the original point of the thread where the thought experiment was first discussed, though.

The interesting result is that an agent maximizing expected f may be vulnerable to a failure mode: taking risks that are excessive relative to the original goals it actually wants to achieve, to which maximizing f is only a proxy - whether a designed one (in an AI) or an evolved strategy (in humans).
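A minimal sketch of that failure mode (the 0.51 odds and all names here are illustrative, not taken from the thread): an agent maximizing the expectation of f accepts every one of Omega's "double f or lose it all" offers, since each offer has positive expected value in isolation, yet after a run of such bets it almost surely ends up with nothing.

```python
import random

def expected_f_after_accept(f, p):
    """Expected value of f after accepting Omega's offer:
    f is doubled with probability p, and lost otherwise."""
    return 2 * p * f

def survival_rate(rounds, p, trials=100_000, seed=0):
    """Fraction of simulated agents with any f left after always
    accepting `rounds` consecutive offers."""
    rng = random.Random(seed)
    survivors = sum(
        all(rng.random() < p for _ in range(rounds))
        for _ in range(trials)
    )
    return survivors / trials

f, p = 1.0, 0.51
# Each individual offer has positive expected f, so an
# expectation-maximizer accepts every time...
assert expected_f_after_accept(f, p) > f
# ...yet the chance of surviving 20 consecutive bets is p**20,
# roughly one in a million: nearly every agent ends with nothing.
print(survival_rate(20, p))
```

If the original goal is something like "still have resources left at the end," then maximizing expected f is a poor proxy here, which is the vulnerability the thread is pointing at.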

"Valutilons" are specifically defined to be a measure of what we value.

If "we" refers to humans, then "what we value" isn't well defined.

Comment author: conchis 13 August 2009 05:04:06PM * 0 points

Crap. Sorry about the delete. :(