Dagon comments on Post Your Utility Function - Less Wrong

28 Post author: taw 04 June 2009 05:05AM


Comments (273)


Comment author: Dagon 04 June 2009 03:50:39PM 6 points

I've put a bit of thought into this over the years, and don't have a believable theory yet. I have learned quite a bit from the exercise, though.

1) I have many utility functions. Different parts of my identity or different frames of thought engage different preference orders, and there is no consistent winner. I bite this bullet: personal identity is a lie - I am a collective of many distinct algorithms. I also accept that Arrow’s impossibility theorem applies to my own decisions.
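The appeal to Arrow's theorem can be made concrete with a toy Condorcet cycle. In this sketch the subagent names and options are purely hypothetical (they do not come from the comment): three internally consistent preference orders, aggregated by pairwise majority, produce a cycle with no consistent winner.

```python
from itertools import combinations

# Hypothetical "subagents", each ranking three options, best first.
subagents = {
    "planner":  ["save", "exercise", "feast"],
    "hedonist": ["feast", "save", "exercise"],
    "athlete":  ["exercise", "feast", "save"],
}

def majority_prefers(a, b):
    """True if a majority of subagents rank option a above option b."""
    votes = sum(1 for order in subagents.values()
                if order.index(a) < order.index(b))
    return votes > len(subagents) / 2

# Pairwise majority yields: save > exercise, exercise > feast, feast > save.
for a, b in combinations(["save", "exercise", "feast"], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"{winner} beats {loser}")
```

Each subagent's ranking is transitive, yet the aggregate is cyclic, which is exactly the kind of inconsistency Arrow's theorem says no aggregation rule can rule out in general.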

2) There are at least three dimensions (time, intensity, and risk) to my utility curves. None of these are anywhere near linear - the time element seems to be hyperbolic in terms of remembered happiness for past events, and while I try to keep it sane for future events, that's not my natural state, and I can't do it for all my pieces with equal effectiveness.
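The "hyperbolic" time element can be illustrated with a small sketch (the parameter values are illustrative, not from the comment). Unlike exponential discounting, hyperbolic discount curves cross, so a small-soon reward can overtake a large-late one as it gets close, producing a preference reversal:

```python
def exponential(value, delay, rate=0.1):
    """Exponential discounting: value * (1 - rate)^delay. Time-consistent."""
    return value * (1 - rate) ** delay

def hyperbolic(value, delay, k=0.5):
    """Hyperbolic discounting: value / (1 + k * delay). Curves can cross."""
    return value / (1 + k * delay)

# Small-soon reward (50) vs. large-late reward (100), viewed from far away...
print(hyperbolic(50, delay=10), hyperbolic(100, delay=15))  # large-late wins
# ...and viewed when the small reward is imminent.
print(hyperbolic(50, delay=0), hyperbolic(100, delay=5))    # small-soon wins

# Under exponential discounting the ordering never flips.
print(exponential(50, delay=0), exponential(100, delay=5))  # large-late wins
```

This reversal is one way to cash out "I try to keep it sane for future events, but that's not my natural state."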

3) They change over time (which is different from the time element within the preference space). Things I prefer now, I will not necessarily prefer later. The meta-utility of balancing this possibly-anticipated change against the timeframe of the expected reward is very high, and I can sometimes even manage it.

Comment deleted 07 June 2009 02:05:09PM
Comment author: Dagon 08 June 2009 11:06:28PM 0 points

It's not clear to me that my subpersonal algorithms have the ability to enforce reciprocity well enough, or to reflectively alter themselves with enough control to even make an attempt at unification. Certainly parts of me attempt to modify other parts in an attempt to do so, but that's really more conquest than reciprocity (a conquest "I" pursue, but still clearly conquest).

Unification is a nice theory, but is there any reason to think it's possible for subpersonal evaluation mechanisms any more than it is for interpersonal resource sharing?

Comment author: Vladimir_Nesov 07 June 2009 02:18:37PM 0 points

It is in the interest of each and every agent to unify (coordinate) more with other agents, so this glosses over the concept of the individual.

Comment author: Cyan 07 June 2009 02:42:51PM 1 point

...this glosses over the concept of the individual.

This misses the mark, I think. Here's a mutation:

"It is in the interest of each and every cell to unify (coordinate) more with other cells, so this glosses over the concept of the organism."

The coordination of cells is what allows us to speak of an organism as a whole. I won't go so far as to declare that coordination of agents justifies the concept of the individual, but I do think the idea expressed in the parent is more wrong than right.