endoself comments on A summary of Savage's foundations for probability and utility. - Less Wrong

34 points · Post author: Sniffnoy 22 May 2011 07:56PM


Comment author: endoself 08 August 2013 11:30:15PM 3 points

Neural signals represent things cardinally rather than ordinally, so those voting paradoxes probably won't apply.
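A toy illustration of the cardinal/ordinal distinction, with invented numbers: if three submodules each report cardinal utilities, summing them always induces a transitive ordering, but discarding the magnitudes and aggregating the resulting rankings by pairwise majority vote can produce a Condorcet cycle. A minimal sketch in Python (the submodules, options, and utility values are all hypothetical):

```python
# Hypothetical cardinal utilities reported by three submodules for
# three options; the numbers are invented for illustration.
utilities = [
    {"A": 5, "B": 2, "C": 1},
    {"A": 1, "B": 6, "C": 2},
    {"A": 2, "B": 1, "C": 4},
]

# Cardinal aggregation: sum the utilities. Totals are real numbers,
# so the induced ordering is automatically transitive.
totals = {x: sum(u[x] for u in utilities) for x in "ABC"}
print(totals)  # {'A': 8, 'B': 9, 'C': 7} -> B > A > C

# Ordinal aggregation: keep only each submodule's ranking and take
# pairwise majority votes over them.
def majority_prefers(x, y):
    return sum(u[x] > u[y] for u in utilities) > len(utilities) / 2

# The very same data now yields a Condorcet cycle: A > B, B > C, C > A.
print(majority_prefers("A", "B"),
      majority_prefers("B", "C"),
      majority_prefers("C", "A"))  # True True True
```

This is why the comment suggests the classic voting paradoxes need not apply to a cardinal substrate: the paradox only appears once the magnitudes are thrown away.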

Even conditional on humans not having even approximately transitive preferences, I find it likely that it would be useful to come up with some 'transitivization' of human preferences.
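One naive candidate for such a 'transitivization', sketched under invented assumptions (the `prefer` predicate is a stand-in for whatever pairwise preference data one actually has): rank options by Copeland score, i.e. by number of pairwise wins. Scores are just integers, so the induced ranking is always transitive even when the underlying relation cycles, though ties can remain:

```python
def copeland_rank(options, prefer):
    """Rank options by number of pairwise wins (Copeland score).

    `prefer(x, y)` should return True if x is pairwise-preferred to y.
    Sorting by integer scores always yields a transitive (weak)
    ordering, even if `prefer` itself is cyclic.
    """
    scores = {x: sum(prefer(x, y) for y in options if y != x)
              for x in options}
    return sorted(options, key=lambda x: scores[x], reverse=True)

# With an already-transitive relation it just recovers the order:
print(copeland_rank([1, 2, 3], lambda x, y: x > y))  # [3, 2, 1]

# With a cyclic relation (rock-paper-scissors) every option gets one
# win, so the transitivization collapses the cycle into a three-way tie.
beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
print(copeland_rank(list(beats), lambda x, y: beats[x] == y))
```

This is only one of many possible constructions, and nothing in the thread endorses it specifically; it just shows that extracting *some* transitive order from intransitive data is mechanically easy.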

Agreed that there's a good chance that game-theoretic reasoning about interacting submodules will be important for clarifying the structure of human preferences.

Comment author: [deleted] 10 August 2013 04:18:23PM 0 points

Neural signals represent things cardinally rather than ordinally

I'm not sure what you mean by this. In the general case, resolution of signals is highly nonlinear, i.e. vastly more complicated than any simple ordinal or weighted ranking method. Signals at synapses are nearly digital, though: to first order, a synapse is either firing or it isn't. Signals along individual nerves are also digital-ish--bursts of high-frequency constant-amplitude waves interspersed with silence.

My point, though, is that it's not reasonable to assume that transitivity holds axiomatically when it's simple to construct a toy model where it doesn't.

On a macro level, I can imagine a person with dieting problems preferring starving > a hot fudge sundae, celery > starving, and a hot fudge sundae > celery.
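That toy model is easy to write down explicitly. A sketch (the graph encoding is mine, not anything from the thread): represent "x is preferred to y" as a directed edge and search for a cycle; the dieter's three stated preferences form exactly one.

```python
# The dieter's stated preferences as a directed graph:
# an edge x -> y means "x is preferred to y".
prefers = {
    "starving": {"hot fudge sundae"},
    "celery": {"starving"},
    "hot fudge sundae": {"celery"},
}

def has_preference_cycle(graph):
    """Depth-first search for any cycle in the preference graph."""
    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node})
                   for nxt in graph.get(node, ()))
    return any(visit(start, set()) for start in graph)

print(has_preference_cycle(prefers))  # True -- the preferences cycle
```

A cycle in this graph is precisely a transitivity violation, which is the sense in which the toy model breaks the axiom.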

Comment author: Vaniver 10 August 2013 09:09:23PM 2 points

On a macro level, I can imagine a person with dieting problems preferring starving > a hot fudge sundae, celery > starving, and a hot fudge sundae > celery.

My experience is that this is generally because of a measurement problem, not a reflectively endorsed statement.

Comment author: [deleted] 11 August 2013 02:03:12AM 0 points

Well, it's clearly pathological in some sense, but the space of actions to be (pre)ordered is astronomically big and reflective endorsement is slow, so you can't usefully error-check the space that way. cf. Lovecraft's comment about "the inability of the human mind to correlate all its contents".

I don't think it will do to simply assume that an actually instantiated agent will have a transitive set of expressed preferences. Bit like assuming your code is bug-free.
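In that spirit, the closest analogue of error-checking here might be property-testing: sample triples from the (astronomically big) option space and look for transitivity violations in the agent's expressed pairwise choices, rather than assuming there are none. A sketch, where `prefer` is a hypothetical black-box choice function:

```python
import random

def find_transitivity_violations(prefer, options, trials=1000, seed=0):
    """Sample triples and collect cases where prefer() cycles.

    `prefer(x, y)` is a hypothetical black box returning True if the
    agent picks x over y.  The full space can't be checked, so we
    spot-check random triples, like a property-based test.
    """
    rng = random.Random(seed)
    violations = []
    for _ in range(trials):
        a, b, c = rng.sample(options, 3)
        if prefer(a, b) and prefer(b, c) and prefer(c, a):
            violations.append((a, b, c))
    return violations

# A deliberately cyclic choice rule is caught almost immediately:
beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
print(bool(find_transitivity_violations(
    lambda x, y: beats[x] == y, list(beats))))  # True

# A transitive rule passes:
print(bool(find_transitivity_violations(
    lambda x, y: x > y, list(range(10)))))  # False
```

Sampling can only ever find violations, not certify their absence, which is the point being made: transitivity is an empirical property to test for, not an axiom to assume.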