timtyler comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM

Comment author: timtyler 31 August 2010 08:42:38PM * 0 points

The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the highest utility. If you can use a utility function built from a partial-recursive language, then you can always do that - provided that your decision process is computable in the first place. That's a pretty general framework - about the only assumption that can be argued with is its quantising of spacetime.
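
In code, that bare framework might look something like this (a rough sketch of my own; the action names and utility values are made up):

```python
# Sketch of the basic framework: enumerate the agent's possible
# actions, score each with a real-valued utility function, and
# pick the one with the highest score.
def choose_action(possible_actions, utility):
    return max(possible_actions, key=utility)

# Toy example with made-up utilities attached to three actions.
utilities = {"wait": 0.1, "hedge": 0.5, "act": 0.9}
print(choose_action(utilities.keys(), utilities.get))  # -> "act"
```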

The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.
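
To illustrate the additive combining (my toy numbers, purely for illustration): the utility of a lottery is the probability-weighted sum of the utilities of its outcomes.

```python
# Utilities of uncertain outcomes combine by weighted addition:
# the expected utility of a lottery over outcomes.
def expected_utility(lottery):
    # lottery: list of (probability, utility) pairs summing to 1
    return sum(p * u for p, u in lottery)

# A 50/50 gamble between outcomes worth 10 and 2 is valued at 6.
print(expected_utility([(0.5, 10.0), (0.5, 2.0)]))  # -> 6.0
```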

Comment author: pjeby 31 August 2010 09:28:51PM 2 points

> The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the highest utility. If you can use a utility function built from a partial-recursive language, then you can always do that - provided that your decision process is computable in the first place.

And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).

Humans don't compute utility, then make a decision. Heck, we don't even "make decisions" unless there's some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us to pay conscious attention in the first place!

This is a major (if not the major) "impedance mismatch" between linear "rationality" and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it's really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.

Comment author: timtyler 31 August 2010 09:36:07PM 0 points

> > The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the highest utility. If you can use a utility function built from a partial-recursive language, then you can always do that - provided that your decision process is computable in the first place.

> And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).

There's nothing serial about utility maximisation!

...and it really doesn't matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
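
One way to see the generality claim - a standard, if degenerate, construction, with illustrative names of my own: take any computable policy and define a utility function that awards 1 to whatever the policy outputs and 0 to everything else.

```python
# Degenerate but general: wrap any computable policy as a
# utility maximiser whose utility function awards 1 to whatever
# the policy would do and 0 to everything else.
def as_utility_maximiser(policy):
    def utility(observation, action):
        return 1.0 if action == policy(observation) else 0.0
    def agent(observation, possible_actions):
        return max(possible_actions,
                   key=lambda a: utility(observation, a))
    return agent

# Any policy at all, however "irrational", fits the framework:
stubborn = as_utility_maximiser(lambda obs: "refuse")
print(stubborn("any observation", ["comply", "refuse"]))  # -> "refuse"
```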

Comment author: pjeby 31 August 2010 09:46:18PM 3 points

> There's nothing serial about utility maximisation!

I didn't say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren't set up to do it in parallel.

> ...and it really doesn't matter how the human works inside. That type of general framework can model the behaviour of any computable agent.

Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)

Comment author: timtyler 31 August 2010 09:55:18PM 0 points

> > There's nothing serial about utility maximisation!

> I didn't say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren't set up to do it in parallel.

I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That has nothing to do with what proponents of the utility framework are talking about.

Comment author: pjeby 31 August 2010 10:06:11PM 2 points

> You are apparently imagining an agent consciously calculating utilities.

No, I said that's what a human would have to do in order to actually calculate utilities, since we don't have utility-calculating hardware.

Comment author: timtyler 31 August 2010 10:09:28PM 0 points

Ah - OK, then.

Comment author: wnoise 31 August 2010 10:00:35PM 0 points

When humans don't consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
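
A toy illustration of the difficulty (my own example): choices that form a preference cycle cannot be fit by any real-valued utility assignment at all.

```python
from itertools import permutations

# Observed choices forming a cycle: A over B, B over C, C over A.
observed = [("A", "B"), ("B", "C"), ("C", "A")]

def fits_some_utility(prefs):
    items = sorted({x for pair in prefs for x in pair})
    # A utility function exists iff some strict ranking of the
    # items agrees with every observed choice.
    return any(all(order.index(x) < order.index(y) for x, y in prefs)
               for order in permutations(items))

print(fits_some_utility(observed))  # -> False: no utilities fit a cycle
```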

Comment author: timtyler 31 August 2010 10:06:34PM * 0 points

It depends on the utility-maximizing framework you are talking about - some are more general than others - and some are really very general.

Comment author: FAWS 31 August 2010 09:55:08PM 0 points

> Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)

A negative term for having made what later turns out to have been the wrong decision, perhaps proportional to the importance of the decision, plus choices that are otherwise close to each other in expected utility but carry a large potential difference in actually realized utility.
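
A minimal sketch of that suggestion (the regret weight and the payoff numbers are assumptions of mine, purely illustrative):

```python
REGRET_WEIGHT = 0.5  # assumed strength of the "wrong decision" penalty

def regret_adjusted_utility(outcomes):
    # outcomes: equiprobable possible realized utilities of one choice
    expected = sum(outcomes) / len(outcomes)
    stake = max(outcomes) - min(outcomes)  # potential realized difference
    return expected - REGRET_WEIGHT * stake

# Two options nearly tied in expected utility (5.0 vs 4.9), but with
# very different spreads in realized utility; the regret term is what
# separates them, and the near-tie is where the indecision shows up.
options = {"risky": [0.0, 10.0], "safe": [4.8, 5.0]}
for name, outs in options.items():
    print(name, regret_adjusted_utility(outs))
# risky 0.0, safe 4.8 -> this model favours "safe"
```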