Emile comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM


Comment author: Emile 31 August 2010 10:22:54PM 0 points

How do you represent that uncertainty in a number, or a sorted list of numbers representing the utility of various choices?

The number could be the standard deviation of the probability distribution for the utility (the mean being the expected utility, which you would use for sorting purposes).

So if you ("you" being the linear-utility-maximizing agent) have two paths of action whose expected utilities are close, but with a lot of uncertainty, it could be worth collecting more information to try to narrow down your probability distributions.

It seems that a utility-maximizing agent could be in a state that could be qualified as "indecisive".
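To make this concrete, here is a minimal Python sketch (all names, thresholds, and numbers are invented for illustration, not taken from the discussion) of an agent that tracks each action's utility as a sampled distribution and flags "indecision" when the gap in expected utility is small relative to the combined spread:

```python
import random


def summarize(samples):
    """Return (mean, standard deviation) of a list of sampled utilities."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5


def is_indecisive(samples_a, samples_b, threshold=1.0):
    """Call the agent 'indecisive' when the gap between expected
    utilities is small compared to the combined uncertainty."""
    mean_a, sd_a = summarize(samples_a)
    mean_b, sd_b = summarize(samples_b)
    gap = abs(mean_a - mean_b)
    spread = (sd_a ** 2 + sd_b ** 2) ** 0.5
    return gap < threshold * spread


random.seed(0)
# Two courses of action: close expected utilities, wide uncertainty.
path_a = [random.gauss(10.0, 5.0) for _ in range(1000)]
path_b = [random.gauss(10.5, 5.0) for _ in range(1000)]
print(is_indecisive(path_a, path_b))  # wide overlap, so this prints True
```

Sorting candidate actions by `mean` recovers the ordinary expected-utility ranking; the standard deviation is the extra number carrying the uncertainty.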

Comment author: pjeby 31 August 2010 10:34:16PM 0 points

It seems that a utility-maximizing agent could be in a state that could be qualified as "indecisive".

But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!

Human brains, OTOH, represent all this stuff in a single layer. We can consider actions, meta-actions, and meta-meta-actions in the same process without skipping a beat.

Comment author: Emile 01 September 2010 08:11:40AM 0 points

But only if you add new entities to the model, thereby complicating it. You now need a separate meta-cognitive system to manage this uncertainty. And what if those options are uncertain? Now you need another meta-cognitive system!

Possibly; I'm not arguing that a utility-maximizing agent would be simpler, only that an agent whose preferences are encoded in a utility function (even a "simple" one like "number of paperclips in existence") could be indecisive. Even if you have a simple utility function that gives you the utility of a world state, you might still have a lot of uncertainty about the current state of the world, and about how your actions will impact the future. It seems very reasonable to represent that uncertainty one way or another; in some cases the most rational action from a strictly utility-maximizing point of view is to defer the decision and acquire more information, even at a cost.
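As a rough illustration of that last point, here is a hedged Python sketch (the prior, the noise model, and the cost are all made-up numbers, not anything from the thread) that estimates by Monte Carlo how much one extra noisy observation is worth before choosing between two actions; deferring is rational exactly when that value exceeds the cost of gathering the information:

```python
import random

random.seed(1)

# Hypothetical setup: the true utility of action A is unknown, with a
# Normal(10, 5) prior. Action B has a known utility of 10.2.
PRIOR_MEAN, PRIOR_SD = 10.0, 5.0
UTILITY_B = 10.2
NOISE_SD = 2.0   # measurement noise of the extra observation
INFO_COST = 0.5  # price of deferring to gather information


def posterior_mean(prior_mean, prior_sd, obs, noise_sd):
    """Standard Bayesian update for a normal prior and normal noise."""
    w = prior_sd ** 2 / (prior_sd ** 2 + noise_sd ** 2)
    return prior_mean + w * (obs - prior_mean)


def expected_value_of_information(trials=100_000):
    """Monte Carlo estimate: expected utility of acting after one
    observation, minus the expected utility of acting immediately."""
    act_now = max(PRIOR_MEAN, UTILITY_B)  # pick the better mean today
    total = 0.0
    for _ in range(trials):
        true_a = random.gauss(PRIOR_MEAN, PRIOR_SD)
        obs = random.gauss(true_a, NOISE_SD)
        post = posterior_mean(PRIOR_MEAN, PRIOR_SD, obs, NOISE_SD)
        # After updating, choose whichever action now looks better.
        total += true_a if post > UTILITY_B else UTILITY_B
    return total / trials - act_now


evoi = expected_value_of_information()
print(f"information is worth ~{evoi:.2f} utils vs a cost of {INFO_COST}")
```

With a wide prior and a reasonably informative observation, the estimated value of information comfortably exceeds the assumed cost, so this toy agent should defer and measure first.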

Comment author: pjeby 01 September 2010 03:59:46PM 0 points

Possibly; I'm not arguing that a utility-maximizing agent would be simpler,

Good. ;-)

Only that an agent whose preferences are encoded in a utility function (even a "simple" one like "number of paperclips in existence") could be indecisive.

Sure. But at that point, the "simplicity" of using utility functions disappears in a puff of smoke, as you need to design a metacognitive architecture to go with it.

One of the really elegant things about the way brains actually work is that the metacognition goes "all the way down", and I'm rather fond of such architectures. (My predicate dispatcher, for instance, uses rules to understand rules, in the same sort of Escherian level-crossing bootstrap.)