sebmathguy comments on Model of unlosing agents - Less Wrong

Post author: Stuart_Armstrong 02 August 2014 07:59AM


Comment author: Stuart_Armstrong 02 August 2014 08:44:46PM 0 points

"only that they would be VNM-rational"

But if the agent can't be subject to Dutch books, what's the point of being VNM-rational? (in fact, in my construction, the agent need not be initially complete).
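To make "subject to Dutch books" concrete: an agent with cyclic preferences can be money-pumped by a bookie who charges a small fee for each trade the agent strictly prefers. A minimal sketch (all names here are mine, purely illustrative):

```python
# Money pump against cyclic preferences A > B, B > C, C > A.
# The bookie offers trades the agent strictly prefers, each for a small fee.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def trade(holding, offer, fee, wealth):
    """Accept the offer iff it is strictly preferred to the current holding."""
    if (offer, holding) in prefers:
        return offer, wealth - fee
    return holding, wealth

holding, wealth = "B", 100.0
for offer in ["A", "C", "B"] * 3:  # the bookie cycles through the offers
    holding, wealth = trade(holding, offer, fee=1.0, wealth=wealth)

print(holding, wealth)  # B 91.0 -- back where it started, 9.0 poorer
```

An unlosing agent, by construction, never accepts such a sequence, which is exactly why the further VNM axioms need a separate justification.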

But the main point is that VNM-rationality isn't clearly defined. Is it over all possible decisions, or just over decisions the agent actually faces? Given that rationality is often defined on Less Wrong in a very practical way (generalised "winning"), I see no reason to assume the former. That weakens the arguments for VNM-rationality, turning it into a philosophical ideal rather than a practical tool.

And so while it's clear that an AI would want to make itself into an unlosing agent, it's less clear that it would want to make itself into an expected utility maximiser. In fact, it's very clear that in some cases it wouldn't: if it knew that outcomes A and B were impossible, and it currently didn't have preferences between them, then there is no reason it would ever bother to develop preferences there (barring social signalling and similar).
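The point that an agent need never complete its preferences over decisions it won't face can be sketched as lazy preference formation (a toy model with names of my own choosing, not anything from the post):

```python
# Toy agent that forms a preference only when a choice actually arises.
# Preferences over never-faced pairs (e.g. impossible outcomes) stay unset.

class LazyAgent:
    def __init__(self):
        self.preferences = {}  # (x, y) -> preferred outcome, filled on demand

    def choose(self, x, y):
        key = (min(x, y), max(x, y))
        if key not in self.preferences:
            # Complete the preference only now, in any way consistent with
            # past choices (alphabetical order here, as a stand-in rule).
            self.preferences[key] = min(x, y)
        return self.preferences[key]

agent = LazyAgent()
agent.choose("C", "D")                  # a decision actually faced
print(("A", "B") in agent.preferences)  # False: never faced, never formed
```

The agent stays unlosing on the decisions it actually encounters, while the (A, B) entry is simply never filled in.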

Comment author: sebmathguy 03 August 2014 06:12:17AM 1 point

There's actually no need to settle for finite truncations of a decision agent. The unlosing decision function (on lotteries) can be defined in first-order logic, and your proof that finite approximations of a decision function exist is sufficient, via the compactness theorem, to produce a full model.
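The compactness step can be written out explicitly (notation here is mine, a sketch of the intended argument rather than anything from the thread):

```latex
% Let $T$ be a first-order theory in a language with a binary relation
% $\preceq$ and a constant for each lottery, whose axioms say that
% $\preceq$ is a total preorder satisfying the unlosing constraints.
%
% Finite satisfiability: any finite $T_0 \subseteq T$ mentions only
% finitely many lottery constants, and the finite unlosing approximation
% on those lotteries is a model of $T_0$.
%
% Compactness: every finite subset of $T$ has a model, hence $T$ has a
% model $M$, and $\preceq^M$ is a total unlosing preference over all
% lotteries -- no finite truncation required.
```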