Comment author: Neel_Krishnaswami 30 October 2007 05:28:00PM -1 points [-]

Robin, of course it's not obvious. It's only an obvious conclusion if the global utility function from the dust specks is an additive function of the individual utilities, and since we know that utility functions must be bounded to avoid Dutch books, we know that the global utility function cannot possibly be additive -- otherwise you could break the bound by choosing a large enough number of people (say, 3^^^3).
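The point about additivity breaking any bound can be made concrete with a small sketch (the function name and the specific numbers here are illustrative, not from the comment):

```python
# Sketch: if total utility is additive across individuals, then any fixed
# bound B is exceeded once there are enough people. Hence a utility
# function that must stay bounded cannot aggregate additively.
def total_disutility(per_person, n_people):
    """Additive aggregation: total = per_person * n_people."""
    return per_person * n_people

B = 1000.0        # hypothetical bound on the utility function
speck = 0.001     # tiny per-person disutility of one dust speck

# Choose enough people and the additive total blows past the bound.
n = 2_000_000
assert total_disutility(speck, n) > B   # 2000.0 > 1000.0
```

Nothing hinges on the particular constants: for any bound B and any nonzero per-person disutility, some finite population (far smaller than 3^^^3) breaks the bound.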


From a more metamathematical perspective, you can also question whether 3^^^3 is a number at all. It's straightforward to construct a perfectly consistent mathematics that rejects the axiom of infinity. Besides the philosophical justification for ultrafinitism (i.e., infinite sets don't really exist), these theories correspond to various notions of bounded computation (such as logspace or polytime). This is a natural restriction if we want moral judgements to be made quickly enough to be relevant to decision making -- and it rules out seriously computing with numbers like 3^^^3.
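To see how fast these numbers escape feasible computation, here is a minimal sketch of Knuth's up-arrow notation (3^^^3 is 3 ↑↑↑ 3 in that notation; the function name is mine):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is exponentiation,
    and each higher n iterates the level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3^^3 = 3^(3^3) = 3^27 is already a 13-digit number:
print(up_arrow(3, 2, 3))  # 7625597484987

# 3^^^3 = 3^^(3^^3) is a tower of 7,625,597,484,987 threes --
# not feasibly computable, which is the point about bounded computation.
```

Even the second level is only barely printable; the third level is beyond any physically realizable computation, which is what makes "compute with 3^^^3" a suspect demand on a moral agent.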

Comment author: Neel_Krishnaswami 20 October 2007 10:24:09PM 1 point [-]

Vann McGee has proven that if you have an agent with an unbounded utility function who thinks there are infinitely many possible states of the world (i.e., assigns each of them probability greater than 0), then you can construct a Dutch book against that agent. Next, observe that anyone who wants to use Solomonoff induction as a guide has committed to infinitely many possible states of the world. So if you also want to admit unbounded utility functions, you have to accept rational agents who will take a Dutch book.

And if you do that, then the subjectivist justification of probability theory collapses, taking Bayesianism with it, since that's based on non-Dutch-book-ability.

I think the cleanest option is to drop unbounded utility functions, since they buy you zero additional expressive power. Suppose you have an event space S, a preference relation P, and a utility function f from events to nonnegative real numbers such that if s1 P s2, then f(s1) < f(s2). Then you can easily turn this into a bounded utility function g(s) = f(s)/(f(s) + 1). Since x/(x + 1) is strictly increasing on the nonnegative reals, g respects the preference relation P in exactly the same way as f did, but is now bounded to the interval [0, 1).
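The construction above is a one-liner in code; here is a sketch with a made-up three-outcome utility function (the outcome names and values are illustrative):

```python
def bound(f):
    """Turn a nonnegative (possibly unbounded) utility f into
    g(s) = f(s)/(f(s)+1), which lands in [0, 1). The map x -> x/(x+1)
    is strictly increasing, so preference order is preserved."""
    return lambda s: f(s) / (f(s) + 1.0)

# Illustrative utility over three hypothetical outcomes.
f = {"a": 0.0, "b": 5.0, "c": 1e12}.get
g = bound(f)

# g stays in [0, 1) ...
assert all(0.0 <= g(s) < 1.0 for s in ("a", "b", "c"))
# ... and the ordering f(a) < f(b) < f(c) carries over to g.
assert g("a") < g("b") < g("c")
```

No behavioral prediction distinguishes an agent maximizing f from one maximizing g, which is the sense in which unboundedness buys nothing.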

Comment author: Neel_Krishnaswami 20 October 2007 07:01:08PM 3 points [-]

Utility functions have to be bounded basically because genuine martingales screw up decision theory -- see the St. Petersburg Paradox for an example.
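The St. Petersburg divergence is easy to exhibit numerically; this sketch (function name mine) truncates the game after n coin flips:

```python
# St. Petersburg game: with probability 2^-(k+1) you win 2^k dollars,
# for k = 0, 1, 2, ... Every term of the expectation contributes 1/2,
# so the truncated expected value grows without bound as n grows.
def expected_value(n_terms):
    """Expected payoff of the game truncated to the first n_terms outcomes."""
    return sum((2 ** k) * (2 ** -(k + 1)) for k in range(n_terms))

assert expected_value(10) == 5.0      # 10 terms of 1/2 each
assert expected_value(1000) == 500.0  # diverges linearly in n
```

An unbounded utility maximizer would pay any finite price to play, which is the decision-theoretic pathology the paradox points at.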

Economists, statisticians, and game theorists are typically happy to bound them, because utility functions don't really exist -- they aren't uniquely determined by someone's preferences. For example, you can multiply any utility function by a positive constant and get another utility function that produces exactly the same observable behavior.

Comment author: Neel_Krishnaswami 30 August 2007 10:51:57AM 2 points [-]

One of my mistakes was believing in Bayesian decision theory and in constructive logic at the same time. Traditional probability theory is inherently classical, because of the axiom that P(A + not-A) = 1. This is an embarrassingly simple inconsistency, of course, but it led me to some interesting ideas.

Upon reflection, it turns out that the important idea is not Bayesianism proper, which is merely one of an entire menagerie of possible rationalities, but rather de Finetti's operationalization of subjective belief in terms of avoiding Dutch book bets. There are a lot of ways of doing that, because the only physically realizable bets are on finitely refutable propositions.

So you can have perfectly rational agents who never come to agreement, no matter how much evidence they see, because no finite amount of evidence can settle questions like whether the law of the excluded middle holds for propositions over the natural numbers.
