Neel_Krishnaswami
Neel_Krishnaswami has not written any posts yet.

You've profoundly misunderstood McGee's argument, Eliezer. The reason you need the expectation of the sum of an infinite number of random variables to equal the sum of the expectations of those random variables is exactly to ensure that choosing an action based on the expected value actually yields an optimal course of action.
McGee observed that if you have an infinite event space and unbounded utilities, there is a collection of random utility functions U1, U2, ... such that E(U1 + U2 + ...) != E(U1) + E(U2) + .... McGee then observes that if you restrict utilities to a bounded range, then in fact E(U1 + U2 + ...) == E(U1) +... (read more)
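For concreteness, here is a small sketch of the phenomenon (a construction of my own for illustration, not McGee's exact one):

```python
# Infinite outcome space k = 1, 2, 3, ... with P(k) = 2^(-k), and unbounded
# utilities U_n(k) = 2^n if k == n, -2^(n+1) if k == n+1, and 0 otherwise.
# Every E(U_n) is 0, yet the pointwise sum U_1 + U_2 + ... is 2 on k = 1 and
# 0 elsewhere, so E(U_1 + U_2 + ...) = 1 while E(U_1) + E(U_2) + ... = 0.

def U(n, k):
    if k == n:
        return 2.0 ** n
    if k == n + 1:
        return -(2.0 ** (n + 1))
    return 0.0

def prob(k):
    return 2.0 ** (-k)

N = 40      # truncation of the index n
K = N + 1   # covers every outcome that any U_1 .. U_N touches

# Sum of expectations: compute E(U_n) for each n, then add them up.
sum_of_expectations = sum(
    sum(U(n, k) * prob(k) for k in range(1, K + 1))
    for n in range(1, N + 1)
)

# Expectation of the pointwise sum: for outcome k, only U_1 .. U_k are nonzero.
expectation_of_sum = sum(
    prob(k) * sum(U(n, k) for n in range(1, k + 1))
    for k in range(1, K + 1)
)

print(sum_of_expectations)  # 0.0
print(expectation_of_sum)   # 1.0
```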
I think claims like "exactly twice as bad" are ill-defined.
Suppose you have some preference relation R on possible states, so that X is preferred to Y if and only if R(X, Y) holds. Next, suppose we have a utility function U such that if R(X, Y) holds, then U(X) > U(Y). Now take any monotone transformation of this utility function; for example, we can exponentiate U and define U'(X) = 2^(U(X)). Note that U(X) > U(Y) if and only if U'(X) > U'(Y). Yet even if U is additive along some dimension of X, U' won't be.
But there's no principled reason to believe that U... (read more)
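To make the point concrete, here is a minimal sketch (with illustrative numbers of my own choosing):

```python
# A monotone transform U'(X) = 2^U(X) preserves the preference ordering but
# not ratios, so "twice as bad" is not an invariant of the preferences themselves.

def u(x):
    return x                # some additive utility, say proportional to x

def u_prime(x):
    return 2.0 ** u(x)      # strictly monotone transform of u

a, b = 3.0, 6.0
# Both functions induce the same ordering:
assert (u(a) < u(b)) == (u_prime(a) < u_prime(b))
# But b is "twice as good/bad" under u and eight times under u':
print(u(b) / u(a))              # 2.0
print(u_prime(b) / u_prime(a))  # 8.0
```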
Bob: Sure, if you specify a disutility function that mandates lots-o'-specks to be worse than torture, decision theory will prefer torture. But that is literally begging the question, since you can write down a utility function to come to any conclusion you like. On what basis are you choosing that functional form? That's where the actual moral reasoning goes. For instance, here's a disutility function, without any of your dreaded asymptotes, that strictly prefers specks to torture:
U(T,S) = ST + T
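A quick sanity check, using a stand-in number of specks since 3^^^3 is not writable:

```python
def disutility(t, s):
    return s * t + t

print(disutility(1, 0))        # 1: one person tortured, no specks
print(disutility(0, 10**100))  # 0: any number of specks, no torture
# Lower disutility is preferred, so pure specks beat any amount of torture.
```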
Freaking out about asymptotes reflects a basic misunderstanding of decision theory, though. If you've got a rational preference relation, then you can always give a bounded utility function. (For example, the... (read more)
If you don't want to assume the existence of certain propositions, you're asking for a probability theory corresponding to a co-intuitionistic variant of minimal logic. (Co-intuitionistic logic is the logic of affirmatively false propositions, and is sometimes called Popperian logic.) This is a logic with falsehood, disjunction, and conjunction (but no truth constant), together with an operation called co-implication, which I will write a <-- b.
Take your event space L to be a distributive lattice with ordering <, which does not necessarily have a top element but does have dual relative pseudo-complements. (a <-- b) holds if, for all x in the lattice L,... (read more)
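For a concrete finite instance (my own illustration; which argument gets subtracted in "a <-- b" is an assumption here), take the powerset lattice of a small set, where the dual relative pseudo-complement is just set difference:

```python
# In the powerset lattice of a finite set -- a distributive lattice under union
# and intersection -- the dual relative pseudo-complement of b in a is plain set
# difference: the least x such that a <= b v x.
from itertools import chain, combinations

ground = {0, 1, 2}

def powerset(s):
    s = list(s)
    subsets = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    return [frozenset(c) for c in subsets]

L = powerset(ground)

def coimp(a, b):
    return a - b   # candidate for (a <-- b): set difference

# Defining property: (a <-- b) <= x  if and only if  a <= b v x, for all x.
for a in L:
    for b in L:
        for x in L:
            assert (coimp(a, b) <= x) == (a <= (b | x))
print("set difference satisfies the dual relative pseudo-complement property")
```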
With the graphical-network insight in hand, you can give a mathematical explanation of exactly why first-order logic has the wrong properties for the job, and express the correct solution in a compact way that captures all the common-sense details in one elegant swoop.
Consider the following example, from Menzies's "Causal Models, Token Causation, and Processes"[*]:
An assassin puts poison in the king's coffee. The bodyguard responds by pouring an antidote in the king's coffee. If the bodyguard had not put the antidote in the coffee, the king would have died. On the other hand, the antidote is fatal when taken by itself and if the poison had not been poured in first, it would... (read more)
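A minimal structural-model sketch of that story (variable names and equations are my own rendering, not Menzies's formalization) might look like this:

```python
# Each variable is 0 or 1; counterfactuals are computed by intervening ("do").
def model(do=None):
    do = do or {}
    poison   = do.get("poison", 1)         # the assassin poisons the coffee
    antidote = do.get("antidote", poison)  # the bodyguard responds to the poison
    # The king survives iff the poison and the antidote are both present
    # or both absent; either one alone kills him.
    survives = int(poison == antidote)
    return {"poison": poison, "antidote": antidote, "survives": survives}

print(model())                                 # actual world: the king survives
print(model(do={"antidote": 0}))               # no antidote: the poison kills him
print(model(do={"poison": 0, "antidote": 1}))  # no poison, antidote still poured: it kills him
```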
g: that's exactly what I'm saying. In fact, you can show something stronger than that.
Suppose we have an agent with rational preferences who is minimally ethical, in the sense that they always prefer fewer people with dust specks in their eyes and fewer people being tortured. This seems to be something everyone agrees on.
Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the number of people being tortured, and monotone in the number of people with dust specks in their eyes. The combination of... (read more)
Tom, your claim is false. Consider the disutility function
D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))
Now, with this function, disutility increases monotonically with the number of people with specks in their eyes, satisfying your "slight aggregation" requirement. However, it's also easy to see that going from 0 to 1 person tortured is worse than going from 0 to any number of people getting dust specks in their eyes, including 3^^^3.
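As a quick numerical check (using a million specks as a stand-in for 3^^^3; what matters is that the specks term approaches but never reaches 1):

```python
def disutility(torture, specks):
    return 10 * (torture / (torture + 1)) + specks / (specks + 1)

print(disutility(1, 0))        # 5.0       -- going from 0 to 1 person tortured
print(disutility(0, 10**6))    # ~0.999999 -- going from 0 to a million specks
print(disutility(0, 10**6) < disutility(1, 0))  # True
```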
The basic objection to this kind of functional form is that it's not additive. However, it's wrong to assume an additive form: that assumption mandates unbounded utilities, which are a bad idea because they are not computationally realistic and admit Dutch books. With bounded utility functions, you have to confront the aggregation problem head-on, and depending on how you choose to do it, you can get different answers. Decision theory does not affirmatively tell you how to judge this problem. If you think it does, then you're wrong.
Eliezer, both you and Robin are assuming the additivity of utility. This is not justifiable, because it is false for any computationally feasible rational agent.
If you have a bounded amount of computation with which to make a decision, then the number of distinctions a utility function can make is likewise bounded. Concretely, if you have N bits of memory, a utility function using that much memory can distinguish at most 2^N states. Obviously, this is not compatible with additivity of disutility, because by picking enough people you can identify more distinct states than the 2^N distinctions your computational process can make.
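As a toy illustration of the counting argument (the parameters are mine, chosen for illustration):

```python
# An N-bit utility takes at most 2**N distinct values, so any encoding of
# "disutility of k affected people" into N bits must conflate some distinct
# head-counts once k exceeds 2**N -- it cannot be injective, let alone additive.
N_BITS = 16
distinct_values = 2 ** N_BITS        # 65,536 distinguishable utility levels
population = distinct_values + 1     # one more state than that

def encode(k):
    return k & (distinct_values - 1)  # one particular N-bit encoding (truncation)

codes = {encode(k) for k in range(population)}
print(len(codes), "<", population)    # 65536 < 65537: two head-counts collide
```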
Now, the reason for adopting additivity comes from... (read more)
Eliezer, in your response to g, are you suggesting that we should strive to ensure that our probability distribution over possible beliefs sums to 1? If so, I disagree: I don't think this can be considered a plausible requirement for rationality. When you have no information about the distribution, you ought to assign probabilities uniformly, according to Laplace's principle of indifference. But the principle of indifference only works for distributions over finite sets; there is no uniform probability distribution on a countably infinite set. So for infinite sets you have to make an arbitrary choice of distribution, which violates indifference.
Eliezer: Never mind having the expectation of a sum of an infinite number of variables not equalling the sum of the expectations; here we have the expectation of the sum of two bets not equalling the sum of the expectations.
If you have an alternating series which is conditionally but not absolutely convergent, the Riemann series theorem says that reordering its terms can change the result, or force divergence. So you can't pull a series of bets apart into two series, and expect their sums to equal the sum of the original. But the fact that you assumed you could is a perfect illustration of the point; if you had a collection of... (read more)
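As a standard numerical illustration of that rearrangement phenomenon (my own sketch, not from the original exchange):

```python
# The alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... converges to ln 2,
# but rearranging it as "one positive term, then two negative terms" converges
# to ln(2)/2 instead -- same terms, different sum.
import math

N = 200_000

original = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

rearranged = 0.0
for m in range(1, N + 1):
    # group m: 1/(2m-1) - 1/(4m-2) - 1/(4m)
    rearranged += 1 / (2 * m - 1) - 1 / (4 * m - 2) - 1 / (4 * m)

print(original, math.log(2))          # ~0.693147
print(rearranged, math.log(2) / 2)    # ~0.346574
```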