
DanielLC comments on Efficient Charity: Do Unto Others... - Less Wrong

130 Post author: Yvain 24 December 2010 09:26PM




Comment author: DanielLC 26 December 2010 01:32:56AM 1 point [-]

In order for this to be true forever, the world would have to never end, which would mean that there's infinite utility no matter what you do.

If this eventually stops being true, there is no paradox. Whether or not it's worthwhile to invest for a few centuries is an open question, but if it turns out it is, that's no reason to abandon the idea of comparing charities.

Comment author: nshepperd 26 December 2010 02:40:22AM 1 point [-]

In order for this to be true forever, the world would have to never end, which would mean that there's infinite utility no matter what you do.

That doesn't sound right... even if I'm expecting an infinite future I think I'd still want to live a good existence rather than a mediocre one (but with >0 utility). So it does matter what I do.

Say I have two options:

  • A, which offers on average 1 utilon per second (are utilons measures of utility over a time period, or instantaneous utility?)
  • B, which offers on average 2 utilons per second

The cumulative utilities at time t are U(A) = t and U(B) = 2t. Both tend to infinity as t approaches infinity, but U(B) is always larger than U(A), and therefore B is "better".
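A minimal numerical sketch of this comparison, in Python (the rates of 1 and 2 utilons per second are the ones assumed above; everything else is illustrative): both cumulative totals diverge, but the cumulative difference always favours B and keeps growing, which is the sense in which B is "better".

    # Sketch: compare two constant-rate utility streams by their cumulative totals.
    def cumulative_utility(rate, t):
        """Cumulative utility of a constant-rate stream after t seconds."""
        return rate * t

    for t in [10, 1_000, 1_000_000]:
        u_a = cumulative_utility(1, t)   # option A: 1 utilon per second
        u_b = cumulative_utility(2, t)   # option B: 2 utilons per second
        print(t, u_a, u_b, u_b - u_a)    # the difference grows without bound, favouring B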

Comment author: Will_Sawin 26 December 2010 03:25:08AM 0 points [-]

So we need to formalize this, obviously.

Method 1: Exponential discounting.

Problem: You don't care very much about future people.

Method 2: Taking the average over all time (specifically the limit as t goes to infinity of the integral of utility from 0 to t, divided by t)

Conclusion which may be problematic: If humanity does not live forever, nothing we do matters.

Caveat: Depending on our anthropics, we can argue that the universe is infinite in time or space with probability 1, in which case there are an infinite number of copies of humanity, and so we can always calculate the average. This seems like the right approach to me. (In general, using the same math for your ethics and your anthropics has nice consequences, like avoiding most versions of Pascal's Mugging.)
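For concreteness, a small Python sketch of the two methods above (the discount factor and the utility stream are invented numbers, not anything argued for in the thread):

    # Method 1: exponentially discounted sum of a (finite) utility stream.
    def discounted_utility(stream, discount=0.99):
        return sum(u * discount**t for t, u in enumerate(stream))

    # Method 2: average utility per unit time, a finite-horizon stand-in for
    # the limit of (integral of utility from 0 to t) / t.
    def time_average_utility(stream):
        return sum(stream) / len(stream)

    stream = [1.0] * 1000                 # a constant 1 utilon per period
    print(discounted_utility(stream))     # roughly 100, and it converges as the horizon grows
    print(time_average_utility(stream))   # 1.0

The second method is where the problematic conclusion comes from: anything confined to a finite stretch of time contributes nothing to the infinite-time average.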

Comment author: wnoise 27 December 2010 07:43:52AM 2 points [-]

Problem: You don't care very much about future people.

Why is this a problem? This seems to match reality for most people.

Comment author: Will_Sawin 01 January 2011 12:34:36AM 1 point [-]

So do selfishness and irrationality. We would like to avoid those. It is also intuitive that we would like to care more about future people.

Comment author: wnoise 06 January 2011 09:33:16AM *  1 point [-]

Excessive selfishness, sure. Some degree of selfishness is currently required as self-defense; otherwise all your own needs are subsumed by supplying others' wants. Even a completely symmetric society with everybody acting more for others' good than their own is worse than one where everybody takes care of their own needs first -- because each individual generally knows their own needs and wants better than anyone else does.

I don't know the needs and wants of the future. I can't know them particularly well, and my uncertainty gets worse the farther away in time we go. Unless we're talking about species-extinction-level events, I damn well should punt to those better informed, those closer to the problems.

It is also intuitive that we would like to care more about future people.

Not to me. Heck. I'm not entirely sure what it means to care about a person who doesn't exist yet, and where my choices will influence which of many possible versions will exist.

Comment author: Will_Sawin 06 January 2011 03:31:30PM *  0 points [-]

each individual generally knows their own needs and wants better than anyone else does.

I don't know the needs and wants of the future.

Expected-utility calculation already takes that into account. Uncertainty about whether an action will be beneficial translates into a lower expected utility. Discounting, on top of that, is double counting.

Knowledge is a fact about probabilities, not utilities.
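A toy version of the double-counting point (the probability, utility and discount rate below are invented purely for illustration):

    p_benefit = 0.10              # chance the action actually helps people 200 years from now
    utility_if_helps = 1000.0

    expected_utility = p_benefit * utility_if_helps       # 100.0; the uncertainty is already priced in
    discount = 0.99 ** 200                                # exponential discounting applied on top
    print(expected_utility, discount * expected_utility)  # ~100 vs ~13: the far future is penalised twice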

Not to me.

Let's hope our different intuitions are resolvable.

I'm not entirely sure what it means to care about a person who doesn't exist yet, and where my choices will influence which of many possible versions will exist.

Surely it's not much more difficult than caring about a person who your choices will dramatically change?

Comment author: nshepperd 26 December 2010 05:44:21AM 2 points [-]

How about this:

If you have a set E = {X, Y, Z...} of possible actions, A (in E) is the utility-maximising action iff for all other B in E, the limit

lim (t → ∞) [U_t(A) − U_t(B)]

is greater than zero, or approaches zero from the positive side. Caveat: I have no evidence this doesn't implode in some way, perhaps by the limit being undefined. This is just a stupid idea to consider. A possibly equivalent formulation is

lim (t → ∞) [U_t(A) − U_t(B)] ≥ 0 for all other B in E.

The inequality being greater or equal allows for two or more actions being equivalent, which is unlikely but possible.
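A rough finite-horizon Python sketch of that rule (the utility streams and the end-of-horizon check are invented stand-ins for the actual limit):

    def cumulative(stream):
        """Running totals U_t for a finite utility stream."""
        totals, running = [], 0.0
        for u in stream:
            running += u
            totals.append(running)
        return totals

    def prefers(stream_a, stream_b):
        """Crude stand-in for: lim as t -> infinity of [U_t(A) - U_t(B)] is positive,
        or approaches zero from above. Here we just check the sign of the cumulative
        difference at the end of the finite horizon."""
        diff = [a - b for a, b in zip(cumulative(stream_a), cumulative(stream_b))]
        return diff[-1] > 0

    a = [2.0] * 1000   # 2 utilons per step
    b = [1.0] * 1000   # 1 utilon per step
    print(prefers(a, b))   # True: a's cumulative lead keeps growing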

Comment author: DSimon 27 December 2010 06:59:42PM 4 points [-]

Side comment: that math equation image generator you used is freakin' excellent. The image itself is generated from the URL, so you don't have to worry about hosting. Editor is here.

Comment author: nshepperd 28 December 2010 03:06:43AM *  4 points [-]

I prefer this one, which automatically generates the link syntax to paste into a LW comment. There's a short discussion of all this on the wiki.

Comment author: Will_Sawin 26 December 2010 12:28:12PM 1 point [-]

Functions whose limits are +infinity and -infinity can be distinguished, so you're good there.

I think it's the same as my second method: as long as the probability of humanity lasting forever is nonzero given both actions, and the difference in expected utilities far in the future is nonzero, nothing that happens in the first million billion years matters.

Comment author: nshepperd 27 December 2010 09:30:13AM *  0 points [-]

The difference in expected utility would have to decrease slowly enough (slower than exponential?) not to converge, not just be nonzero. [Which would be why exponential discounting "works"...]

However, I would be surprised to see many decisions with that kind of lasting impact. The probability of an action having some effect at time t in the future "decays exponentially" with t (assuming p(Effect_t | Effect_{t-1}, Action) is approximately constant), so the difference in expected utility will in general fall off exponentially and therefore converge anyway. Exceptions would be choices where the utilities of the likely effects increase in magnitude (exponentially?) as t increases.
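A quick numerical check of that convergence claim (the decay rates are chosen arbitrarily for illustration): a geometrically decaying difference sums to something finite, while a difference decaying only like 1/t does not.

    decay = 0.9
    geometric_total = sum(decay**t for t in range(10_000))   # approaches 1 / (1 - decay) = 10
    harmonic_total = sum(1.0 / t for t in range(1, 10_000))  # keeps growing, roughly like ln(t)
    print(geometric_total, harmonic_total)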

Anyway, I don't see infinities as an inherent problem under this scheme. In particular, if we don't live forever, everything we do does indeed matter. If we do live forever, what we do does matter, except how it affects us might not, if we anticipate causing "permanent" gain by doing something.

Comment author: Sniffnoy 26 December 2010 06:04:15AM 0 points [-]

Can't think about the underlying idea right now due to headache, but instead of talking about any sort of limit, just say that it's eventually positive, if that's what you mean.

Comment author: gwern 26 December 2010 03:58:38AM 2 points [-]

Bostrom would disagree with your conclusion that infinities are unproblematic for utilitarian ethics: http://www.nickbostrom.com/ethics/infinite.pdf

Comment author: DanielLC 26 December 2010 07:48:22AM 0 points [-]

You can switch between A and B just by rearranging when events happen. For example, imagine that there are two planets moving in opposite directions. One is a Utopia, the other is a Dystopia. From the reference frame of the Utopia, time is slowed down in the Dystopia, so the world is worth living in. From the reference frame of the Dystopia, it's reversed.
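The rearrangement worry is essentially the same phenomenon as reordering a conditionally convergent series. A standard illustration in Python (the alternating harmonic series and this particular reordering are the textbook example, not anything specific to the planets):

    from math import log

    N = 100_000

    # Natural order: 1 - 1/2 + 1/3 - 1/4 + ... converges to ln 2.
    natural = sum((-1)**(k + 1) / k for k in range(1, N + 1))

    # Same terms, reordered: two positive terms for every negative one.
    rearranged, odd, even = 0.0, 1, 2
    for _ in range(N):
        rearranged += 1.0 / odd
        odd += 2
        rearranged += 1.0 / odd
        odd += 2
        rearranged -= 1.0 / even
        even += 2

    print(natural, log(2))            # about 0.693
    print(rearranged, 1.5 * log(2))   # about 1.040: same terms, different total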

This gets even worse when you start dealing with expected utility. As messed up as it is that the order of events matters, at least there is an order. With expected utility, there is no inherent order.

The best I can do is set the prior for infinite utility to zero, and make my priors fall off fast enough to ensure that expected utility always converges. I've managed to prove that my posteriors will also always have a converging expected utility.
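A small sketch of the "priors fall off fast enough" idea (the hypothesis space and the exponential prior are invented; this only shows the convergence, not the proof mentioned above):

    # Hypotheses indexed by n, where hypothesis n is worth n utilons.
    utilities = list(range(1, 10_000))
    prior = [2.0**-n for n in utilities]      # falls off faster than the utility grows

    total = sum(prior)
    prior = [p / total for p in prior]        # normalise

    expected_utility = sum(p * u for p, u in zip(prior, utilities))
    print(expected_utility)                   # finite: about 2 with this prior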