wnoise comments on Efficient Charity: Do Unto Others... - Less Wrong
So we need to formalize this, obviously.
Method 1: Exponential discounting.
Problem: You don't care very much about future people.
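A minimal sketch of what exponential discounting does to concern for future people (the function name and the 3% rate are illustrative, not from the comment):

```python
def discounted_utility(u, t, r=0.03):
    """Weight a utility u received t years from now by (1 - r)**t,
    an exponential discount at rate r per year (r=0.03 is an
    assumed, illustrative rate)."""
    return u * (1 - r) ** t

# The weight collapses quickly with time -- this is the
# "you don't care very much about future people" problem:
print(discounted_utility(1.0, 0))    # 1.0
print(discounted_utility(1.0, 200))  # ~0.0023
```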
Method 2: Taking the average over all time (specifically, the limit as t goes to infinity of the integral of utility from 0 to t, divided by t).
Conclusion which may be problematic: If humanity does not live forever, nothing we do matters.
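A quick numerical sketch of why the time-average leads to this conclusion: if humanity's total utility is confined to a finite interval, the average over [0, T] shrinks toward 0 as T grows (the crude Riemann sum and the 100-year toy lifespan are my own illustrative assumptions):

```python
def time_average(u, T, steps=100_000):
    """Approximate (1/T) * integral of u from 0 to T by a Riemann sum."""
    dt = T / steps
    return sum(u(i * dt) for i in range(steps)) * dt / T

# Toy utility stream: humanity exists for 100 "years", then nothing.
u = lambda t: 1.0 if t < 100 else 0.0

print(time_average(u, 1_000))      # ~0.1
print(time_average(u, 1_000_000))  # ~0.0001 -- tends to 0 as T grows
```

Any finite contribution washes out in the limit, which is exactly the "nothing we do matters" worry.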
Caveat: Depending on our anthropics, we can argue that the universe is infinite in time or space with probability 1, in which case there are an infinite number of copies of humanity, and so we can always calculate the average. This seems like the right approach to me. (In general, using the same math for your ethics and your anthropics has nice consequences, like avoiding most versions of Pascal's Mugging.)
Why is this a problem? This seems to match reality for most people.
So do selfishness and irrationality. We would like to avoid those. It is also intuitive that we would want to care more about future people.
Excessive selfishness, sure. But some degree of selfishness is currently required as self-defense; otherwise all your own needs are subsumed by supplying others' wants. Even a completely symmetric society in which everybody acts more for others' good than their own is worse than one where everybody takes care of their own needs first, because each individual generally knows their own needs and wants better than anyone else does.
I don't know the needs and wants of the future, and I can't know them particularly well; my uncertainty grows the farther away in time they are. Unless we're talking about species-extinction-level events, I damn well should punt to those better informed, those closer to the problems.
Not to me. Heck, I'm not entirely sure what it means to care about a person who doesn't exist yet, when my choices will influence which of many possible versions will exist.
Expected-utility calculation already takes that into account. Uncertainty about whether an action will be beneficial translates into a lower expected utility. Discounting on top of that is double counting.
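A toy illustration of the double-counting point (all numbers invented): uncertainty about whether a far-future intervention helps is already priced into its expected utility, so applying a time discount as well penalizes the same uncertainty twice.

```python
# Assumed, illustrative figures -- not from the comment.
p_success = 0.5        # probability the far-future intervention helps at all
u_if_success = 10.0    # utility if it works

expected = p_success * u_if_success  # 5.0 -- the uncertainty is priced in here
# Discounting on top of that (3% over 50 years, both rates assumed)
# shrinks the value again for the same underlying reason:
double_counted = expected * (1 - 0.03) ** 50  # ~1.09

print(expected, double_counted)
```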
Knowledge is a fact about probabilities, not utilities.
Let's hope our different intuitions are resolvable.
Surely it's not much more difficult than caring about a person whom your choices will dramatically change?