CCC comments on The $125,000 Summer Singularity Challenge - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This doesn't hold. Those extra five should be added onto the trillion you already have, not considered separately.
Value only needs to increase monotonically. Linearity is not required; it might even be asymptotic.
That depends on how you do the accounting here. If we check the utility provided by saving five people, it's high. If we check the utility provided by increasing a population of a trillion, it's unfathomably low.
This is, in fact, the point.
Intuitively, we should be able to meaningfully analyse the utility of a part without talking about - or even knowing - the utility of the whole. Discovering vast interstellar civilizations should not invalidate our calculations made on how to save the most lives.
Let us assume that we have A known people in existence. Dr. Evil presents us with B previously unknown people, and threatens to kill them unless we kill C out of our A known people (where C<A). The question is whether it is ethically better to let B people die, or to let C people die. (It is clearly better to save all the people, if possible.)
We have a utility function, f(x), which describes the utility produced by x people. Before Dr. Evil turns up, we have A known people, and a total utility of f(A+B) (the B people already exist, even though we do not yet know of them). After Dr. Evil arrives, we find that there are more people than we thought; the total utility is still f(A+B) (or f(A+B+1), if Dr. Evil was previously unknown; from here onwards I will assume that Dr. Evil was previously known, and is thus included in A). Dr. Evil offers us a choice between a total utility of f(A+B-C) and a total utility of f(A).
The immediate answer is that if B>C, it is better for B people to live; while if C>B, then it is better for C people to live. For this to be true for all A, B and C, it is necessary for f(x) to be a monotonically increasing function; that is, a function where f(y)>f(x) if and only if y>x.
Now, you are raising the possibility that there exist a number, D, of people in vast interstellar civilisations who are completely unknown to us. Then Dr. Evil's choice becomes a choice between a total utility of f(A+B-C+D) and a total utility of f(A+D). Again, as long as f(x) is monotonically increasing, the question of finding the greatest utility is simply a matter of seeing whether B>C or not.
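This sign argument is easy to check numerically. A minimal sketch, using a few illustrative strictly increasing utility functions (my choices, not from the comment) and arbitrary example values of A, B, C and D:

```python
import math

# Illustrative strictly increasing utility functions; any such f
# gives the same *sign* for f(A+B-C+D) - f(A+D).
utility_functions = [
    lambda x: x,             # linear
    lambda x: math.sqrt(x),  # diminishing returns
    lambda x: math.log1p(x), # strongly diminishing returns
]

A, B, C = 1000, 7, 3  # arbitrary values with C < A; here B > C

# Vary the unknown population D over many orders of magnitude:
# the difference stays positive regardless, because B > C.
for D in (0, 10**6, 10**12):
    for f in utility_functions:
        assert f(A + B - C + D) - f(A + D) > 0
```

The magnitude of the difference changes enormously with D, but the sign, and hence the answer to "which choice is better?", does not.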
I don't see any cause for invalidating any of my calculations in the presence of vast interstellar civilisations.
It takes effort to pull the lever and divert the trolley. This minuscule amount has to be outweighed by the utility of additional lives. It gets even worse in real situations, where it may cost a great deal to help people.
Ah; now we begin to compare different things: the effort of pulling the lever against the utility of the additional lives. At this point, yes, the actual magnitude, and not just the sign, of the difference between f(A+B-C+D) and f(A+D) becomes important; yet D is unknown and unknowable. This means that the magnitude of the difference can only be known with certainty if f(x) is linear; for a nonlinear f(x), it cannot. I can easily pick out a nonlinear, monotonically increasing function such that the difference between f(A+B-C+D) and f(A+D) can be made arbitrarily small for any positive integers A, B and C (where A+B>C), simply by selecting a suitable positive integer D. A simple example would be f(x)=sqrt(x).
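A quick sketch of this shrinking difference, using the sqrt example and arbitrary values for A, B and C:

```python
import math

def f(x):
    return math.sqrt(x)  # the nonlinear, monotonically increasing example

A, B, C = 100, 10, 5  # arbitrary values with A + B > C

# The difference f(A+B-C+D) - f(A+D) is always positive,
# but its magnitude shrinks toward zero as D grows.
diffs = [f(A + B - C + D) - f(A + D) for D in (0, 100, 10**4, 10**8)]

assert all(d > 0 for d in diffs)                # sign never changes
assert diffs == sorted(diffs, reverse=True)     # magnitude strictly shrinks
```

By choosing D large enough, the difference can be driven below any positive threshold, which is exactly what makes the cost comparison below problematic.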
Now, the hypothetical moral agent is in a quandary. Using effort to pick a solution costs utilons. The cost is a simple, straightforward constant; he knows how much that costs. But, with f(x)=sqrt(x), without knowing D, he cannot tell whether the utility of saving the people is greater or less than the utilon cost of picking a solution. (For the purpose of simplicity, I will assume that no-one will ever know that he was in a position to make the choice - that is, his reputation is safe, no matter what he selects.) Therefore, he has to make an estimate. He has to guess a value of D. There are multiple strategies that can be followed here:
1. Try to estimate the most probable value of D. This would require something along the lines of the Drake equation - picking the most likely numbers for the different elements, picking the most likely size of an extraterrestrial civilisation, and doing some multiplication.
2. Take the most pessimistic possible value of D; D=0. That is, plan as though I am in the worst possible universe; if I am correct, and D=0, then I take the correct action, while if I am incorrect and D later proves greater than zero, then that is a pleasant surprise. This guards against getting an extremely unpleasant surprise if it later turns out that D is substantially lower than the most likely estimate; utilons in the future are more likely to go up than down.
3. Ignore the cost, and simply take the option that saves the most lives, regardless of effort. This strategy actually reduces the cost slightly (as one does not need to expend the very slight cost of calculating the cost), and has the benefit of allowing immediate action. It is the option that I would prefer that everyone who is not me should take (because if other people take it, then I have a greater chance of getting my life saved at the cost of no effort on my part). I might choose this option out of a sense of fairness (if I wish other people to take this option, it is only reasonable to consider that other people may wish me to take it) or out of a sense of duty (saving lives is important).
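The three strategies can be sketched side by side. The utility function, the lever cost, and the point estimate of D below are all illustrative assumptions, chosen so that the strategies can actually disagree:

```python
import math

def f(x):
    return math.sqrt(x)  # example utility function from the discussion

A, B, C = 100, 10, 5   # arbitrary values with A + B > C
lever_cost = 0.2       # hypothetical utilon cost of acting

def gain(D):
    # Utility gained by saving B at the cost of C, given unknown population D.
    return f(A + B - C + D) - f(A + D)

# Strategy 1: act on a point estimate of D (a made-up Drake-style guess).
D_estimate = 10**4
act_on_estimate = gain(D_estimate) > lever_cost

# Strategy 2: pessimistic planning with D = 0, where the gain is largest.
act_pessimistic = gain(0) > lever_cost

# Strategy 3: ignore the cost entirely; save the most lives (act iff B > C).
act_regardless = B > C
```

With these particular numbers the point-estimate strategy declines to act while the other two act, illustrating how much the decision can hinge on the guessed value of D.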
More precisely, you take the expected value over your probability distribution for D, i.e. if

    sum over D of P(D) * [f(A+B-C+D) - f(A+D)]

exceeds the cost of pulling the lever, then you pull it.
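A minimal sketch of this decision rule, assuming the sqrt utility function from earlier and a made-up toy distribution over D:

```python
import math

def f(x):
    return math.sqrt(x)  # example utility function from the thread

A, B, C = 100, 10, 5   # arbitrary values with A + B > C
lever_cost = 0.01      # hypothetical utilon cost of pulling the lever

# A toy (entirely made-up) probability distribution over the unknown D.
P = {0: 0.5, 10**4: 0.3, 10**8: 0.2}
assert abs(sum(P.values()) - 1.0) < 1e-12

# Expected utility gain from pulling the lever, marginalising over D.
expected_gain = sum(p * (f(A + B - C + D) - f(A + D)) for D, p in P.items())

pull_lever = expected_gain > lever_cost
```

Under these toy numbers the expected gain comfortably exceeds the cost, so the agent pulls the lever; a distribution concentrated on very large D could reverse that.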
ETA: In case you're wondering, I used this to display the equation.