All of Lee's Comments + Replies

Lee00

(I should say that I assumed that a bag of decisions is worth as much as the sum of the utilities of the individual decisions.)
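
In symbols, that additivity assumption is roughly the following (a sketch of what the parenthetical states informally; the notation is mine, not from the thread):

```latex
% Additivity assumption (sketch): a bag of decisions x_1, ..., x_n is worth
% the sum of the utilities of the individual decisions.
U(\{x_1, \ldots, x_n\}) = \sum_{i=1}^{n} U(x_i)
```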

Lee20

GreedyAlgorithm, this is the conversation I want to have.

The sentence in your argument that I cannot swallow is this one: "Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences." This is circular, is it not?

You want to establish that any decision, x, should be made in accordance w/ maximum expected utility theory ("shut up and calculate"). You ask me to consider X = {x_i}, the set of many decisions over my life ("after a while"). You sa...

2pandamodium
This whole argument only washes if you assume that things work "normally" (e.g. like they do in the real field, i.e. are subject to the axioms that make addition/subtraction/calculus work). In fact we know that utility doesn't behave normally when considering multiple agents (as proved by Arrow's impossibility theorem), so the "correct" answer is that we can't have a true Pareto-optimal solution to the eye-dust-vs-torture problem. There is no reason why you couldn't construct a ring/field/group for utility which produced some of the solutions the OP dismisses, and in fact IMO those would be better representations of human utility than a straight normal interpretation.
Lee160

Consider these two facts about me:

(1) It is NOT CLEAR to me that saving 1 person with certainty is morally equivalent to saving 2 people when a fair coin lands heads in a one-off deal.

(2) It is CLEAR to me that saving 1000 people with p=.99 is morally better than saving 1 person with certainty.
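
For concreteness, here is the bare expected-value arithmetic the model applies to these two cases (a minimal sketch, assuming lives are valued linearly, which is precisely the assumption in dispute):

```python
# Expected lives saved under straight expected-value reasoning,
# assuming each life counts equally and linearly (the disputed assumption).

# Fact (1): 1 person with certainty vs. 2 people on a fair coin flip.
ev_certain_one = 1.0 * 1          # = 1.0 expected life
ev_coinflip_two = 0.5 * 2         # = 1.0 expected life -> "equivalent" per the model

# Fact (2): 1000 people with p = 0.99 vs. 1 person with certainty.
ev_risky_thousand = 0.99 * 1000   # = 990 expected lives
ev_certain_single = 1.0 * 1       # = 1 expected life -> clearly worse per the model

print(ev_certain_one, ev_coinflip_two)        # 1.0 1.0
print(ev_risky_thousand, ev_certain_single)   # 990.0 1
```

On those numbers the model calls (1) a tie and (2) an easy win for the gamble, which matches fact (2) but not fact (1).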

Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.

Why should we do that?

3DPiepgrass
In a world sufficiently replete with aspiring rationalists there will be not just one chance to save lives probabilistically, but (over the centuries) many. By the law of large numbers, we can be confident that the outcome of following the expected-value strategy consistently (even if any particular person only makes a choice like this zero or one times in their life) will be that more total lives will be saved.

Some people believe that "being virtuous" (or suchlike) is better than achieving a better society-level outcome. To that view I cannot say it better than Eliezer: "A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan."

I see a problem with Eliezer's strategy that is psychological rather than moral: if 500 people die, you may be devastated, especially if you find out later that the chance of failure was, say, 50% rather than 10%. Consequentialism asks us to take this into account. If you are a general making battle decisions, which would weigh on you more: the death of 500 (in your effort to save 100), or abandoning 100 to die at enemy hands, knowing you had a roughly 90% chance to save them? Could that adversely affect future decisions? (In specific scenarios we must also consider other things, e.g. in this case whether it's worth the cost in resources; military leaders know, or should know, that resources can be equated with lives as well...)

Note: I'm pretty confident Eliezer wouldn't object to you using your moral sense as a tiebreaker if you had the choice between saving one person with certainty and two people with 50% probability.
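
A quick simulation makes the law-of-large-numbers point concrete (a sketch; the rescue numbers below are illustrative and not quoted from the thread): over many independent rescue opportunities, consistently taking the higher-expected-value gamble saves more people in total than always taking the sure thing.

```python
import random

# Sketch with illustrative numbers: N independent rescue opportunities,
# each offering either
#   (a) save 400 people for certain, or
#   (b) a 90% chance to save 500 people (10% chance to save nobody).
random.seed(0)
N = 10_000

saved_always_certain = 400 * N
saved_always_gamble = sum(500 if random.random() < 0.9 else 0 for _ in range(N))

print(saved_always_certain)   # 4,000,000
print(saved_always_gamble)    # roughly 4,500,000 (0.9 * 500 * N in expectation)
```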
Lee60

Eliezer, I am skeptical that sloganeering ("shut up and calculate") will get you across this philosophical chasm: Why do you define the best one-off choice as the choice that would be preferred over repeated trials?

Lee-30

Eliezer, I think your argument is flat-out invalid.

Here is the form of your argument: "You prefer X. This does not strike people as foolish. But if you always prefer X, it would be foolish. Therefore your preference really is foolish."

That conclusion does not follow without the premise "You always prefer X if you ever prefer X."

More plainly, you are supposing that there is some long run over which you could "pump money" from someone who expressed such-and-such a preference. But my preference over infinitely many repeated trials is not the same as my preference over one trial, and you cannot demonstrate that that is absurd.
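
For readers who haven't seen the construction, here is a minimal sketch of what "pumping money" means (the preferences and fee below are invented for illustration): an agent with cyclic preferences A over B, B over C, and C over A will pay a small fee for every swap it prefers, so a trader can walk it around the cycle indefinitely.

```python
# Minimal money-pump sketch: an agent with the cyclic (intransitive)
# preference A > B > C > A pays a small fee for every "upgrade",
# so a trader can cycle it forever and drain its money.
# All numbers here are illustrative.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # first item preferred to second

def agent_accepts(offered, held):
    """Agent pays the fee to swap `held` for `offered` iff it prefers `offered`."""
    return (offered, held) in prefers

fee = 1
money_extracted = 0
held = "A"
for offered in ["C", "B", "A"] * 1000:   # walk the agent around the cycle
    if agent_accepts(offered, held):
        held = offered
        money_extracted += fee

print(money_extracted)  # 3000: one fee per trade, 3 trades per lap, 1000 laps
```

A single lap of the cycle costs almost nothing; the construction only bites if the trades are actually repeated, which is the "long run" being disputed here.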

Lee00

I don't think the possibility of a money-pump is always a knock-down reductio. It really only makes my preferences seem foolish in the long run. But there isn't a long run here: it's a once-in-a-lifetime deal. If you told me that you would make the same offer to me thousands of times, I would of course do the clean math that you suggest.

Suppose you are deathly thirsty, have only $1 in your pocket, and find yourself facing two bottled-water machines: The first would dispense a bottle with certainty for the full dollar, and the second would do so with a probability and price such that "clean math" suggests it is the slightly more rational choice. Etc.

3DanielLC
The rational choice would be the one that results in the highest expected utility. In this case, it wouldn't necessarily be the one with the highest expected amount of water. This is because the first bottle of water is worth far more than the second. The amount of money you make over your lifetime dwarfs the amount you make in these examples. The expected utility of the money isn't going to change much.

It seems hard to believe that the option of going from B to C and then from C to A would change whether or not it's a good idea. After all, you can always go from A to B and then refuse to change. Then there'd be no long run. Of course, once you've done that, you might as well go from B to C and stop there, etc.
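
A tiny numerical sketch of this point (all utilities, prices, and probabilities below are invented for illustration): when the bottle is worth far more to you than the money, maximizing expected utility and maximizing expected money or water can pick different machines.

```python
# Illustrative numbers only (none of these appear in the thread).
# Machine 1: a bottle with certainty for $1.00.
# Machine 2: a bottle with probability 0.55 for $0.50 -- cheaper per *expected*
#            bottle ($0.50 / 0.55 = ~$0.91), so naive "clean math" on money favors it.

u_bottle_when_deathly_thirsty = 100.0   # utility of actually getting the bottle
u_per_dollar_saved = 1.0                # marginal utility of the pocket change

eu_machine1 = 1.0 * u_bottle_when_deathly_thirsty                              # 100.0
eu_machine2 = 0.55 * u_bottle_when_deathly_thirsty + 0.50 * u_per_dollar_saved  # 55.5

print(eu_machine1, eu_machine2)  # expected utility favors the certain machine
```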
Lee20

Paul: Tough-minded rationalism need not preclude considerations of tact. Some logical statements come with nasty historical baggage and should be avoided, especially in a political context like this one.

But again, I don't know that that is the case with Eliezer's quotation. I am only urging caution about Lincoln-isms like that one in general.

Lee00

Do you know where the Lincoln quotation comes from? I would be cautious about quoting it out of context. My guess is that it was a nasty remark about blacks. He made a similarly folksy riddle in the Lincoln-Douglas debates:

"Any system of argumentation that... argues me into the idea of perfect social and political equality with the negro, is a species of fantastic arrangement of words by which a man can prove a chestnut horse to be a horse chestnut."

Here's a link to my source.

Lee130

Your objection to the possibility of a world without fire reminds me of Fyodor's doubts about the possibility of a hell in The Brothers Karamazov.

Hell is scary insofar as it contains things we understand and are scared of, like iron hooks to be hung with. But if hell has even one item, like a hook, from our ordinary physical world, then this would have all sorts of embarrassing implications.

"It's impossible, I think, for the devils to forget to drag me down to hell with their hooks when I die. Then I wonder- hooks? Where would they get them? W... (read more)