GreedyAlgorithm, this is the conversation I want to have.
The sentence in your argument that I cannot swallow is this one: "Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences." This is circular, is it not?
You want to establish that any decision, x, should be made in accordance with maximum expected utility theory ("shut up and calculate"). You ask me to consider X = {x_i}, the set of the many decisions of my life ("after a while"). You say that the expected value of U(X) is maximized only when the expected value of U(x_i) is maximized for each i. True enough. But why should I want to maximize the expected value of U(X)? That requires every bit as much (and perhaps the very same) justification as maximizing the expected value of U(x_i) for each i, which is what you sought to establish in the first place.
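To spell out the step I am granting (and it holds only under the additivity assumption I flag at the end), by linearity of expectation:

$$
\mathbb{E}[U(X)] \;=\; \mathbb{E}\Big[\sum_i U(x_i)\Big] \;=\; \sum_i \mathbb{E}[U(x_i)],
$$

so the whole is maximized exactly when each summand is. What this arithmetic does not supply is any reason to care about the left-hand side in the first place.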
Consider these two facts about me:
(1) It is NOT CLEAR to me that, in a one-off deal, saving 1 person with certainty is morally equivalent to saving 2 people if a fair coin lands heads.
(2) It is CLEAR to me that saving 1000 people with p=.99 is morally better than saving 1 person with certainty.
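For concreteness, here is the "clean math" on both cases, a minimal sketch in which I take expected lives saved as the utility (my simplification, not anything Eliezer has committed to):

```python
# Case (1): 1 life with certainty vs. 2 lives on a fair coin.
ev_certain = 1.0 * 1         # = 1.0
ev_coin    = 0.5 * 2         # = 1.0 -- a dead tie, by expected value

# Case (2): 1000 lives with p = .99 vs. 1 life with certainty.
ev_big     = 0.99 * 1000     # = 990.0
ev_one     = 1.0 * 1         # = 1.0 -- no contest

print(ev_certain, ev_coin, ev_big, ev_one)
```

Expected value calls case (1) a tie; my judgment does not. Case (2) it gets right. So the model matches one of my facts and not the other.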
Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.
Why should we do that?
Eliezer, I am skeptical that sloganeering ("shut up and calculate") will get you across this philosophical chasm: Why do you define the best one-off choice as the choice that would be preferred over repeated trials?
Eliezer, I think your argument is flat-out invalid.
Here is the form of your argument: "You prefer X. This does not strike people as foolish. But if you always prefer X, it would be foolish. Therefore your preference really is foolish."
That conclusion does not follow without the premise "You always prefer X if you ever prefer X."
More plainly, you are supposing that there is some long run over which you could "pump money" from someone who expressed such-and-such a preference. BUT my preference over infinitely many repeated trials is not the same as my preference over one trial. AND you cannot demonstrate that that is absurd.
I don't think the possibility of a money-pump is always a knock-down reductio. It only makes my preferences seem foolish in the long run. But there is no long run here: it's a once-in-a-lifetime deal. If you told me that you would make me the same offer thousands of times, I would of course do the clean math that you suggest.
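To make the distinction concrete, here is a minimal simulation of my fact (1) above, with the coin-flip gamble repeated various numbers of times (the trial counts are my own choices):

```python
import random

def gamble():
    """Save 2 people on heads, 0 on tails."""
    return 2 if random.random() < 0.5 else 0

# Certainty always saves exactly 1 per trial; the gamble only
# averages 1 per trial once the trials pile up.
for n in (1, 10, 10_000):
    avg = sum(gamble() for _ in range(n)) / n
    print(f"{n:>6} trial(s): gamble averages {avg:.3f} saved (certainty: 1.000)")
```

At one trial the gamble is a coin flip between everything and nothing; at ten thousand, its average has collapsed onto the expected value. My preferences are entitled to notice that difference.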
Suppose you are deathly thirsty, have only $1 in your pocket, and find yourself facing two bottled-water machines: the first would dispense a bottle with certainty for the full dollar, and the second would do so at a probability and price such that the "clean math" says it is the slightly more rational choice. You can see where this is going.
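Since I left the second machine's terms vague, let me supply hypothetical ones (my numbers, chosen only to make the structure visible): say it charges $0.50 per try and dispenses with probability 0.6, so your dollar buys two tries.

```python
# Hypothetical terms for the second machine (the numbers are mine):
p_dispense = 0.6                    # chance each $0.50 try yields a bottle

ev_certain = 1.0                    # machine 1: one bottle, guaranteed, for $1
ev_gamble  = 2 * p_dispense         # machine 2: two tries for $1 -> 1.2 expected bottles
p_nothing  = (1 - p_dispense) ** 2  # chance both tries fail -> 0.16

print(f"Expected bottles, machine 1: {ev_certain:.2f}")
print(f"Expected bottles, machine 2: {ev_gamble:.2f}")
print(f"P(no water at all, machine 2): {p_nothing:.2f}")
```

The "clean math" prefers the second machine, 1.2 expected bottles to 1. It is also the machine that leaves a deathly thirsty man with nothing 16% of the time. I decline to call the certain bottle irrational.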
Paul: Tough-minded rationalism need not preclude considerations of tact. Some logical statements come with nasty historical baggage and should be avoided, especially in a political context like this one.
But again, I don't know that that is the case with Eliezer's quotation. I am only urging caution about Lincoln-isms like that one in general.
Do you know where the Lincoln quotation comes from? I would be cautious about quoting it out of context. My guess is that it was a nasty remark about blacks. He offered a similarly folksy riddle in the Lincoln-Douglas debates:
"Any system of argumentation that... argues me into the idea of perfect social and political equality with the negro, is a species of fantastic arrangement of words by which a man can prove a chestnut horse to be a horse chestnut."
Here's a link to my source.
Your objection to the possibility of a world without fire reminds me of Fyodor's doubts about the possibility of hell in The Brothers Karamazov.
Hell is scary insofar as it contains things we understand and are scared of, like iron hooks to be dragged down with. But if hell includes even one item from our ordinary physical world, a hook, say, then all sorts of embarrassing implications follow.
"It's impossible, I think, for the devils to forget to drag me down to hell with their hooks when I die. Then I wonder- hooks? Where would they get them? What of? Iron hooks? Where do they forge them? Have they a foundry there of some sort? The monks in the monastery probably believe that there's a ceiling in hell, for instance. Now I'm ready to believe in hell, but without a ceiling. It makes it more refined, more enlightened, more Lutheran that is. And, after all, what does it matter whether it has a ceiling or hasn't? But, do you know, there's a damnable question involved in it? If there's no ceiling there can be no hooks, and if there are no hooks it all breaks down..."
http://fyodordostoevsky.com/etexts/the_brothers_karamazov.txt
(I should say that I assumed that a bag of decisions is worth as much as the sum of the utilities of the individual decisions.)