If a human child grew up in a less painful world - if they had never lived in a world of AIDS or cancer or slavery, and so did not know these things as evils that had been triumphantly eliminated - and so did not feel that they were "already done" or that the world was "already changed enough"... Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?
Here is a more instructive thought experiment. Suppose a human child grew up in a painless world and did not feel that pain was already there or that the world had already changed enough. Should she try to create, in that possible world, the kind of pain that Eliezer doesn't think we should destroy in the actual world?
Though I don't recall the bibliography off the top of my head, there's been more than one study demonstrating that children who are told to, say, avoid playing with a car, and offered a cookie if they refrain, will go ahead and play with the car when they think no one is watching, or once the cookie is no longer on offer. By contrast, if no reward or punishment is offered, and the child is simply told not to play with the car, the child will refrain even when no adult is around.
This and other similar studies are, I think, reviewed in Alfie Kohn's Punished by Rewards.
I'm appalled at the consistent bad quality of the comments in this thread. Eliezer, you should refrain from writing similar posts in the future now that you know many of your readers are incapable of engaging with your arguments when they bear on issues which they are so emotional about. Your talents are wasted with such an unreceptive audience.
I like the Disqus comment system. Threaded conversations are easier to follow and can be selectively ignored, while comment ratings tend to be a reliable measure of content quality. Also, comments left on different Disqus-powered blogs by the same user are consolidated on a single page, and users are ranked by their cumulative comment ratings. It is not clear whether the "2 of 10" rule would be needed if such a system were implemented, given the added functionality and incentives.
A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins by questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or that they would willingly torture others to avoid a minor discomfort to themselves, I am reminded of Kieran Healy's remarks about Mensa, "the organization for highly intelligent people who are nevertheless not quite intelligent enough not to belong to it." If you are so smart that you can see through the illusion that is morality, don't be so stupid as to take for granted the validity of practical rationality. Others may not matter, but if so you probably don't either.
I think the debate is over whether there is such a thing as "what will happen"; maybe that question doesn't yet have an answer. In fact, I think any good definition of libertarian free will would require that it not have an answer yet.
Utilitarian, if it is now raining in Oxford, how could the sentence 'It will rain in Oxford tomorrow' have failed to have been true yesterday?
People hear: "The universe runs like clockwork; physics is deterministic; the future is fixed."
The question of whether the future is "fixed" is unimportant, and irrelevant to the debate over free will and determinism. The future--what will happen--is necessarily "fixed". To say that it isn't implies that what will happen may not happen, which is logically impossible. The interesting question is not about whether the future is fixed, but rather about what fixes the future.
But what if the source of much of your material in this essay on Ayn Rand's life is itself inaccurate? Another author--James Valliant--who wrote on Ayn Rand's life studied her private journals (which were unavailable to Barbara Branden and Nathaniel Branden). According to him, the air of cultishness was initiated and encouraged by Nathaniel Branden, who monitored all of Rand's guests, visitors, and letters, to ensure that they were not antagonistic to Rand.
A single anecdote should throw enough light on Rand's character to disprove this hypothesis. The libertarian economist Murray Rothbard was for a time part of Rand's circle of friends. But when Rand learned that Rothbard's wife was a Christian, she gave Rothbard six months to convert her to atheism, or else divorce her. Rothbard of course did neither, and was, accordingly, excommunicated soon thereafter.
I said, "If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"
The objection you press against your interlocutor was anticipated by Max Stirner, the renowned champion of egoism, who replied as follows:
Do I write out of love to men? No, I write because I want to procure for my thoughts an existence in the world; and, even if I foresaw that these thoughts would deprive you of your rest and your peace, even if I saw the bloodiest wars and the fall of many generations springing up from this seed of thought — I would nevertheless scatter it. Do with it what you will and can, that is your affair and does not trouble me. You will perhaps have only trouble, combat, and death from it, very few will draw joy from it.
If your weal lay at my heart, I should act as the church did in withholding the Bible from the laity, or Christian governments, which make it a sacred duty for themselves to 'protect the common people from bad books'. But not only not for your sake, not even for truth's sake either do I speak out what I think. No —
I sing as the bird sings
That on the bough alights;
The song that from me springs
Is pay that well requites.
I sing because — I am a singer. But I use you for it because I — need ears.
Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing.
Utilitarian's reply seems to assume that probability assignments are always precise. We may plausibly suppose, however, that belief states are sometimes vague. Granted this supposition, we cannot infer that one probability is higher than the other from the fact that the probabilities do not wind up exactly balancing.