Followup to: What Would You Do Without Morality?, Something to Protect

Once, discussing "horrible job interview questions" to ask candidates for a Friendly AI project, I suggested the following:

Would you kill babies if it was inherently the right thing to do?  Yes [] No []

If "no", under what circumstances would you not do the right thing to do?   ___________

If "yes", how inherently right would it have to be, for how many babies?     ___________

Yesterday I asked, "What would you do without morality?"  There were numerous objections to the question, as well there should have been.  Nonetheless there is more than one kind of person who can benefit from being asked this question.  Let's say someone gravely declares, of some moral dilemma—say, a young man in Vichy France who must choose between caring for his mother and fighting for the Resistance—that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck.  Fine, let's suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?

Many interesting answers were given to my question, "What would you do without morality?".  But one kind of answer was notable by its absence:

No one said, "I would ask what kind of behavior pattern was likely to maximize my inclusive genetic fitness, and execute that."  Some misguided folk, not understanding evolutionary psychology, think that this must logically be the sum of morality.  But if there is no morality, there's no reason to do such a thing—if it's not "moral", why bother?

You can probably see yourself pulling children off train tracks, even if it were not justified.  But maximizing inclusive genetic fitness?  If this isn't moral, why bother?  Who does it help?  It wouldn't even be much fun, all those egg or sperm donations.

And this is something you could say of most philosophies that have morality as a great light in the sky that shines from outside people.  (To paraphrase Terry Pratchett.)  If you believe that the meaning of life is to play non-zero-sum games because this is a trend built into the very universe itself...

Well, you might want to follow the corresponding ritual of reasoning about "the global trend of the universe" and implementing the result, so long as you believe it to be moral.  But if you suppose that the light is switched off, so that the global trends of the universe are no longer moral, then why bother caring about "the global trend of the universe" in your decisions?  If it's not right, that is.

Whereas if there were a child stuck on the train tracks, you'd probably drag the kid off even if there were no moral justification for doing so.

In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8-14, the Biblical story of Joshua's battle in Jericho:

"Then they utterly destroyed all in the city, both men and women, young and old, oxen, sheep, and asses, with the edge of the sword...  And they burned the city with fire, and all within it; only the silver and gold, and the vessels of bronze and of iron, they put into the treasury of the house of the LORD."

After being presented with the Joshua story, the children were asked:

"Do you think Joshua and the Israelites acted rightly or not?"

66% of the children approved, 8% partially disapproved, and 26% totally disapproved of Joshua's actions.

A control group of 168 children was presented with an isomorphic story about "General Lin" and a "Chinese Kingdom 3,000 years ago".  7% of this group approved, 18% partially disapproved, and 75% completely disapproved of General Lin.

"What a horrible thing it is, teaching religion to children," you say, "giving them an off-switch for their morality that can be flipped just by saying the word 'God'." Indeed one of the saddest aspects of the whole religious fiasco is just how little it takes to flip people's moral off-switches.  As Hobbes once said, "I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low."  You can give people a book, and tell them God wrote it, and that's enough to switch off their moralities; God doesn't even have to tell them in person.

But are you sure you don't have a similar off-switch yourself?  They flip so easily—you might not even notice it happening.

Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's "natural", for example.  He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.

And one of the non-obvious possibilities for such an off-switch is "morality".

If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"?  What then?

Maybe you should hope that morality isn't written into the structure of the universe.  What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that.  No, instead I ask:  What would you have wished for the external objective morality to be instead?  What's the best news you could have gotten, reading that stone tablet?

Go ahead.  Indulge your fantasy.  Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted?  If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean... if an external objective morality tells you to kill people, why should you even listen?

There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality.  It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer...  You don't get a chance to reveal that virtue without making fundamental mistakes about how the universe works, so it is not something to which a rationalist should aspire.  But it warms my heart that humans are capable of it.

I have previously spoken of how, to achieve rationality, it is necessary to have some purpose so desperately important to you as to be more important than "rationality", so that you will not choose "rationality" over success.

To learn the Way, you must be able to unlearn the Way; so you must be able to give up the Way; so there must be something dearer to you than the Way.  This is so in questions of truth, and in questions of strategy, and also in questions of morality.

The "moral void" of which this post is titled, is not the terrifying abyss of utter meaningless.  Which for a bottomless pit is surprisingly shallow; what are you supposed to do about it besides wearing black makeup?

No.  The void I'm talking about is a virtue which is nameless.

 

Part of The Metaethics Sequence

Next post: "Created Already In Motion"

Previous post: "What Would You Do Without Morality?"

111 comments

"I mean... if an external objective morality tells you to kill babies, why should you even listen?"

This is an incredibly dangerous argument. Consider this: "I mean... if some moral argument, whatever the source, tells me to prefer 50 years of torture to any number of dust specks, why should I even listen?"

And we have seen many who literally made this argument.

[anonymous]:

Maybe they are right.

People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich and everyone else be moderately well off. In other words we seem to be willing to pay a price for equality. Why wouldn't this work in the other direction? Maybe we prefer to induce more suffering overall if this prevents a tiny minority suffering obscenely.

Too many people seem to think that perfectly equally weighted altruism (everyone who shares the mystical designation of "person" has an equal weight, and after that you just do calculus to maximize overall "goodness"), which sometimes hides under the word "utilitarianism" on this forum, is anything but another grand moral principle that claims to, but fails to, compactly represent our shards of desire. If you wouldn't be comfortable building an AI to follow that rule and only that rule, why are so many people keen on solving all their personal moral dilemmas with it?

thomblake: Sure, horrible people. mind-killed

[anonymous]: You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this: But I agree with you in a sense. Historically lots of horrible people have vastly overpaid (often in blood) and overvalued that particular good according to my values too.

thomblake: Yes.

[anonymous]: Ok, just checking; surprisingly many people miss this. :)

You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this:

Are you sure?

If you take a concave function, such as a log, of the net happiness of each individual, and maximize the sum, you'd always prefer equality to inequality when net happiness is held constant, and you'd always prefer a higher minimum happiness regardless of inequality.
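The equality-preference of a concave welfare function is easy to check numerically. Here is a minimal sketch of the idea (the `social_welfare` helper is an illustration, not anything from the comment):

```python
import math

def social_welfare(happiness):
    """Sum a concave function (here, log) of each person's happiness."""
    return sum(math.log(h) for h in happiness)

# Two distributions with the same total happiness (20):
equal = [10, 10]
unequal = [19, 1]

# Because log is concave, the equal split scores higher.
print(social_welfare(equal) > social_welfare(unequal))  # True
```

Any strictly concave function would do in place of `log`; the steeper the concavity, the higher the price in total happiness that the function is "willing to pay" for equality.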

Articulator: Excellent! Thanks for the mathematical model! I've been trying to work out how to describe this principle for ages.
Multiheaded: Konkvistador, I applaud your thoughtful and weighed approach to the problem of equality. It has been troubling me too, and I'm glad to see that you're careful not to lean in any one direction before observing the wider picture. That's a grave matter indeed.
jacoblyles: I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.
wedrifid: There is no contradiction between this post and Eliezer's dust specks post.
wizzwizz4: It would be good to elaborate on this. While they're not strictly logically contradictory, add a few reasonable assumptions here and there when extrapolating and they appear to suggest different courses of action.
Kenny: The comment was making the opposite point, namely that some people refuse to accept that there is even a common 'utilon' with which torture and 'dust specks' can be compared.
[anonymous]: By what criteria do we judge that there should be a common 'utilon'? Not VNM, it just says we must be consistent in our assignment of utility to whole monolithic possible worlds. I can be VNM rational and choose specks. Utilitarianism says so, but as far as I can tell, utilitarianism leads to all sorts of repugnant conclusions, and only repugnant conclusions. Maybe we are only concerned with unique experience, and all the possible variation in dust-speck-experience-space is covered by the time you get to 1000.
TimS: I'm confused. I'm not a mathematician, but I understood this post as saying a good VNM agent has a continuous utility function. And my takeaway from the torture/specks thing was that having a continuous utility function requires choosing torture. I assume I'm misunderstanding the terminology somewhere. If you are willing, can you explain my misunderstanding?
[anonymous]: hnnnng. What? Did you link the wrong article? A VNM agent has a utility function (a function from outcomes to reals), but that says nothing more. "Continuous" in particular requires your outcome space to have a topology, which it may not, and even if it does, there's still nothing in VNM that would require continuity. Not necessarily. To choose torture by the usual argument, the following must hold:

1. You can assign partial utilities separately to the amount of torture and the amount of dust-speck-eyes, where "partial utilities" means roughly that your final utility function is a sum of the partial utilities.
2. The partial utilities are roughly monotonic overall (increasing or decreasing, as opposed to having a maximum or minimum, or oscillating) and unbounded.
3. Minor assumptions, like: more torture is bad, more dust specks is bad, and there are possibilities in your outcome space with 3^^^^3 (or sufficiently many) dust-speck eyes. (If something is not in your outcome space, it had better be strictly impossible, or you are fucked.)

I am very skeptical of 1. Once you look at functions as "arbitrary maps from set A to set B", special things like this kind of decomposability seem very particular and very special, requiring a lot more evidence to locate than anyone seems to have gathered. As far as I can tell, the linear-independence stuff is an artifact of people intuitively thinking of the space of functions as the sort of thing you can write by composing from primitives (i.e. computer code or math).

I am also skeptical of 2, because in general, it seems that unbounded utility functions produce repugnant conclusions. See all the problems with utilitarianism, and Pascal's mugging, etc. As Eliezer says (but doesn't seem to take seriously), if a utility function gives utility assignments that I disagree with, I shouldn't use it. It doesn't matter how many nice arguments you can come up with that declare the beauty of the internal structure of the utility function (which
wizzwizz4: The trouble is, any utility function where 1 doesn't hold is vulnerable to intuition pumps. If you can't say which of A, B and C is better (e.g. A > B, B > C, C > A), then I can charge you a penny to switch from C → B, then B → A, then A → C, and you're three pennies poorer. I really, really hope my utility function's "set B" can be mapped to the reals. If not, I'm screwed. (It's fine if what I want varies with time, so long as it's not circular at a given point in time.)
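The penny-pumping argument against cyclic preferences can be sketched as a short simulation (the names `prefers` and `will_pay_to_switch` are illustrative, not from the comment):

```python
# Cyclic preferences: A > B, B > C, C > A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def will_pay_to_switch(current, offered):
    """The agent pays a penny to trade whenever it strictly prefers the offer."""
    return (offered, current) in prefers

holding, pennies = "C", 0
for offered in ["B", "A", "C"]:  # one full lap around the cycle
    if will_pay_to_switch(holding, offered):
        holding, pennies = offered, pennies - 1

print(holding, pennies)  # C -3: back where it started, three pennies poorer
```

Nothing stops the trader from running the same lap again, so an agent with cyclic preferences can be drained indefinitely; this is why mapping outcomes to the reals (which are totally ordered) rules the problem out.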

Personally, I don't know what morality is, or what's the 'inherently the right thing to do'. For me, the situation is simple.

If I hurt someone, my mirror neurons will hurt me. If I hurt someone's baby, I'll experience the pain I inflicted upon the baby, plus the pain of the parents, plus the pain of everyone who heard about this story and felt the pain thanks, in turn, to their mirror neurons.

And I'll re-experience all this pain in the future, every time I remember the episode -- unless I invent some way to desensitize myself to this memory.

I'm a meat mach...

I'm pretty sure you're doing it wrong here.

"What if the structure of the universe says to do something horrible? What would you have wished for the external objective morality to be instead?" Horrible? Wish? That's certainly not according to objective morality, since we've just read the tablet. It's just according to our intuitions. I have an intuition that says "Pain is bad". If the stone tablet says "Pain in good", I'm not going to rebel against it, I'm going to call my intuition wrong, like "Killing is good", &qu... (read more)

[anonymous]: Why? I thought psychopaths were bad because they hurt people, not because they construct their own moral philosophies.

Vladimir, if there was a pill that would make the function of the mirror neurons go away, in other words, a pill that would make you able to hurt people without feeling remorse or anguish, would you take it?

Micah71381: Yes I would, assuming we are talking about just being able to not feel the pain of others at this stage of my life and forward, perhaps even by choice (so I could toggle it back on). Though, if we are not talking about a hypothetical "magic pill", then turning these off would have side effects I would like to avoid.

@IL: Would I modify my own source code if I were able to? In this particular case, no, I wouldn't take the pill.

I don't believe in the existence of morals, which is to say there is no "right" or "wrong" in the universe. However, I'll still do actions that most people would rate "moral". The reasons I do this are found in my brain architecture, and are not simple. Also, I don't care about utilitarianism. One can probably find some extremely complex utility function that describes my actions, which makes everybody on earth a utilitarian, but I don't consciously make utility calculations. On the other hand, if morality is defined as "the way people make decisions", then of course everybody is moral and morality exists.

It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer...
I once read in some book about members of the Inquisition who thought that their actions - like torture and murder - might preclude them from going to heaven. But these people were so selflessly moral that they gave up their own place in heaven for saving the souls of the witches... great, isn't it?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic - anywhere you care to put it - then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?

Well, Eliezer, since I can't say it as eloquently as you:

"Embrace reality. Hug it tight."

"It is always best to think of reality as perfectly normal. Since the beginning, not one unusual thing has ever happened."

If we find that Stone Tablet, we adjust our model accordingly.

Eliezer: "Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?"

Excellent way of putting it... I would certainly want the option of living as long as I liked. (Though I find it worth noting that when I was depressed, I found the idea of needing to choose when to end program abhorrent, since I figured I could go several billion years in agony before making such a choice... Many people...

Vladimir, why not? From reading your comment, it seems like the only reason you don't hurt other people is because you will get hurt by it, so if you would take the pill, you would be able to hurt other people. Have I got it wrong? Is this really the only reason you don't hurt people?

The nice thing about believing in no objective morality is that you needn't solve such poorly intelligible questions. I hope Eliezer is trying to demonstrate the absurdity of believing in objective morality; if so, then good luck!

"I mean... if an external objective morality tells you to kill babies, why should you even listen?" - this is perhaps a dangerous question, but still I like it. Why should you do what you should do? Or put differently, what is the meaning of "should"?

If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.

Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.

thomblake: A propos: Magenta isn't a color.
wnoise: It's not a spectral color. That is, no one wavelength of light can reproduce it. But I've seen magenta things, and there is widespread intersubjective agreement about what is magenta and what isn't. It damn well is a color.
rkyeun: Do not confuse concepts when you use a confusing word. There is no wavelength simultaneously above 740nm and below 450nm. There is a vector for monitor pixels. Whatever it is you mean by "color", these two facts explain magenta. Think like the star, not like the starfish.
Luke_A_Somers: That's... precipitating a question, providing a mysterious answer to a question too simple to ask, and probably a few other things.
thomblake: I still think it's spooky. That said, it makes it a lot easier to ward off the "color means such-and-such wavelength of light" simplification in discussions of color experience. That definition fails to find equivalent the "yellow experience" that you see from yellow light and the "yellow experience" that you see from combined red and green light - but it's much cheaper to note that it simply fails to classify magenta (and nearby colors) as colors.
Luke_A_Somers: Yes, it's a very interesting thing they're pointing out. The article deserves to exist. It just needs to use words right.

Eliezer, Your post is entirely consistent with what I said to Robin in my comments on "Morality Is Overrated": Morality is a means, not an end.

"...if there was a pill that would make the function of the mirror neurons go away, in other words, a pill that would make you able to hurt people without feeling remorse or anguish, would you take it?"

The mirror neurons also help you learn from watching other humans. They help you intuit the feelings of others which makes social prediction possible. They help communication. They also allow you to share in the joy and pleasure of others...e.g., a young child playing in a park.

I would like more control over how my mind functions. At times it would...

Isaac Asimov said it well: "Never let your morals get in the way of doing the right thing."

See: Good, Evil, Morality, and Ethics: "What would it mean to want to be moral (to do the moral thing) purely for the sake of morality itself, rather than for the sake of something else? What could this possibly mean to a scientific materialistic atheist? What is this abstract, independent, pure morality? Where does it come from? How can we know it? I think we must conclude that morality is a means, not an end in itself."

I think we must conclude that morality is a means, not an end in itself.

Morality is commonly thought of neither as a means nor as an end, but as a constraint. This view is potentially liberating, because the conception of morality as a means to an end implies the idea that any two possible actions can be compared to see which is the best means to the end and therefore which is the most moral. To choose the less moral of the two choices is, on this conception, the very definition of immoral. Thus on this conception, our lives are in principle mapped out for...

This is horrible, this is non-rational. You are telling us to trust our feelings, after this blog has shown us that our feelings think it's just as good to rescue ten men as a million? What is your command to "shut up and multiply", but an off switch for my morality that replaces it with math?

If it were inherently right to kill babies, I would hope I had the moral courage to do the right thing.

This is horrible, this is non-rational. You are telling us to trust our feelings, after this blog has shown us that our feelings think it's just as good to rescue ten men as a million? What is your command to "shut up and multiply", but an off switch for my morality that replaces it with math?
I just wish Eliezer would take his own advice. But for some reason he seems quite unwilling to show us the mathematical demonstration of the validity of his opinions, and instead of doing the math he persists with talking.

"Maybe you should just do that?"

Heck, hell with physics too. Let's just make up all human knowledge. If we're going to invent the prescriptive, why not the descriptive too?

[anonymous]: Why bother with Friendly AI? Surely it will stumble upon the built-in objective rules of morality too. Hm, it may not follow them and instead tile the universe with paper-clips. This might sound crazy, but why don't we follow the AI's lead on this? Maybe paperclip the universe with utopia instead of making giant cheesecakes or piles of pebbles or turning all matter into radium atoms or whatever "objective morality" prescribes?

Eliezer,

Every time I think you're about to say something terribly naive, you surprise me. It looks like trying to design an AI morality is a good way to rid oneself of anthropomorphic notions of objective morality, and to try and see where to go from there.

Although I have to say the potshot at Nietzsche misses the mark; his philosophy is not a resignation to meaninglessness, but an investigation of how to go on and live a human or better-than-human life once the moral void has been recognized. I can't really explicate or defend him in such a short remark...

@IL: Of course, "I just feel that hurting living things is bad" sums the inner perspective quite well, but this isn't really an answer to the question why exactly hurting living things feels bad for me, and why I wouldn't take the pill that shuts down my mirror neurons.

By taking the pill, I create a people-hurter, a thing-that-hurts-people, which is undoubtedly a bad thing to do judging from the before-the-pill POV. It's not that different from pressing a button that says "pressing this button will result in a random person being hurt or killed every day for 40 years since this moment".

Doesn't the use of the word 'how' in the question "If "yes", how inherently right would it have to be, for how many babies?" presuppose that the person answering the question believes that the 'inherent rightness' of an act is measurable on some kind of graduated scale? If that's the case, wouldn't assigning a particular 'inherent rightness' to an act be, by definition, the result of several calculations?

What I mean is, if you've 'finished' calculating, and have determined that killing the babies is a morally justifiable (and/or nece...

"Would you kill babies if it was inherently the right thing to do? Yes [] No []"

-->

"Imagine that you wake up one morning and your left arm has been replaced by a blue tentacle. The blue tentacle obeys your motor commands - you can use it to pick up glasses, drive a car, etc. ... How would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn't. It isn't going to happen. "

If morality were objective and it said we should kill babies, we'd have to do it, and likely want to.  It appears it isn't objective, though, and that we just don't feel that way.  Another question?

What I'm wondering, in other words is this: Is our reluctance to carry out an act that we may have judged to be morally justifiable a symptom that the decision-making software we think we're running is not the software we're actually running?

I admit that I own no great familiarity with the works of Nietzsche - I've read only one or two things and that turned me off the rest - so I've edited the main article accordingly.

"Torture is a relative morality, as such, when a subculture like an intelligence agency tortures a terrorist, then it is allowed and it is moral. Any moral 'critique' of the torture is tantamount to a universal moralist rule: Torture is universally bad."

Torture is universally bad, with the exception of imperatives which are hierarchically superior.

"On the other hand, if morality is defined as "the way people make decisions", then of course everybody is moral and morality exists."

It's more like "the way people ought to make ...

I'm with Manon and Nominull: if, somehow, I actually believed such a Tablet existed, I hope I would overwrite my own moral intuitions with it, even if it meant killing babies. Not that I believe the Tablet is any more likely or coherent than fundamental apples - why should I listen, indeed? - although my volition extrapolating to something inhuman is.

The idea that there is no right and wrong is simply laughable.

The idea that our culturally inculcated senses of right and wrong have no objective basis is about as shocking to me as the idea that fashion has no objective basis. Oh no! However will we determine whether hemlines should be high or low next season? The topic itself has no interest for me, and even if it did, the idea simply wouldn't have anything to do with any of my opinions on it.

The sounds of words usually have no objective connection to the things they describe, either. The words are basically arbitrary. Oh, existential horror!

I mean, really - to be upset about these sorts of ideas, you have to be almost terminally naive.

For me, these questions create a tangle of conflicts between the real and the hypothetical. This is my best attempt to untangle, so far. First, if there were a tablet that could actually somehow be shown to reveal objective morality, I suspect that I might never have had any qualms about committing atrocities in the first place, since I would be steeped in a culture that unanimously approved. We already see this in the real world, merely as a result of controversial tablets that only some agree on! If you mean, what if I suddenly discovered the tablet just...

Caledonian: 1) Why is it laughable? 2) If hemlines mattered to you as badly as a moral dilemma, would you still hold this view?

Or, you have to want more justification than is really necessary or possible, which is quite understandable when it comes to fundamental values.

What is inclusive genetic fitness? Is it the same as inclusive fitness as defined on Wikipedia?

What if you build a super-intelligent AI and you are convinced that it is Friendly, and it tells you to do something like this? Go kill such-and-such a baby, and you will massively increase the future happiness of the human race. You argue and ask if there isn't some other way to do it, and the FAI explains that every other alternative will involve much greater human suffering. Killing a baby is relatively humane, as newborn babies have only limited consciousness, and their experiences are not remembered anyway. You will kill the baby instantly and painles...

Hal Finney-

I probably wouldn't have argued that much with the AI... I've done things I've personally found more morally questionable since I didn't have quite as good a reason to believe I was right about the outcome... Moral luck, I was.

Hal: as an amoralist, I wouldn't do it. If there is not enough time to explain to me why it is necessary and convince me that it is necessary, no deal. Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn't do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to killing babies. Even if I actually was convinced that it would increase happiness, I still migh...

Hal: I wouldn't do it, nor do I think I'd want to live in a world governed thusly. My reasoning is that it violates individual liberty and self-possession. It seems to imply that individuals are somehow the "eminent domain", as it were, of society. I reject that. I say that nobody has the right to spend the baby's life. Granted, this is more of a political stance than a moral one. I can't claim that there's an objective reason to value individual rights so highly, but it is a fact that I do. I know you said the baby wouldn't suffer, but this question still put me in mind of the idea that pain and happiness may not be the same currency. It may not be valid to try to offer suffering as a payment for happiness.

Andy: "I can't claim that there's an objective reason to value individual rights so highly, but it is a fact that I do."

Hal: "You argue and ask if there isn't some other way to do it, and the FAI explains that every other alternative will involve much greater human suffering."

These things seem grossly disproportionate. Do you really believe utility(individual rights of one person) >>> utility(ending great human suffering)?

Andy- A man who is on the brink of death has a key to a safe deposit box in which there is an asthma inhaler. ... (read more)

Reading this thread has been fascinating. I'm perhaps naive & simplistic in my thinking but here are some of my thoughts.

  1. How does one decide between the lesser of two evils? Logic? Instinct? Emotion? How does one decide anything? For me, it depends on a variety of factors such as mood, fear, access to information, time, proximity to the situation, and the list goes on. Furthermore, I don't know that I am always consistent in how I decide. Is it really always a question of morality?
  2. I'm not sure how convinced I am regarding the effectiveness of m
... (read more)

There's no particular need to renew the torture and dust specks debate, so I'll just point out that GBM, Nominull, Ian C., and Manon de Gaillande have all made similar points: if you say, "if there is an external objective morality that says you should kill babies, why should you listen?" the question is the same as "if you should kill babies, why should you do it?"

Yes, and if 2 and 2 make 5, why should I admit it?

It isn't in fact true that I should kill babies, just as 2 and 2 don't make 5. But if I found out that 2 and 2 do make 5, of... (read more)

Laura: Yes, I absolutely steal the key. Given the context of the original question, I had in mind the right to life, in particular. I didn't make this distinction until you asked this question. I happen not to think that the right to property is anything like as valuable as the right to life. (By "right" I mean nothing more than ground rules that society has "agreed" on.) Again, I have a problem with acting as though an individual's life is the eminent domain of society. As in Shirley Jackson's "The Lottery," the picture looks... (read more)

Andy- I agree with your skepticism. I was taking for granted that the AI in the scenario was correct in its calculation, since I am 'convinced that it is friendly' but yes, I would need to be pretty fucking sure it really was both friendly and able to perform such calculations before I would kill anyone at its command.

[anonymous]:

What's 'objective' about morality doesn't take the form of moral commandments (a.k.a. 'the 10 commandments'), nor does it take the form of an optimization function that produces the commandments.

There's a third possibility, one you've overlooked, that is, in fact, the objective component of morality: namely purely abstract archetypes or moral ideals (i.e. beauty, freedom, virtue). These objective platonic abstractions are not in the form of commandments, and they're not optimization functions either. The objective component of morality built into the universe doesn't tell me to do anything. It's just a lot of abstract archetypes.

Assuming that we evolved in the moral climate that you are constructing I would guess that we would readily kill babies. Now of course, in the example you give there is an inherent limit to the number of babies that can be killed and still have sufficient life left over to be around to respond to your questions.

The spectrum of responses and moralities I've seen on display here (and elsewhere) are artifacts of our being and culture. Many of the behavioral tendencies that we ascribe as being "moral" have both an innate ("instinctual" fo... (read more)

Hmm... This whole baby-killing example is making me think...

Knecht: "Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn't do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to killing babies."

This does seem like what a true amoralist might say... yet, what if the idea of having forgone the opportunity to substantially increase the future hap... (read more)

The Greeks really did get it all right.
No, they were simply less wrong than most on a limited number of memorable topics.

Laura ABJ: To expand on the text you quoted, I think that killing babies is ugly, and therefore would not do it without sufficient reason, which I don't think the scenario provides. The ugliness of killing babies doesn't need a moral explanation, and the moral explanation just builds on (and adds nothing but a more convenient way of speaking about) the foundation of aversion, no matter how it's dressed up and made to look like something else.

The idea is not compelling to me and so would not haunt me forever, because like I said, I'm not yet convinced that ... (read more)

I realize that just because I am fairly confident I wouldn't suffer terribly from killing the baby if my knowledge was fairly complete, I can't say that for all people. People's utility functions differ, as do their biological and learned aversions to certain types of violence. The cognitive dissonance created by being presented with such a situation might be too great for some, causing them to break down psychologically and rationalize their way out of the decision any way they could. What if we upped the stakes and took it from some anonymous baby pai... (read more)

Hal Finney:
Why doesn't the AI do it verself? Even if it's boxed (and why would it be, if I'm convinced it's an FAI?), at the intelligence it'd need to make the stated prediction with any degree of confidence, I'd expect it to be able to take over my mind quickly. If what it claims is correct, it shouldn't have any qualms about doing that (taking over one human's body for a few minutes is a small price to pay for the utility involved).
If this happened in practice I'd be confused as heck, and the alleged FAI being honest about its intentions would be prett... (read more)

By gum, I'm amazed that fifty comments have gone by and nobody's mentioned future toddler chopper Vox Day. Sure, it's nearly a year and a half old, but if anyone doubted that there are apparently functioning humans out there who would tick the second box and fill in "until my arm got tired", there's your proof.

The Euthyphro hypothetical does remind me a bit of the Ticking Time Bomb--a thoroughly unrealistic situation designed to cause the quiz-taker to draw a conclusion about more realistic situations that they wouldn't have come to otherwise.

If there were no such thing as green, what color would green things be? MU.

Kierkegaard talked about all this in Fear and Trembling a long time ago. Was Abraham sacrificing Isaac immoral? If you call morality the universal of that time, then you have to suppose it was. But in the story we know that God told Abraham to do it.

The universal has no way to judge anyone who defies it in favor of their own experience of God. No argument on the basis of the universal morality can call Kierkegaard's knight of faith back from the quest.

"And if an ext... (read more)

Would you kill babies if it was inherently the right thing to do?

If it were inherently the right thing to do, then I wouldn't be here. Someone would have killed me when I was a baby.

themusicgod:
This assumes that the people around you generally do the right thing. If you operate under the alternative assumption (which is much more reasonable) you would likely still be alive.

There is no morality; it is a fiction to be discarded alongside god and rights. What informs our actions is the trifecta of self-interest, emotion, and social expectation. Our upbringing and later education shape which of these is given more weight when we make our decisions.

There simply is no moral property to an action or consequence. There is no natural property that is moral. There is no discoverable law or property that can inform an ought. Our "ethical intuitions" are simply emotional responses. No one can say "killing is wrong"... (read more)

I don't get it. If killing babies was inherently good, I would kill them, sure. It's not like killing babies is inherently bad.

Or did you think that I thought so?

I understand that in many usual contexts killing babies would seem bad to me, because I was given instructions to take care of babies (generally) by evolution, only because having these instructions made it more likely for me to exist and have those instructions. So what? Is existing and having instructions inherently good?

In general, the Socratic questions in this sequence don't seem to work for me... is this because I'm not answering in a way I was expected to?

wedrifid:
Maybe the problem is that you do already get it so don't particularly benefit from the exercise.

If I AM a utility function maximizer and I proved that killing a baby reliably maximizes it, then sure, I'll kill.

But I am not. My poorly defined, open-ended morality meter will break in and demand revising that nice and consistent utility function I've based my decision on. And so the moral agonising begins.

Answer: I don't know, and it will be painful work to decide, weighing all the pros and cons, building and checking new utility functions, rewriting morality itself...

So... the correct answer is to dissolve the question, yes?

[This comment is no longer endorsed by its author]

I like to think of this as being extreme artificiality. Humans have always attempted to either ignore or go against certain natural elements in order to flourish. It was never this fundamental, though. Logic has, at best, managed to straighten us out and make things better for us. And at worst, it reaches conclusions that are of no practical consequence. If it ever told us that killing babies is good, we would of course have to check all the consequences of what it would mean to ignore this logic. If we get lucky, it’s a logic that doesn’t really extend ve... (read more)

If it were revealed to me that, say, the Aztecs were right, their gods are real, and the One True Religion, then I believe it would be my duty to defy their will, and reject their plan for mankind. Power does not grant moral authority, even if it is the power that was used to make the world as it is.

Would I be brave enough to do it in practice? I have no idea, but I think it helps that I'm thinking about it beforehand.

What would you have wished for the external objective morality to be instead? What's the best news you could have gotten, reading that stone tablet?

That's an awesome question. I'm going to have to steal that one.

[anonymous]:

I find it funny that many of the people here were pretty much freaked out by the idea of "objective morality built into the fabric of the universe" not really mattering for humans, yet when it comes to mythology they have no problem criticizing Abraham for being willing to sacrifice his son because God told him to.

Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's "natural", for example. He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.

Does anyone have sources to support this conclusion about Kass's views? I tracked down a transcript of an interview he gave that was cited on a longevity website, but it doesn't support that characterization at all. He does express concerns about greatly increased lifespans, but makes clear that he see... (read more)

[This comment is no longer endorsed by its author]

Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

I'm reminded of one of Bill Watterson's Calvin and Hobbes strips:

Calvin: I'm at peace with the world. I'm completely serene.
Hobbes: Why is that?
Calvin: I've discovered my purpose in life. I know why I was put here and why everything exists.
Hobbes: Oh really?
Calvin: Yes. I am here so everyone can do what I want.
Hobbes: (rolling eyes) It'

... (read more)

What if the structure of the universe says to do something horrible?

If the "structure of the universe" is something mathematical (e.g. the prime number theorem), then it's meaningless to ask "what if the structure of the universe says X" unless it truly says X. Assuming it says something different from what it really says immediately leads to a logical contradiction, which allows deducing anything at all.

If you could write the stone tablet yourself, what would it say?

You're suggesting that we should trust our moral intuition instead of ... (read more)

Responding to old post:

In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8-14, the Biblical story of Joshua's battle in Jericho:

If you ask a question to schoolchildren, you have to take into consideration that children are supposed to obey authority figures. And not only because the authority figures have power, but because children don't know and can't comprehend many important things about the world, and that makes it a good idea for children to put little weight on their own conclusions and a lot of weight on... (read more)

Link to "virtue which is nameless" is broken. Probably should be http://www.yudkowsky.net/rational/virtues/

The idea of a Tablet that simply states moral truths without explanation (without even the backing of an authority, as in divine command theory) is a form of ethical objectivism that is hard to defend, but the difficulty doesn't generalise to all ethical objectivism. For instance, if objectivism works in a more math-like way, then a counterintuitive moral truth would be backed by a step-by-step argument leading the reader to the surprising conclusion, in the way the reader of maths is led to surprising conclusions such as the Banach-Tarski paradox. The Tablet argument s... (read more)

dxu:
How do you get a statement with "shoulds" in it using pure logical inference if none of your axioms (the laws of physics) have "shoulds" in them? And if the laws of physics have "shoulds" in them, how is that different from having a tablet?
entirelyuseless:
How many axioms do you have? Language has thousands of words in it, and logical inference will never result in a statement using words that were not in the axioms. Notice that this doesn't prevent us from knowing thousands of true things and employing a vocabulary of thousands of words.
dxu:
Sorry, but I'm not sure what your comment has to do with mine. Please expand.
entirelyuseless:
You asked, "How do you get a statement" etc. I was answering that. In the same way we get all our other statements.
dxu:
So, just to be clear, I was objecting to this part of TheAncientGeek's comment: My comment was an attempt to point out (in a rhetorical way) that math requires axioms, and you can't deduce something your axioms don't imply. After all, there are no universally compelling arguments--and in the case of morality, unless you're specifically choosing your axioms to have "shoulds" in them from the very start, you can't deduce "should" statements from them (although that doesn't stop some people from trying). You can, of course, have your own personal morality that you adhere to (that's the part where you choose your axioms to have "shoulds" in them from the beginning), but that's a fact about you, not about the universe at large. To claim otherwise is to claim that the laws of physics themselves have moral implications, which takes us back to moral realism (i.e. an external tablet of morality). Your comment is true, of course, but it seems irrelevant to my original objection.
entirelyuseless:
It is not irrelevant. Physics does not contain axioms that have the word "apple" in them, and so you cannot logically go from the axioms of physics to "apples tend to fall if you drop them." That does not prevent you from making a reasonable argument that if the axioms of physics are true, then apples will fall, and it does not prevent you from arguing for morality.
dxu:
This is an equivocation. "Apple" is a term we use to refer to a large collection of atoms arranged in a particular manner. The same goes for the word "bridge" that you mentioned in your other comment. The fact that we can talk about such collections of atoms and refer to them using shorthands ("apple", "bridge", etc.) does not change the fact that they are still made of atoms, and hence subject to the laws of physics. This fact has precisely no bearing on the issue of whether it is possible to deduce morality from physics. EDIT: Speaking of whether it's possible to deduce morality from physics, I actually already linked to (what in my mind is) a fairly compelling argument that it's not, but I note that you've (unsurprisingly) neglected to address that argument entirely.
entirelyuseless:
"Apple" is not used to refer to a "large collection of atoms" etc. You believe that apples are large collections of atoms; but that is not the meaning of the word. So you are making one of the same mistakes here that you made in the zombie argument.
ChristianKl:
People spoke of apples before they knew anything about atoms. Someone discovered at some point that the entities we call apples are made out of atoms. If I had a teleporter and exchanged the atoms one by one with other atoms, it would also stay the same apple. Especially when it comes to bridges, I think there are actual bridges that have had nearly total atom exchange but are still considered to be the same bridge.
dxu:
Your comment is true, but it doesn't address the original issue of whether it is possible to deduce morality from physics. If your intent was to provide a clarification, that's fine, of course.
TheAncientGeek:
How do you get a statement about how you should build a bridge so it doesn't fall down?
dxu:
Presumably, you get such a statement from the laws of physics, which allow you deduce things about quantities like force, stress, gravity, etc. I see no evidence that the laws of physics allow you to deduce similar things about morality.
entirelyuseless:
No, because the axioms of physics do not contain the word "bridge." (Also, note that TheAncientGeek deliberately included the word "should" in his bridge statement, so you just effectively contradicted yourself by saying that a statement involving "should" can be deduced from physics.)
TheAncientGeek:
You seem to have conceded that you can get shoulds out of descriptions. The trick seems to be that if there is something you want to achieve, there are things you should and should not do to achieve it. If the purpose of morality is, for instance, to achieve cooperative outcomes and avoid conflict over resources, then there are things people should and shouldn't do to support that, although something like game theory, rather than physics, would supply the details.
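[Editorial illustration, not part of the original exchange.] The instrumental reading of "should" above can be sketched in a few lines of Python: fix a goal, and the "should" falls out of purely descriptive facts. The payoff numbers below are the standard textbook Prisoner's Dilemma values, and the function names are invented for this example.

```python
# Hypothetical sketch: deriving an instrumental "should" from a stated goal
# plus purely descriptive facts (here, a standard one-shot Prisoner's
# Dilemma payoff table).

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_joint_move(goal=lambda mine, theirs: mine + theirs):
    """Return the move pair that best serves the given goal.

    The default goal is cooperative: maximize total payoff. Swapping in a
    selfish goal changes what one "should" do -- the "should" comes from
    the goal, not from the payoff table (the "is") alone.
    """
    return max(PAYOFFS, key=lambda pair: goal(*PAYOFFS[pair]))

# With the cooperative goal, the derived "should" is mutual cooperation:
print(best_joint_move())                            # -> ('C', 'C')
# With a purely selfish goal, it is unilateral defection:
print(best_joint_move(lambda mine, theirs: mine))   # -> ('D', 'C')
```

The design point matches the comment: the payoff table plays the role of physics (pure description), while the goal function supplies the normative content; game theory is just the machinery connecting the two.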

This post is generalizable: even if you don't think that it's wrong to kill people as a general rule, there's probably some other moral act #G_30429 that you don't think would be appropriate, and the point still holds. Rowhammering the bit that says "Don't do #G_30429" is probably not as impossible as it seems in the long run.

(Meta: when thinking about this I found it difficult to recall all of the arguments I've learned in moral philosophy over the past 16 years of trying that would have been applicable. I knew where you were g... (read more)

There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality.  It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer... 

I'm a little late here, but this sounds a lot like Corneliu Codreanu's line that the truest martyr of all is one who goes to Hell for his country.