All of wstrinz's Comments + Replies

Once I understood the theory, my first question was: has this been explained to any delusional patient with a good grasp of probability theory? I know this sort of thing generally doesn't work, but the n=1 experiment you mention is intriguing. I suppose what is more often interesting to me is what sorts of things people come up with to dismiss conflicting evidence, since it sits in a strange place between completely random and clever lie. If you have a dragon in your garage about something, you tend to give the most plausible excuses, because you know, deep dow...

Sure, how about being in a village taken over by the Khmer Rouge, or a concentration camp in Nazi Germany? Someplace where you don't necessarily die quickly, but have to endure a long and very unpleasant time with some amount of psychological or physical pain.

7Kaj_Sotala
Well, to take the concentration camp example. Every day, you'll encounter various painful things: malnutrition, both physical and mental violence from the guards, generally unpleasant living conditions, seeing your companions killed, and so on. Each of these triggers a You Should Really Stop This From Happening reaction from your brain, countered by a system saying I Have No Way Of Stopping This. On top of the individual daily events, your attention will also be constantly drawn to the fact that these events will continue for as long as you stay here, and you'll also suffer from having your attention drawn to the fact that you can't actually get out. Some of the coping mechanisms that have been identified in concentration camp inmates include trying to find meaning in the experience, concentrating on day-to-day survival, fatalism and emotional numbing, and dreaming of revenge. Each of these could plausibly be interpreted as a cognitive/emotional strategy in which the system sending the impossible-to-satisfy "you need to get out of here" message was quieted and the focus was shifted to something more plausible, thereby somewhat reducing the suffering.

I get the feeling I'm not quite completing a full application of the definition here, but how does this apply to serious, terrible, "let's just imagine they threw you in hell for a few days" suffering? Sure, one can say that it's mostly pain being imagined, and that the massive overload of a sensor not designed for such environments is really what bothers us, but is there a way that the part of this we usually talk about as 'suffering' fits into the attention-allocation narrative? Or are we talking about two different things here?

All the same, I find this fascinating and am going to experiment with it in my daily life. Looking forward to the non-content-focused post.

0Kaj_Sotala
Can you give a more concrete example?

I agree: I can't really reliably predict my actions. I think I know the morally correct thing to do, but I'm skeptical of my (or anyone's) ability to make reliable predictions about their actions under extreme stress. As I said, I usually use this on people who seem overly confident of the consistency of their morality and their ability to follow it, as well as on people who question the plausibility of the original problem.

But I do recall the response distributions for this question mirroring the distribution for the second trolley problem; far fewer take...

Great point. I'd never thought of that, and no one I've ever tried this one on has mentioned it either. This makes it more interesting to me that some people still wouldn't kill the baby, though that may be for reasons other than real moral calculation.

1CarlJ
Maybe this can work as an analogy: right before the massacre at My Lai, a squad of soldiers is pursuing a group of villagers. A scout sees them up ahead at a small river, and he sees that they are splitting up and going in different directions. An elderly man goes to the left of the river and the five other villagers go to the right. The old man is trying to make a large trail in the jungle, so as to fool the pursuers.

The scout waits for a few minutes, until the rest of his squad joins him. They are heading up the right side of the river and will probably continue that way, at the risk of killing the five villagers. The scout signals to the others that they should go left. The party follows, and they soon capture the elderly man and bring him back to the village center, where he is shot.

Should the scout instead have said nothing, or kept running forward, so that his team would have killed the five villagers instead?

There are some problems with equating this to the trolley problem. First, the scout cannot know for certain beforehand that his team will go in the direction of the larger group. Second, the best solution may be to try to stop the squad, by faking a reason to go back to the village (saying the villagers must have run in a completely different direction).
3TheOtherDave
For my own part: I have no idea whether I would kill the baby or not, and I have even less of an idea whether anyone else would... I certainly don't take answers like "I would kill the baby in this situation" as reliable evidence that the speaker would kill the baby in this situation. But I generally understand trolley problems to be asking what I think is the right thing to do in situations like this, not asking me to predict whether I will do the right thing in them.

I hope I'd do the same. I've never had to kill anyone before though, much less my own baby, so I can't be totally sure I'd be capable of it.

I've used the trolley problem a lot, at first to show off my knowledge of moral philosophy, but later, once I realized anyone who knows any philosophy has already heard it, to shock friends who think they have a perfect and internally consistent moral system worked out. But I add a twist, which I stole from an episode of Radiolab (which got it from the last episode of MASH), that I think makes it a lot more effective: say you're the mother of a baby in a village in Vietnam, and you're hiding with the rest of the village from the Viet Cong. Your baby start...

4fr00t
The answer that almost everyone gives seems very sensible. After all, the questions "What do I believe I would actually do?" and "What do I think I should do?" are different. Obviously, self-modifying to the point where these answers are consistent in as many scenarios as possible is probably a good thing, but that doesn't mean such self-modification is easy. Most mothers would simply be incapable of doing such a thing. If they could press a button to kill their baby, more would probably do so, just as more people would flip a switch to kill someone than push someone in front of a train. You obviously should kill the baby, but it is much more difficult to honestly say you would kill a baby than to say you would flip a switch: the distinction is not one of morality but of courage.

As a side note, I prefer the trolley-problem modification where you can have an innocent, healthy young traveler killed in order to save five people in need of organs. Saying "fat man," at least for me, obfuscates the moral dilemma and makes it somewhat easier.
2WrongBot
I would smother the baby and then feel incredibly, irrationally guilty for weeks or months. I am not a psychopath, but I am a utilitarian. I value having a consistent set of values more than I value any other factor that has come into conflict with that principle so far.

This is only equivalent to a trolley problem if you specify that the baby (but no one else) would be spared, should the Viet Cong find you. Otherwise, the baby is going to die anyway, unlike the lone person on the second trolley track who may live if you don't flip the switch.

[anonymous]110

I immediately thought, "Kill the baby." No hesitation.

I happen to agree with you on morality being fuzzy and inconsistent. I'm definitely not a utilitarian. I don't approve of policies of torture, for example. It's just that the village obviously matters more than a goddamn baby. The trolley problem, being more abstract, is more confusing to me.

4Desrtopa
If this were a real situation rather than an artificial moral dilemma, I'd say that if you can't silence the baby just by covering its mouth, you should shake it. That gets a baby to stop making noise, and while it's definitely not good for the baby, it still gives the baby better odds than being smothered to death.
0wedrifid
The "at this point" part is interesting. Have you ever tried asking the question without the abstract priming? I'd like to see the difference.
8wedrifid
They would say the same thing only with more sincerity.

Despite all of the concerns raised in lionhearted's post, and everything that's been written on LW about how analytic types have trouble getting along without getting defensive and prickly, I still think I wouldn't see a response like this just about anywhere else on the internet. Karma points to you.

9Viliam_Bur
Yes. But it also proves lionhearted's point: you can get great results if you speak with people the right way. The right way has a different ratio of ingredients on LW than outside it. The carefulness described in the article is nice to have on LW, but absolutely necessary in most of the world.

I like this post a lot; it speaks to some of my concerns about this community and about the sorts of people I'd like to surround myself with. As an analytical/systematizing/whatever type (I got a 35 on the test Roko posted a while back; interpret from that what you will), I felt very strange about all of these rhetorical games for most of my life. It was only when I discovered signalling theory in my study of economics that they started to make sense. If I frame my social interactions as signalling problems, the goals and the ways I should achieve them seem to be...

You said pretty much what I was thinking. My (main) motivation for copying myself would be to make sure there is still a version of the matter/energy pattern wstrinz instantiated in the world in the event that one of us gets run over by a bus. If the copy has to stay completely separate from me, I don't really care about it (and I imagine it doesn't really care about me).

As with many uploading/anthropics problems, I find abusing Many Worlds to be a good way to get at this. Does it make me especially happy that there's a huge number of other me's in other universes? Not really. Would I give you anything, time or money, if you could credibly claim to be able to produce another universe with another me in it? Probably not.

0Roko
This comment will come back to haunt you ;-)
5cousin_it
Yep, I gave the same answer. I only care about myself, not copies of myself, high-minded rationalizations notwithstanding. "It all adds up to normality."

Man, I wish I weren't away at college; I'd love to come to a Less Wrong meetup in my hometown...

[This comment is no longer endorsed by its author]