The following is an excerpt of an exchange between Julia Galef and Massimo Pigliucci, from the transcript for Rationally Speaking Podcast episode 132:

Massimo: [cultivating virtue and 'doing good' locally 'does more good' than directly eradicating malaria]

Julia: [T]here's lower hanging fruit [in the developing world than there is in the developed world]. By many orders of magnitude, there's lower hanging fruit in terms of being able to reduce poverty or disease or suffering in some parts of the world than in other parts of the world. In the West, we've picked a lot of the low hanging fruit, and by any sort of reasonable calculation, it takes much more money to reduce poverty in the West -- because we're sort of out in the tail end of having reduced poverty -- than it does to bring someone out of poverty in the developing world.

Massimo: That kind of reasoning brings you quickly to the idea that everybody here is being a really, really bad person because they spent money coming here to NECSS to listen to us instead of saving children on the other side of the world. I resist that kind of logic.

Massimo (to the audience): I don't think you guys are that bad! You see what I mean?

I see a lot of people, including bullet-biters, who feel a lot of internal tension, and even guilt, because of this apparent paradox.

Utilitarians usually stop at the question, "Are the outcomes different?"

Clearly, they aren't. But people still feel tension, so it must not be enough to believe that a world where some people are alive is better than a world where those very people are dead. The confusion has not evaporated in a puff of smoke, as it should have if that answer settled the matter.

After all, imagine a different gedanken where a virtue ethicist and a utilitarian each stand in front of a user interface, with each interface bearing only one shiny red button. Omega tells each, "If you press this button, then you will prevent one death. If you do not press this button, then you will not prevent one death."

There would be no disagreement. Both of them would press their buttons without a moment of hesitation.

So, in a certain sense, it's not only a question of which outcome is better. The repugnant part of the conclusion is the implication for our intuitions about moral responsibility. It's intuitive that you should save ten lives instead of one, but it's counterintuitive that the one who permits death is just as culpable as the one who causes death. You look at ten people who are alive when they could be dead, and it feels right to say that it is better that they are alive than that they are dead, but you juxtapose a murderer and your best friend who is not an ascetic, and it feels wrong to say that the one is just as awful as the other.

The virtue-ethical response is to say that the best friend has lived a good life and the murderer has not. Of course, I don't think that anyone who says this has done any real work.

So, if you passively don't donate every cent of discretionary income to the most effective charities, then are you morally culpable in the way that you would be if you had actively murdered everyone that you chose not to save who is now dead?

Well, what is moral responsibility? Hopefully we all know that there is not one culpable atom in the universe.

Perhaps the most concrete version of this question is: what happens, cognitively, when we evaluate whether or not someone is responsible for something? What's the difference between situations where we consider someone responsible and situations where we don't? What happens in the brain when we do these things? How do different attributions of responsibility change our judgments and decisions?

Most research on feelings has focused only on valence, that is, on how positivity and negativity affect judgment. But there's clearly a lot more to this: sadness, anger, and guilt are all negative feelings, but they're not all the same, so there must be something going on beyond valence.

One hypothesis is that the differences between sadness, anger, and guilt reflect different appraisals of agency. When we are sad, we haven't attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we've attributed the cause of the event to the actions of another agent. When we are guilty, we've attributed the cause of the event to our own actions.

(It's worth noting that there are many more types of appraisal than this, many more emotions, and many more feelings beyond emotions, but I'm going to focus on negative emotions and appraisals of agency for the sake of brevity. For a review of proposed appraisal types, see Demir, Desmet, & Hekkert (2009). For a review of emotions in general, check out Ortony, Clore, & Collins' The Cognitive Structure of Emotions.)
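
To make the agency-appraisal hypothesis above concrete, here is a minimal illustrative sketch (Python; the three-way mapping and all of the names are my own simplification of the hypothesis as stated, not a model taken from the cited literature):

```python
from enum import Enum

class AppraisedCause(Enum):
    SITUATIONAL = "situational"   # no agent involved; beyond human control
    OTHER_AGENT = "other_agent"   # attributed to another agent's actions
    SELF = "self"                 # attributed to my own actions

def predicted_negative_emotion(cause: AppraisedCause) -> str:
    """Toy model: map an appraisal of agency for a negative event onto the
    negative emotion that the hypothesis predicts will follow."""
    return {
        AppraisedCause.SITUATIONAL: "sadness",
        AppraisedCause.OTHER_AGENT: "anger",
        AppraisedCause.SELF: "guilt",
    }[cause]

# Witnessing a murder vs. concluding that you yourself failed to prevent a death:
print(predicted_negative_emotion(AppraisedCause.OTHER_AGENT))  # anger
print(predicted_negative_emotion(AppraisedCause.SELF))         # guilt
```

The point of the sketch is only that the input is a causal attribution, not a raw outcome: the same negative outcome can yield sadness, anger, or guilt depending on where the cause is placed.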

So, what does it look like when we narrow our attention to specific feelings on the same side of the valence spectrum? How are judgments affected when we only look at, say, sadness and anger? Might experiments based on these questions provide support for an account of our dilemma in terms of appraisals of agency?

In one experiment, Keltner, Ellsworth, & Edwards (1993) found that sad subjects consider events with situational causes more likely than events with agentic causes, and that angry subjects consider events with agentic causes more likely than events with situational causes. In a second experiment in the same study, they found that sad subjects are more likely to consider situational factors as the primary cause of an ambiguous event than agentic factors, and that angry subjects are more likely to consider agentic factors as the primary cause of an ambiguous event than situational factors.

Perhaps unsurprisingly, watching someone commit murder, and merely knowing that someone could have prevented a death on the other side of the world through an unusual effort, make very different things happen in our brains. I expect that even the utilitarians are biting a fat bullet; that even the utilitarians feel the tension, the counterintuitiveness, when utilitarianism leads them to conclude that indifferent bystanders are just as bad as murderers. Intuitions are strong, and I hope that a few more utilitarians can understand why utilitarianism is just as repugnant to a virtue ethicist as virtue ethics is to a utilitarian.

My main thrust here is that "Is a bystander as morally responsible as a murderer?" is a wrong question. You're always secretly asking another question when you ask that question, and the answer often doesn't have the word 'responsibility' anywhere in it.

Utilitarians replace the question with, "Do indifference and evil result in the same consequences?" They answer, "Yes."

Virtue ethicists replace the question with, "Does it feel like indifference is as 'bad' as 'evil'?" They answer, "No."

And the one thinks, in too little detail, "They don't think that bystanders are just as bad as murderers!", and likewise, the other thinks, "They do think that bystanders are just as bad as murderers!".

And then the one and the other proceed to talk past one another for a period of time during which millions more die.

As you might expect, I must confess to a belief that the utilitarian is often the one less confused, so I will speak to that one henceforth.

As a special kind of utilitarian, the kind that frequents this community, you should know that, if you take the universe, and grind it down to the finest powder, and sieve it through the finest sieve, then you will not find one agentic atom. If you only ask the question, "Has the virtue ethicist done the moral thing?", and you silently reply to yourself, "No.", and your response is to become outraged at this, then you have failed your Art on two levels.

On the first level, you have lost sight of your goal. As if your goal were to find out whether or not someone has done the moral thing! Your goal is to cause them to commit the moral action. By your own lights, if you fail to be as creative as you can possibly be in your attempts at persuasion, then you're just as culpable as someone who purposefully turned someone away from utilitarianism as a normative-ethical position. And if all you do is scorn the virtue ethicists, instead of engaging with them, then you're definitely not being very creative.

On the second level, you have failed to apply your moral principles to yourself. You have not considered that the utility-maximizing action might be something besides getting righteously angry, even if that's the easiest thing to do. And believe me, I get it. I really do understand that impulse.

And if you are that sort of utilitarian who has come to such a repugnant conclusion epistemically, but who has failed to meet your own expectations instrumentally, then be easy now. For there is no longer a question of 'whether or not you should be guilty'. There are only questions of what guilt is used for, and whether or not that guilt ends more lives than it saves.

All of this is not to say that 'moral outrage' is never the utility-maximizing action. I'm at least a little outraged right now. But in the beginning, all you really wanted was to get rid of naive notions of moral responsibility. The action to take in this situation is not to keep them in some places and toss them in others.

Throw out the bath water, and the baby, too. The virtue ethicists are expecting it anyway.

 


Demir, E., Desmet, P. M. A., & Hekkert, P. (2009). Appraisal patterns of emotions in human-product interaction. International Journal of Design, 3(2), 41-51.

Keltner, D., Ellsworth, P., & Edwards, K. (1993). Beyond simple pessimism: Effects of sadness and anger on social perception. Journal of Personality and Social Psychology, 64, 740-752.

Ortony, A., Clore, G. L., & Collins, A. (1990). The Cognitive Structure of Emotions (1st ed.). Cambridge University Press.

Comments (102)

My main thrust here is that "Is a bystander as morally responsible as a murderer?" is a wrong question. You're always secretly asking another question when you ask that question, and the answer often doesn't have the word 'responsibility' anywhere in it.

You might also consider that you simply lack a moral modality that other people have. It is the right question for them, but no more meaningful to you than color is to a blind person.

I wonder if you actually lack the moral modality for responsibility, or are merely analyzing based on your ideological metaethical beliefs. I wonder if a lot of people lack that modality.

Responsibility, duty, rights - they all create a space between my preferred outcome and the outcome of your action where I will refrain from coercion, retaliation, threats, or even disapproval. That's the space where freedom and autonomy live.

When Jonathan Haidt first created his moral foundations, he didn't have a dimension for autonomy. It had to be pointed out to him. I find it disturbing to contemplate people lacking that modality. It's like looking into the eyes of a person and seeing Clippy staring back at me.

Gram_Stone:
Wow, okay, so I am not talking about a situation where you can do whatever the hell you want, and I'm not proposing any sort of position that makes you start coercing and threatening people, or taking away people's rights. You are lumping a lot more stuff in. I'm really only talking about how people make causal inferences, and how these result in different feelings like sadness, anger, and guilt. The reason it's good to feel guilty is that it gives you a signal that you are the causal origin of a negative outcome. But then people try to reconcile their scope insensitivity with their causal inference mechanism, and if they discredit their intuitions about scope and locality, then that causal inference mechanism gives them a huge dose of the 'you are the causal origin of a negative outcome' signal. That's the 'repugnant conclusion'. The other option is to discredit your intuition about the causal inference mechanism: say that anyone who focuses on outcomes is obviously missing some larger point about morality, because there's no way that we're all bad people. I'm saying that when you know what guilt actually is, and what it's for, you can stop relying on vague intuitions and just do, reliably, what those intuitions were doing successfully only half of the time. You don't need to care about the guilt, because the system that delivers it was never designed to make inferences of that scope. If anything, the level of guilt people feel when they believe that they should be utilitarians is an underestimate, because of the scope insensitivity! Recognizing your feelings as sources of information about what you actually want, instead of constantly, implicitly using them as value judgments about 'you' due to a lack of understanding, is totally different from saying that you can do anything you want, and that guilt is an illusion.
Lumifer:
Why are you assuming that the signal is correct? I tend to think of guilt as pain feedback for breaking internalised norms. A lot of these norms are social or socially created. That does not make them automatically "right". Take a sincere Catholic girl who slept with some guy and is now feeling very very guilty about that. Is it good for her to feel guilty? What is that guilt "actually", and what is it "for"? What should she do if she wants to "stop relying on vague intuitions and just do, reliably, what those intuitions were doing successfully only half of the time"?
Gram_Stone:
Damn, I had considered using the word 'useful', but I used 'good' instead, so that I could avoid flak from the other guy! Of course signals can be 'incorrect', in a sense. I admit that I didn't consider this the sort of advice that sincere Catholic girls with really conservative beliefs about sexuality would read. I am assuming a certain level of background knowledge here. If I have made an error in that regard, then I bear responsibility for it. (Heh.)
Lumifer:
Knowledge? Our sincere Catholic girl is very knowledgeable. Perhaps you mean that your advice applies only to people with the "correct" moral systems?
Gram_Stone:
Not even. I'm assuming that her ontology for anything but the most immediate, important, tangible things would be practically useless. You can generate and possess an accurate world-model with a botched morality, but you're very unlikely to commit the moral action if the values are spot on but the world-model and its generator are botched. You should begin with ontology and epistemology and then move on to ethics. Ironically, I actually talked about not feeling guilty, as opposed to feeling guilty, in the article above. But that probably wouldn't be helpful for someone like that either, even if it seems like it superficially would be in your thought experiment.
Lumifer:
In which sense useless? She's a contemporary, educated girl, she can navigate this world perfectly well and you probably won't disagree with her about much in the descriptive sphere. What you would disagree about is the normative sphere, but that doesn't have to do much with ontology. Why do you assume that her "world-model" is botched? There are plenty of very bright religious people.
buybuydandavis:
And I'm still noting that you seem to lack cognizance of the responsibility modality. Causality is a part of responsibility, but does not determine it. Getting out of bed in the morning may causally lead to you getting hit by a bus, or someone else getting hit by a bus, but that doesn't make you morally responsible for the accident. Again, I wonder if you don't get it at all, and simply lack a moral modality I have. People clearly get this idea to varying degrees. People often still feel guilty when they are part of a causal chain, even when they "know" they were not responsible. Seeing how that tendency distributes across Haidt's moral foundations would be really interesting. As for our emotions and moral intuitions, I agree that one should realize one's essential freedom in how we respond to them. They are all data. We can choose. For the rest of your post, I'm not a utilitarian and wasn't really interested in commenting on your apparent attempt to ameliorate guilt in utilitarians. You can do anything you can do. As I read it, you interpreted guilt as the emotional reaction to being part of a causal chain leading to a bad outcome. That's not an illusion. It's a mistake to think I held that you were saying it was.

Virtue ethicists replace the question with, "Does it feel like indifference is as 'bad' as 'evil'?" They answer, "No."

I haven't read the relevant literature, but I wouldn't think this is actually the question they are asking themselves. I don't see any virtue-based conceptual framework referred to in the question.

Any virtue ethicists out there who could weigh in?

Gram_Stone:
Thanks for speaking up. I just framed this as a verbal question for convenience. Massimo talks about how utilitarianism leads him to the conclusion that most people must be extremely bad, and that this is counterintuitive enough for him to reject utilitarianism. My hypothesis is that he means that the way that his brain natively evaluates responsibility does not agree with an evaluation of responsibility that is equivalent to asking "What are the consequences of this action or inaction?" Virtue-ethical language doesn't come into play because they're looking for reasons not to be utilitarians, not ways to be virtue ethicists. Does that make sense?

(I like your way of thinking, and I like even more that you look at this problem in the first place. I've had this mental note that says "utilitarianism vs guilt???" for a while now.)

One facet of the problem I think you overlooked is the "social group" dynamic.

Consider, which of these two is a more accurate expansion of "I'm a utilitarian" as observed in Real Life (TM):

  • "My goal is to save lives effectively, in accordance with a coherent utility function (etc.)"

  • "I think it's 'good'/'proper' to save lives effectively, in accordance with a coherent utility function (etc.)"

I would be interested to see a sketch of the evidence in question for atheism. (Not so interested for utilitarianism -- values versus facts.)

But if the point you're making is an instance of the general schema "there is evidence against almost anything, and if you collect just the evidence that goes one way you can often make it look quite convincing" then I agree, but plead guilty only to not wanting to weigh my comment down with hedging. One of the things I meant by "good reasoning" was not doing that :-).

tl;dr It's difficult to get people to change their mind :-)

You have to consider that humans don't have perfect utility functions. Even if I want to be a moral utilitarian, it is a fact that I am not. So I have to structure my life around keeping myself as morally utilitarian as possible. Brian Tomasik talks about this. It might be true that I could reduce more suffering by not eating an extra donut, but I'm going to give up on the entire task of being a utilitarian if I can't allow myself some luxuries.

Gram_Stone:
This is actually just the sort of thing that I'm trying to say. I'm saying that when you understand guilt as a source of information, and not a thing that you need to carry around with you after you've learned everything you can from it, then you can take the weight off of your shoulders. I'm saying that maybe if more people did this, it wouldn't be as hard to do extraordinary kinds of good, because you wouldn't constantly be feeling bad about what you conceivably aren't doing. Most of what people consider conceivable would require an unrealistic sort of discipline. Punishing people likely just reduces the amount of good that they can actually do. Am I right that we seem to agree on this?
woodchopper:
I think I agree with what you're saying for the most part. If your goal is, say, reducing suffering, then you have to consider the best way of convincing others to share your goal. If you started killing people who ran factory farms, you're probably going to turn a lot of the world against you, and so fail in your goal. And, you have to consider the best way of convincing yourself to continue performing your goal, now and into the future, since human goals can change depending on circumstances and experiences. In terms of guilt, finding little tricks to rid yourself of guilt for various things probably isn't a good way to make you continue caring and doing as much as you can for a certain issue. I can know that something is wrong, but if I don't feel guilty about doing nothing, I'm probably not going to exert myself as hard in trying to fix it. If I can tell myself "I didn't do it, therefore it's none of my concern, even though it is technically a bad thing" and absolve myself of guilt, it's simply going to make me less likely to do anything about the issue.
Gram_Stone:
Ah, I assumed the guilt would demotivate on net. Maybe it depends on how strongly you identify with utilitarian ideas.

On the object level, I think you are almost completely wrong.

You say, "There is not one culpable atom in the universe." This is true, but your implied conclusion, that there are no culpable persons in the universe, is false. Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

But if there are agents in the universe, and there are, then there can be good and bad agents there, just as there are good and bad apples in the universe.

Richar...

Everyone knows that the right choice here is to save the child, and that the utilitarian choice is wrong.

[citation needed]

Saving the child is the choice that feels better, the choice that will make other people think better of us, the choice that all else being equal gives most evidence of being a good person. For all those reasons, I expect many of us would choose to save the child. But is that the right choice? I am very very unconvinced.

A more reputable reason to prefer saving the child: we may reasonably doubt our impact estimates for very indirect charitable activity like donating money to help people far away, and suspect that they may be inflated (because pretty much everyone involved has an incentive to inflate them). So if our "number of expected lives" was estimated without taking that into account, we might want to reduce the estimate substantially. But all that would mean is that one of the things we're comparing against one another is wrong, and that has nothing to do with deficiencies in utilitarianism.

Of course the scenario is ridiculous anyway; it seems to require that arriving ten minutes later and damp will stop us ever making the donation (how??), or else that the donation is so time-critical that every 10 minutes of delay means three more lives lost (in which case we probably shouldn't merely be jogging).

Lumifer:
Whether it's the right choice is a function of your moral system. Under some moral systems it is, and under some it isn't. However notice the "everyone knows" part. Everyone does know. Which percentage of the population do you expect to agree that letting the child drown was the right thing to do? Any more than for the trolley one? Hypotheticals aren't known for their realism.
gjm:
Right. And provided some of the latter moral systems are ones endorsed by actual people, it cannot be true that "Everyone knows ...". Oh, I'm sorry. I'd thought we were having a discussion about ethics, not a popularity contest. What percentage of the population has even heard of utilitarianism? What proportion has heard of it and has a reasonably accurate idea what it is? Nope, ridiculous to a similar extent and in similar ways. This is relevant not because there's anything wrong with using unrealistic hypothetical questions to explore moral systems, but because there's something wrong with making a naked appeal to intuition when addressing an unrealistic hypothetical question (that being what entirelyuseless just did). Because our intuitions are not calibrated for weird hypothetical situations and we shouldn't expect what they tell us about such situations to be very enlightening.
Gram_Stone:
A while back, a lot of people would have agreed that setting cats on fire for entertainment was totally cool. The idea is that the argument sneaks in intuitions about the situation that have been explicitly stipulated away.
Lumifer:
Yes, and which conclusion do you draw from this observation? I am not sure I understand. Which intuitions have been explicitly stipulated away and where?
Gram_Stone:
I don't see how defining morality as the popular vote doesn't entail moral progress being a random walk, and don't think that that definition provides any kind of answer to most of the questions that we pose within the cultural category 'moral philosophy'. There's implicit uncertainty about how to compare the moral weight of children and adults. Is there not always some number of adults that would be better to save than a fixed number of children? Would you sacrifice ten million adults for one child? There's some number. People have unique intuitions about the moral weight of children, as opposed to adults, and most utilitarians don't make any kind of concrete judgments about what the weights should be. If you throw in something like this, then you're not countering a claim that anyone has actually made. There are other intuitions that implicitly affect the judgment, like pleasure, social reputation, uncertainty about the assumptions themselves. In particular, it's hard to suspend your disbelief in a thought experiment. If it really were the case that you knew with certainty that you could live and save two people instead of dying trying to save someone else and failing, then yes, you should pick the action that leads to the outcome with the greatest number of people safe. And finally, these things never actually happen. You seem to champion pragmatism constantly; I don't see how being able to save a life for $4,000 instead of $100,000 and ignoring quirks about my ability to perceive large scopes and distant localities to come to the conclusion that, yes, in fact, I should save twenty-five lives instead of one life, is counterintuitive, unpragmatic, or morally indefensible. I see thought experiments against utilitarianism as counterintuition porn, pitting a jury-rigged human brain against the most alien, unrealistic situation that you possibly can.
Lumifer:
You imply that the empirically observed ("popular") morality of different societies at different times is a random walk. Is that a bullet you wish to bite? The point I had in mind, though, wasn't defining morality through democracy. If you think that your moral opinions about cats on fire are better than those of some fellows a century or two ago, you have a couple of ways to argue for this. One would be to claim that moral progress exists and is largely monotonic and inescapable, thus your morality is better just because it comes later in time. Another would be to claim that you are in some way exceptional (in terms of your position in space and/or time), for example you can see the Truth better than those other folks because they were deficient in some way. As you are probably well aware, such claims tend to be controversial and have issues. I was wondering which path you want to take. I'm guessing the moral progress path, am I right? Sure, but what has been explicitly stipulated away? That's not what we are talking about, is it? We are talking more about immediate, visceral-reaction kinds of actions versus far-off, unconnected, and statistical-averages kinds. In some way it's an emotion vs intellect sort of a conflict, or, put in different terms, hardwired biological imperatives vs abstract calculations. You are saying that abstract calculations provide the right answer, but I don't see it as self-evident: see my post above about putting all your trust into a single maximization.
TheAncientGeek:
Yes, morality has a cluster of concerns including obligation, praise, blame, and rightness of action. That's the deontological cluster. If you are concerned about culpability, you need to think about what responsibilities you are under. You have an obligation to pay your taxes, but not one to spend your disposable income in any particular way. There's another cluster to do with voluntary action, outcomes, and making the world better. That's the consequentialist cluster. Utilitarianism is a good tool for spending money optimally, but if you try to use it as a theory of obligation, it breaks. The third cluster is virtue-theoretic, concerned with self-cultivation. I don't know why Pigliucci thinks you can tell whether you are obligated by examining subjective feelings. You are obligated to do something if you are likely to be blamed for not doing it; self-blame is secondary to that. You have to look outward, not inward, to find the objective fact. One way of fixing emotional problems is to run off the right theory.
torekp:
This. I call the inference "no X at the microlevel, therefore, no such thing as X" the Cherry Pion fallacy. (As in: no cherry pions implies no cherry pie.) Of course, more broadly speaking it's an instance of the fallacy of composition, but this variety seems to be more tempting than most, so it merits its own moniker. It's a shame. The OP begins with some great questions, and goes on to consider relevant observations like the ones quoted above. But from there, the obvious move is one of charitable interpretation, saying: Hey! Responsibility is declared in these sorts of situations, when an agent has caused an event that wouldn't have happened without her, so maybe "responsibility" means something like "the agent caused an event that wouldn't have happened without her". Then one could find counterexamples to this first formulation, and come up with a new formulation that got the new (and old) examples right ... and so on.
gjm:
The OP has explicitly denied committing the cherry pion fallacy here. I confess, though, that I'm not sure what point the OP is making by observing that grinding the universe to dust would not produce agenty dust. I can see two non-cherry-pion-fallacy-y things they might be saying -- "agency doesn't live at the microlevel, so stop looking at the microlevel for things you need to look further up for" and "agency doesn't live at the microlevel, but it's produced by the microlevel, so let's understand that and build up from there" -- but I don't see how to fit either of them into what comes before and after what the OP says about agenty dust. Gram_Stone, would you care to do some inferential-gap bridging?
DanArmak:
Suppose you know there are three people being held hostage across the street, who will be killed unless the ransom money is delivered in the next ten minutes. You're running there with the money in hand; there's no-one else who can make it in time. On the way, you witness a young child drowning in a river. Do you abandon your mission to save the child? I claim that many (most?) people would be much more understanding if I ignored the child in my example, than if I did so in yours. Do you agree? The only difference between the two scenarios is that the hostages are concrete, nearby and the danger immediate, while the people you're donating to are far away in time and space and probably aren't three specific individuals anyway. And this engages lots of well known biases - or useful heuristics, depending on your point of view. How would one argue that it's right to save the child in your example, and right to abandon it in mine? I think most people would (intuitively) try to deny the hypothetical: they would question how you can be so sure that your donation would save exactly three lives, and why making it later wouldn't work, and so on. But if they accept the hypothetical that you have a clear choice between the two, then what difference can motivate them, other than the near-far or specific people vs. statistic distinctions? What other rule can be guiding 'what is the right thing to do'? And do you accept this rule?
entirelyuseless:
I agree that the differences are more or less what you say they are, and I think those differences can be enough to determine what is right and what is not. I do not think it has anything to do with being biased.
DanArmak:
Certainly, you can assign moral weight to strangers according to their distance from you, their concreteness, and their familiarity or similarity to you. That is what many people do, and probably everyone instinctively does it to some degree. Modern utilitarians, EAers, etc. don't pretend to be perfect; most of them just deviate a little bit from this default behavior. One problem with this is that, in historically recent times, a very few people are sometimes placed in positions where they can (or must) decide the lives of billions. And then most people agree we would not want them to follow this rule. We don't want the only thing stopping nuclear first strikes to be the fear of retaliation; if Reagan had had a button which would instantly wipe out all USSR citizens with no fear of revenge strikes, we would want him to not press it for moral reasons. Another problem is that it creates moral incentives not to cooperate. If two groups are contesting a vital resource, we'd rather they share it; we don't want them to each have moral incentives to go to war over it, because it's morally more important to have a vital resource for yourself than it is not to kill some strangers or deprive them of it. A related issue is that the precise function by which moral weight falls off with distance has to be very finely tuned. Should it fall off with distance squared, or cubed, or what? Is there any way for two friends to convince one another whose moral rule is more exactly correct?
entirelyuseless:
I started to write a response to this and then deleted it because it grew to over a page and I wasn't close to being finished. Basically you are looking at things from a utilitarian point of view and would like a description of my position in terms of a utility function. But I don't accept that point of view, even if I understand it, and the most natural description of my way of acting isn't a utility function at all. (I accept that to the degree that my actions are consistent, it is mathematically possible to describe those actions with a utility function -- but there is no necessary reason why that utility function would look very sensible, given that the agent is not actually using a utility function, but some other method, to make its choices.) The simple answer (the full answer isn't simple) to your questions is that I should do the right thing in my life, which might involve giving money to strangers, but which probably does not involve giving 50% of it to strangers, and those few people who are in positions of power should do the right thing in their lives, which definitely does not normally involving wiping out countries.
Gram_Stone:
I'm not saying that no one's responsible for anything, and I'm not saying that there are no agents! I mean, seriously? Do you actually know people like that? I started to write a substantive reply to this, but I've interacted with you in the past and you're one of the few people who I don't even want to give a chance to. And I'm the kind of guy who is this charitable. And I just wrote an entire article about engaging virtue ethicists instead of blowing them off, and I'm still blowing you off. Come back when you have a non-ridiculous interpretation of my arguments, or don't come back at all.
entirelyuseless:
If your comments about atoms were about atoms and nothing else, you should remove them from the post, because they are a distraction from the real discussion, and they are also misleading, because the natural interpretation is the one that I gave.

I don't get the problem here.

A murderer is bad. A non-murderer is neutral. Some guy is good. A martyr is gooder.

Somehow he takes this and utilitarianism to mean that everyone is evil, a non-sequitur. Then he blames the bystander for deaths that don't even have a cause-effect connection to him.

Isn't the question of someone being a good or a bad person at all a part of virtue ethics? That is, for a utilitarian the results of the bystander's and murderer's actions were the same, and therefore the actions were as bad as each other, but that doesn't mean a bystander is as bad as the murderer, because that's not a part of the utilitarian framework at all. Should we implement the policy of blaming or punishing them the same way? That's a question for utilitarianism. And the answer is probably "no".

Gram_Stone:
I've had similar thoughts in the past few days. It does seem that utilitarianism merely prescribes the moral action, without saying anything about the goodness or badness of people. Of course, I've seen self-identifying utilitarians talk about culpability, but they seem to be quickly tacking this on without thinking about it.
Vamair0:
It is possible to talk about utilitarian culpability, but it's a question of "would blaming/punishing this (kind of) person lead to good results". Like you usually shouldn't blame those who can't change their behavior as a response to blame unless they self-modified themselves to be this way or if them being blameless would motivate others that can... That reminds me of the Eight Short Studies On Excuses, where Yvain has demonstrated an example of such an approach.

Which is?

Moral responsibility is related to but not the same thing as moral obligation, and it's completely possible for a utilitarian to say one is morally forbidden to be a bystander and let a murder happen while admitting that doing so doesn't make you responsible for it. This is because responsibility is about causation and obligation is about what one ought to do. Murderers cause murders and are therefore responsible for them, while bystanders are innocent. The utilitarian should say not that the bystander is as morally responsible as the murderer (because they aren't), but that moral responsibility isn't what ultimately matters.

It's quite possible to acknowledge that real agents, including myself, do not have perfect models, nor perfect understanding of their own utility, nor perfect control of their subpersonal modules in order to act in accordance with stated beliefs all the time. Personally, I am not a utilitarian because I think that most utility functions are not consistent, and even if they were I don't have sufficient knowledge to compare them within myself, let alone across individuals.

In any case, it's pretty clear that no known actual (non-mythical) agent is perfect in...

There are only questions of what guilt is used for, and whether or not that guilt ends more lives than it saves.

That starts to remind me of medieval Christianity. The only question is whether you can save souls from eternal torment, anything that happens in this world is utterly irrelevant in comparison, and guilt, yes, guilt is a very useful tool.

Thank you, I'll pass.

Gram_Stone:
I don't even really get what you're passing on. I really would like to understand what your criticism is, but this is way too little information for me to infer that.
Lumifer:
I'm passing on hard-core utilitarianism, basically. Specifically, I'm passing on simple functions to be maximised with everything else considered an acceptable sacrifice if it leads to an uptick in the One True Goal. Even more specifically, I'm passing on using guilt to manipulate people into doing things you want them to do, all in the service of the One True Goal. The parallel should be obvious: if you believe in eternal (!) salvation and torment, absolutely anything on Earth can be sacrificed for a minute increase in the chance of salvation.
Furcas:
... yes? What's wrong with that? Are you saying that, if you came across strong evidence that the Christian Heaven and Hell are real, you wouldn't do absolutely anything necessary to get yourself and the people you care about to Heaven? The medieval Christians you describe didn't fail morally because they were hard-core utilitarians, they failed because they believed Christianity was true!
Lumifer:
Yes, I'm saying that. I'm not sure you're realizing all the consequences of taking that position VERY seriously. For example, you would want to kidnap children to baptize them. That's just as an intermediate step, of course -- you would want to convert or kill all non-Christians, as soon as possible, because even if their souls are already lost, they are leading their children astray, children whose souls could possibly be saved if they are removed from their heathen/Muslim/Jewish/etc. parents.
Furcas:
Yes, I acknowledge all of that. Do you understand the consequence of not doing those things, if Christianity is true? Eternal torment, for everyone you failed to convert. Eternal. Torment.
Lumifer:
Yes, I do. Well, since I'm not actually religious, my understanding is hypothetical. But yes, this is precisely the point I'm making.
Furcas:
Well, my point is that stating all the horrible things that Christians should do to (hypothetically) save people from eternal torment is not a good argument against 'hard-core' utilitarianism. These acts are only horrible because Christianity isn't true. Therefore the antidote for these horrors is not, "don't swallow the bullet", it's "don't believe stuff without good evidence".
Lumifer:
Is that so? Would real-life Christians who sincerely and wholeheartedly believe that Christianity is true agree that such acts are not horrible at all and, in fact, desirable and highly moral? So once you think you have good evidence, all the horrors stop being horrors and become justified?
DanArmak:
If your evidence is good enough, then one must choose the lesser horror. "Better they burn in this life than in the next." Various arguments have been made that it's impossible to be sure to the degree required. I don't accept them, but I don't think you're advancing one of them either.
Lumifer:
I haven't been advancing anything so far. I was just marveling at the readiness, nay, enthusiasm with which people declare themselves to be hard-headed fanatics ready and willing to do anything in the pursuit of the One True Goal. There are... complications here. First let me mention in passing two side issues. One is capability: even if you believe the "lesser horror" is the right way, you may find yourself unable to actually do that horror. The other one is change: you are not immutable. What you do changes you, the abyss gazes back, and after committing enough lesser horrors you may find that your ethics have shifted. Getting back to the central point, there are also two strands here. First, you are basically saying that evil can become good through the virtue of being the lesser evil. Everything is comparable and relative, there are no absolute baselines. This is a major fork where consequentialists and deontologists part ways, right? Second is the utilitarian insistence that everything must be boiled down to a single, basically, number which determines everything. One function to rule them all. I find pure utilitarianism to be very fragile. Consider a memetic plague (major examples: communism and fascism in the first half of the XX century; minor example: ISIS now). Imagine a utilitarian infected by such a memetic virus which hijacks his One True Goal. Is there something which would stop him from committing all sorts of horrors in the service of his new, somewhat modified "utility"? Nope. He has no failsafes, there is no risk management, once he falls he falls to the very bottom. If he's unlucky enough to survive till the fever passes and the virus retreats, he will look at his hands and find them covered with blood. I prefer more resilient systems, less susceptible to corruption, ones which fail more gracefully. Even at the price of inefficiency and occasional inconsistency.
DanArmak:
Conditional on being sufficiently convinced such a goal is true, which I am not and assign negligible probability to ever being. Both are issues that must be addressed, but they don't imply one should abandon the attempt. Also, they aren't exclusive to doing extremely horrible instrumental things in pursuit of even-more-extremely good outcomes. I'm saying that whether or not you embrace a notion of the absolute magnitude of good and evil - that is, of a moral true zero - an evil can be the least evil of all available options. More importantly, deontology is completely compatible with theology. Many people believe(d) in the truth of a religion, and also that that religion commands them to either convert or kill non-believers. This is where the example used in this thread comes from: "burn their bodies - save their souls". So I'm not sure if you're proposing deontology as a solution, and if so, how. I'm not a utilitarian, for a better reason than that: utilitarianism doesn't describe my actual moral feelings (or those of almost all other people, as far as I can tell), so I see no reason to wish to be more utilitarian. In particular, I assign very different weights to the wellbeing of different people. That is not very different from imagining a meme that infects any other kind of consequentialist and hijacks the moral weight of a particular outcome. Or which infects deontologists with new rules (like religions sometimes do).
Lumifer:
Kinda? The interesting thing about utilitarians is that their One True Goal is whatever scores the highest on the utility-meter. Whatever it is. This is conditional on two evils being comparable (think about generic sorting functions in programming). Not every moral system accepts that all evils can be compared and ranked. Again, kinda? It depends. Even in Christianity true love for Christ overrides any rules. Formulated in a different way, if you have sufficient amount of grace, deontological rules don't apply to you any more, they are just a crutch. That's perfectly compatible with utilitarianism. My understanding of utilitarianism is that it's a variety of consequentialism where you arrange all the consequences on a single axis called "utility" and rank them. There are subspecies which specify particular ways of aggregating utility (e.g. by saying that the weights of utility of all individuals are all the same), but utilitarianism in general does not require that.
DanArmak:
But they still need to take into account the probabilities of their factual beliefs. Getting everyone into Heaven may be the One True Goal, but they need to also be certain that Heaven really exists and that they're right about how to get there. Yes. That's why I said "an evil can be" and not "some evil must be". But usually, given a concrete choice, one outcome will be judged best. It's unlikely, to put it mildly, that someone would believe they can determine whether another person goes to Heaven or Hell, and be morally indifferent between the choices. That appears to be true for many Protestant denominations. In the Catholic and Orthodox churches, though, salvation is only possible through the church and its ministers and sacraments. And even most Protestants would agree that some (deontological) sins are incompatible with a state of grace unless repented, so at most a past sinner can be in a state of grace, not an ongoing one. It's good to be precise about the meaning of words. I've talked to some people (here on LW) who didn't accept the label "utilitarianism" for philosophies that assign near-zero value to large groups of people.
Lumifer:
True, but there are no absolute thresholds. Whatever gets ranked first is it. There are moral philosophies which would refuse to kill an innocent even if this act saves a hundred lives.
DanArmak:
What's wrong with that? Other than Pascal's mugging, which everyone needs to avoid. True, but very few people actually follow them, especially if you replace 'a hundred' with a much larger arbitrary constant. The 'everyone knows it's wrong' metric that was mentioned at the start of this thread doesn't hold here.
Lumifer:
Other than that Mrs. Lincoln, how was the play? :-) What's wrong with that is, for example, the existence of the single point of failure and lack of failsafes. I don't know about that. Very few people find themselves in a situation where they have to make this choice, to start with. We're back in Pascal's Mugging territory, aren't we? So what is it, is utilitarianism OK as long as it avoids Pascal's Mugging, or is "all evil is evil" position untenable because it falls prey to Pascal's Mugging?
DanArmak:
Why do you think other moral systems are more resilient? You gave communism, fascism and ISIS (Islamism) as examples of "a utilitarian infected by such a memetic virus which hijacks his One True Goal". Islamism, unlike the first two, seems to be deontological, like Christianity. Isn't it? Deontological Christianity has also been 'hijacked' several times by millennialist movements that sparked e.g. several crusades. Nationalism and tribe solidarity have started and maintained many wars where consequentialists would make peace because they kept losing. That's true. But do many people endorse such actions in a hypothetical scenario? I think not, but I'm not very sure about this. Good point :-) It's clear that one's decision theory (and by extension, one's morals) would benefit from being able to solve PM. But I don't know how to do it. You have a good point elsewhere that consequentialism has a single failure point, so it would be more vulnerable to PM and fail more catastrophically, although deontology isn't totally invulnerable to PM either. It may just be harder to construct a PM attack on deontology without knowing the particular set of deontological rules being used, whereas we can reason about the consequentialist utility function without actually knowing what it is. I'm not sure if this should count as a reason not to be a consequentialist (as far as one can), because one can't derive an ought from an is, so we can't just choose our moral system on the basis of unlikely thought experiments. But it is a reason for consequentialists to be more careful and more uncertain.
Lumifer:
I think a mix of moral systems is more resilient. Some consequentialism, some deontology, some gut feeling. No, I don't think so. Mainstream Islam is deontological, but fundamentalist movements, just like in Christianity, shift to less deontology and more utilitarianism (of course, with a very particular notion of "utility"). Yes, deontology is corruptible as well, but one of the reasons it's more robust is that it's simpler. To be a consequentialist you first need the ability to figure out the consequences and that's a complicated and error-prone process, vulnerable to attack. To be a deontologist you don't need to figure out anything except which rule to apply. To corrupt a consequentialist it might be sufficient to mess with his estimation of probabilities. To corrupt a deontologist you need to replace at least some of his rules. Maybe if you find a pair of contradictory rules you could get somewhere by changing which to apply when, but in practice this doesn't seem to be a promising attack vector. And yes, I'm not arguing that this is a sufficient reason to avoid being a consequentialist. But, as you say, it's a good reason to be more wary.
DanArmak:
I completely agree. Also because this describes how humans (including myself) actually act: according to different moral systems, depending on which is more convenient, some heuristics, and on gut feeling.
Furcas:
Yes? Of course? With the caveats that the concept of 'Christianity' is the medieval one you mentioned above, that these Christians really have no doubts about their beliefs, and that they swallow the bullet. Are you trolling? Is the notion that the morality of actions is dependent on reality really that surprising to you?
Lumifer:
Why don't you go ask some. Huh? The "concept" of Christianity hasn't changed since the Middle Ages. The relevant part is that you either get saved and achieve eternal life or you are doomed to eternal torment. Of course I don't mean people like Unitarian Universalists, but rather "standard" Christians who believe in heaven and hell. Morality certainly depends on the perception of reality, but the point here is different. We are talking here about what you can, should, or must sacrifice to get closer to the One True Goal (which in Christianity is salvation). Your answer is "everything". Why? Because the One True Goal justifies everything including things people call "horrors". Am I reading you wrong?
Furcas:
I mentioned three crucial caveats. I think it would be difficult to find Christians in 2016 who have no doubts and swallow the bullet about the implications of Christianity. It would be a lot easier a few hundred years ago. What I mean is that the religious beliefs of the majority of people who call themselves Christians have changed a lot since medieval times. I don't see the relevance of what you call a "One True Goal". I mean, One True Goal as opposed to what? Several Sorta True Goals? Ultimately, no matter what your goals are, you will necessarily be willing to sacrifice things that are less important to you in order to achieve them. Actions are justified as they relate to the accomplishment of a goal, or a set of goals. If I were convinced that Roger is going to detonate a nuclear bomb in New York, I would feel justified (and obliged) to murder him, because like most of the people I know, I have the goal to prevent millions of innocents from dying. And yet, if I believed that Roger is going to do this on bad or non-existent evidence, the odds are that I would be killing an innocent man for no good reason. There would be nothing wrong with my goal (One True or not), only with my rationality. I don't see any fundamental difference between this scenario and the one we've been discussing.
Lumifer:
Yes. Multiple systems, somewhat inconsistent but serving as a check and a constraint on each other, not letting a single one dominate. Not in all ethical systems. In consequentialism yes, but not all ethics are consequentialist. How do you know that? Not in this specific example, but in general -- how do you know there is nothing wrong with your One True Goal?
hairyfigment:
Are you trying to be funny? Note that not all of the 70% would agree that belief or its lack sends people to Hell. See also. ETA: If you doubt what I said about beliefs regarding those "doomed to eternal torment," see "Many religions can lead to eternal life," in this sizeable PDF.
Vitor:
The real danger, of course, is being utterly convinced Christianity is true when it is not. The actions described by Lumifer are horrific precisely because they are balanced against a hypothetical benefit, not a certain one. If there is only an epsilon chance of Christianity being true, but the utility loss of eternal torment is infinite, should you take radical steps anyway? In a nutshell, Lumifer's position is just hedging against Pascal's mugging, and IMHO any moral system that doesn't do so is not appropriate for use out here in the real world.
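
A back-of-the-envelope way to write out that worry (my own notation and a deliberately naive expected-value reading; this sketch is not part of Vitor's comment):

```latex
% Let \epsilon be the credence that the doctrine is true, L the loss from
% eternal torment, and c the finite cost of the "radical steps".
\[
  \mathbb{E}[U(\text{act})] = -c,
  \qquad
  \mathbb{E}[U(\text{refrain})] = -\,\epsilon \, L .
\]
% Acting "wins" whenever \epsilon L > c; if L is allowed to be unbounded,
% that holds for every \epsilon > 0, which is exactly the Pascal's-mugging
% structure a moral system needs to hedge against.
```
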
hairyfigment:
You're hand-waving a lot of problems. Or you added too many negatives to that last sentence.
DanArmak:
You're describing a situation where some people hold factually incorrect beliefs (i.e. objectively wrong religions). And there's an infinitely powerful entity - a simulator, an Omega, a God - who will torture them for an unbounded time unless they change their minds and believe before they die. The only way to help them is by making them believe the truth; you completely believe this fact. Do you think that not overriding other people's will, or not intervening forcefully in their lives, is a more important principle than saving them from eternal torture? What exactly is the rule according to which you (would) act?
Lumifer:
Given your certainty, it seems that it would be easy for you to demonstrate and even to prove that these beliefs are "factually incorrect". Would you mind doing that? It would settle a lot of issues that humanity has struggled with for many centuries :-/
gjm:
I think you are misunderstanding what DanArmak wrote. The "situation" in question -- which it would be more accurate to say you were describing other people's belief in -- was that Christianity is right and unbelievers are going to hell; neither you nor Dan were endorsing that situation as an accurate account of the world, only as what some people have believed the world to be like. (Right, Dan?)
DanArmak (score 0, 8y):
That's right.
DanArmak (score 0, 8y):
Like gjm says, you seem to have missed that I was describing a counterfactual. I don't personally hold such a (religious) belief, so I can't do what you ask. But more relevantly, people have failed for many centuries to convince most others of many true facts I do believe in - such as atheism, or (more relevantly) the falsehood of all existing religions. This isn't because the beliefs aren't true or the proofs are hard to verify; it's because people are hard to convince of anything contrary to something they already believe which is of great personal or social importance to them. People, in short, are not truth seekers, and also lack by default a good epistemological framework to seek truth with.
Lumifer (score 1, 8y):
You're very... cavalier about putting an equals sign between things you believe in and things which are true. Yes, of course you believe they are true, but there is Cromwell's beseechment to keep in mind. Especially in a situation where you hold a certain belief and other people hold clearly different beliefs. Oh really? You can prove that all religions are false? Let me go back to my comment, then, where it seems I wasn't quite clear. If you can provide proofs of atheism being true, please do so. Of course, proving a negative is notoriously hard to do.
DanArmak (score 2, 8y):
I try to keep in mind a probabilistic degree of belief for different beliefs. But I do endorse my previous statement for some beliefs, which I hold strongly enough to simply refer to them as true, even after taking all the meta-arguments against certainty into account.

Those are two different things. It's hard to prove that atheism is true in the sense that all possible religions are false. But it's quite easy to prove that every actually existing theistic* religion (that I and whoever I'm talking to have ever heard of) is false.

(*) Excluding some philosophies which are called 'religions' but don't make any concrete claims, either natural or supernatural, limiting themselves to moral rules and so on; obviously those can't be true or false, proven or disproven.
Lumifer (score 2, 8y):
I don't believe this is true. Can you demonstrate? Let's take Christianity as the most familiar theistic religion. Here is the Nicene Creed, prove that it is false.
DanArmak (score 2, 8y):
The Creed is a part of a larger whole, not meant to form a religion on its own. It doesn't include the great majority of the usual reasons for believing in Christianity, which I would need to address to convince people that it is wrong; it states (some of) the conclusions Christians believe in but not their premises. A Christian wouldn't try to convert someone just by telling them the Nicene Creed, without even any evidence for believing in the Creed.

However, on further reflection: I must partially retract what I said. The 'quite easy' proof I had in mind is not universal: like any proof, its form and existence depend on who that proof is supposed to convince. It's famously hard to convince a Christian of a disproof of Christianity; it's also very easy to convince someone who is already an atheist, or an Orthodox Jew, that the same disproof is valid. Every human alive has heard of the concept of religion, and of some concrete religions (if not necessarily of Christianity), and usually either believes in one or explicitly does not believe in any. So it could be said there's no perfectly impartial judge of the validity of a proof.

I believe that a neutral, rational, unbiased reasoner would be convinced by my simple proofs; but even apart from not being able to test this, a Christian could argue that I'm sneaking assumptions into my definition of a neutral reasoner. (After all, every reasoner must start out with some facts, and if Christianity is true, why not start out believing in it?) I retract my previous claim. I don't have a "quite easy" proof that any given religion is false, if by proof we mean "some words that would quite easily convince a believer in that religion to stop believing in it."
Lumifer (score 1, 8y):
But that is precisely the part that I'm objecting to. I agree that trying to convince a believer would likely lead to some form of "Whatever, dude, you just need to let Jesus into your heart". I'm not a Christian. I don't think Christianity is true, but that's a probabilistic belief that potentially could change. Prove to me that Christianity is false.
DanArmak (score 4, 8y):
All beliefs are probabilistic and can change. The evidence that would convince us that Christianity is true would have to be commensurate with the prior telling us today that it is false. The existence of that low prior is the proof that it's very likely false (factoring in our uncertainty about some things). (By the word 'prove', I obviously didn't mean a logical proof that sets the probability to zero; just one that makes the probability so low that the theory would never rise to the level of conscious consideration.)

Why do I think our prior for Christianity should be very low?

First, because it makes supernatural claims; that is, claims which are by definition counter to all previous observations which we had used to determine the natural laws.

Second, because its core claims (and future predictions) are similar to many sets of (mutually contradictory) claims made by many other religions, which implies that generating such claims and eyewitness testimony (as opposed to lasting miraculous artifacts or states of nature) is a natural human behavior which doesn't need further explanation.

Third, because we know that Christianity and its dogmas have changed a lot during its history, and many sects have risen and fallen which have violently disagreed over every possible point of theology. Even if we assign a high probability to some variant of Christianity being true and ignore all other world religions (actual and possible), the average probability of any specific branch of Christianity would still be low, although not nearly as astronomically low as due to the other reasons given. And since we can trace clear human causes for the beliefs of many sects - like Luther postulating that since the Catholic clergy was corrupt, their theology must also be wrong, or like many sects that were declared heretical for political reasons - it's likely that all sects' beliefs had human causes, and evolved in large part from previous, non-Christian beliefs.

Fourth, and generalizing th
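The updating logic DanArmak is appealing to can be written compactly in odds form; the numerical example below is purely illustrative and not a figure anyone in the thread commits to:

\[
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}} .
\]

If the prior odds on a hypothesis are, say, \(1 : 10^{6}\), then new evidence must be about a million times more likely under the hypothesis than under its negation just to bring the posterior odds to \(1 : 1\); anything weaker leaves the hypothesis below, in DanArmak's phrase, "the level of conscious consideration."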
Lumifer (score 0, 8y):
I think you're trying to double-dip :-) The prior itself is a probability (or a set of probabilities). A "low prior" means that something is unlikely -- directly. It does not offer proof that it's unlikely, it just straight out states it is unlikely. And there doesn't seem to be any reason to talk about priors, anyway. It's not like at any moment we expect a new chunk of information and will have to update our beliefs. I think it's simpler to just talk about available evidence.

As a preface let me say that I basically agree with the thrust of your arguments. I am not a Christian, after all. However I don't consider them as anything close to a "proof" -- they look weaker to me than to you.

That is not so. Supernatural claims do not run "counter" to previous observations, they just say that certain beings/things/actions are not constrained by laws of nature. Wright brothers' airplane was not "counter" to all previous observations of transportation devices with an engine. Recall Clarke's Third Law. Not to mention that "all previous observations" include a lot of claims of miracles :-)

Yep. But there is a conventional explanation for that (I do not imply that I believe it): different traditions take different views of the same underlying divinity, but find themselves in the position of the nine blind men and the elephant. This point will also need to explain why large civilizations (e.g. China) did NOT develop anything which looks like monotheism.

That's a wrong way to look at it. Imagine that you have an underlying phenomenon which you cannot observe directly. You can only take indirect, noisy measurements. Different people take different sets of measurements, they are not the same and none of them are "true". However this does not mean that the underlying phenomenon does not exist. It only means that information available to you is indirect and noisy. See above -- different people might well have human reasons to prefer this particular set of measurements or t
bogus (score 1, 8y):
Who says that they didn't? Chinese folk religion acknowledged Shang-di (also called Tian, 'Heaven') as the primordial, universal deity, which is essentially a kind of henotheism and quite close historically to monotheism. This is especially true since other deities, while worthy of veneration and sacrifice, were largely conflated with "spirits". Of course, the later ideology of Confucianism tended to supplant these ancestral beliefs as a genuine foundation for ethics and philosophy/general worldview, although it did encourage the practice of rituals as a way of maintaining social harmony and a tightly-knit community.
Lumifer (score 0, 8y):
I do. If you squint hard enough you can detect monotheism in any religious system at which point the term "monotheism" loses any meaning. I'm using the conventional approach where religions like Judaism and Christianity (in spite of the Trinity!) are monotheistic and religions like Hinduism and Shinto are not.
bogus (score 1, 8y):
But the point is, the ancestral version of what would later evolve into Judaism was far from monotheistic; much like Chinese folk religion. As with almost anything else in history, monotheism was a gradual development.
Lumifer (score 0, 8y):
Sure. But let's go a bit upthread and look at my original sentence: Note the word "develop".
ChristianKl (score 0, 8y):
Christianity not only has the trinity but also a bunch of saints towards whom you can pray and who then supposedly intervene. Additionally there are a bunch of angels. There's the devil and demons.
DanArmak (score 0, 8y):
I meant that in response to your framing: "I don't think Christianity is true, but that's a probabilistic belief that potentially could change." If your belief changes in the future, it'll be in response to evidence. I can't know what evidence you'll receive in the future, so I can't refute it ahead of time. So all I can do is lower the probability for Christianity now, which will serve as a sufficiently low prior in the future so that any new evidence still doesn't convince you.

Yeah, I didn't phrase it well. The Wright brothers' plane wasn't claimed to be, or perceived as, supernatural. The reason miracles are advanced as proof of a religion - the reason they are in the discussion in the first place (and in the Nicene Creed) - is because they are very surprising events. If a prophet goes around performing miracles like curing sick people, and a second prophet goes around saying that whatever we already expect to happen is in itself a miracle ("the miracle of nature") and lets the sick people die, then we have a reason to believe the first prophet's other claims ("God is granting me this power") but not the second one's.

One meaning of "supernatural" is "unnatural": that is, an event that cannot happen according to normal natural law. Since we deduce natural law from observation only (and not from revelation or from first principles), this just means unique, unforeseeable events contrary to what we believe are the laws of nature. This is why many try to refute religious miraculous claims, not by denying the story of the claims, but by giving natural law explanations for the events.

Claims are cheap, proofs are hard. Christians deny the claims of miracles by non-Christians, at least ones made explicitly in the name of another religion (other than pre-Christian Judaism). They don't deny the supernatural explanation, they just deny the claim that the miracle occurred. Similarly, if I really believed all the Christian miracles occurred as stated, I would probabl
Lumifer (score 0, 8y):
I think we might be starting to rehash the standard atheist/Christian debate. Of course there are counter-counter-arguments for my counter-arguments to your arguments. And there are counter-counter-counter-arguments to those, too. Unless you really like turtles, I am not sure there is much need to go there. I know the arguments against religions. I find them sufficiently convincing to not be religious. I do not find them rising to the level of "proof". YMMV, of course.
entirelyuseless (score 0, 8y):
The two parts of your last paragraph oppose one another -- given the difficulty people have in seeking the truth, all proofs of that kind are hard to verify. You cannot say "the proofs are easy to verify, but most people do not have the ability to do so." Saying that something is easy just means that it does not take much ability. You can say that it is easy for you, perhaps, but not that it is just easy.
gjm (score 6, 8y):
Consider the following proposition: For each existing religion, one can easily set out evidence of its wrongness that would (1) be very convincing to the majority of people who are not already positively disposed towards that religion and (2) be good reasoning in the abstract; if we combine these, we get a strong argument that no existing religion is close to the truth, but this argument will not convince most people because most people are adherents of some religion or other, and it is extremely difficult for adherents of any religion to appreciate the strength of arguments against that religion. It seems to me that that proposition may very well be true, and that if it is true then it's correct (aside from the fact that "proofs" is too ambitious a word) to say both that it's difficult to convince people that all existing religions are wrong, and that the proofs of that fact are not hard to verify.
[anonymous] (score 0, 8y):
The same is true for atheism, and certainly for utilitarianism. Original thread here.
entirelyuseless (score 0, 8y):
(I think Eugine is downvoting your comments on this thread.)

I'm not sure this is true, for a number of reasons. By "one can easily set out evidence of its wrongness" I presume that you meant one particular set of evidence. In that case, I am not sure that you can in fact choose one particular set of evidence which would be very convincing to the majority of people. It's true that if you take all the people who disagree with a religion, then take any individual among them, you can easily find something which is convincing to him. But that may not be the same thing which is convincing to someone else. And since people who are religious typically think there is some evidence for their religion, they will disagree with some arguments against other religions because they will see that those arguments could also refute their own. They will see them as "proving too much." So it is not clear at all that you can take one particular set of evidence and convince the majority.

On the other hand, if you meant you can find a set of evidence that matches each individual, your argument is in fact too wide, because it would apply to nearly all philosophical views, including contrary ones, as time mentioned in his comment in regard to atheism and utilitarianism.

If you are avoiding that consequence by means of the "good reasoning in the abstract", I have various issues with that. First, it is possible to argue for something with "good reasoning in the abstract" in a way which would be convincing to many or most people, but which would become unconvincing if they were aware of other evidence and arguments for the opposite position. If good reasoning is supposed to mean that you mention all of the evidence, then your argument is question begging -- you are saying nothing more than that you have considered all of the relevant evidence that you can find, and you consider the most reasonable position to be that all religions are false. I agree both with this assessment of the evidenc
gjm (score 5, 8y):
Eugine is downvoting my comments everywhere. My 30-day karma is currently at -44; if I am interpreting the percentages right then my "non-Eugine" 30-day karma is probably somewhere around +270. (But I suspect that may be inflated a bit, because some people may be applying "corrective" upvotes to some of my comments that have been Eugined.)

That's a good observation. My feeling is that the "argument from evil" against religions that claim there's a supremely good supremely powerful being is extremely convincing to almost everyone who doesn't belong to such a religion -- but of course lots of people who don't belong to one such religion may belong to a different such religion and therefore be unfavourably disposed towards the AfE.

It is supposed to mean something along those lines, but if you think I'm begging the question then I think you are probably misunderstanding the nature of my argument. Let me recap, and try to make the connective tissue of the argument more visible; my apologies if this makes it laborious.

* DanArmak said (roughly) that atheism is not only correct but in some sense obviously correct; in other words, that the answer to "why are so many people religious?" is less "because anti-religious arguments are subtle and hard to understand" and more "because religious people are unable to accept even easy clear arguments if they lead to irreligious conclusions".
* Various people disagreed.
* I am agreeing (roughly and provisionally) with Dan, and suggesting a way to make his claim more precise: the idea is that (1) there are anti-religious arguments convincing to any given person (you just have to avoid arguments that they can see refuting their own religion) and (2) that isn't because you can find crappy arguments tailored to any given person's biases and errors -- the arguments in question are actually good ones in some objective sense.
* The people disagreeing with Dan mostly agree with him that in fact religions are probably wrong, and I thin
entirelyuseless (score 1, 8y):
Regarding the argument from evil:

1. I'm still not sure that your idea is true even when it is limited to "everyone who doesn't belong to such a religion." It might be true about LWers but they are not a representative sample. Americans seem to be pretty consistently more likely to identify as agnostic than as atheist, for example. Now of course that might be because America doesn't like atheists and they want to avoid the social consequences. And it might be different (as far as I know) in other countries, since I just checked the statistics for the US according to various polls. But prima facie, it suggests that "most people who don't believe in any god at all are not extremely convinced by any argument against the existence of a god." I am not asserting that this is definitely the case, but it is plausible to me, and supported at least by this fact. It's possible you could establish your claim with better data, but so far I'm not convinced.

2. There is still the problem that if you limit it to people who "don't belong to such a religion," then you appear to be saying "everyone who thinks that God doesn't exist thinks that there is a convincing argument against the existence of God," and even if that were one particular argument, it would be similar to saying, "everyone who accepts the error theory of morality finds such and such an argument convincing." Even if there is one such argument in the case of error theory (which might not actually be the case), that hardly establishes that it is easy to prove that error theory is true, or that it is true at all, for that matter.

3. I personally think the argument from evil (and various similar arguments) is evidence against the existence of a personal God, but I don't find it extremely convincing. A large part of the reason for that is when I did believe in Christianity, I had an answer to that argument which I found reasonable, and which still seems reasonable to me, not in the sense of "this is the case", but in
gjm (score 1, 8y):
Agnostics

It's true that a lot of people call themselves agnostics, which seems to indicate (1) not being completely convinced by any argument against theism while also (2) not having a commitment to any particular religion. However, I think the great majority of people who call themselves agnostics fall into one of these categories:

* People who prefer to avoid too-committal terms like "atheist", either because there's a stigma attached to overt atheism where they are or because they think "atheist" implies absolute certainty.
* People who haven't really thought the matter through very much.
* People who are agnostic about the existence of some sort of deity but strongly convinced that e.g. there is almost certainly no supremely good and powerful being who takes a personal interest in human affairs.

I would expect people in the first and third groups to share my opinion about arguments from evil, though the third lot would rightly observe that e.g. such arguments tell us nothing about superbeings who just don't care about our affairs. People in the second group might well not be very convinced by arguments from evil, but I would expect that if they gave serious consideration to such arguments they would typically see them as very strong.

"Everyone thinks there is a convincing argument"

That's not quite what I'm saying. I'm saying that there are, in fact, arguments that I would expect to be very convincing if looked at seriously by a sizable majority of people not committed to the religions in question. Of course those who haven't seriously considered such arguments will not yet be convinced.

Your own epistemic situation

I find it very interesting that you aren't very convinced by arguments from evil despite having rejected Christianity, but I don't think there's anything further I can say without having any idea why it is that you aren't convinced. You say it's because there's a particular answer you find reasonable, but I've no idea what that answer is
DanArmak (score 0, 8y):
It's true that the difficulty of understanding a proof is relative to the one doing the understanding. But what I meant was different. People don't (merely) "have difficulty in seeking the truth", or find the proofs "hard to verify". Rather, people are generally not interested in seeking truth on certain subjects, and not willing to accept truth that is contrary to their dearly held beliefs, regardless of the nature or difficulty of the proof that is presented to them. When I said that "people are not truth seekers", I didn't mean that they are bad at discovering the truth, but that on certain subjects they usually (act as if they) don't want to discover it at all.
entirelyuseless (score 0, 8y):
Yes, I basically agree with this, although I think it applies to the vast majority of non-religious people as much as to religious people, including in regard to religious topics. In other words it is mostly not for the sake of truth that someone holds religious beliefs, and it is mostly not for the sake of truth that someone else holds non-religious beliefs. Also, it does mean that people are bad at discovering the truth on the topics where they do not want to discover it, just as people are generally bad at jobs they do not want to do.
Lumifer (score 0, 8y):
This is certainly true and not limited to religion, too.
Brillyant (score 0, 8y):
What does this mean in this context?
Lumifer (score 0, 8y):
Means "pay special attention to, this is a key expression".
Brillyant (score 2, 8y):
This isn't exclusively medieval. Lots of modern Evangelicals view the world in rather stark heaven/hell terms.
buybuydandavis (score 1, 8y):
I believe that is still the official Catholic position. Pascal's wager applied. Some quote from some "official source" went like "better that we all die an agonizing death than that one soul is lost to damnation".
Lumifer (score 1, 8y):
Kinda-sorta. On the one hand, yes; on the other hand, nowadays the Vatican likes to talk about ecumenism and how everyone (notably including non-Christians) should live in peace and harmony. As usual, what it means is "better that you die an agonizing death than that one soul is lost to damnation".