I have a fondness for playing D&D characters who insist on using a moral system distinct from the morality imposed by the gods. The most common argument such characters make is that the existence of energy types connected with "good" and "evil", and of magic that specifically cares about them, is not evidence that those are what is objectively good or evil; they are merely forces that have been declared so because they frequently align with what humans commonly perceive as good and evil. A necromancer of mine once had quite a long argument with another player's paladin over these issues. Unfortunately, I've never gotten to play such a character for an extended campaign.
And I still haven't had a chance to run my transhumanist necromancer.
I know in some games it doesn't really matter, but do you know much sociology? How would your character come to such an original view?
Well, if your character has a natural talent for magic labeled evil, and he's still inclined to do what he sees as good, then he's likely to think about these sorts of issues.
Motivated cognition. As opposed to my current character's motivated stopping (he's a priest of a god-esque entity that's actually slightly evil, but he doesn't want to recognise that, so he's not going to think about it, except when he can't help it).
Would it be moral or rational to try to save them from their own folly, or to chalk it up to personal choice instead?
We don't need to hypothesize fantasy land; this is a real life question. People very often make bad decisions, and aren't always warm to the idea of people telling them how to run their lives. It's their choice, but they'll still suffer from it.
Taboo the words 'moral' and 'rational'. To the extent that you value preventing suffering, you have to want to change their behavior. There may be conflicting factors, or orthogonal factors that dominate, but that term is still there.
I think a lot of unstated assumptions are being smuggled in with the word "god" (and related words like "worship"), resulting in confusion.
If I imagine I live in a world (W) where a powerful entity (E) punishes behavior it describes as immoral (B), and some people (P) engage in B anyway, and you ask me whether I ought to do what I can to prevent P from engaging in B or ought to chalk it up to personal choice... well, it depends on specifics.
E.g., if W = mid-20th-century Germany and E = the government and B = homosexuality, my answer is "neither." I ought to do what I can to protect P from E, and to get rid of E.
I assume that isn't what you mean by a "god I worship."
If I instead imagine I live in W2 where E2 is an extremely reliable source of moral judgments and judges B2 immoral, and P2 engage in B2, what should I do?
Well, the simplest answer is "whatever E2 judges." That is, I ought to ask E2 "Should I try to save P2 from their own folly, or chalk it up to personal choice?" and then do whatever E2 says.
What if I can't do that? Well, why can't I?
One possible reason is that I don't know enough about W2 to know I ought to ask.
In the real world this seems plausible to me: I'm willing to believe that there's a system out there that reliably makes more accurate moral judgments than I do, but I don't know of any such system. But your use of the word "worship" blocks that reading... if I worship E2 in W2, it follows that I do know E2 has that property, because why would I worship it otherwise?
Another possible reason is that E2 doesn't answer such questions.
In this case, it seems to me that my next best bet is to reason by analogy from the things E2 has said, in order to answer the question. Which is more or less what believers in the moral superiority of various texts in the real world do. (Not just theists... an Objectivist friend of mine worked this way as well: when I asked him questions about what someone ought to do in this situation or that, his instinct was to go look the answer up.)
I realize that all of this sounds like I'm dodging the question, but to some extent that is my point: if you invite me to imagine that I live in a world that really does have a reliable source of objective moral judgments, and then ask me whether it's more moral in that world to do A or B, the correct answer really is that I should look it up!
The fact that such an answer doesn't tell me much that's useful about the real world is, to my mind, evidence supporting the idea that I don't live in such a world.
Would it be moral or rational to try to save them from their own folly, or to chalk it up to personal choice instead?
I'd say yes, within limits. For example, if it were a fact that masturbation made you go to hell, I'd be in favor of heavily censoring porn and finding ways to reduce the incidence of masturbation, but not necessarily of chaining people's hands to their beds against their will (though even that is questionable; I would certainly be in favor of offering people the possibility of signing up to have their hands forcibly chained to their bed!).
By the way, you get roughly the same issue with drug use: is it moral or rational to try to save someone from becoming addicted to a drug that will ruin his life? (This is separate from the question of which substances can ruin someone's life)
Honestly, I always found that really inconsistent. If people knew about hell, why would anyone be evil? Unless you're lich-caliber, have a massively small discount fraction, or simply cannot change your alignment, the cost to evil actions so dramatically outweighs the benefits. It's not whether or not you convince them, it's "how the hell aren't you convinced already?"
And so it seems to me that a consistent world has to have roughly similar afterlives, or have people be predestined based on their personality. "Well, I'm a mean person, I guess I might as well make the most of it, and get what joy from cruelty that I can now, because man am I screwed in the future."
I would strongly caution against drawing conclusions from fictional evidence here.
If people knew about hell, why would anyone be evil?
For the same reason, I suspect, that people can be evil in the real world while genuinely believing in divine punishment. For example, they may think that surely, their actions must be justified and therefore good.
But we're presuming a morality that's measurable. It's like saying "no, this uranium isn't radioactive, I'm believing really hard that it's not." These are pretty hardcore delusions we're talking about.
I think most people who believe they believe in divine punishment actually don't.
Even if people had strong evidence that hell existed, it would still be far, while the desirable outcomes of evil would be near. Real people already regularly make decisions that are flat out insane from a utility calculation perspective, so it doesn't seem unreasonable for fictional characters to do the same.
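The near-versus-far point can be made concrete with a toy discounting calculation (a hypothetical sketch with entirely made-up numbers, not anything from the setting or the thread): given a small per-year discount factor, even an enormous punishment arriving decades from now has a negligible present value next to a modest immediate payoff.

```python
# Toy sketch of exponential time-discounting. All numbers are invented
# for illustration; the point is only the shape of the arithmetic.

def present_value(utility, years_away, discount_factor):
    """Discounted present value of a utility received years_away from now."""
    return utility * (discount_factor ** years_away)

immediate_gain = 100          # utility of the evil deed, enjoyed right now
hell_penalty = -1_000_000     # utility of the punishment, however enormous
years_until_death = 40        # the punishment only arrives after death
discount = 0.7                # a "massively small discount fraction"

discounted_penalty = present_value(hell_penalty, years_until_death, discount)
net = immediate_gain + discounted_penalty

print(f"discounted penalty: {discounted_penalty:.4f}")  # about -0.64
print(f"net value of the evil deed: {net:.4f}")         # comfortably positive
```

With these assumed numbers the million-point penalty shrinks to less than a single point of present disutility, which is the "flat out insane from a utility calculation perspective" behavior the comment describes.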
Even if people had strong evidence that hell existed, it would still be far, while the desirable outcomes of evil would be near.
Hence the "massively small discount fraction." Although, actually, there's a better way to make that convincing: as ghosts, people could have less sentience and thus less capacity to gain utility or disutility. A ghost in a pit of flame might feel as bad as a human suffering from a headache; a ghost in heaven might feel as good as a human in afterglow. So to extraplanar travelers it looks like the ghosts are in a terrible situation, but they're sort of used to it and don't even notice that they're moaning anymore.
One reason I find that so unconvincing is that it essentially requires that evil people have no long-term planning ability. The plan put forward by Elan's dad assumes neutrality after his death, such that being a legend is a long-term goal actually worth pursuing compared to entry into heaven. He's going into it knowing full well that he's the villain and that he will die with his boots on, so to not go into it knowing full well that he's headed to the Abyss seems like a glaring plot hole.
I'm not well versed in the setting elements of D&D, and Order of the Stick is a homebrewed setting anyway, but I don't think that evil characters are subject to progressively greater torment depending on the magnitude of their crimes in life.
As best I recall, characters' experience in the D&D afterlife depends primarily on their moral alignment, which determines the plane of existence to which they get sent after they die. There are a few examples of specific torments for specific sins, but that's more the exception than the rule -- and in at least some cases it's possible for characters to become part of their destined afterlife's hierarchy if they fulfill the right conditions.
So Elan's dad is acting pretty sanely by not taking this into account, at least if we assume as most D&D settings do that alignment isn't very mutable. This sort of arrangement carries some rather odd implications, but hey, it's D&D.
To be fair, perhaps a billionth of the population might actually be that stupid for various reasons or insanity.
In addition, a world where, due to the objective existence of heaven and hell, almost everybody was deterred from doing evil would be self-consistent.
This is a decent start to a post. There should be more - perhaps your own answers to these questions (attempting to cover the field of possible answers) and a wrap-up conclusion. I'd be amazed if there wasn't reasonably good philosophical work on the area already, for example.
(Yes, LessWrong posts do ask a lot.)
It is a perfectly fine discussion post; I would agree with you if it were aimed at the main page.
Incidentally, since this is the discussion section, why don't you provide some of what you think is missing? Find a few philosophical papers on the topic and link them?
Because I'm not a philosopher, don't claim to be, and don't know the area :-) I said I'd be amazed if no one had thought about this, but it's possible they haven't.
Yeah, it's a good discussion post and appears to be generating discussion. I voted it up :-)
This comment by Carinthium on my earlier post has gotten me thinking; my apologies if this subject has been raised before, as I couldn't find it.