Please don't fight the hypothetical here. I know the evidence is nowhere near conclusive that atheism does in fact cause harm, as all the studies I've personally seen which suggest as much have some methodological flaws. This is merely a question of whether "That which can be destroyed by the truth should be" is, in fact, a useful position to take, in view of ideas which may actually be harmful.
As the person creating the hypothetical, you had the opportunity to construct it such that there was no good reason to fight it. While I'm not going to make any claims about atheistic beliefs myself, I would not consider those who choose to fight the hypothetical to be making an error. Mixing real things with fictional evidence has an impact on what people believe, even if you tell them "this is merely a question of [abstraction]".
Do you have a better example in mind?
(I might have considered using something generic such as “some piece of information”, but I suspect such a post would be received very badly by LW readers, because reasons.)
But yeah, generally speaking (I'm not talking about this post in particular -- I personally had no trouble semi-automatically mentally replacing “atheism” with “some piece of information”), reading Americans write stuff like that about atheism when I happen to live on a continent where a sizeable fraction of people (probably a majority of people in my generation) are atheists and the sky hasn't fallen yet feels quite weird. (Same for other things American conservatives oppose, such as gun control.)
Do you have a better example in mind?
No, not off the top of my head. I also don't particularly object to using this example. I do oppose prohibitions on fighting the hypothetical. Making this choice of hypothetical represents a form of influence. Persuasion that is immune to rebuttal is (usually) undesirable for epistemic purposes.
There is no need to go as hypothetical as that. There are plenty of routine small-scale "emotional basilisks" in everyday life, like whether to finally tell your partner that you had a one-time fling while on a business trip many years ago.
Blurring out all the fight-worthy details of the hypothetical, you've proposed something that saves people from wasting time (good), makes people's beliefs closer to reality (good in itself, and good in that it's likely to interact positively with unknown unknowns), but makes people unhappy (bad), and makes people live shorter lives (bad).
Whether you should do this depends on how much time is saved, how much you value that time being saved, how much closer to reality people's beliefs get, how much you value that closeness to reality, how much less happy people become, how much you value that happiness, how much shorter their lives become, and how much you value the extra lifespan. These numbers cover pretty much the entirety of the question. And... different people have different values and thus will answer with different numbers, and I don't think any sort of argument will resolve those disagreements.
By "good", I mean it's good according to my personal values and according to the sorts of values I think are most commonly found in other people. I'm not sure quite how it maps onto philosophical terminology, but I believe this is a fairly common way of navigating the is-ought distinction around here.
Would you kill babies if it were intrinsically the right thing to do? If not, under what other circumstances would you not do the right thing? If yes, how right would it have to be, for how many babies?
EDIT IN RESPONSE: My intended point had been that sometimes you do have to fight the hypothetical.
Would you kill babies if it were intrinsically the right thing to do?
Probably not.
If not, under what other circumstances would you not do the right thing?
Obviously whenever the force of morality on my volition is overcome by the force of other non-moral preferences that go in the opposite direction. (A mere aesthetic preference against baby-killing might suffice, likewise not wanting to go to jail or be executed.)
What about consequentialism? What if we'd get a benevolent AI as a reward?
We should never fight the hypothetical. If we get undesirable results in a hypothetical, that's important information regarding our decision algorithm. Refusing it is like getting a chance to be falsified and not wanting to face it. We could just as easily fight Parfit's Hitchhiker or Newcomb's problem. We shouldn't, and neither should we here.
Is there a difference between fighting the hypothetical and recognizing that the hypothetical is badly defined and needs so much unpacking that it's not worth the effort? This falls into the latter category IMO.
"Negative impact on happiness" is far too broad a concept, "theism" is a huge cluster of ideas, and the idea of harm/benefit on different individuals over different timescales has to be part of the decision. Separating these out enough to even know what the choice you're facing is will likely render the excercise pointless.
My gut feel is that if this were unpacked enough to be a scenario that's well-defined enough to really consider, the conundrum would dissolve (or rather, it would be as complicated as the real world but not teach us anything about reality).
Short, speculative, personal answer: there may be individual cases where short-term lies are beneficial to the target in addition to the liar, but they are very unlikely to exist on any subject that has wide-ranging long-term decision impact.
If you accept the traditional assumptions of Christianity (well, the ones about "what will happen if I do X," not about "is X right?"), killing babies is pretty clearly the right thing. And still almost nobody does it, or has any desire to do it.
A just-baptized infant, as far as I know, is pretty much certain to go to Heaven in the end. Whereas if it has time to grow up it has a fair chance of dying in a state of mortal sin and going to Hell. By killing it young you are very likely saving it from approximately infinite suffering, at the price of sending yourself to Hell and making its parents sad. Since you can only go to Hell once, if you kill more than one or two babies then you're clearly increasing global utility, albeit at great cost to yourself. And yet Christians are not especially likely to kill babies.
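Purely to illustrate the arithmetic of that argument (not to endorse it), here is a toy expected-utility sketch in Python. It uses finite stand-in values for Heaven and Hell, since genuinely infinite utilities make the sums degenerate, plus an assumed probability of a grown adult dying in mortal sin; every number is invented, and the point at which the total goes positive depends entirely on those made-up inputs.

```python
# Toy expected-utility version of the argument above. Finite stand-ins replace
# the infinities (which would make the arithmetic degenerate); all numbers are
# invented for illustration only.

HEAVEN = 1_000_000       # stand-in utility of going to Heaven
HELL = -1_000_000        # stand-in utility of going to Hell
P_ADULT_DAMNED = 0.3     # assumed chance a grown adult dies in mortal sin

def net_utility_change(n_babies_killed):
    """Change in total utility under the argument's (not my) assumptions."""
    # Each baby: certain Heaven instead of a gamble between Heaven and Hell.
    gain_per_baby = HEAVEN - (P_ADULT_DAMNED * HELL + (1 - P_ADULT_DAMNED) * HEAVEN)
    # The killer goes to Hell once, however many babies are involved.
    killer_cost = HELL - (P_ADULT_DAMNED * HELL + (1 - P_ADULT_DAMNED) * HEAVEN)
    return n_babies_killed * gain_per_baby + killer_cost

for n in (1, 2, 3):
    print(n, net_utility_change(n))   # goes positive at n = 3 with these inputs
```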
Yes, none, any amount at all for any amount at all...assuming no akrasia, and as long as you don't mean 'right thing to do' in some kind of merely conventional sense. But that's just because, without quotation marks, the right thing to do is the formal object of a decision procedure.
If that's so, then your question is similar to this:
Would you infer that P if P were the conclusion of a sound argument? If not, under what other circumstances would you not infer the conclusion of a sound argument?
I don't see how this relates to the original post; this strikes me as a response to a claim of objective/intrinsic morality rather than the issue of resolving emotional basilisks vis-à-vis the Litany of Tarski. Are you just saying "it really depends"?
This comment fails to address the post in any way whatsoever. No claim is made of the "right" thing to do; a hypothetical is offered, and the question asked is "what do you do?" It is not even the case that the hypothetical rests on an idea of an intrinsic "right thing" to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It's not an especially interesting or original question, but it does not make any claims which are relevant to your comment.
EDIT: That does make more sense, although I'd never seen that particular example used as "fighting the hypothetical", more just that "the right thing" is insufficiently defined for that sort of thing. Downvote revoked, but it's still not exactly on point to me. I also don't agree that you need to fight the hypothetical this time, other than to get rid of the particular example.
You've brought in moral realism, which isn't relevant.
"Would you do X if it were epistemically rational, but not instrumentally rational?"
"Would you do Y if it were instrumentally rational, but not epistemically rational?"
If two concepts aren't the same under all possible circumstances, they aren't the same concept. Hypotheticals are an appropriate way of determining that.
If you want to consider a hypothetical, you might not want to introduce terms and subjects that provoke conflict with the hypothetical itself. I think this post would have gotten a somewhat better response if you had just replaced "atheism" with "X". The handle "atheism" is just distracting at this point in the discussion.
It would depend on whether the happiness gains from theists no longer starting idiotic wars, arbitrarily disallowing pleasant or useful behaviors, discriminating against certain groups, or making irrational large-scale decisions would balance out the costs.
Suppose there's no evidence for or against those improvements manifesting from the propagation of atheism. You can, for the purposes of argument, assume/hope that those improvements will manifest, but you do so without evidence that this will actually be the case, or that things won't actually get worse in any of those areas.
Uh, then I would probably just keep it to myself, at least for the time being.
The reason I would consider otherwise is that "always strive for the truth, even when it seems like a bad idea at the time" might be a good pre-commitment to make. I definitely believe that it is on an individual level, but for society I'm not so sure. I definitely don't think it's a deontological moral rule to always seek truth; I just think it's a very good practical rule to have. This concept is laid out somewhere in the Sequences, I think.
If I could, I would work out a way to tell people that would cause a minimum amount of societal disruption, but I would definitely tell people. Since I do not and cannot know what the future holds, I cannot risk future generations making bad decisions based on false information. Since I respect other people as adults with the right to make their own decisions, I do not have the moral authority to decide it would be in their "best interests" to be lied to (as some parents do with their children).
While I wouldn't call this article brilliant, it seems to be getting downvoted more heavily than it deserves. I'm not entirely sure why that is, although a bad choice of example probably helped push it along.
To answer the main question: I'd need more information. It depends on the degree to which the negative effects occur, and the degree to which this new belief seems likely to have major positive impacts on decision-making in various situations. Assuming I'm competent and motivated enough, I would create a secret society which generally kept the secret but spread it to all of the world's best and brightest, particularly in fields where knowing the secret would be vital to real success. I would also potentially offer a public face of the organization, where the secret is openly offered to anyone willing to take on the observed penalties in exchange for the observed gains. It could only be given out to those trusted not to tell, of course, but it should still be publicly offered; science needs to know, even if not every scientist needs to know.
There's nothing intrinsically bad about being wrong, but it's risky. At the very least, you should probably tell powerful people. The cost of making them less happy is the same as with anyone else, but the benefit, should accurate knowledge matter, is much greater.
Sometimes one has to fight not only hypotheticals but reality as well. Even if I take the hypothetical situation as given, the person in that situation should look for ways to overcome that existentialism and doubt. For a real-world example, Eliezer certainly isn't troubled by it, and has set out at length why one shouldn't be.
I'm a newbie.
I'm generally sympathetic to the view that favors pragmatic benefit over "scientism" (i.e., ignoring actual empirical effects based on what simply must or couldn't be the case). (In fact, I plan to post a little piece on that matter shortly.) I'm a fan of the so-called "Tetris effect" of learning how to be happy by doing/noticing.
I make a distinction in the case of theism/atheism for two reasons, though. First, not to put too fine a point on it, theism is too wacky/undefined. Religious people tend not to really have any idea what the hell they are asserting when they say they believe in God or providence or whatever. An "omnipresent force," "a benevolent watcher," karma, Jesus, Jesus's old man, Vishnu, the tao, a first cause--who the hell knows what they are talking about. Second, I think you can get the same psychological benefit (which I don't doubt) without any of the cloud cuckoo land.
What used to be called "positive psychology" got a bad name around the time of Freud because it became increasingly clear that it isn't as successful at helping people with serious mental problems ("hysteria") as drugs, the talking cure, shock therapy, etc. What it is good at is increasing (a bit) the quotient of general happiness in the practitioner. And the thing is, religious practice isn't so hot at fixing psychosis either. They're about on a par. But utilizing the "Tetris effect" doesn't require one to believe anything... well... stupid. It's just a practice, like meditation, that makes one feel better.
In sum, my feeling is that there's no harm in getting these benefits--even if they're something of a Pollyanna/placebo thing. But all else being equal, if you can get such benefits without the use of Bronze Age fairy tales, that's the route that makes more sense. If your hypothetical were adjusted slightly to insist that there is no other way to receive the longevity and happiness provided by "theism" (whatever that is, exactly), I might revise my answer. But I think I'd still want to know precisely what bilge I'd have to swallow to get this increase in utility.
Suppose it is absolutely true that atheism has a negative impact on your happiness and lifespan. Suppose furthermore that you are the first person in your society of relatively happy theists to happen upon the idea of atheism, that you found absolute proof of its correctness, that you quietly studied its effects on a small group of people kept isolated from the general population, and that you discovered it has negative effects on happiness and lifespan. Suppose that it -does- free people from a considerable amount of time wasted - from your perspective as a newfound atheist - in theistic theater.
Would you spread the idea?
This is, in our theoretical society, the emotional equivalent of a nuclear weapon; the group you tested it on is now comparatively crippled with existentialism and doubt, and many are beginning to doubt that the continued existence of human beings is even a good thing. This is, for all intents and purposes, a basilisk, the mere knowledge of which causes its knower severe harm. Is it, in fact, a good idea to go around talking about this revolutionary new idea, which makes everybody who learns it slightly less happy? Would it be a -better- idea to form a secret society that quietly approaches bright people likely to discover it themselves, in an effort to keep this new idea quiet?
(Please don't fight the hypothetical here. I know the evidence is nowhere near conclusive that atheism does in fact cause harm, as all the studies I've personally seen which suggest as much have some methodological flaws. This is merely a question of whether "That which can be destroyed by the truth should be" is, in fact, a useful position to take, in view of ideas which may actually be harmful.)