I have a terrifying confession to make: I believe in God.

This post has three prongs:

First: This is a tad meta for a full post, but do I have a place in this community? The abstract, non-religious aspect of this question can be phrased, "If someone holds a belief that is irrational, should they be fully ousted from the community?" I can see a handful of answers to this question and a few of them are discussed below.

Second: I have nothing to say about the rationality of religious beliefs. What I do want to say is that the rationality of particular irrationals is not something that is completely settled once their irrationality is established. They may be underneath the sanity waterline, but there are multiple levels of rationality hell. Some are deeper than others. This part discusses one way to view irrationals in a manner that encourages growth.

Third: Is it possible to make the irrational rational? Is it possible to take those close to the sanity waterline and raise them above? Or, more personally, is there hope for me? I assume there is. What is my responsibility as an aspiring rationalist? Specifically, when the community complains about a belief, how should I respond?


My Place in This Community

So, yeah. I believe in God. I figure my particular beliefs are a little irrelevant at this point. This isn't to say that my beliefs aren't open for discussion, but here and now I think there are better things to discuss.  Namely, whether talking to people like me is within the purpose of LessWrong. Relevant questions have to do with my status and position at LessWrong. The short list:

  1. Should I have kept this to myself? What does an irrational person gain by confessing their irrationality? (Is this even possible? Is this post an attempted ploy?) I somewhat expect this post and the ensuing discussion to completely wreck my credibility as a commenter and participant.
  2. Presumably, there is a level of entry to LessWrong that is enforced. Does this level include filtering out certain beliefs and belief systems? Or is the system merit-based via karma and community voting? My karma is well above the level needed to post and my comments generally receive more upvotes than downvotes. A merit-based system would prevent me from posting anything about religion or other irrational things, but is there a deeper problem? (More discussion below.) Should LessWrong /kick people who fail at rationality? Who makes the decision? Who draws the sanity waterline?
  3. Being religious, I assume I am far below the sanity waterline that the community desires. How did I manage to scrape up over 500 karma? What have I demonstrated that would be good for other people to demonstrate? Have I acted appropriately as a religious person curious about rationality? Is there a problem with the system that lets someone like me get so far?
  4. Where do I go from here? In the future, how should I act? Do I need to change my behavior as a result of this post? I am not calling out for any responses to my beliefs in particular, nor am I calling to other religious people at LessWrong to identify themselves. I am asking the community what they want me to do. Leave? Keep posting? Comment but don't post? Convert? Read everything posted and come back later?

 

The Wannabe Sanity Waterline

This post has little to do with actual beliefs. I get the feeling that most discussions about the beliefs themselves are not going to be terribly useful. I originally titled this post "The Religious Rational" but figured the opening line was inflammatory enough, and as I began editing I realized that the religious aspect is merely an example of a larger group of irrationals. I could have admitted to chasing UFOs or buying lottery tickets. What I wanted to talk about is the same.

That being said, I fully accept all criticisms offered about whatever you feel is appropriate. Even if the criticism is just ignoring me or an admin deleting the post and banning me. I am not trying to dodge the subject of my religious beliefs; I provided myself as an example for convenience and to make the conversation more interesting. I have something relevant and useful to discuss regarding rationalistic communities and the act of spawning rationalists from within fields other than rationalism. Whether it directly applies to LessWrong is for you to decide.

How do you approach someone below the sanity waterline? Do you ignore them and look for people above the line? Do you teach them until they drop their irrational deadweight? How do you know which ones are worth pursuing and which are a complete waste of time? Is there a better answer than generalizing at the waterline and turning away everyone who gets wet? The easiest response to these people is to put the burden of rationality on their shoulders. Let them teach themselves. I think there is a better way. I think some people are closer to the waterline than others, and grouping everyone below the line together makes the job of teaching rationalism harder.

I, for example, can look at my fellow theists and immediately draw up a shortlist of people I consider relatively rationalistic. Compared to the given sanity waterline, all of us are deep underwater due to certain beliefs. But compared to the people on the bottom of the ocean, we're doing great. This leads into the question: "Are there different levels of irrationality?" And also, "Do you approach people differently depending on how far below the waterline they are?"

More concretely, is it useful to make a distinction between two types of theists? Is it possible to create a sanity waterline for the religious? They may be way off on a particular subject, but otherwise their basic worldview is consistent and intact. Is there a religious sanity waterline? Are there rational religious people? Is a Wannabe Rational a good place to start?

The reason I ask these questions is not to excuse any particular belief while feeling good about everything else in my belief system. If there is a theist struggling to verify all beliefs but those that involve God, then they are no true rationalist. But if said theist really, really wanted to become a rationalist, it makes sense for them to drop the sacred, most treasured beliefs last. Can rationalism work on a smaller scale?

Quoting from Outside the Laboratory (emphasis not mine):

Now what are we to think of a scientist who seems competent inside the laboratory, but who, outside the laboratory, believes in a spirit world? We ask why, and the scientist says something along the lines of: "Well, no one really knows, and I admit that I don't have any evidence - it's a religious belief, it can't be disproven one way or another by observation." I cannot but conclude that this person literally doesn't know why you have to look at things.

A certain difference between myself and this spirit-believing scientist is that my beliefs are from a younger time, and I have things I would rather do than gallop through that area of the territory checking my accuracy. Namely, I am still trying to discover what the correct map-making tools are.

Also, admittedly, I am unjustifiably attached to that area of my map. It's going to take a while to figure out why I am so attached and what I can do about it. I am not fully convinced that rationalism is the silver bullet that will solve Life, the Universe, and Everything. I am not letting this new thing near something I hold precious. This is a selfish act and will get in the way of my learning, but that sacrifice is something I am willing to make. Hence I am below the LessWrong waterline. Hence I am a Wannabe Rational.

Instead, what I have done is take my basic worldview and chased down the dogma. Given the set of beliefs I would rather not think about right now, where do they lead? While this is pure anathema to the true rationalist, I am not a true rationalist. I have little idea about what I am doing. I am young in your ways and have much to learn and unlearn. I am not starting at the top of my system; I am starting at the bottom. I consider myself a quasi-rational theist not because I am rational compared to the community of LessWrong. I am a quasi-rational theist because I am rational compared to other theists.

To return to the underlying question: Is this distinction valid? If it is valid, is it useful or self-defeating? As a community, does a distinction between levels of irrationality help or hinder? I think it helps. Obviously, I would like to consider myself more rational than not. I would also like to think that I can slowly adapt and change into something even more rational. Asking you, the community, is a good way to find out if I am merely deluding myself.

There may be a wall that I hit and cannot cross. There may be an upper-bound on my rationalism. Right now, there is a cap due to my theism. Unless that cap is removed, there will likely be a limit to how well I integrate with LessWrong. Until then, rationalism has open season on other areas of my map. It has produced excellent results and, as it gains my trust, its tools gain more and more access to my map. As such, I consider myself below the LessWrong sanity waterline and above the religious sanity waterline. I am a Wannabe Rational.


Why This Helps

The advantage of a distinction between different sanity waterlines is that it allows you to compare individuals within groups of people when scanning for potential rationalists. A particular group may all drop below the waterline but, given their particular irrational map, some of them may be remarkably accurate for being irrational. After accounting for dumb luck, does anyone show a talent for reading territory outside of their too-obviously-irrational-for-excuses belief?

Note that this is completely different from questioning where the waterline is actually drawn. This is talking about people clearly below the line. But an irrational map can have rational areas. The more rational areas in the map, the more evidence there is that some of the mapmaker's tools and tactics are working well. Therefore, this mapmaker is above the sanity waterline for that particular group of irrational mapmakers. In other words, this mapmaker is worth conversing with as long as the conversation doesn't drift into the irrational areas of the map.

This allows you to give people below the waterline an attractive target to hit. Walking up to a theist and telling them they are below the waterline is depressing. They do need to hear it; the waterline exists precisely because their level of sanity is too low for a particular status. But after the chastising you can tell them that other areas of their map are accurate enough that they can become more rational there. They don't need to throw everything away to become a Wannabe Rational. They will still be considered irrational, but at least their map is more accurate than it was. It is at this point that someone begins their journey to rationalism.

If we have any good reason to help others become more rational, it seems as though this would count toward that goal.


Conversion

This last bit is short. Taking an example of myself, what should I be doing to make my map more accurate? My process right now is something like this:

  1. Look at the map. What are my beliefs? What areas are marked in the ink of science, evidence, rationalism, and logic? What areas aren't and what ink is being used there?
  2. Look at the territory. Beliefs are great, but which ones are working? I quickly notice that certain inks work better. Why am I not using those inks elsewhere? Some inks work better for certain areas, obviously, but some don't seem to be useful at all.
  3. Find the right ink. Contrasting and comparing the new mapmaking methods with the old ones should produce a clear winner. Keep adding stuff to the toolbox once you find a use for it. Take stuff out of the toolbox when it is replaced by a better, more accurate tool. Inks such as "My elders said so" and "Well, it sounds right" are significantly less useful. Sometimes we have the right ink but use it incorrectly. Sometimes we find a new way to use an old ink.
  4. Revisit old territory. When you throw out an old ink, examine the areas of the map where that ink was used. Revisit the territory with your new tools handy. Some territory is too hard to access now (beliefs about your childhood) and some areas on your map don't have corresponding territories (beliefs about the gender of God).

These things, in my opinion, are learning the ways of rationality. I have a few areas of my map marked, "Do this part later." I have a few inks labeled, "Favorite colors." These are what keep me below the sanity waterline. As time moves forward I pick up new favorite colors and eventually I will come to the areas saved for later. Maybe then I will rise above the waterline. Maybe then I will be a true rationalist.

Comments
[-]Gavin1100

MrHen leaned back in his chair.

It had taken hours to write, but it was flawless. Everything was there: complete deference to the community’s beliefs, politely asking permission to join, admission of guilt. With one post, the tenor of LessWrong had been changed. Religion would join politics and picking up women as forbidden topics.

It would only be later that they would realize what had happened. When rationality became restricted by politeness, that would be when he would begin offering arguments that weakened atheist resolve. And he would have defenders, primed by this pitch-perfect post. Once he was made an honorary member of the “in” group, there would be much greater leeway. They had already mentally committed to defend him here; the later details would be immaterial.

After the first online conversion, there would be anger. But at least some would defend him, harkening back to this one post. “It’s okay to be irrational,” they would say, “we’re all irrational about some things.” Oh, the luminaries would never fall. Eliezer, Robin, Yvain, Gavin—they were far too strong. But there were those who longed to go back to the warm embrace of belief. Those just emerging from their shells, into ... (read more)

[-]MrHen200

That was awesome.

6orthonormal
Oh man. I had already clicked downvote for excessive paranoia before I read the penultimate sentence. Needless to say, I reversed my judgment immediately.
-1Blueberry
I don't understand. This post doesn't suggest that we forbid talking about religion.
[-]Kevin120

It's a half-joke, or a half-sarcastic joke -- I hold such humor in the highest regard and describe it as hyper-cynicism, from my own garbling of this: http://www.snpp.com/other/special/philosophy.html Basically, the poster is mostly joking, as given away by the last two sentences, but he wouldn't have made the post if he didn't think there were elements of truth behind it.

0Blueberry
I understand that, but the "joke" was basically that the original post was a conspiracy to make this site more accepting of religion and ban discussion of it, when the post doesn't suggest anything like that; quite the opposite, in fact.
5Kevin
It works at many levels.

(Brief foreword: You really should read much more of the sequences. In particular How to Actually Change Your Mind, but there are also blog posts on Religion. I hope that one thing that comes out of this discussion is a rapid growth of those links on your wiki info page...)

What are the requirements to be a member of the LessWrong community? If we upvote your comments, then we value them and on average we hope you stay. If we downvote them, we don't value them and we hope either that they improve or you leave. Your karma is pretty positive, so stay.

You seem to be expecting a different shape of answer, about certain criteria you have to meet, about being an aspiring rationalist, or being above the sanity waterline, or some such. Those things will likely correlate with how your comments are received, but you need not reach for such proxies when asking whether you should stay when you have more direct data. From the other side, we need not feel bound by some sort of transparent criteria we propose to set out in order to be seen to be fair in the decisions we make about this; we all make our own judgement calls on what comments we value with the vote buttons.

I think you're led to ... (read more)

9MrHen
I like this. This clarifies a lot for me.
8woozle
This seems along similar lines to my initial reaction: "belief in God" is an undefined statement, since "God" is undefined (or, alternatively, has so many possible definitions that one is left with more noise than signal), and therefore such a statement does not automatically have any particular implications for your level of rationality. Given without any further context (real-world implications, specific definition of "God", etc.) it is more a social tag (statement of identification with the set of people who say they "believe in God") than anything else. Are there any implications of this belief which affect how you treat other people? Do any of those implications put you at odds with beliefs which are also reasonable if one does not believe in the existence of [your definition of] God?

Saying you believe in an undefined and undefinable fuzzword doesn't reflect well on your high-level mastery of rationality either.

OTOH, saying you "believe" in some mostly vacuous statement that you were raised to believe, while not really believing anymore in most of the more obviously false beliefs in the same package, doesn't reflect very poorly on your rationality. (I'm not sure to what extent this applies to MrHen.)

ETA: I view belief in god in a growing rationalist as sort of a vestigial thing. It'll eventually just wither and fall off.

It reflects less poorly than seriously believing in astrology, perhaps. But it's still Not Good, the more so if you've been warned. "Just give up already and admit you were completely wrong from the beginning" is not a trivial or dispensable skill.

[-]woozle110

It's "not good" on the large scale, but it seems to me that on an individual level MrHen has done a very positive thing -- perhaps two: (1) admitted openly, in front of a crowd known for its non-theism, that he is a theist and holds a belief for which he fully expected some censure; (2) did not cling defensively to that belief.

On #2: His focus on a possible change in his "rationalist" group membership as a result of that belief could be seen as an attempt to divert scrutiny away from his actual belief so that he would not have to defend (and possibly question) it -- but it did not feel to me like that sort of move; it felt more like he was expecting this group to behave much the same way that a religious group would behave if he had openly admitted disbelieving some item of their doctrine: a mis-application of previously experienced behavior, not a diversionary tactic.

I fully agree that these are impressive subskills that have been displayed, but let us not also forget that it is better to be unimpressively right than impressively wrong. (E.g. Chalmers.)

0Paul Crowley
With all this assessment of how positive MrHen's actions are, what is the query you're trying to hug?
3woozle
Is "query-hugging" a term which has been used elsewhere (e.g. some LW post I should have read)? If I'm interpreting the question correctly, I'm hoping MrHen will now fearlessly examine his "belief in God" and figure out what that means in non-metaphorical real-world terms. For example, does his God merely provide an uplifting example of goodness for us all to follow, through a series of stories which are not literally true and which one is free to interpret as one wishes? Or (to take a moderate non-liberal theist stance) does his God have a firm belief that gay people, while entitled to the doctrine of "live and let live", are not properly fulfilling some Plan and therefore are not entitled to the same protections as others? Does his God plan to return only after some terrible cataclysm has befallen mankind (and which, therefore, perhaps we should not work so hard to prevent)? Does his God have opinions about the "right to life" of fetal tissue, working on the Sabbath (and which day exactly is the Sabbath... and what constitutes "work"), the value of evidence and reason over faith and doctrine?
4orthonormal
Hug the Query
0Paul Crowley
So the question is, what difference in expectations are you hoping to discriminate between?
8pdf23ds
I'm not sure why my comment is at -1. People often start out at a disadvantage, no matter how rational their character, and no matter what their potential*. You can't expect immediate maturity. I was raised as a fundamentalist Christian. I took it pretty seriously around the age of 14 or so, so much so that I started looking into apologetics (the rational defense of the faith). After critically evaluating all the arguments for and against, I ended up abandoning the faith within a couple of years. If my parents hadn't gone down the path of fundamentalism (which only started when I was around 8 anyway--before that they were much more average Christians) then I probably wouldn't have become an atheist nearly as soon. I find it unlikely that I wouldn't have ended up as a rationalist, though. * Of course, people who are raised as rationalists have more potential, but potential has more to do with intelligence and disposition than upbringing.
2woozle
The fuzzword has no universally accepted definition, but the speaker may have a specific definition for it -- and that definition could, at least theoretically, be one in which it is entirely rational to believe. Rather than presuming irrationality just because this is not the case 99.9...% of the time, I wanted to make the point that it really does depend on what that definition is... (a specific case of the more general point that rationalism isn't about group membership or [not] saying certain things, but about how you make decisions) and that this is why a rationalist wouldn't automatically characterize "belief in God" as irrational.
1Alex Flint
If it is not the case 99.9...% of the time then a rationalist certainly would characterize "belief in God" as irrational, with probability 99.9...%.
8woozle
Hmm, well, I have to admit that I did that, internally... and now I'm trying to figure out why I didn't want to do it outwardly. (several wodges of deleted text later...) Labeling a belief as "irrational" without giving a reason is (a) likely to elicit an emotional defensive response which will inhibit self-critical thinking, and (b) doesn't address the issue of why we believe said belief to be irrational, so is just a kind of argument-from-authority (we are rationalists, which means we are rational, and we say your belief is irrational; therefore it is) which is not a good process to follow if we want to be less wrong. ...so handling it that way, if our goal is to maximize rationality in others, would be irrational. And we can't address the issue of why it is irrational until we know what it actually means; saying "I believe in God" isn't really a whole lot more meaningful than "The Gostak distims the Doshes". Heck, I can truthfully say it myself: I believe in God -- as a fictional character from an ancient mythology which somehow manages to dominate political discourse in this country. Certainly that God (a character in people's minds) exists, and is very powerful, and it would be irrational of me to deny this. Yes, "everyone knows" that if you say "I believe in God" it means you believe in a sentient universe-creator (who probably possesses a number of additional characteristics which compel you to behave in certain ways) -- but the actual words don't inherently mean that, which is why I say that it is more of a "social label" than a statement regarding factual matters.
2Paul Crowley
What assertions does this reasoning not apply to?
2woozle
I should think it would not apply to any well-defined assertion of fact -- e.g "The universe was created by a conscious entity" is a statement we could discuss further -- though "There is evidence that the universe was created by a conscious entity" would be better, because then we are one step further into the dialogue. Better still would be to add "...and that evidence is [insert argument here]", because then we have a specific line of reasoning to look at and say "No, this doesn't make sense because [fill in counter-argument]", allowing the other person to then explain why our counter is wrong... and so on. As it is... can we even have a discussion about whether the Gostak distims the Doshes, or whether it is rational to believe that this is true? Not really, because the terms are undefined; we don't know what is being "believed". Same with "God", even though we know what might be (indeed, probably is) intended. Thinking about it, this phenomenon of having a few handy exceptions to a generally-reliable rule is something frequently exploited by theists (and faitheists). Theists freely use "God" as a club 99% of the time, to bash people into line and promote their meme, but then on those few occasions when they are backed into a corner by a skeptic they can always say "This? Oh no no, this isn't a club for bashing people, it's just a piece of found art I like to keep on my desk and through which I enjoy contemplating nature's beauty." So it's very important to identify what we're talking about. If MrHen claims his God is really just a piece of found art, then we have rational grounds for objection if we ever see him using it as a club. If he openly admits that it's a club, then we can object on rational grounds to the idea of bashing people.

While I agree with the overall point of this comment, the 99% statistic seems very wrong to me. I expect that for some individuals that's true, but across the whole population I'd be surprised if it's much higher than 40%. I'm basing my estimate on, among other things, having worked in a Roman Catholic nursing home (with actual nuns, though I didn't interact with them often) for four years and not making any particular effort to hide the fact that I'm an atheist from my co-workers. (The residents, I took on a case-by-case basis, as seemed appropriate given the situation.) I experienced exactly one instance in those four years of someone objecting more strongly than 'wait, what?' to my lack of faith: The leader of a new bible study group took offense when I didn't actively participate in their event (and got in my face about it in front of my residents, which you just *don't do* - I was much more upset about her upsetting them than anything else), and my supervisor's reaction to that was to apologize profusely to me (and not about the residents having been upset, either, heh) and forbid that group from coming back. The vast majority of instances where religion came up were either in social bonding contexts or as personal or interpersonal reassurances ('s/he's in heaven now') that were rarely to never directed at me by people who were aware of my atheism, and easily ignorable in any case.

Strawmen aren't good, ok?

0woozle
"99.999...%" was intended to refer to the portion of self-identified theists whose theistic beliefs would be demonstrably irrational if explored. It sounds like you're talking about the portion of self-identified theists who are offended by atheism -- a number which I would expect to be substantially lower.
4wedrifid
0woozle
I used that "99%" thing twice -- I apologize for getting muddled about which one you were referring to. Since we're talking about the "bashing" figure: I maintain that the overwhelming majority of the time when "God" is invoked in the political field, it is being used as a club to bash people into line and promote religious ideas.
5AdeleneDawner
I'd agree with that; I don't see much other reason to bring religion up, in that context. I expect that politics, some kinds of child-rearing, and provoked debates constitute the bulk of instances where religion is used as a club, and that those situations aren't the bulk of the instances where religion is used at all. (My estimate for how often religion is used as a club compared to other uses, outside those contexts, is considerably less than 10%. People live this stuff even when we're not around for them to fight with, after all.)
4woozle
Perhaps what we are working towards, then, is a recognition that an irrational belief which is Mostly Harmless in personal life can become a deadly threat when let loose in the wrong habitat (such as the political field) -- and that therefore people who wish to embrace this Mostly Harmless irrational belief are much like exotic pet owners in that they need to be aware that their cute furry wuggums can be a serious hazard if not properly contained and cared for. To bring this back to the original issue -- i.e. why it's necessary for MrHen to explain what his belief means before anyone can claim it is rational or otherwise -- and complete the metaphor: Believing in God is rather like owning a pet. It may or may not be a particularly rational thing to do (you have to spend a lot of time and money nurturing it, and the benefit you get in return is pretty much entirely psychological), but some pets are much more dangerous than others... and the degree of danger may not have any relationship to how cute and harmless they seem when you first adopt them.
0Blueberry
And once you start owning a cute little pet, it opens the door to owning larger and more dangerous pets.
0AdeleneDawner
That sounds about right.
0wedrifid
It is appropriate, then, that politics is referred to as the skillful use of blunt instruments. Your observations may be somewhat different from mine. I don't know where you reside but I know that in the US, for example, 'God' plays more part in politics than it does here in Australia.
0wedrifid
It seems you have suffered from the blunt end of a selection effect. Perhaps:
0woozle
It depends what context we're sampling from. I was thinking of discussion in the media, and/or politics in general, where religion's main contribution seems to be as I described it: demands that the speaker's particular beliefs be given precedence because they come "from God" -- a club for bashing people into line. Yes, the 99% figure was overprecise; I probably should have said "the overwhelming majority of the time". It would be an interesting study to actually count the number of "bashing people into line" usages versus all other political uses of religion; perhaps religion-based pleas for charity and mercy don't get counted because they seem sane -- something anyone reasonable would say -- so my unconscious reference-counter doesn't add them to religion's score. In any case, your definition-swapping with the word "club" completely misses my point. To whatever extent MrHen uses God as a club-for-joining (what I called a "social label"), I have no objection. It is the other sort of club I want MrHen either to specifically reject or defend: does he accept such usage of "belief in God" (if someone says God said it, it must be true), or do reason and critical thinking prevail if someone tries to persuade him that he must do X because of his belief?
2Paul Crowley
Thanks. You should definitely read No One Can Exempt You From Rationality's Laws, from which this idea is largely drawn.
5aausch
Yes, I agree with you here. It looks to me like one of the core values of the community revolves around first evaluating each individual belief for its rationality, as opposed to evaluating the individual. And this seems very sensible to me - given how compartmentalized brains can be, and how rationality in one individual can vary over time. Also, I am amused by the parallels between this core value and one of the core principles of computer security in the context of banking transactions. As Schneier describes it, evaluate the transaction, not the end user.
1Paul Crowley
first evaluating each individual belief for its rationality Again, no, I'm afraid you're still making the same mistake. When you talk about evaluating a belief for its rationality, it still sounds like the mindset where you're trying to work out if the necessary duty has been done to the rationality dance, so that a belief may be allowed in polite society. But our first concern should be: is this true? Does this map match the territory? And rationality is whatever systematically tends to improve the accuracy of your map. If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.
1aausch
Now I am really confused. How can a belief be rational, and not true?

"Rational" is a systematic process for arriving at true beliefs (or high-scoring probability distributions), so if you want true beliefs, you'll think in the ways you think are "rational". But even in the very best case, you'll hit on 10% probabilities one time out of ten.

I didn't see anything wrong with your original comment, though; it's possible that Ciphergoth is trying to correct a mistake that isn't there.
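A minimal sketch of that calibration point (assuming a perfectly calibrated reasoner and an arbitrary number of trials):

```python
import random

# A well-calibrated reasoner assigns 90% probability to a class of beliefs,
# and the world makes each such belief true with exactly that frequency.
random.seed(0)
trials = 100_000
false_beliefs = sum(1 for _ in range(trials) if random.random() > 0.9)

# Roughly 10% of these rationally held beliefs turn out false.
print(false_beliefs / trials)  # ~0.1
```

So "rational" describes the process; even a flawless process leaves a predictable fraction of false beliefs.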

0Hul-Gil
Well, if you got a very improbable result from a body of data; I could see this happening. For example, if most of a group given a medication improved significantly over the control group, but the sample size wasn't large enough and the improvement was actually coincidence, then it would be rational to believe that it's an effective medication... but it wouldn't be true. Then again, we should only have as much confidence in our proposition as there is evidence for it, so we'd include a whatever-percent possibility of coincidence. I didn't see anything wrong with your original comment, either.
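A rough sketch of how that could happen (the sample size and improvement rate below are made-up numbers, not taken from the comment):

```python
from math import comb

# Suppose each patient has a 50% chance of improving on their own, and the
# trial has only 10 patients in the treated group. Probability that 8 or
# more of them improve purely by coincidence:
n, base_rate = 10, 0.5
p_spurious = sum(comb(n, k) * base_rate**k * (1 - base_rate)**(n - k)
                 for k in range(8, n + 1))
print(p_spurious)  # ~0.055: a strong-looking result arising by luck alone
```

With samples that small, believing the medication works can be the rational response to the available evidence while still being false.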
0aausch
I've since learned that some people use the word "rationality" to mean "skills we use to win arguments and convince people to take our point of view to be true", as opposed to the definition which I've come to expect on this site (currently, on an overly poetic whim, I'd summarize it as "a meta-recursively applied, optimized, truth-finding and decision making process" - actual definition here).

It may be enough if we find common cause in wanting to be rational in some shared topic areas. As long as we can clearly demarcate off-limit topics, we might productively work on our rationality on other topics. We've heard that politics is the mind killer, and that we will do better working on rationality if we stay away from politics. You might argue similarly about religion. That all said, I can also see a need for a place for people to gather who want to be rational about all topics. So, the question for this community to decide is, what if any topics should be off-limits here?

Agreed.

One caveat: it's great to want to be rationalist about all things, but let him who is without sin cast the first stone. So much of the community's energies have gone into analyzing akrasia - understanding that behavior X is rational and proper yet not doing it - that it appears hypocritical and counter-productive to reject members because they haven't yet reached all the right conclusions. After all, MrHen did mark religion for later contemplation.

6UnholySmoke
* House M.D. Robin, I'm a little surprised to read you saying that topics on which it's difficult to stay on track should be skirted. As far as I'm concerned, 'What are your religious views?' is the first question on the Basic Rationality test. I know that encouraging compartmentalisation isn't your goal by any means, but it sounds to me as though it would be the primary effect. Now you're talking. No topics should be off-limits!
5arbimote
It would be great for this rationalist community to be able to discuss any topic, but in a way that insulates the main rationality discussions from off-topic discussions. Perhaps forum software separate from the main format of LessWrong? Are monthly open threads enough for off-topic discussions?
3groupuscule
A rationalist forum would be interesting not only for the discussions themselves, but also because it would materialize and test some of the more abstract stuff from this site. Reading the new year/decade predictions conversations, it struck me that effective treatment of outside content should be Less Wrong's crowning jewel--the real proof that rationality makes good ideas.
3Kevin
We should discuss non-meta topics on non-meta subreddits. Maybe if you asked Eliezer he would turn on sub-reddit creation or make at least one. I would like a non-meta group blog, a non-meta link-sharing subreddit, and an on-topic meta rationalist link-sharing subreddit. I think that the problems of scale and education these extra sites will create are not easy, but solving them as soon as possible is desirable. It's something we need to discuss more fully soon enough; I'll make a top-level post to discuss it eventually.
3MrHen
This is an excellent way of saying what I wanted to say and asking what I wanted to ask.
[-]Roko210

I think that, in practice, a few religious people on LW are harmless and will probably have a positive effect.

It seems politically correct to go softly-softly on the few theists here, but let's not forget that theism is known to systematically lead to false beliefs (above and beyond the [probabilistically] false belief that there is a God), such as theistic moral realism, denial of evolution and evolutionary psychology, and abandonment of the scientific method. In a community dedicated to creating an accurate map-territory correspondence by systematic weighing of evidence and by fostering a fundamental mistrust of the corrupted hardware that we run on, theism is not welcome en-masse.

1Furcas
Hear, hear. I would add that in order to be welcome en-masse, theism first has to be welcome in small quantities, which seems to be the case already, judging from the overall positive response to the original post. This makes me think that Less Wrong is on the path to failure as a rationalist community.
[-]Roko120

I think that a mass invasion of theists is unlikely for social reasons - they just won't bother to come; I don't lie awake at night frightened that when I next check LW the top article will be about how we can learn rationality lessons from JC.

8Furcas
Neither do I, but I wouldn't be surprised to see a post promoting religious accommodationism before 2010 is over.
1billswift
Neither would I. The last several months have been another of those depressing periods where I have gotten my nose rubbed in the essential truth of Lazarus Long's "Never underestimate the power of human stupidity."

There is nothing about being a rationalist that says that you can't believe in God. I think the key point of rationality is to believe in the world as it is rather than as you might imagine it to be, which is to say that you believe in the existence of things due to the weight of evidence.

Ask yourself: do you want to believe in things due to evidence?

If the answer is no, then you have no right calling yourself a "wannabe rationalist" because, quite simply, you don't want to hold rational beliefs.

If the answer is yes, then put this into practice. Is the moon smaller than the earth? Does Zeus exist? Does my toaster still work? In each case, what is the evidence?

If you find yourself believing something that you know most rationalists don't believe in, and you think you're basing your beliefs on solid evidence and logical reasoning, then by all means come and tell us about it! At that point we can get into the details of your evidence and the many more subtle points of rational reasoning in order to determine whether you really do have a good case. If you do, we will believe.

5gelisam
Uh-oh. I... I don't think I do want to believe in things due to evidence. Not deep down inside. When choosing my beliefs, I use a more important criterion than mere truth. I'd rather believe, quite simply, in whatever I need to believe in order to be happiest. I maximize utility, not truth. I am a huge fan of lesswrong, quoting it almost every day to increasingly annoyed friends and relatives, but I am not putting much of what I read there into practice, I must admit. I read it more for entertainment than enlightenment. And I take notes, for those rare cases in my life where truth actually is more important to my happiness than social conventions: when I encounter a real-world problem that I actually want to solve. This happens less often than you might think.

Here's another set of downvotes I don't get (ETA: parent was at -2 when I arrived). Gelisam is just stating their personal experience, not in order to claim we must all do likewise, but as their own reaction to the debate.

I think this community would be ill served by a norm that makes it a punishable offense to ever admit one doesn't strive for truth as much as one ought.

As far as replies go:

I'd rather believe, quite simply, in whatever I need to believe in order to be happiest.

It's not so simple. If you're self-deceiving, you might be quite wrong about whether your beliefs actually make you happier! There's a very relevant post on doublethink.

3AdeleneDawner
Agreed.
1gelisam
Ah, so that's why people downvoted my comment! Thanks for explaining. I thought it was only because I appeared to be confusing utilons with hedons. Regarding the doublethink post, I agree that I couldn't rationally assign myself false but beneficial beliefs, and I feel silly for writing that I could. On the other hand, sometimes I want to believe in false but beneficial beliefs, and that's why I can't pretend to be an aspiring rationalist.
4Nanani
"Maximizing truth" doesn't make any sense. You can't maximize truth. You can improve your knowlege of the truth, but the truth itself is independent of your brain state. In any case, when is untruth more instrumental to your utility function than truth? Having accurate beliefs is an incredibly useful thing. You may well find it serves your utility better.
8gelisam
I think it's fairly obvious that "maximizing truth" meant "maximizing the correlation between my beliefs and truth". Truth is overrated. My prior was heavily biased toward truth, but then a brief and unpleasant encounter with nihilism caused me to lower my estimate. And before you complain that this doesn't make any sense either, let me spell out that it is an estimate of the probability that the strategy "pursue truth first, happiness second" yields, on average, more hedons than "pursue happiness using the current set of beliefs".
2Wilka
Have you ever had the experience of learning something true that you would rather not have learned? The only type of examples I can think of here (off the top of my head) would be finding out you had an unfaithful lover, or that you were really adopted. But in both cases, it seems like the 'unhappiness' you get from learning it would pass and you'd be happy that you found out in the long run. I've heard people say similar things about losing the belief in God - because it could lead to losing (or at least drifting away from) people you hold close, if their belief in God had been an important thing in their relationship to you.
2faul_sname
Yes. Three times, in fact. Two of them are of roughly the same class as that one thing floating around, and the third is of a different class and far worse than the other two (involving life insurance and charity: you'll find it if you look).

I believe in God too, since I think it's more likely that there is a God than that there isn't. But by "God" I mean "experimenter", or "producer", or "player".

I should start an apocalyptic Dionysian religion around one commandment and threat: "Be entertaining, for sweeps week cometh." The main problem is that would make Hitler a saint.

6Jack
I recognize the simulation hypothesis as valid but what evidence have you that it is more likely the case than not? I might well join if you can convince me of the above.
8PhilGoetz
But it would be more entertaining if you became its nemesis and devoted yourself to its destruction. I begin to sense my new religion may have severe organizational problems.
1xamdam
He had me at
1Kevin
My intuitive justification: There are an infinite number of times I can be the Kevin simulated in 2010. I even think it very likely that Kevin_10000CE would want to run a consciousness through an ancestor simulator of the bottleneck period in human history to be able to assimilate the knowledge of that experience. So I could be in any one of an infinite possible number of simulations, or I could be living in the true 2010. The probability calculation becomes meaningless because of the infinities involved, but I don't see why my intuition is wrong.
3Jack
If your premise is true the probability you and I are in a simulation is 1 (though for obnoxious reasons, but I understand what you mean). But the premise seems wrong. There are a large number of plausible futures in which no world simulations are ever run.
1Kevin
Are most of those possible futures with no world simulations because of the destruction of human civilization, or because humans transcend and ancestor simulations are deemed to be something like unethical? If the first, I'm much more optimistic about us not killing ourselves than Eliezer.
2Eliezer Yudkowsky
I'd think UFAIs would be much more likely to run faithful generic-intelligent-species simulations than Friendly AIs would be likely to run faithful ancestor simulations.
3Kevin
So then the question becomes, if you're a transhuman living under a FAI and you want to play around in simulations of certain interesting times in human history, how realistic can the simulations be?
4Eliezer Yudkowsky
Not so realistic that you become a different person who never consented to being simulated, nor so realistic that "waking up" afterward equates to killing an innocent person and substituting the old you in their place.
4DanielVarga
In a universe where merging consciousnesses is just as routine as splitting them, the transhumans may have very different intuitions about what is ethical. For example, I can imagine that starting a brand new consciousness with the intention of gradually dissolving it in another one (a sort of safe landing of the simulated consciousness and its experiences) will be considered perfectly ethical and routine. Maybe it will even be just as routine as us humans reasoning about other humans. (Yes, I know that I don't create a new conscious being when I think about the intentions of another human.) What I just claimed is that in such a universe, very different ethical norms may emerge. A much stronger claim that I would not try to defend right now is that such a nonchalant and inhuman value system may simply be the logical consequence of our value system when consistently applied to such a weird universe.
0Kevin
I agree with you, but I think part of the problem is that we only get to define ethics once, unless we somehow program the FAI to take the changing volition of the transhuman race into account.
1DanielVarga
Do you agree with my first, ridiculously modest claim, or my second, quite speculative one? :)
0Kevin
I agreed specifically with the first modest claim and the general sentiment of the entire post.
2MichaelHoward
Even where the FAI was sure that different person would consent to being simulated if made aware of the situation and thinking clearly? It could throw in some pretty good incentives. I wonder if we should adjust our individual estimates of being in a Friendly-run sim (vs UF-sim or non-sim) based on whether we think we'd give consent. I also wonder if we should adjust whether we'd give consent based on how much we'd prefer to be in a Friendly-run sim, and how an FAI would handle that appropriately.
0Kevin
One reason to significantly adjust downward the probability of being in a Friendly-run sim is what I would call "The Haiti Problem"... I'm curious if anyone has solutions to that problem. Does granting eventual immortality (or the desired heaven!) to all simulated persons make up for a lifetime of suffering?
0orthonormal
Perhaps only a small number of persons need be simulated as fully conscious beings, and the rest are acted out well enough to fool us. Perceived suffering of others can add to the verisimilitude of the simulation. Of course, internalizing this perspective seems like moral poison, because I really do want the root-level version of me to act against suffering there where it definitely exists.
2Kevin
I'm not sure I believe your first clause -- the final chapter of The Metamorphosis of Prime Intellect tried to propose an almost Buddhist type resurrection as a solution to the problem of fun. If the universe starts feeling too much like a game to some transhumans, I think a desire to live again as a human for a single lifetime might be somewhat common. Does that desire override the suffering that will be created for the new human consciousness that will later be merged back into the immortal transhuman? Most current humans do seem to value suffering for some reason I don't understand yet... Since this is perilously close to an argument about CEV now, we can probably leave that as a rhetorical question. For what it's worth, I updated my intuitive qualitative probability of living in a simulation somewhat downward because of your statement that as you conceive of your friendly AI right now, it wouldn't have let me reincarnate myself into my current life.
2Strange7
The masochists that I know seem to value suffering either for interpersonal reasons (as a demonstration of control - beyond that I'm insufficiently informed to speculate), or to establish a baseline against which pleasurable experiences seem more meaningful.
2Jack
Yes. Also, we might just be too poor in the future (either too poor to run any or too poor to run many). And if it wasn't included in "humans transcend", a Singleton could prohibit them.
0Kevin
Those are certainly possibilities, but we are comparing infinite sets here. Or comparing uncountable futures. I recognize that my premise may "seem" wrong, but I don't think we can convince each other until we can take this out of the realm of comparing infinities.
0Jack
I don't think they're uncountable. It's just a continuous probability distribution.

I am asking the community what they want me to do. Leave? Keep posting? Comment but don't post? Convert? Read everything posted and come back later?

I want you to keep doing what you have been doing. I find it distressing that you seem to think it'd be a reasonable, or even realistic, response for us to chase you out with torches and pitchforks. I am sorry to hear that we have created an environment that has led you to conceal this fact about yourself for such an extended time. I am pleased to note that you seem to find us worth hanging out with and seeking advice and help from in spite of us apparently having created this unwelcoming atmosphere.

I'm also personally curious about your exact flavor of theism, but that may, as you indicate, be neither here nor there.

If you haven't already, you might want to read Theism, Wednesday, and Not Being Adopted. I don't know if the case I describe is similar to yours or not, though.

3MrHen
I really don't know how the community is going to respond. The last time I talked like this I made a comment that ended up receiving the most upvotes of anything I have done. I don't expect torches and pitchforks, but I do expect some form of ultimatum. I also expect an intangible response that will affect my future comments/posts. But your comment is certainly encouraging. I wasn't so much "hiding". I just didn't have a good reason to come forward. Why would I? Why would any theist in this community? The reason I did is because I am weighing whether I want to actively devote time to continuing this path. If I commit to this path it is (a) better to say this now than later and (b) a good way to ping for impassable objects with regard to using LessWrong to continue my journey. EDIT: Oh, and a response to Wednesday is forthcoming but will take more thought. :) UPDATE: The response is up.
8JamesAndrix
I suggest changing your expectations. I identified myself as a theist here long ago, and haven't noticed any negative response. At least one other person did at the same time, I think we got upvotes and someone commented on it being interesting, but that was that. Can't find the link. I now self-identify as an atheist, so stick around, the magic works. :-)
1MrHen
This is the response to the Wednesday post. (Which, by the way, I read way back when it was written. You can find a few of my comments down in the threads. :) ) Wednesday's case is certainly interesting. My younger self used similar logic during his big crisis of faith and I don't consider it to be a poor choice of action. I think my current situation is very apt for a future Wednesday that begins to wonder about some of the things she has seen. Future-Wednesday and Present-MrHen would probably have some excellent discussions. The big question that is relevant for Wednesday is whether you can successfully compartmentalize areas of your map. You say, "I reject out of hand the idea that she should deconvert in the closet and systematically lie to everyone she knows." I would respond by asking the same questions I asked in my post. Is it helpful to pursue "rational" theism? It isn't true rationalism by any means. But is it better than the alternative?

How much of the Sequences have you read?

7MrHen
I keep track at my wiki user page.

That's basically nothing. Okay, not much point in my wondering "What could I have missed?" then.

Your intentions seem good, and if you read through the Sequences (or even just Map and Territory, Mysterious Answers and How To Actually Change Your Mind) then I expect you'll have a very different perspective at the end of it.

5byrnema
I think you did miss something. You write that everything adds back up to normalcy, but I observe that physical materialism feels bereft of meaning compared to the theistic worldview. (I can give more details regarding this, and concede in advance it is not a universal experience.) If I can construct a free-floating belief system that makes "values" coherent for this bereft person, on what basis should they not prefer the free-floating belief system? The running argument seems to be that they should value 'truth'. However the obvious catch is that the person only places a terminal value for truth from within the free floating belief system.

Byrnema, if you took someone who'd just never heard of God to begin with, never heard of any superstitions, just grew up in a nice materialistic civilization that expected to take over the galaxies someday, and you asked them "What's left, when God's gone?" they'd look up at the stars, look back at you, and say, "I don't understand what you think is missing - it looks to me like everything is there."

I'm sorry that I failed to convey this, and I do worry that the metaethics sequence failed and will need to be done over. But you can't say I didn't try.

2byrnema
You did. I just think it's crazy to think that no one will ever ask, "what's the purpose of taking over all these galaxies?". I'm also not sure why you mention God specifically. I'm not sure how the existence of a supreme super-power assigning purpose would be any more meaningful -- or, really, any different -- than the physical laws of the universe assigning purpose.
8orthonormal
If asked, they might answer along the lines of "so that more people can exist and be happy"; "so that ever more interesting and fun and beautiful patterns can come into being"; "so that we can continue to learn and understand more and more of the strange and wonderful patterns of reality", etc. None of these are magical answers; they can all be discussed in terms of a (more sophisticated than current) analysis of what these future beings want and like, what their ethics and aesthetics consist of (and yes, these are complicated patterns to be found within their minds, not within some FOV), etc. What I think is crazy is to reject all those answers and say you can't in principle be satisfied with any answer that could be different for a different civilization. I think that such dismissals are a mistake along the lines of asking for the final cause or "purpose" of the fact that rocks fall, and rejecting gravity as an insufficient answer because it's only an efficient cause.
4Vive-ut-Vivas
The question itself ("what's the purpose?") presupposes the answer. If you've never heard of God or superstition, why would you assume that there was any purpose other than just to take over all these galaxies?
3byrnema
Whenever you do anything, isn't it natural to question what you're doing it for?
5Vive-ut-Vivas
That's not the question you're asking. There's no God-shaped hole in answering "because we feel like taking over galaxies" until you put it there.
1byrnema
I didn't say anything about a God-shaped hole. You're reading something different into my question, or maybe trying to pigeonhole my question into a stereotype that doesn't quite fit. Whenever I do anything, I have an idea of how that fits into a larger objective. One exception might be activities that I do out of simple hedonism, but those don't provide the full range of satisfaction and joy that I feel when I'm making progress in something. The pleasure in the idea of taking over galaxies is very much progress-based, and so it would be natural to ask why this would actually be progress.
0Vive-ut-Vivas
Substitute "meaning" for "God", then. The problem is trying to fit everything into a "larger objective": whose objective? That's what I mean when I say you're presupposing the answer. Also, "why would taking over galaxies be progress?" can be answered pretty simply once you explain what you mean by "progress". Technological advancement? Increased wealth? Curiosity?
1byrnema
Good. Your comments above now make good sense to me. That's my problem. Maybe it's a problem common to many theists too. Any cures? And is this problem a hardware problem or a logic problem?
3Vive-ut-Vivas
If I had the catch-all cure to existential angst, I wouldn't be parroting it on here; I'd be trying to sell it for millions! Maybe you could call it a hardware problem, since I'd liken it to a virus. You've been corrupted to look for a problem when there isn't one, and you know there isn't one, but you just don't feel emotionally satisfied (correct me if I'm wrong here). I don't have an answer for that. I would suspect that the more you distance yourself from these kinds of views (that the universe must have "meaning" and all that), the less relevant the question becomes. I think the problem just involves breaking a habit.
7wedrifid
Then Robin would have a field day explaining why people did not actually buy it, despite the wringing of hands and gnashing of teeth.
0Mitchell_Porter
Before your journey into nihilism, why did you do things? ETA: Though this discussion focuses on purposes and actions, I wonder if the problem might be that something about life which was always present for you and providing meaning, no matter what you did, now appears to be absent under all conceivable circumstances.
0Kevin
I couldn't make it through the metaethics sequence but I really liked Three Worlds Collide.
3Furcas
Eliezer didn't really miss anything. What you're asking boils down to, "If I value happiness more than truth, should I delude myself into holding a false belief that has no significant consequence except making me happy?" The obvious answer to that question is, "Yes, unless you can change yourself so that your happiness does not require belief in something that doesn't exist". The second option is something that Eliezer addressed in Joy in the Merely Real. He didn't address the first option, self-deception, because this website is about truth-seeking, and anyway, most people who want to deceive themselves don't need help to do it.
6byrnema
I was embarrassed for a while (about 25 minutes) after reading your comment and Ciphergoth's that my ideas would be reduced to the clichés you are apparently responding to. But then I realized I don't need to take it personally; I just need to qualify what I mean. First, there's nothing in my question to Eliezer to indicate that I value happiness more than truth, or that I value happiness at all. There are things I value more than truth; or rather, I only find it possible to value truth above all else within a system that is coherent and consistent and thus allows a meaningful concept of truth.
2Furcas
If "feels bereft of meaning" doesn't mean that it makes you unhappy, the only other interpretation that even begins to make sense to me, is that an important part of your terminal values is entirely dependent on the truth of theism. To experience what that must feel like, I try to imagine how I would feel if I discovered that solipsism is true and that I have no way of ever really affecting anything that happens to me. It would make me unhappy, sure, but more significantly it would also make my existence meaningless in the very real sense that while the desires that are encoded in my brain (or whatever it is that produces my mind) would not magically cease to exist, I would have to acknowledge that there is no possible way for my desires to ever become reality. Is this closer to what you're talking about? If it isn't, I'm going to have to conclude that either I'm a lot stupider than I thought, or you're talking about a square circle, something impossible.
5byrnema
It is much closer to what I'm talking about. Orthonormal writes that in the absence of a Framework of Objective Value, he found he still cared about things (the welfare of friends and family, the fate of the world, the truth of his own beliefs, etc.). In contrast, I find my caring begins fading away. Some values go quickly and go first -- the fate of the world, the truth of my own beliefs -- but other values linger, long enough for me to question the validity of a worldview that would leave me indifferent to my family. Orthonormal also writes, in response to my hypothetical question about purpose, that people might answer "so that more people can exist and be happy," and so on. And none of these are terminal values for me. Existence, happiness, fun and beauty are pretty much completely meaningless to me in and of themselves. In fact, the thing which causes me to hesitate when I might feel indifference to my family is a feeling of responsibility. It occurs to me that satisfying my moral responsibility might be a terminal value for me. If I have none -- if it really is the case that I have no moral responsibility to exist and love -- I'd happily not exist and not love. Orthonormal, yourself, and Eliezer all seem to argue that value nihilism just doesn't happen. Others concede that nihilism does happen, but that this doesn't bother them, or that they'd rather sit with an uncomfortable truth than be deluded. So perhaps it's the case that people are intrinsically motivated in different ways, or that people have different thresholds for how much lack of meaning they can tolerate. Or other 'solutions' come to mind.

It seems to me that you conflate the lack of an outside moral authority with a lack of meaning to morality. Consider "fairness". Suppose 3 people with equal intrinsic needs (e.g. equal caloric reserves and need for food) put in an equal amount of work on trapping a deer, with no history of past interaction between any of them. Fairness would call for each of them to receive an equal share of the deer. A 90/9/1 split is unfair. It is unfair even if none of them realize it is unfair; if you had a whole society where women got 10% of the wages of men, it wouldn't suddenly become massively unfair at the first instant someone pointed it out. It is just that an equal split is the state of affairs we describe by the word "fair", and to describe 90/9/1 you'd need some other word, like "foograh".

In the same sense, something can be just as fair, or unfair, without there being any God, nor yet somehow "the laws of physics", to state with controlling and final authority that it is fair.

Actually, even God's authority can't make a 90/9/1 split "fair". A God could enforce the split, but not make it fair.

So who needs an authority to tell us what we should do, either? God couldn't make murder right - so who needs God to make it wrong?
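
(To make the structure of the argument concrete: if "fair" names a fixed criterion over splits, then whether a split meets it is a matter of fact once the criterion is fixed, whether or not anyone checks it, and whatever any authority enforces. A throwaway sketch in Python; the function name, tolerance, and numbers are invented for illustration, not anything from the thread.)

```python
# "Fair" as a fixed criterion over splits: equal contributions and equal
# needs, so fair = equal shares. This is true or false of a split
# regardless of whether anyone notices, and regardless of enforcement.

def is_fair(split, tolerance=1e-9):
    expected = sum(split) / len(split)  # equal share for each contributor
    return all(abs(share - expected) <= tolerance for share in split)

assert is_fair([1/3, 1/3, 1/3])          # the equal split of the deer
assert not is_fair([0.90, 0.09, 0.01])   # unfair even if no one points it out
```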

3byrnema
Thank you for your effort to understand. However, I don't believe this is in the right direction. I'm afraid I misunderstood or misrepresented my feelings about moral responsibility. For thoroughness, I'll try to explain it better here, but I don't think it's such a useful clue after all. I hear physical materialists explaining that they still feel value outside an objective value framework naturally/spontaneously. I was reporting that I didn't -- for some set of values, the values just seemed to fade away in the absence of an objective value framework. However, I admit that some values remained. The first value to obviously remain was a sense of moral responsibility, and it was that value that kept me faithful to the others. So perhaps it is a so-called 'terminal value'; in any case, it was the limit where some part of myself said, "if this is Truth, then I don't value Truth".
6CassandraR
The reason I feel value outside of an objective value framework is that I taught myself over weeks and months to do so. If a theist had the rug pulled out from under them, morally speaking, then they might well be completely bewildered by how to act and how to think. I am sure this would cause great confusion and pain. The process of moving from a theist worldview to a materialistic worldview is not some flipped switch; a person has to teach themselves new emotional and procedural reactions to common everyday problems. The manner in which to do this is to start from the truth as best you can approximate it and train yourself to have emotional reactions that are in accordance with the truth. There is no easy way to do this, but I personally found it much easier to have a happy life once I had trained myself to feel emotions in relation to facts rather than fictions.
4orthonormal
Upvoted for honesty and clarity. I'm not sure there's much more to discuss with you on the topic of theism, then; the object-level arguments are irrelevant to whether you believe. (There are plenty of other exciting topics around here, of course.) All I can do is attempt to convince you that atheism really isn't what it feels like from your perspective. EDIT: There was another paragraph here before I thought better of it.
5wedrifid
Perhaps we could say "needn't be what it feels like from your perspective". It clearly is that feeling for some. I wonder to what extent their difficulty is, in fact, an external-tribal-belief-shaped hole in their neurological makeup.
0orthonormal
Agreed. I should remember I'm not neurotypical, in several ways.
4randallsquared
I'm not sure that's possible. As someone who's been an atheist for at least 30 years, I'd say atheism does feel like that, unless there's some other external source of morality to lean on. From the back and forth on this thread, I'm now wondering if there's a major divide between those who mostly care deeply without needing a reason to care, and those who mostly don't.
6AdeleneDawner
I'd thought of that myself a few days ago. It seems like something that we'd experience selection bias against encountering here.
0RobinZ
I would expect to see nihilist atheists overrepresented here - one of the principles of rationality is believing even when your emotions oppose it.
2AdeleneDawner
I'm not surprised to encounter people here who find nihilism comfortable, or at least tolerable, for that reason. People who find it disabling - who can't care without believing that there's an external reason to care - not so much.
3Paul Crowley
I don't feel that way at all, personally - I'm very happy to value what I value without any kind of cosmic backing.

Orthonormal, yourself, Eliezer, all seem to argue that value nihilism just doesn't happen.

That's a rather poor interpretation. I pointed out from my own experience that nihilism is not a necessary consequence of leaving religion. I swear to you that when I was religious I agonized over my fear of nihilism, that I loved Dostoyevsky and dreaded Nietzsche, that I poured out my soul in chapels and confessionals time and time again. I had a fierce conscience then, and I still have one now. I feel the same emotional and moral passions as before; I just recognize them as a part of me rather than a message from the heart of the cosmos - I don't need permission from the universe to care about others!

I don't deny that others have adopted positions of moral nihilism when leaving a faith; I know several of them from my philosophy classes. But this is not necessary, and not rational; therefore it is not a good instrumental excuse to maintain theism.

Now, I cannot tell you what you actually feel; but consider two possibilities in addition to your own:

  • What you experience may be an expectation of your values vanishing rather than an actual attenuation of them. This expectation can be mistaken

... (read more)

This might turn out to be vacuous, but it seems useful to me. Here goes nothing:

Do you have a favorite color? Or a favorite number, or word, or shirt, or other arbitrary thing? (Not something that's a favorite because it reminds you of something else, or something that you like because it's useful; something that you like just because you like it.)

Assuming you do, what objective value does it have over other similar things? None, right? Saying that purple is a better color than orange, or three is a better number than five (to use my own favorites) simply doesn't make sense.

But, assuming you answered 'yes' to the first question, you still like the thing, otherwise it wouldn't be a favorite. It makes sense to describe such things as fun or beautiful, and to use the word 'happiness' to describe the emotion they evoke. And you can have favorites among any type of things, including moral systems. Rationality doesn't mean giving those up - they're not irrational, they're arational. (It does mean being careful to make sure they don't conflict with each other or with reality, though - thinking that purple is somehow 'really' better than orange would be irrational.)

1Paul Crowley
Reminds me of Wittgenstein's "Ethics and aesthetics are one and the same". Not literally true, I don't think, but I found it enlightening all the same.
7Vladimir_Nesov
You are not really entitled to your own stated values. You can't just assert that beauty is meaningless to you and through this act make it so. If beauty is important to you, being absolutely convinced that it's not won't make it unimportant. You are simply wrong and confused about your values, at which point getting a better conscious understanding of what "morality" is becomes even more important than if you were naive and relied on natural intuition alone.
0byrnema
I'm not sure to what extent terminal values can be chosen or not, but it seems to me (and the following is slightly different from what you were describing) that if you become absolutely convinced that your values aren't important, then it would be difficult to continue thinking your values are important. Maybe the fact that I can't be convinced of the unimportance of my values explains why I can't really be convinced there's no Framework of Objective Value, since my brain keeps outputting that this would make my values unimportant. But maybe, by the end of this thread, my brain will stop outputting that. I'm willing to do the necessary mental work. By the way, Furcas seemed to understand the negation of value I'm experiencing via an analogy of solipsism.
9orthonormal
One last time, importance ≠ universality. If we had been Babyeaters, we would think that eating babies is the right-B thing to do. This doesn't in any way imply we should be enthusiastic or even blasé about baby-eating, because we value the right thing, not the right-B thing that expresses the Babyeaters' morality! I understand that you can't imagine a value being important without it being completely objective and universal. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value! Imagine first that there is an objective value that you just don't care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn't an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.
4Kutta
This is the best (very) short guide to naturalistic metaethics I've read so far.
0byrnema
This is very helpful. The only thing I would clarify is that the lesson I need to learn is that importance ≠ objectivity. (I'm not at all concerned about universality.) I'm not sure. With a squirrel in the universe, I would have thought the universe was better with more nuts than with fewer. I can understand there being no objective value, but I can't understand objective value being causally or meaningfully distinct from subjective value. Hm. I have no problem with 'permission'. I just find that I don't care about caring about it. If it's not actually horrible, then let the universe fill up with it! My impression is that intellectually (not viscerally, of course) I fail to weight my subjective view of things. If some mathematical proof really convinced me that something I thought subjectively horrible was objectively good, I think I would start liking it. (The only issue, as I mentioned before, is that a sense of moral responsibility would prevent me from being convinced by a mathematical proof to suddenly acquire beliefs that would cause me to do something I've already learned is immoral. I would have to consider the probability that I'm insane or hallucinating the proof, etc.)
7Eliezer Yudkowsky
I can barely imagine value nihilism, but not a value nihilism from which God or physics could possibly rescue you. If you think that your value nihilism has something to do with God, then I'm going to rate it as much more likely that you suffer from basic confusion than that the absence of God is actually responsible for the collapse of your values, whereas a real God could have saved them and let you live happily ever after just by ordering you to have fun.

I think the basic problem is that evolution re-used some of the same machinery to implement both beliefs and values. Our beliefs reflect features of the external world, so people expect to find similar external features corresponding to their values.

Actually searching for these features will fail to produce any results, which would be very dismaying as long as the beliefs-values confusion remains.

The God meme acts as a curiosity stopper; it says that these external features really do exist, but you're too stupid to understand all the details, so don't bother thinking about it.

2byrnema
Exactly! I think this is exactly the sort of 'solution' that I hoped physical materialism could propose. I'd have to think about whether the source of the problem is what Peter has guessed (whether it is this particular confusion), but from the inside it feels exactly like a hard-wiring problem (given by evolution) that I can't reconcile.
4byrnema
As I wrote above in this thread, I agree that there's not any clear way that the existence of God could solve this problem. [Note: I took out several big chunks about how religions address this problem, but I understand people here don't want to hear religion discussed in a positive light. But the relevant bit:] Peter de Blanc wrote that the God meme acts as a curiosity stopper, and this seems exactly right. Without the God meme telling me that it all works out somehow -- for example, that somehow the subjective/objective value problem works out -- I'm left in a confused state.
3Furcas
What if the existence of a Framework of Objective Value wasn't the only thing you were wrong about? What if you are also wrong in your belief that you need this Framework in order to care about the things that used to be meaningful to you? What if this was simply one of the many things that your old religious beliefs had fooled you about? It is possible to be mistaken about one's self, just as we can be mistaken about the rest of reality. I know it feels like you need a Framework, but this feeling is merely evidence, not mathematical proof. And considering the number of ex-believers who used to feel as you do and who now live a meaningful life, you have to admit that your feeling isn't very strong evidence. Ask yourself how you know what you think you know.
4byrnema
I would be quite happy to be wrong. I can't think of a single reason not to wish to be wrong. (Not even the sting of a drop in status; in my mind, it would improve my status to have presented a problem that actually has a solution instead of one that just leads in circles.) Through the experiment of assimilating the ideas of Less Wrong over the course of a year, I found my worldview changing and becoming more and more bereft of meaning as it seemed more and more logical that value is subjective. This means that no state of the universe is objectively any "better" than any other state; there's no coherent notion of progress, etc. And I can actually feel that pretty well; right on the edge of my consciousness, an awareness that nothing matters, that I'm just a program running in some physical reality. I feel no loyalty or identity with this program; it just exists. And I find it hard to believe I ought to go there; some intuition tells me this isn't what I'm supposed to be learning. I've lost my way somehow. This reminds me of the labyrinth metaphor. Where the hell am I? Why am I the only one to find this particular dead end? Should I really listen to my friends on the walkie-talkie saying, "keep going, it's not really a deep bottomless chasm!", or shouldn't I try and describe it better to make certain you know where I'm at?
3Jordan
When I first gave up the idea of objective morality I also plummeted into a weird sort of ambivalence. It lasted for a few years. Finally, I confronted the question of why I even bothered continuing to exist. I decided I wanted to live. I then decided I needed an arbitrary guiding principle in life to help me maintain that desire. I decided I wanted to live as interesting a life as possible. That was my only goal in life, and it was only there to keep me wanting to live. I pursued that goal for a few years, rather half-heartedly. It was enough to keep me going, but not much more. Then, one day, rather suddenly, I fell completely in love. Really, blubberingly, stupidly in love. I was completely consumed and couldn't have cared less if it was objectively meaningless. A week later, I found out the girl was also in love with me, and I promptly stopped loving her. Meditating on the whole thing afterwards, I realized I hadn't been in love, but had experienced some other, probably quite disgusting emotion. I had been pulled up from the abyss of subjectivity by the worst kind of garbage! It felt like the punchline of a zen koan. I realized that wallowing in ambivalence was just as worthless as embracing the stupidest purpose, and became ambivalent to the lack of objectivity itself. After that I began rediscovering and embracing my natural desires. A few years of that and I finally settled down into what I consider a healthy person. But, to this day, I still occasionally feel the fuzzy awareness at the edge of my consciousness that everything is meaningless. And, when I do, I just don't care. So what if everything I love is objectively worthless? Meaninglessness itself is meaningless, so screw it! I realize this whole story is probably bereft of any sort of rational takeaway, but I thought I'd share anyway, in the hopes of at least giving you some hope. Failing that, it was at least nice to take a break from rationality to write about something totally irrational.
1randallsquared
You are not. I cannot remember a time I genuinely believed in God, though I was raised Baptist by a fundamentalist believer. I don't know why I didn't succumb. When I was a teen, I didn't really bother doing anything I didn't want to do, except to avoid immediate punishment. All of my goals were basically just fantasies. Sometime during the 90s I applied Pascal's Wager to objective morality and began behaving as though it existed, since it seemed clear that a more intelligent goal-seeking being than I might well discover some objective morality which I couldn't understand the argument for, and that working toward an objective morality (which is the same thing as a universal top goal, since "morality" consists of statements about goals) requires that I attempt to maximize my ability to do so when it's explained what it is. This is basically the same role you're using God for, if I understand correctly. Unfortunately, as my hope for a positive singularity dwindles, so does my level of caring about, basically, everything not immediately satisfying to me. I remind myself that the Wager still holds even with a very small chance, but a very small chance persistently feels like zero chance. Anyway, I don't have a solution, but I wanted to point out that this problem is felt by at least some other people as well, and doesn't necessarily have anything to do with God, per se. I suppose some might suggest that I've merely substituted a sufficiently intelligent goal-seeker for "God"...
1AdeleneDawner
If you're still concerned about that after all the discussion about it, it might be a good idea to get some more one-on-one help. Off the top of my head I'd suggest finding a reputable Buddhist monk/master/whatever to work with: I know that meditation sometimes evokes the kind of problem you're afraid of encountering, so they should have some way of dealing with that.
0Vladimir_Nesov
This is wrong. Some states really are objectively better than other states. The trick is, "better" originates from your own preference, not a God-given decree. You care about getting the world to be objectively better, while a pebble-sorter cares about getting the world to be objectively more prime.
5wedrifid
Rather, it is using a different definition of 'better' (or, you could argue, 'objectively') than you are. Byrnema's usage may not be sophisticated or the most useful way to carve reality, but it is a popular usage and the intended meaning is clear. That is the framework I use. I agree that byrnema could benefit from an improved understanding of this kind of philosophy. Nevertheless, byrnema's statement is a straightforward use of language that is easy to understand, trivially true and entirely unhelpful.
0Vladimir_Nesov
It doesn't work for most reasonable definitions, because you'd need "better" to mean "absolute indifference", which doesn't rhyme.
0wedrifid
No it wouldn't. You are confused.
0Vladimir_Nesov
I'm pretty sure I can't be confused about the real-world content of this discussion, but we are having trouble communicating. As a way out, you could suggest reasonable interpretations of "better" and "objectively" that make byrnema's "no state of the universe is objectively any "better" than any other state" into a correct statement.
1wedrifid
You appear to have a solid understanding of the deep philosophy. Your basic claims in the two ancestors are wrong, and trivially so, at about the level of language parsing and logic. Far from being required, "absolute indifference" doesn't even work as a meaning in the context: "No state of the universe is objectively any "absolute indifference" than any other state". If you fixed the grammar to make the meaning fit, it would make the statement wrong. I'm not comfortable making any precise descriptions for a popular philosophy that I think is stupid (my way of thinking about the underlying concepts more or less matches yours). But it would be something along the lines of defining "objectively better" to mean "scores high in a description or implementation of betterness outside of the universe, not dependent on me, etc." Then, if there is in fact no such 'objectively better' thingumy (God, silly half-baked philosophy of universal morality, etc.), people would say stuff like byrnema did and it wouldn't be wrong, just useless.
0Vladimir_Nesov
"According to a position of absolute indifference, no state of the universe is preferable to any other." That "stupid" for me got identified as "incorrect", not a way to correctly interpret the byrnema's phrase to make it right (but a reasonable guess about the way the phrase came to be).

"According to a position of absolute indifference, no state of the universe is preferable to any other."

And this I think is why people find moral non-cognitivism so easy to misunderstand - people always try to parse it to understand which variety of moral realism you subscribe to.

  • "There is no final true moral standard."
  • "Ah, so you're saying that all acts are equally good according to the final true moral standard?"
  • "No, I'm saying that there is no final true moral standard."
  • "Oh, so all moral standards are equally good according to the final true moral standard?"
  • "No, I'm saying that there is no final true moral standard."
  • "Oh, so all moral judgements are equally good according to the final true moral standard?"
  • *whimper*
5Eliezer Yudkowsky
I like to use the word "transcendent", as in "no transcendent morality", where the word "transcendent" is chosen to sound very impressive and important but not actually mean anything. However, you can still be a moral cognitivist and believe that moral statements have truth-values; they just won't be transcendent truth-values. What is a "transcendent truth-value"? *Shrugs.* It's not like "transcendental morality" is a way the universe could have been but wasn't.
0byrnema
Yes, I think that transcendent is a great adjective for this concept of morality I'm attached to. I like it because it makes it clear why I would label the attachment 'theistic' even though I have no attachment that I'm aware of to other necessarily 'religious' beliefs. Since I do 'believe in' physical materialism, I expect science to eventually explain how morality can transcend the subjective/objective chasm in some way, or, if morality does not, to identify whether this fact about the universe is consistent or inconsistent with my particular programming. (This latter component specifically is the part I was thinking you haven't covered; I can only say this much now because the discussion has helped develop my thoughts quite a bit already.)
2Eliezer Yudkowsky
Er, did you actually read the Metaethics sequence?
1wedrifid
That is a description that you can get to using your definition of 'better' (approximately, depending on how you prefer to represent differences between human preferences). It still completely does away with the meaning Byrnema conveyed. That was clear. But no matter how superior our philosophy, we are still considering straw men if we parse common language with our own idiosyncratic variant. We must choose between translating from their language, forcing them to use ours, ignoring them, or, well, being wrong a lot.
5byrnema
This thread between you and Vladimir_Nesov is fascinating, because you're talking about exactly what I don't understand. Allusions to my worldview being unsophisticated, not useful, stupid and incorrect fill me with the excitement of anticipation that there is a high probability of there being something to learn here. Some comments: (1) It appears that the whole issue of what I meant when I wrote, "no state of the universe is objectively any "better" than any other state," has been resolved. We agree that it is trivially true, useless and on some level insane to be concerned with it. (2) Vladimir_Nesov wrote, "You care about getting the world to be objectively better [in the way you define better], while a pebble-sorter cares about getting the world to be objectively more prime [the way he defines better]." This is a good point to launch from. Suppose it is true that there is no objective 'better', so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don't place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless. For what it's worth, I understand that this value I'm lacking -- to persist in caring about my subjective values even if they're not objectively substantiated -- is a subjective value. While I seem to lack it, you guys could very reasonably have this value in great measure. So. Is this a value I can work on developing? Or is there some logical fallacy I'm making that would make this whole dilemma moot once I understood it?
7orthonormal
This is connected to the Rebelling Within Nature post: have you considered that your criterion "you shouldn't care about a value if it isn't objective" is another value that is particular to you as a human? A simple Paperclip Maximizer wouldn't have the criterion "stop caring about paperclips if it turns out the goodness of paperclips isn't written into the fabric of the universe". (Nor would it have the criterion of respecting other agents' moralities, another thing which you value.)
1wedrifid
Have a look at Eliezer's posts on morality, and perhaps 'subjectively objective'. (But also consider Adelene's suggestion of looking into whether your dissociation is the result of a neurological or psychological state that you could benefit from fixing.) Meanwhile, I think you do, in fact, have this subjective measure. Not because you must for any philosophical reason, but because your behaviour and descriptions indicate that you do subjectively care about your subjective value, even though you don't think you do. To put it another way, your subjective values are objective facts about the state of the universe and your part thereof, and I believe you are wrong about them.
3randallsquared
Is there a sense in which you did not just say "The trick is to pretend that your subjective preference is really a statement about objective values"? If by "objectively better" you don't mean "better according to a metric that doesn't depend on subjective preferences", then I think you may be talking past the problem.
6Vladimir_Nesov
By "objectively better" I mean that given an ordering called "better", it is an objective fact that one state is "better" than another state. The ordering "better" is constructed from your own decision-making algorithm, you could say from subjective preference. This ordering however is not a matter of personal choice: you can't decide what it is, you only decide given what it already happens to be. It is only "subjective" in the sense that different agents have different preference.
1Paul Crowley
I can't quite follow that description. "More prime" really is an objective description of a yardstick against which you can measure the world. So is "preferred by me". But to use "objectively better" as a synonym for "preferred by byrnema" seems to me to invite confusion.
2Vladimir_Nesov
Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most everyone else objecting to it as confusing (link to the thread -- H/T to Wei Dai). The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than another, and the implied conclusion along the lines of there being nothing to care about. "Preferred by byrnema" is bad terminology because of another confusion, where she seems to assume that she knows what she really prefers. So, I could say "objectively more preferred by byrnema", but that can be misinterpreted as "objectively more the way byrnema thinks it should be", which is circular as the foundation for byrnema's own decision-making, just as with a calculator Y that, when asked "2+2=?", thinks of an answer in the form "What will calculator Y answer?", and then prints out "42", which thus turns out to be a correct answer to "What will calculator Y answer?". Via the concept of "better", it's easier to distinguish what byrnema really prefers (but can't know in detail) from what she thinks she prefers, or knows of what she really prefers (or what is "better"). This comment probably does a better job at explaining the distinction, but it took a bigger set-up (and I'm not saying anything not already contained in Eliezer's metaethics sequence). See also:

  • Math is Subjunctively Objective
  • Where Recursive Justification Hits Bottom
  • No License To Be Human (some discussion of right vs. human-right terminology)
  • Metaethics sequence
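
(The calculator analogy can also be put in code form; a minimal sketch with invented names and values. Calculator X's output is constrained by the question asked, while calculator Y answers only the self-referential question "What will calculator Y answer?", so whatever it outputs is vacuously "correct".)

```python
def calculator_x(question):
    # Grounded: the output is constrained by the question.
    if question == "2+2=?":
        return 4
    raise ValueError("unknown question")

def calculator_y(question):
    # Circular: the output answers only "What will calculator Y answer?",
    # so nothing about the original question constrains it.
    answer = 42
    return answer  # self-fulfilling, hence uninformative

assert calculator_x("2+2=?") == 4
assert calculator_y("2+2=?") == 42  # "correct" only as a self-prediction
```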
1wedrifid
It was in the post asking Eliezer questions for his video interview. It is one thing to use an idiosyncratic terminology yourself, but quite another to interpret other people's more standard usages according to your definitions and respond to them as such. The latter is attacking a straw man, and the fallaciousness of the argument is compounded by the pretentiousness.
0Vladimir_Nesov
Nope, can't find my comments on this topic there. I assure you that I'm speaking in good faith. If you see a way in which I'm talking past byrnema, help me to understand.
3Wei Dai
Is this the thread you're referring to?
1Vladimir_Nesov
It is, thank you.
0wedrifid
Ahh. I was thinking of the Less Wrong singularity article. I don't doubt that. I probably should consider my words more carefully so I don't cause offence except when I mean to, both because it would be better and because it is practical. Assume I didn't use the word 'pretentious' and instead stated that "when people go about saying people are wrong I expect them to have a higher standard of correctness while doing so than I otherwise would." If you substituted "your thinking is insane" for "this is wrong", I probably would have upvoted.
1wedrifid
I suspect it may be even more confusing if you pressed Vladimir into territory where his preferences did not match those of byrnema. I would then expect him to make the claim "You care about getting the world to be objectively booglewhatsit, I care about getting the world objectively better, while a pebble sorter cares about getting the world to be objectively more prime". But that line between 'sharing' better around and inventing words like booglewhatsit is often applied inconsistently, so I cannot be sure of Vladimir's take.
1Paul Crowley
See also Doublethink
2byrnema
A free-floating belief system doesn't have to be double-think. In fact, the whole point of it would be to fill gaps because you would like a coherent, consistent world view even when one isn't given to you. I think that continuing to care about subjective value knowing that there is no objective value requires a disconcerting level of double-think.
2UnholySmoke
On what are you basing your assumption that the world should have whatever you mean by 'meaning'?
2MichaelVassar
I wouldn't even say that the rationalist view is properly seen as being a sub-set of physical materialism, just an evolutionary descendant of materialism. More like abstract ideal dynamicism.
0byrnema
Yes, agreed. (Whenever I've used and use the phrase 'physical materialism', this is what I'm referring to.)
1Nanani
The universe has the meaning we give it. Meaning is a perception of minds, not an inherent free-floating property of the universe.
0Mitchell_Porter
Off-topic, but: do you think the meaning of your own thoughts and cognitive activity is similarly observer-dependent?
0byrnema
By the way, I've read enough on Less Wrong to guess that your first reaction will be to feel some frustration that I must not have read the sequences. I've read enough of the sequences to believe that your main argument against feeling value-nihilism is that it just doesn't happen if the person lives in the moment and experiences life openly. Instead of looking for external validation of values, we look within ourselves and feel the internal validation. Is this correct? In which case, what about a person who feels like this kind of visceral experience is only a choice -- a moral choice?

When I became convinced that my belief in God was poorly founded, I worried intensely that I would become a nihilist and/or feel a perpetual vacuum of value. I've been incredibly relieved to find this fear unfounded.

On the nihilism front, I found that even in the absence of any Framework of Objective Value, I still cared about things (the welfare of friends and family, the fate of the world, the truth of my own beliefs, etc). I had thought that I'd cared about these things only insofar as they fit within the old FOV, but it turned out this fear was just a defense mechanism I employed in order to resist changing my worldview. Even with the FOV gone, I am simply the sort of being that cares about these things, and I don't need the permission of anyone or anything to do so!

I feel the same sense of purpose, passion, and meaning about these matters now that I felt when I was religious. Life is at times less comforting in other ways, but my fear of nihilism was misplaced. (Worse, it was subconsciously manufactured in order to stand in for other fears related to leaving religion, so that I wouldn't have believed someone else telling me this until I went through it myself!)

1Dr_Manhattan
Out of curiosity, what was your choice of poison?
5orthonormal
Catholicism Classic, Extra Strength.
0[anonymous]
Relatedly, have you read http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/ ?
1MrHen
The list on my wiki page isn't technically exhaustive. It is my bookmark for reading through everything in order. There are a few extras there from when I thought I would try to record everything, but that turned out to be too troublesome, and the chronological context interests me, so I stopped recording anything outside of my place in the full list of posts. That being said, it is still basically nothing. For some reason I felt like clarifying anyway. :) I have hit a few of the Map/Territory posts and my current favorite of what I have read is Mysterious Answers to Mysterious Questions. I am not reading through the posts looking for a silver bullet. I am reading, processing, and looking for Truth. I assume this is what you intended, anyway, and I dislike creating expectations when there isn't a good reason to have them.
6Kutta
Speaking from my experience, I whole-heartedly recommend going through Eliezer's old posts and also old LW top-level posts in a chronological manner. They're extremely dense in cross-references, and I've ended up a few times in browser-tab-creating sprees that eventually gave me headaches before I switched to a systematic reading plan. Also, keeping track of the comment discussions is only possible this way. Additionally, there is some sense of unfolding and progression that arises in the strictly chronological way that would be a shame to miss. Naturally, Eliezer tried to advance from easy and independently understandable topics to difficult and heavily interrelated ones. I daresay there was even a heavy emotional charge at the point we reached the final sequences, and I'm sure I was not the only one who was bewildered and intellectually/emotionally exhausted back then. I think it's definitely worthwhile to emulate the same reading experience by sticking to chronological order. As a side note, I'm not sure I can recommend binge OB/LW reading to younger humans and less life-hardened persons. It gave me a couple of minor and medium crises of faith and major shifts of view in a few months. Being a vivid and often lucid dreamer, I've also had a more than concerning number of dreams that starred Eliezer Yudkowsky.
17Kevin

I think your personal beliefs do matter. From my perspective, there is a big difference between "I believe that Jesus Christ lived on Earth and died for my sins and God really listens to my prayers", "I believe that some entity exists in the universe with power greater than we can imagine", "the entire universe is God" or "God is love."

9orthonormal
I'd add that how much rationality I ascribe to someone with a particular religious outlook has quite little correlation with our agreement on object-level beliefs. That is, I find a dogmatic Calvinist to be more likely to think rationally than a person with some vague hodgepodge of beliefs, although the latter will be more likely to agree with me on evolution and on social issues, because the former's beliefs are (to some extent) optimized for consistency while the latter's are generally not.
6MrHen
Are you saying that the difference between your examples is enough to include me or exclude me from LessWrong? Or is the difference in how you in particular relate to me here? What actions revolve around the differences you see in those examples?
4Kevin
I don't think we would exclude someone solely on the basis of belief, as one of the goals here is to educate. I'm not sure there is much action involved, but people might treat you differently if you admitted to being an evangelical Christian compared to being a believer because you are uncomfortable giving in to the nihilism of non-belief. Edit: After rereading your post, yes, there are rational religious people. I have a few friends of the type, and I think the most important part of being a rational religious person is admitting that belief is irrational, steeped in feelings of culture or helplessness rather than convincing evidence. It's a slippery slope, though: if you keep thinking about it you may find it hard to hold onto your belief. Maybe in a few days you should make a top-level post about your beliefs and we can try to examine the reasons why you believe the way you do, and try to understand why you are comfortable with conflicting beliefs. No pitchforks, I promise; you seem to know the linguistic patterns to use here so that no one will pounce on you.
5MrHen
If I cannot hold onto a belief it isn't worth holding on to. My current plan is to inch into the heavy topics with a few basic posts about belief, doubt, and self-delusion. But I know some of these things are discussed elsewhere because I remember someone at OB talking about the plausibility of self-delusion. In any case, I am still working through the Sequences. I expect a lot of my questions are answered there.
1Technologos
I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it's not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments). The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson's deism had essentially no effect on his political and moral philosophizing (at least, his work could easily have been produced by an atheist). Pat Robertson's religiosity has a great deal of effect on what he says and does, and that would cause a problem. The fact that you wrote this post suggests you are in the former category, and I for one am glad you're here.
8orthonormal
I agree with the rest of your comment, but this seems very wrong to me. I'd say rather that the unity we (should) look for on LW is usually more meta-level than object-level, more about pursuing correct processes of changing belief than about holding the right conclusions. Object-level understanding, if not agreement, will usually emerge on its own if the meta-level is in good shape.
1Technologos
Indeed, I agree--I meant that it doesn't matter what conclusions you hold as much as how you interact with people as you search for them.

Presumably, there is a level of entry to LessWrong that is enforced. Does this level include filtering out certain beliefs and belief systems?

Any rule that would prevent Robert Aumann from contributing here, or that would have prevented Kurt Gödel from contributing here, is a bad rule.

I have a question for you: do you expect that you will still be a theist after having read all the sequences?

4MrHen
Yes. I don't know what is in the sequences, so it is pretty hard to accurately predict my state of beliefs on the flip side. But as of yet, I have not imagined a path that will lead me to atheism. All I have to go on are other people's testimonies and predictions. While those all point toward my exiting as an atheist, there hasn't been much explanation as to why that is the prediction. I do not find this strange. I expect to find the explanations in the sequences.
11RobinZ

It occurs to me that I never responded to your explicit questions.

1. Should I have kept this to myself? What benefit does an irrational person have for confessing their irrationality? (Is this even possible? Is this post an attempted ploy?) I somewhat expect this post and the ensuing discussion to completely wreck my credibility as a commentator and participant.

I think it is fairly obvious that people's beliefspace can have great chasms beneath the sanity waterline while still containing valuable islands or continents of rationality. For my purposes, when asking for book recommendations and the like, I will discount yours to an extent on these grounds (or not, if they are in a specific domain where I consider religion irrelevant), but argument screens out authority, and you've proven your capacity to provide desirable (on the karma scale) commentary. Which leads to:

2. Presumably, there is a level of entry to LessWrong that is enforced. Does this level include filtering out certain beliefs and belief systems? Or is the system merit-based via karma and community voting? My karma is well above the level needed to post and my comments generally do better than worse. A merit-base

... (read more)
11Jack

I think the minimal level of rationality necessary to participate successfully here has almost nothing to do with actual beliefs and everything to do with possessing the right attitude -- willingness to change your mind, a desire to have more accurate beliefs, updating with new evidence, etc. See the Twelve Virtues of Rationality. You seem to be more than adequate in that regard.

If being a theist is a big part of your life, if you do things that you wouldn't do if you were an atheist then I suggest that your theism might be a big enough deal that you should stop beating around the bush and just subject your views to examination and argument in an open thread or in a dedicated thread for people to discuss issues where they don't agree with the rest of the community. But that is a recommendation, not a demand or anything.

If your theism is just a comforting, abstract belief it may well be harmless and you might as well take your time.

I wonder if we make too big a deal out of atheism here. Once you are an atheist it seems obviously true, but it is one of the hardest beliefs to change when you're a theist because it is so entangled in community, identity and normative issues. Scientology ... (read more)

3byrnema
There's this one. And in the interests of organizing information and arguments on LW, there is an argument to be made for separate posts to discuss the differences that lead to really lengthy discussions -- for example, there are posts dedicated to different angles of tolerating theism and -- now -- more posts dedicated to the problem of consciousness. Long after the discussions under these posts have died down, these posts are still places where the ideas can be picked up and probed by a newcomer.
0MrHen
One day I expect to have this conversation here. Until then, I expect a handful of discussions leading into why I still believe in God. There is a lot of ground to cover before I address the mean questions head on. As it is now, I am completely ill-equipped for such a task. Wrong as in incorrect or wrong as in immoral? I don't think it's wrong under either usage. The only reason I would think it is incorrect is because some people no longer have what it takes to be a rationalist.

Somewhat long and rambly response, perhaps in the spirit of the post:

  • I think those who quest for rationality, even if not completely, ought to be welcome here. Caveat that applies to all: I don't really deserve a vote, as a short-timer here.

  • So long as you are not trying to deliberately peddle irrationality, you're acting in good faith. That goes a fair distance.

  • Religious people are regularly rational and right on a lot of different issues. Rejecting a religious person's view solely because of religion doesn't seem like a good idea at all. (Deciding not to use time on a zombie-vampire hypothesis because it stems from religious belief rather than empirical evidence is dandy, though.) Irrational atheists are also commonplace.

Religion is an indicator of rationality, just not the be-all end-all of it.

This isn't a binary sin/no-sin situation. You can be rational in some areas and not others. Some religious people are able to be quite rational in virtually all day-to-day dealings. Some are poisoned.

We're all wanna-be rationals at some point. This post, to me, is great - the best thing I've seen written by MrHen. If someone tries to tell us that God wants us to eat less bacon, it... (read more)

8JamesAndrix
With a prediction record like that, we should prefer that she were here instead. ;-) I don't suppose she rated her confidence numerically?
0Alicorn
300? It's been done... more than twice.
3Jack
That would be a really nice tool if taken seriously. I don't think there are any valid arguments for theism with true premises, but a list of 600+ strawmen isn't going to do much for anyone.
-3[anonymous]
If you can find arguments that aren't ultimately strawmen, please post them. I haven't seen them yet. edit: By this I mean, if you can find arguments that when reduced to bullet points don't sound like those from that list, I'd like to see them.

The point of the community is to figure out how to think, not to blame outsiders.

Unfortunately, the siren song of majoritarianism makes it critical to establish that the world is mad if one is to progress past the gates of Aumann with one's own sanity.

Discussion of the sanity waterline is largely focused on establishing epistemic non-equivalence between claimed "beliefs" in order to prevent efforts to avoid overconfidence from being self-undermining.

I liked this post.

Note that "Wannabe Rational" is not terribly different from "aspiring rationalist" -- the very term that most LWers use for themselves!

All of us, presumably, have some beliefs that are not accurate. That, of course, makes us irrational. But we'd like to be more rational. That desire, that aspiration, is the entrance requirement here.

It's true that there is a limit on how rational you can be and still be a theist. But that's not the same as the limit on how rational you can become in the future, given that you are now a theist (or have whatever incorrect belief X).

I haven't read your entire post, but I find it very strange (and distracting, if I'm honest) that you would word it as if you believed it was irrational to believe in God. It is as if you either believe your belief is irrational (in which case, why believe it?) or you believe that it is polite to defer linguistically to the local majority position in this case. (Or something I haven't thought of - it's not like I've mathematically shown that these constitute all cases.)

I expect to find your discussion interesting - I love meta-discussions - but I'm just throwing that out there.

Edit: Ah, I see you discussed that very thing just a few paragraphs later. Interesting.

5MrHen
I thought about addressing this directly in the post but figured it would show up in a comment rather quickly. There are a handful of small reasons for me doing so:

  • Linguistically, LessWrong thinks of religious beliefs as irrational. I do think it is polite to defer to this usage.
  • I understand why this community considers religion to be irrational and do not feel like contending the point.
  • The post is not about religious beliefs but irrational beliefs. I use religion as an example because I used myself as an example of what I was talking about.
  • I expect it to be jarring to read, which hopefully forces the reader to realize that the post has little to do with the beliefs themselves.
1RobinZ
I can buy that - although now I feel as if my second reply is rather patronizing. I apologize if the links therein are inapplicable to your situation; I would not have worded it as I did if I believed that the reason for your phrasing was as you described.
1MrHen
It's all good. I found something useful in the comment. :)

People on LW like to insist that there is a litmus test for rationality: certain things any rationalist believes or does not believe as a necessary condition of being rational. This post makes this pretty explicit (see 'slam-dunks').

However, I wish that the LW community would make more of a distinction between rational beliefs based on really good epistemological foundations (i.e., esoteric philosophical stuff) and rational beliefs that are rational because they actually matter -- because they're useful and help you win.

I'm someone who is interested in ph... (read more)

1byrnema
I just realized that while this is my argument for why I don't think theists are categorically irrational, it doesn't mean that any of them would belong here. Less Wrong obviously values having an accurate map not just to the extent that it facilitates "winning", but also for its own sake, because they value truth. So finally I would qualify that the argument against having theists here isn't that they're necessarily so irrational, but that theism conflicts with the value of having an accurate map. Likewise, Less Wrong might value certain epistemological foundations, such as Occam's razor (obviously) and any others that lead to choosing many worlds as the natural hypothesis.

I just forgot (while composing the message above) that 'Less Wrong' represents a combination of instrumental rationality AND VALUES. I usually think of these values as valuing human life, but they include valuing epistemic rationality. While Less Wrong is much more tolerant of different values than of wrong beliefs in general, it's justifiably not so tolerant of different values about beliefs.

I think that my comment above should have been down-voted more than it was, since it's not representing the community norm of valuing truth for its own sake. I'm not valuing truth so much these days because I'm going through a value-nihilistic phase -- one that, ironically, I blame on Less Wrong. But 'you guys' who care about truth might down-vote a comment arguing that there is no value to beliefs beyond their effectiveness in achieving goals.
3zero_call
It seems to me like you're creating an artificial dichotomy between the value of truth itself and the material relevancy of truth. To me, these ideas are rather coupled together, and I would up-vote your first post for the same reason I would up-vote your second post. In other words, to me, "valuing truth for its own sake" includes valuing truth for its importance, testability, relevance, etc. in other areas.

For what it's worth:

  • I would like to see more people like the original poster here.
  • I do not think that the first order of business for a theist coming here needs to be examining their religious beliefs, which seems to be an assumption behind a lot of what was said here.

This is not an atheism conversion site, right? There needn't be pressure. Let them learn the methods of rationality and the language of Bayes, without eyeing them for whether they're ready to profess the teacher's password yet. If they're making useful contributions to the topics they post on, no less than atheist members, that screens off other considerations.

Anyone who claims to be rational in all areas of their lives is speaking with irrational self-confidence. The human brain was not designed to make optimal predictions from data, or to carry out flawless deductions, or to properly update priors when new information becomes available. The human brain evolved because it helped our ancestors spread their genes in the world that existed millions of years ago, and when we encounter situations that are too different from those that we were built to survive in, our brains sometimes fail us. There are simple optical... (read more)

4pjeby
I would reverse the ordering you have there: overcoming an emotional attachment is actually the easiest thing to do; finding the irrational belief is the hardest. Actually, finding any implicit belief/assumption is hard, whether it's rational or not. We see the picture framed by our beliefs, but not (usually) the frame itself. Admitting and eliminating one's emotional beliefs can be done in a systematic, near-rote way, simply by asking a few questions (see e.g. Lefkoe or Katie). Identifying one's emotional beliefs, on the other hand, requires something to compare them to, and you can never be quite certain where to start. Brains don't have a "view source code" button, so one is forced to reverse-engineer the assumptions.

Throwing people out because they hold certain beliefs generally leads to groupthink effects that lead to less clear thinking.

Having someone who plays devil's advocate against the consensus is sometimes even helpful if everyone believes in the consensus. Otherwise one often finds oneself arguing against strawmen that come from not fully understanding the argument made by the opposing side.

Also, admittedly, I am unjustifiably attached to that area of my map. It's going to take a while to figure out why I am so attached and what I can do about it. I am not fully convinced that rationalism is the silver bullet that will solve Life, the Universe, and Everything. I am not letting this new thing near something I hold precious. This is a selfish act and will get in the way of my learning, but that sacrifice is something I am willing to make.

I have had a theory for some time now that people confuse "God" with "good[ness]". Th... (read more)

7Vladimir_Nesov
This calls for Dennett's classic "Thank Goodness!".
5Bo102010
One of my favorite bits of writing ever, in part because it gave me the right answer to "Bo, I'm going to pray for you." "OK, and I will sacrifice a goat for you."
7Corey_Newsome
"Corey, I'm going to pray for you." "OK, then I'll think for both of us." Or, "Ok, then I'm going to prey on you."

"I'll pray for you."

"I'll think for you."

Is that original? GF and I both think it's awesome.

3Corey_Newsome
No, but unfortunately I can't find out where it came from. Perhaps P. Z. Myers's collection of infidel quotes (Edit: see PeerInfinity's comment), but I can't access it right now due to Linux problems. (Incidentally, he'll be in the Bay Area for a week in a few days. Info here.) At any rate, that's a good page to read when you're feeling particularly anti-theist and want ammo.
2PeerInfinity
The "infidel quotes" link is broken. Or at least it failed to load when I clicked it. Is this the page you meant to link to: http://www.pharyngula.org/quotes.html

Should LessWrong /kick people who fail at rationality? Who makes the decision? Who draws the sanity water-line?

If we were doing that I would have /kicked Robin Hanson a long time ago and probably Eliezer too. There are few people who do not have at least one position they stick to more than would be rational.

As far as I am concerned you are more than welcome and seem to be a thoroughly positive influence towards rational discussion. Besides, you will probably not believe in God for much longer. People just don't tend to change that sort of fundamental part of their identity straight away unless they have some sort of traumatic experience (e.g., hazing).

3Kevin
http://lesswrong.com/lw/1ly/consciousness/1fjv From this exchange, it doesn't sound like Alexxarian was being threatened with /kick for failing rationality -- it was for failing to use the right linguistic patterns when he was consistently (and correctly) questioned by people using the right linguistic patterns. The exchange would have gone very differently if Alexxarian had said something like "Solak's book sounds convincing to me" instead of "[Solak] logically proves". MrHen's post is soaked in doubt and admissions of uncertainty, so it is nearly impossible for us to judge him.
5RobinZ
As a participant in that thread, I saw four problems which threatened to earn him the banhammer:

  1. Topic derailing - rather than engage with the material he was ostensibly replying to, Alexxarian chose to promote his own ideas.
  2. Excessive linkage to outside material without proper summarizing.
  3. Poor understanding of comment etiquette.
  4. Vague thinking and writing.

Linguistic patterns appear in the ultimate and penultimate points, but they do not constitute the whole story.
1wedrifid
I wasn't part of that conversation, but it sounds like Alexxarian was being threatened for reasons distinct from having a particular irrational belief. Do you think Alexxarian's convo was what MrHen was really talking about when he asked the questions here? Being unfamiliar with that potential context I simply took them at face value as general questions of policy.
4Kevin
I don't think that specific conversation was being referred to, but the general pattern of Eliezer's willingness to ban people that are consistently downmodded in conversations. My broader point was that by using the appropriate language to admit wrongness and irrationality and uncertainty, it should be permissible to be almost arbitrarily irrational here, at least until someone tells you to go read the sequences before commenting again.
4wedrifid
What I have always found weird was him threatening to delete all future comments from an account rather than actually banning the account. Freedom with message deletion makes me more nervous than a free hand with the /kick command; the latter seems more transparent. Humility and basic courtesy do go a long way, don't they?
6Kevin
I think it was the meta thread where I commented that Less Wrong needs a Hacker News style dead/showdead system, which allows you to arbitrarily censor while simultaneously allaying concerns about censorship. Amen.
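(For the curious: a minimal sketch of how such a dead/showdead scheme works, with hypothetical names and data -- this is an illustration, not HN's or LW's actual implementation. Moderators flag a comment as "dead"; it disappears for ordinary readers, but anyone who opts into "showdead" can still see what was removed, which keeps the censorship auditable.)

    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str
        text: str
        dead: bool = False  # set by moderators; "dead" comments are hidden by default

    def visible_comments(comments, showdead=False):
        # Readers who opt into showdead see moderated comments too,
        # so removal is arbitrary but never secret.
        return [c for c in comments if showdead or not c.dead]

    thread = [
        Comment("alice", "On-topic remark"),
        Comment("troll", "Off-topic spam", dead=True),
    ]

    print([c.author for c in visible_comments(thread)])                 # ['alice']
    print([c.author for c in visible_comments(thread, showdead=True)])  # ['alice', 'troll']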

Believing in God may be "below the sanity waterline", but there are plenty of other ways for an atheist to have crazy beliefs for the wrong reasons (anything other than "because as far as I can tell, it's true") - about science, about themselves, about politics, about morality ...

I think the "politics is the mind killer" policy is a bit of an avowal that the people here are fully capable of irrationality, and that it's more productive to just avoid the subject.

If OB/LW had started a few centuries ago, maybe the p... (read more)

Re: Irrational Beliefs.

When I was born, I was given a baby blanket (blue) and a teddy bear. During childhood, I developed the belief that these two entities protected me, and even clung to that belief (although in a much less fully believed fashion). The presence of these two items, even though they really did nothing more than sit in my closet, did help to calm me in times of stress... Yet, I knew there was no possible way that a square piece of cloth and a piece of cloth sewn into the shape of a bear (stuffed and buttoned with eyes) could affect the ... (read more)

"So, yeah. I believe in God. I figure my particular beliefs are a little irrelevant at this point."

I think the particulars of your beliefs are important, because they reveal how irrational you might be. Most people get away with God belief because it isn't immediately contradicted by experience. If you merely believe a special force permeates the universe, that's not testable and doesn't affect your life, really. However, if you believe this force is intelligent and interacts with the world (causes miracles, led the Israelites out of Egypt, e... (read more)

Making a general response to the post, now:

I think it is fairly obvious that the LessWrong community is not innately privileged as arbiters of rationality, or of fact. As such, it is reasonable to be cautious about obscuring large portions of your map with new ink; I don't think anyone should criticize you for moving slowly.

However, regarding your hesitance to examine some beliefs, the obvious thing to do (since your hesitance does not tell you whether or not the beliefs are correct, only examining them does) is to make the consequences of your discovery f... (read more)

2orthonormal
Leave a Line of Retreat
1RobinZ
That was the one I was thinking of. Thanks for the link.
2MrHen
Agreed. And this is very good advice. My map has beliefs about my map and I figured those are very high priority. Any ooga-boogas about touching an area are probably in the meta-map. As of right now, most of those are open for analysis. The big, annoying one is a self-referential lockout that is likely to get tricky. Of all the discussions that would thoroughly surprise the community here, this one takes the cake. My younger self was pretty clever and saw the future too clearly for my own good. The ulterior motive of this post is to give me a way to discuss these things without people going, "Wait, back up. You believe in God?"
0RobinZ
Bear in mind that not everyone reads every post - assuming you continue to discuss matters related to theism without a major reversal of opinion (either on your part or ours - I, naturally, expect the latter to be unlikely), this will still happen occasionally, with increasing frequency as time progresses.
1MrHen
Agreed. Having this post in the archives is useful for my far-future self. I expect it to save me a lot of time.

I disagree with creating a hierarchy of rational levels, as you are suggesting. For one thing, how do you categorize all the beliefs of an individual? How do you rank every single belief in terms of value or usefulness? These are serious obstacles that would stand in the way of the execution of your program.

Moreover, I don't believe this categorization of perspective serves any real purpose. In fact it seems that many topics lie either "outside" of rationality, or else, they are not really served by a rational analysis. People shouldn't receive d... (read more)

2MrHen
At this point, I have no better answer than feeling it out. It makes it a bit wishy-washy, but all I am really trying to do is get a rough estimate of someone's ability to improve their map. I agree that it is unfeasible to categorize someone's every belief and then register their rationality on a scale.

I think I expect more from rationality than you do. I don't think any topic lies outside of the map/territory analogy. Whether we possess the ability to gather evidence from some areas of the territory is a debate worth having. Somewhere in here is the mantra, "Drawing on the map does not affect the territory." I cannot come up with some beliefs and then argue vehemently that there must be territory to go along with them. I expect studying inaccessible territories to be much like how they discovered extra planets in the solar system. Even if we cannot go there ourselves, we can still figure out that something is there.

Fair enough. I was trying to accomplish two things with this post and I tried using the self-flagellation to help people see the other point. It seems to have had mixed success.

I get the feeling that most discussions about the beliefs themselves are not going to be terribly useful.

You lost me there. I can't think how this discussion can yield a useful result if held entirely at the meta level. It makes a difference what you mean by "believe in God"; your beliefs matter to the extent that they make a difference in how you behave, decide, and so on. Words like "rational" and "rationalist" can be a distraction, as can "God"; behaviour and outcomes offer better focus.

If you find yourself pra... (read more)

2MrHen
Because there will be more people like me. Is your response, "It depends on the individual beliefs"? How does this play into participating at LessWrong?
5Morendil
Not so much on the individual beliefs as on what your thought processes are and in what ways you might want to improve them. We do not possess isolated beliefs, but networks of beliefs. And a belief isn't, by itself, irrational; what is irrational is the process whereby beliefs are arrived at, or maintained, in the face of evidence. I am an atheist, but I'm far from certain that none of my current beliefs, and the ways I maintain them, would be deemed "irrational" if they came up for discussion and were judged as harshly as theism seems to be.

My intent in participating here is to improve my own thinking processes. Some of the ways this happens are: (a) coming across posts which describe common flaws in thinking, whereupon I can examine myself for evidence of these flaws; (b) coming across posts which describe tools or techniques I can try out on my own; (c) perhaps most interesting, seeing other people apply their own thinking processes to interesting issues and learning from their successes (and sometimes their failures).

The karma system strikes me as an inadequate filtering solution, but better than nothing. I'm now routinely browsing LW with the "anti-kibitzing" script in an effort to improve the quality of my own feedback in the form of up- and downvotes. My first reading of a comment from you would be looking for insights valuable in one of the three ways above; perhaps if your comment struck me as inexplicably obscure I might check out your user name or karma.

By becoming a more active commenter and poster, I hoped to learn as others gave me feedback on whether my contributions are valuable in one of these ways. The karma system has had significant and subtle effects on the ways I choose to engage others here - for good or ill, on balance, I'm still not sure.
1MrHen
Is it possible to glimpse or understand someone's thought processes without delving into their particular beliefs? I assume yes. Since religion is somewhat of a touchy subject, I offer everything else I say as evidence of my thought processes. Is that enough? Yeah, that makes sense. There are a few interesting discussions that can lead from this, but I am fairly certain we agree on the major issues. The basic reason I did not want to go into the particular beliefs here is because (a) I felt the meta-discussion about how people should deal with these things was important and (b) I was unsure what the reaction would be.
1Morendil
That's for you to say. You chose to bring up religion - more specifically "belief in God". You could have illustrated how you think without bringing up that particular confession; you did so of your own initiative.

The major "meta" question of your post has already been addressed here: yes, you can strive to become "less wrong" whatever your starting point happens to be. All that seems to be required is a capacity for inquiry and a sense of what "wrong" is.

We couldn't function if we weren't rational to some extent. Any adult LessWronger presumably earns enough money to keep a roof over their head, food on the table, and an Internet connection within easy reach; this is evidence that at least some of their actions are rational in the sense of making appropriate contributions to their projects.

This community seems to be about more than that basic ability to function in society. There is a strong sense of a more global responsibility: refining the art of human rationality enough to defend not just myself, not just my family, not just my friends, but much bigger groups. Before hanging around here I thought I had ambition, to the extent that I wanted to save my profession from itself. Well, this is a group of people attracted to the notion of at least saving humanity from itself.

In that context, no, I don't think your plea for a "waterline exception" covering your specific pet belief should be taken seriously. I do, however, think we stand to gain by taking a closer look at religious belief, without attempting to turn it into a bogeyman or a caricature. For this to happen, it seems to me we need to examine the beliefs themselves. Religious, in fact even spiritual, belief is something of a mystery to me; what I find particularly puzzling is precisely how some very smart people I know are able to simultaneously hold those (to me) bizarre beliefs and still function very well in other intellectual domains. The closest I've come to understanding it was while reading
1MrHen
Okay, that makes sense. To be clear, I am not trying to resist your questions or curiosity. The more I read the responses here, the more I am internally committing to having the discussion about the particulars of my religiousness. Fair enough. This answers the question adequately. I completely agree. Standing on the other side, I find it puzzling that so many people are puzzled.

My apologies in advance for rambling:

To begin, the subject reminds me of a bumper sticker I occasionally see on a car around here: "Militant Agnostic: I Don't Know, And You Don't Either!"* Though there are plenty of examples of irrational religious beliefs leading to bad results, nonetheless I am not convinced that rationality is most useful when applied to (against?) religion. Just off the top of my head, applying it to politics directly (per Caplan's Myth of the Rational Voter), or even something as mundane as climate (one way or the other), would... (read more)

4Jack
Religion is the most likely motivating force for biological or nuclear terrorism in the next 25 years. It exacerbates geopolitical tensions that could easily lead to broader conflicts (India-Pakistan, Israel-Arab world). And a large part of why AIDS kills millions of Africans every year, contributing to the near impossibility of building economic infrastructure there, is religious superstition and the inane dogma of the Catholic Church. For me, at least, those issues are somewhat more important than making sure rich people don't die of old age.
1Nic_Smith
I suppose my overly economical view offended. Sorry. I would prefer a world where such conflicts and suffering did not exist. However, it still does not follow that this is where the most effort should be expended. You are talking about dramatically changing the religious beliefs of billions over a few decades. I've suggested that tweaking the political beliefs of some hundreds of millions, already somewhat educated, roughly over the same time period or perhaps a bit longer, may be more doable.
2Jack
I'm not offended by your overly economical view. If you have some argument for why anti-aging research will help people more in the long term, great, let's hear it. Nor do I doubt applying rationality to politics would have some good effects - for one, we could set policies that undermine religion and superstition elsewhere. My objection was just that cryonics and anti-aging aren't even close to being important enough to be the operating concern here. A Friendly AI, maybe. But if rich and middle-class Westerners stop dying of old age, I suspect many of the world's problems would be exacerbated and only one would be solved. No, it is definitely more doable. It just isn't important enough to do if your only reason is financial and legal support for cryonics.
2Nic_Smith
Ok: people have value -- human capital, if necessary -- that compounds with time: knowledge, social ties, personal organization, etc. Currently, this is greatly offset by the physical and mental decline of aging. If we could undo and prevent that decline, people would have the opportunity to be unimaginably productive. The problems that you've mentioned are difficult now, but they'd be easier after someone spent a second lifetime dedicated solely to working on them. Furthermore, the management of physical and financial capital across great periods of time is limited -- there isn't anyone that can realistically oversee 300+ year projects and make sure they turn out right. All of this is of value not only to the individual whose life is extended, but to others as well. Admittedly, cryonics doesn't fall into this story perfectly, although a political environment that's better for anti-aging in general should also be better for cryonics. I will also confess that I don't want to die. You shouldn't either.
0Nic_Smith
In case anyone misinterprets that last sidenote as a subtle jab: the book also says that many, not all, of these people switched sides round about (IIRC) the '50s through '70s, so no, it isn't.

Rationality is a tool. There must be something more fundamental for which the tool is wielded. I liken it to a formal mathematical system, where rationality is the process of proof, and what lies beneath are the axioms of the system. Some choices of axioms are inconsistent, but there are likely many choices that aren't.

While a rational person should never arrive at and then hold an unfalsifiable belief, I don't think it's irrational if an unfalsifiable belief is a starting axiom, something fundamental to who you are. Belief in God may or may not be such an axiom for you, but either way I find it useful to try to keep in mind the purpose of my rationality when applying it to areas of my map that scream when prodded.

There is one thing I don't understand: you seem to perceive your belief in God as irrational. In my understanding, you can't believe in something and at the same time believe that this belief is irrational.

If I believe "the sky is red" and I'm aware that this is irrational since I know that in reality the sky is blue there is no way for me to continue believing "the sky is red".

Or did I misunderstand you somehow?

1MrHen
A similar question was asked elsewhere in the comments. I made a bigger reply there. The short answer is that I am being tricky. :) This is completely unrelated to my post, but I find this example interesting for the following reasons:

  • The realization that the sky is not red and that the sky is blue are two different things. Accepting "the sky is red" as irrational is possible without discovering that the sky is blue. If I happen to find an irrational belief in my map but have nothing to put there instead, what is the correct behavior? When I need to act on that area of the map, and all I have is an irrational belief, what should be done?
  • I do not consider it impossible to continue believing in a known irrational belief. This is a much larger discussion. The short version: Not everybody wants to be rational.

The realization that the sky is not red and that the sky is blue are two different things. Accepting "the sky is red" as irrational is possible without discovering that the sky is blue. If I happen to find an irrational belief in my map but have nothing to put there instead, what is the correct behavior? When I need to act on that area of the map, and all I have is an irrational belief, what should be done?

The first thing is to realize that you don't even have the irrational belief, because if the map is wrong, it's worse than useless. You should regress to the prior and accept not knowing the answer, while at the same time being careful about the "you either win the lottery or lose, hence equal odds" fallacy (it's "privileging the hypothesis" lingering even after you remove a given hypothesis from dominance). Incidentally, it's rarely a mistake to let go of your beliefs: if they are correct, reality will imprint them back.

I experienced this process while erasing my beliefs in folk medicine practices. At one point, I decided to forget all I knew about this stuff since I was little, and to draw the judgment anew in each case, as if I heard of it for the first... (read more)
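(A minimal sketch of the "equal odds" fallacy mentioned above, using a hypothetical million-ticket lottery -- the numbers are illustrative, not from the comment itself. Counting two possible outcomes, win and lose, does not license assigning them equal probability; each outcome has to be weighted by the number of equally likely ways it can occur.)

    from fractions import Fraction

    tickets_sold = 1_000_000  # hypothetical lottery size
    winning_tickets = 1

    # Fallacious assignment: two outcomes (win, lose), so "equal odds".
    naive_p_win = Fraction(1, 2)

    # Proper assignment: each ticket is equally likely, and only one wins.
    p_win = Fraction(winning_tickets, tickets_sold)

    print(f"naive: {naive_p_win}, actual: {p_win}")  # naive: 1/2, actual: 1/1000000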

1MrHen
I think I understand you. Let me repeat what you said in my words and see if I get it: An irrational belief is damaging. It is better to hold no belief and regress to the "I don't know" state of assigning probabilities to outcomes. Unfortunately, "privileging the hypothesis" is pulling an "article I should have read by now" tag from my memory. Apparently I should go read an article. :) The followup question I have is how do I act when I cannot find an alternative hypothesis? In other words, I have an irrational belief and I have to use that area of the map. "Do nothing" is an action. Should I just insert that and hope for the best? What if "Do nothing" is the rational belief? Act randomly? And so on and so forth. My point here can be boiled down to this: Beliefs fuel actions. Actions are expected from reality. Better beliefs produce better actions. What happens when I have no belief or only irrational beliefs when deciding how to act? Assume there is no time for further introspection or fact-finding.
1Vladimir_Nesov
The second-best guess after the disabled known-irrational solution is often more interesting than "do nothing". On the other hand, "do nothing", when it's the way to go, may be hard to accept for a number of reasons (it can be seen as a signal of not caring, or of excessive loyalty to your position of disbelieving). This is a dangerous pressure, one that can push you to accept a different dogma in place of the discarded one just to fill the gap.
0MrHen
Soft reminder: This is just theory-chat and it has nothing to do with me or my post. Part of the problem is that some maps don't keep track of second-best solutions. Namely, a common irrational behavior is to chuck everything that doesn't match or adhere to the principal dogma. The problem is not so much that there needs to be a way to choose a second-best; the problem is what happens when there is no second best. I am unable to parse "This". What are you referring to? As in, what is a dangerous pressure?
1Vladimir_Nesov
The pressure to "do something", in particular to accept a system of beliefs that promotes a particular "something", when for all you know you should just "stay there".
0MrHen
Ah, gotcha. That makes sense.
-2Jayson_Virissimo
This isn't always the case. It is fairly easy to find anecdotes of explorers (and especially those in war) that have gotten lost and found their way to safety using the "wrong map". Sometimes having a map (even the wrong one) can provide just the amount of hope needed to press onwards.
4Strange7
There are far fewer available anecdotes of explorers who persisted in using an incorrect map, became even more lost, and were never heard from again. I suspect this is a matter of selection bias.
0Jayson_Virissimo
Sounds probable.
0h-H

It'll probably save a lot of time to discuss the particulars of your belief in God instead of going meta; i.e., "God" is a very specific entity, and discussing the specifics instead of imagined abstractions is more useful.

1MrHen
It wouldn't accomplish the same things that I wanted to accomplish with this post. The meta was a point in its own right. I consider this post a success as it was written. There are ways it could be improved but I do not think adding more details about my particular beliefs is one of those ways.

Suppose you could change your desires. Would you choose to abandon your desire to believe in (whatever) God? How about if it turned out to conflict with success in your other values?

Life with less-conflicting desires may be more effective or pleasurable. Maybe it's possible to have a mystical belief that retreats from actual rent-paying rational world-modeling, and only modifies your values and personal interactions. I'd still worry: am I now taking an irrational path toward satisfying myself, because of unquestioned beliefs about how I should behave?

I... (read more)

3MrHen
I can change my desires. But to actually do so requires a desire to do so. These meta-desires are tricky buggers and one wrong step will wreak havoc with the whole system. I don't feel like outlining everything; I just want to point out that my particular case is not as simple as desiring the wrong thing.

I, on the other hand, feel like treading carefully anytime something as dangerous as desire is used to apply sweeping changes to a belief system. Pulling the word "God" out is going to put a suspiciously God-shaped hole in my belief system. The first thing I am going to try is finding something else God-shaped and plugging the gaping hole in my suddenly crashing worldview. Instead, I find it easier and more successful to chip parts out of the map and replace them with better chips. I'm not in a hurry and I'd rather see things replaced with Correct stuff instead of merely Better stuff.

I am not trying to say your advice is invalid, but I know just enough of myself to see red flags popping up all over the place. It is possible my red-flagger is completely whacked, but if this is the case I should start working on my red-flagger.
0[anonymous]

Someone upvoted this already? It hasn't been up for more than a minute. Do people here really read and process that quickly?

EDIT: Wait, I just checked the timestamps. My internal clock apparently has issues. It looks like it was about 4 minutes.

0[anonymous]

holds breath
