Comment author: lisper 10 February 2016 12:05:10AM -1 points

when the stories found in the Bible were first told, were they claims of truth or mostly persuasion tricks?

I have no idea. Things were so vastly different back then I can't possibly even mount an educated guess about that. What difference does it make how it started? Today, at least in the U.S., I think it's a defensible hypothesis that what people call "spiritual experiences" are largely about community and shared subjective experience.

spirituality also claims to have insight into some factual matters (history, for example) and moral dilemmas.

Sure, but that's not the subject I'm addressing. The subject I'm addressing is the belief that many people in the rational community seem to hold (Dawkins being the most prominent example) that the only possible reason anyone could even profess to believe in God is because they are an idiot.

this doesn't look at all like the spirituality found in the world around us.

Yes, that's mostly true (though I am personally acquainted with a number of people who profess to believe in God but who otherwise seem perfectly rational). I'm not saying that the conclusions reached by religious people are correct. I'm simply advancing the hypothesis that religious people reach the conclusions that they do in part because they have different subjective experiences than non-religious people.

Comment author: TheMajor 10 February 2016 12:26:05AM 1 point

Could you taboo 'are [...] about' in your "what people call "spiritual experiences" are largely about community and shared subjective experience."?

Also your main point, that religious people reach their conclusions partly because they have experienced different things than non-religious people, is simply true. But why would you write a long metaphor-riddled piece about this, and give it the clickbait title "Is Spirituality Irrational?". And even with this formulation there is still some Motte-and-Bailey going on if you intend to reconcile spirituality and rationality - just because different experiences were a contributing factor to accepting spirituality does not strongly support that spirituality and rationality can go hand-in-hand. Most importantly your final claim doesn't seem to help in answering my 'core conflict' above.

Comment author: lisper 09 February 2016 05:53:12PM 3 points

Do you truly think that most of spirituality is an attempt to communicate a feeling of belonging that one gets also when giving up after being bullied for a week? And that this feeling is both incommunicable and easily induced with some practice (you give meditation as an example)?

That's a little bit of an oversimplified caricature, but yes, I do more or less believe that this is true. Moreover, I think there is evidence to support this position beyond just the intuitive argument I've presented here. The idea that religion evolved as a way of maintaining social cohesion is hardly original with me. I'm frankly a little bit surprised that I'm getting pushback on this; I had assumed this was common knowledge.

Comment author: TheMajor 09 February 2016 10:57:35PM 1 point

The strong part of the claim is not "There exists a feeling of belonging, and religion is particularly good at inducing it" or even "Religion is among the best if not outright the very best method for maintaining social cohesion", which as you say are not claims that I think would receive a lot of pushback (here, at least). The strong part is "Do you truly think that most of spirituality is an attempt to communicate a feeling of belonging" - i.e. when the stories found in the Bible were first told, were they claims of truth or mostly persuasion tricks?

I would accept that most of the modern function of spirituality today is to provide cohesion, but at the same time spirituality also claims to have insight into some factual matters (history, for example) and moral dilemmas. I don't see how these insights having been generated with the purpose/function of maintaining group cohesion correlates at all with their being true. I think this is the core conflict of Spirituality vs Rationality, the title of the post; not that maintaining group cohesion is irrational, but that accepting answers to factual and sometimes moral questions through dogma instead of evidence cannot be reconciled with rationality.

If there was a spirituality where all the participants acknowledged that the main purpose is group cohesion, all spoken and written text is to be interpreted as metaphors at best and, say, regular church-going makes everybody more happy all around, then I think most rationalists would be all for that. But this doesn't look at all like the spirituality found in the world around us.

Comment author: TheMajor 09 February 2016 03:43:05PM *  5 points

I have started writing a comment multiple times, only to remove what I wrote mid-sentence. I think I figured out why that is: your post is tempting us to argue against the existence of experiences that cannot be communicated (do you mean 'cannot be communicated perfectly', or 'cannot even be hinted at'? Communication is not binary), and with the sentences:

The reason I want to convince you to entertain this notion is that an awful lot of energy gets wasted by arguing against religious beliefs on logical grounds, pointing out contradictions in the Bible and whatnot. Such arguments tend to be ineffective, which can be very frustrating for those who advance them. The antidote for this frustration is to realize that spirituality is not about logic.

you attempt to ban a whole class of arguments that might well be relevant. Your post is a wonderful piece of rhetoric (although some of the analogies get stretched a bit thin), but it hardly communicates anything. Other than

people might profess to believe in God for reasons other than indoctrination or stupidity. Religious texts and rituals might be attempts to share real subjective experiences

there doesn't seem to be a single claim in the whole text. Do you truly think that most of spirituality is an attempt to communicate a feeling of belonging that one gets also when giving up after being bullied for a week? And that this feeling is both incommunicable and easily induced with some practice (you give meditation as an example)?

Comment author: TheMajor 28 January 2016 08:58:40PM *  1 point

I have seen this argument on LessWrong before, and don't think the other explanations are as clear as they can be. They are correct though, so my apologies if this just clutters up the thread.

The Bayesian way of looking at this is clear: the prior probability of any particular sequence is 1/2^[large number]. Alice sees this sequence and reports it to Bob. Presumably Alice intends on telling Bob the truth about what she saw, so let's say that there's a 90% chance that she will not make a mistake during the reporting. The other 10% will cover all cases ranging from misremembering/misreading a flip to outright lying. The point is that if Alice is lying, this 10% has to be divided up between the 2^[large number]-1 other possible sequences - if Alice is going to lie, any particular sequence is very unlikely to be presented by her as the true sequence, since there are a lot of ways for her to lie. So, assuming that Alice was intending to speak the truth, her giving that sequence is very strong (in my example 9*(2^[large number]-1):1) evidence that that particular sequence was indeed the true one over any specific other sequence - 'coincidentally' precisely strong enough to turn Bob's posterior belief that that sequence is correct into 90%.
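
To spell the arithmetic out, here is a quick Python sketch (n = 20 stands in for the "[large number]"; the 90% honesty figure is just the one assumed above):

```python
from fractions import Fraction

n = 20                       # stands in for the "[large number]" of flips
num_seqs = 2 ** n            # equally likely sequences a priori
p_honest = Fraction(9, 10)   # chance Alice reports the true sequence

# Likelihoods of Alice reporting one fixed sequence S:
#  - if S is true, she reports it with probability 9/10
#  - if some other sequence is true, the remaining 1/10 is spread
#    uniformly over the 2^n - 1 wrong reports she could make
like_true = p_honest
like_other = (1 - p_honest) / (num_seqs - 1)

prior = Fraction(1, num_seqs)          # uniform prior over sequences
evidence = prior * like_true + (num_seqs - 1) * prior * like_other
posterior = prior * like_true / evidence

print(posterior)                # 9/10, matching the honesty assumption
print(like_true / like_other)   # 9 * (2^20 - 1) = 9437175, the likelihood ratio
```

The factors of 2^n cancel exactly, which is the 'coincidence' noted above: the posterior lands back on 90% regardless of how long the sequence is.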

A fun side remark is that the above also clearly shows why Bob should be more skeptical when Alice presents sequences like HHHHHHHHHH or HTHTHTHTHTHT - if Alice were planning on lying these are exactly the sequences that she might pick with greater than uniform probability out of all the sequences that were not thrown. Therefore each possible actual sequence contributes a higher-than-average amount of probability that Alice would present one of these special sequences, so the fact that Alice informs Bob of such a sequence is weaker evidence for this particular sequence over any other one than it would be in the regular case, and Bob ends up with a lower posterior that the sequence is actually correct.
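
The same calculation makes the effect concrete; a sketch assuming, purely for illustration, that a lying Alice would report HHHHHHHHHH 1% of the time rather than uniformly:

```python
from fractions import Fraction

n = 10                       # a shorter sequence, e.g. HHHHHHHHHH
num_seqs = 2 ** n
p_honest = Fraction(9, 10)

def posterior(report_prob_if_lying):
    """Bob's posterior that the reported sequence is the true one, given
    the chance that a lying Alice would produce exactly this report
    (uniform prior over the 2^n sequences)."""
    prior = Fraction(1, num_seqs)
    evidence = (prior * p_honest
                + (num_seqs - 1) * prior * (1 - p_honest) * report_prob_if_lying)
    return prior * p_honest / evidence

# Lying Alice picks uniformly among wrong sequences: the regular case.
print(posterior(Fraction(1, num_seqs - 1)))   # 9/10

# Lying Alice reports HHHHHHHHHH 1% of the time: a much weaker update.
print(posterior(Fraction(1, 100)))            # 300/641, under 50%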

Comment author: TheMajor 23 January 2016 12:30:04PM *  2 points [-]

I am not convinced that there exists anything like aleatory uncertainty - even QM uncertainty lies in the map. Having said that I agree with your point: that this doesn't matter, and value of information is the relevant measure (which is clearly not binary).

Having read your response to Dagon I am now confused - you state that:

This is in contrast to Eliezer's point that "Uncertainty exists in the map, not in the territory"

but above you only show the orthogonal point that allowing for irresolvable uncertainty can provide useful models, regardless of the existence of such uncertainty. If this is your main point (along with introducing the standard notation used in these models), how is this a contrast with uncertainty being in the map? Lots of good models have elements that can not be found in real life, for example smooth surfaces, right angles or irreducible macroscopic building blocks.

Comment author: jbay 12 January 2016 08:07:03AM *  0 points [-]

I don't understand why there is so much resistance to the idea that stating "X with probability P(X)" also implies "~X with probability 1-P(X)". The point of assigning probabilities to a prediction is that it represents your state of belief. Both statements uniquely specify the same state of belief, so to treat them differently based on which one you wrote down is irrational. Once you accept that these are the same statement, the conclusion in my post is inevitable, the mirror symmetry of the calibration curve becomes obvious, and given that symmetry, all lines must pass through the point (0.5,0.5).
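
The mirror symmetry is easy to check numerically; a quick sketch (the simulated forecaster and the bucket edges are arbitrary illustrative choices):

```python
import random

random.seed(0)

events = []
for _ in range(100_000):
    p = random.random()             # stated probability for some claim X
    outcome = random.random() < p   # whether X happened
    events.append((p, outcome))
    # "X with probability p" implies "~X with probability 1-p":
    events.append((1 - p, not outcome))

def hit_rate(lo, hi):
    bucket = [happened for p, happened in events if lo <= p < hi]
    return sum(bucket) / len(bucket)

# Mirror buckets have complementary hit rates...
print(hit_rate(0.2, 0.3) + hit_rate(0.7, 0.8))   # ~ 1.0
# ...so the bucket straddling 50% sits exactly at 0.5:
print(hit_rate(0.45, 0.55))                      # 0.5
```

Because every recorded prediction is paired with its complement, the calibration curve is forced through (0.5, 0.5) no matter how well or badly the forecaster performs.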

Imagine the following conversation:

A: "I predict with 50% certainty that Trump will not win the nomination".

B: "So, you think there's a 50% chance that he will?"

A: "No, I didn't say that. I said there's a 50% chance that he won't."

B: "But you sort of did say it. You said the logically equivalent thing."

A: "I said the logically equivalent thing, yes, but I said one and I left the other unsaid."

B: "So if I believe there's only a 10% chance Trump will win, is there any doubt that I believe there's a 90% chance he won't?"

A: "Of course, nobody would disagree, if you said there's a 10% chance Trump will win, then you also must believe that there's a 90% chance that he won't. Unless you think there's some probability that he both will and will not win, which is absurd."

B: "So if my state of belief that there's a 10% chance of A necessarily implies I also believe a 90% chance of ~A, then what is the difference between stating one or the other?"

A: "Well, everyone agrees that makes sense for 90% and 10% confidence. It's only for 50% confidence that the rules are different and it matters which one you don't say."

B: "What about for 50.000001% and 49.999999%?"

A: "Of course, naturally, that's just like 90% and 10%."

B: "So what's magic about 50%?"

Comment author: TheMajor 12 January 2016 10:34:35PM *  0 points

I think it would be silly to resist to the idea that "X with probability P(X)" is equivalent to "~X with probability 1-P(X)". This statement is simply true.

However, it does not imply that prediction lists like this should include X and ~X as possible claims. To see this, let's consider person A who only lists "X, probability P", and person B who lists "X, probability P, and ~X, probability 1-P". Clearly these two are making the exact same claim about the future of the world. If we use an entropy rule to grade both of these people, we will find that no matter the outcome person B will have exactly twice the entropy (penalty) that person A has, so if we afterwards want to compare the results of two people, only one of whom doubled up on the predictions, there is an easy way to do it (just double the penalty for those who didn't). So far so good: everything is logically consistent, and making the same claim about the world still easily lets you compare results afterwards. Nevertheless, there are two (related) things that need to be remarked, which I think is what all the controversy is over:

1) If, instead of the correct log weight rule, we use something stupid like least-squares (or just eyeballing it per bracket), there is a significant difference between our people A and B above, precisely in their 50% predictions. For any probability assignment other than 50%, the error rates at probability P and at 1-P are related and opposite, since getting a probability-P prediction right (say, X) means getting a probability-(1-P) prediction wrong (~X). But for 50% these two get added up (with our stupid scoring rules) before being used to deduce calibration results. As a result we find that our doubler, player B, will always have exactly half of his 50% predictions right, which will score really well on stupid scoring rules (as an extreme example, to a naive scoring rule somebody who predicts 50% on every claim, regardless of logical consistency, will seem to be perfectly calibrated).

2) Once we use a good scoring rule, i.e. the log rule, we can easily jump back and forth between people who double up on the claims and those who do not, as claimed/shown above.

In view of these two points I think that all of the magic is hidden in the scoring rule, not in the procedure used when recording the predictions. In other words, this doubling up does nothing useful. And since on calibration graphs people tend to think that getting half of your 50% predictions right is really good, I say that the doubling version is actually slightly more misleading. The solution is clearly to use a proper scoring rule, and then you can do whatever you wish. But in reality it is best not to confuse your audience by artificially creating more dependencies between your predictions.
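
Point 2) can be demonstrated in a few lines; a sketch (the random probabilities and outcomes are arbitrary) showing that the doubled-up list scores exactly twice the log penalty, so the two bookkeeping conventions are interchangeable:

```python
import math
import random

random.seed(1)

def log_penalty(q):
    """Log scoring rule: penalty -log q for assigning probability q
    to the outcome that actually happened."""
    return -math.log(q)

total_a = total_b = 0.0
for _ in range(1000):
    p = random.uniform(0.01, 0.99)    # person A's probability for claim X
    x_happened = random.random() < 0.5
    q = p if x_happened else 1 - p    # probability A put on the actual outcome
    total_a += log_penalty(q)
    # Person B lists both (X, p) and (~X, 1-p); whichever way X goes,
    # both entries assign the actual outcome the same probability q.
    total_b += log_penalty(q) + log_penalty(q)

print(total_b / total_a)   # 2.0: doubling up exactly doubles the penalty
```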

Comment author: casebash 05 January 2016 11:34:44PM 0 points

"It takes massive amounts of evidence to convince me that the offers in each of your games are sincere and accurate." - Again, this only works if you assume we are modelling the real world, not perfect celestial beings with perfect knowledge. I have made no claims about whether perfect theoretical rationality can exist in theory in a world with certain "realism" constraints, just that if logic is the only constraint, perfect rationality doesn't exist in general.

Comment author: TheMajor 06 January 2016 12:25:38AM 0 points

I must admit that I am now confused about the goal of your post. The words 'perfect celestial beings with perfect knowledge' sound like they mean something, but I'm not sure we are attaching the same meaning to them. To most people 'unlimited' means something like 'more than a few thousand', i.e. really large, but for your paradoxes you need actual mathematical unboundedness (or, for the example with the 100, arbitrary accuracy). If the closest counterexample to the existence of 'rationality' is a world where beings are no longer limited by physical constraints (otherwise these would provide reasonable upper bounds on the utility?) on either side of the scale (infinitely high utility along with infinitely high accuracy, so no atoms?), where for some reason one such being goes around distributing free utils and the other has infinitely much evidence that this offer is sincere, then I'd say we're pretty safe. Or am I misunderstanding something?

I think the bottom line is that 'unbounded', instead of 'really frickin large', is a tough bar to pass and it should not carelessly be assumed in hypotheticals.

Comment author: TheMajor 05 January 2016 06:55:30PM 0 points

The whole point of assigning 50% probability to a claim is that you literally have no idea whether or not it will happen. So of course including X or ~X in any statement is going to be arbitrary. That's what 50% means.

However, this is not solved by doubling up on your predictions, since now (by construction) your predictions are very dependent. I don't understand the controversy about Scott getting 0/3 on 50% predictions - it happens to perfectly calibrated people 1 time in 8, let alone real humans. If you have a long list of statements you are 50% certain about, you have literally no reason to put one side of an issue rather than the other on your prediction list. If, however, it afterwards turns out that significantly more or fewer than half of your (arbitrarily chosen) sides were correct, you probably aren't very good at recognising when you are 50% confident (to make this more clear, imagine Scott had gotten 0/100 instead of 0/3 on his 50% predictions).
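
The 1-in-8 figure is just binomial arithmetic; a one-liner to make it concrete:

```python
from fractions import Fraction

# A perfectly calibrated forecaster gets each 50% prediction right with
# probability exactly 1/2, independently, so going 0-for-3 happens one
# time in eight even for them:
p_zero_of_3 = Fraction(1, 2) ** 3
print(p_zero_of_3)                    # 1/8

# Going 0-for-100, on the other hand, would be damning:
print(float(Fraction(1, 2) ** 100))   # ~8e-31
```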

Comment author: TheMajor 05 January 2016 06:40:44PM 13 points

How very deep. But if I'm not mistaken the original argument around Chesterton's fence is that somebody went through great effort to put a fence somewhere, and presumably would not have wasted that time if the fence were useless. In your example, "the common practice of taking down Chesterton fences", this is not the case. The general principle is to not undo that which others have worked hard to create, unless you are certain that it is useless/counterproductive. Nobody worked hard on making sure people could remove fences without understanding them (or at the very least I'm willing to claim that this is counterproductive), so this practice is not protected by the principle.

Comment author: TheMajor 05 January 2016 06:28:57PM *  3 points

I'm not convinced. It takes massive amounts of evidence to convince me that the offers in each of your games are sincere and accurate. In particular it takes an infinite amount of evidence to prove that your agents can keep handing out increasing utility/tripling/whatever. When something incredible seems to happen, follow the probability.

I'm reminded of the two-envelope game, where seemingly the player can get more and more money(/utility) by swapping envelopes back and forth. Of course the solution is clear if you assume (any!) prior on the money in the envelopes, and the same is happening if we start thinking about the powers of your game hosts.
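
The envelope point can be made concrete with a simulation; a sketch assuming, purely for illustration, a uniform prior on the smaller amount:

```python
import random

random.seed(2)

trials = 100_000
keep = swap = 0.0
for _ in range(trials):
    small = random.uniform(1, 100)   # assumed prior on the smaller amount
    envelopes = [small, 2 * small]
    random.shuffle(envelopes)        # you are handed one at random
    keep += envelopes[0]             # value if you keep your envelope
    swap += envelopes[1]             # value if you always swap

# Both strategies average 1.5 * E[small]; swapping gains nothing once
# the prior is made explicit.
print(keep / trials, swap / trials)
```

With any concrete prior the seeming free lunch disappears: the "other envelope has 2x or 0.5x your amount with equal probability" step is exactly what the posterior refuses to license.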
