## [Link] Rationalization is Superior to Rationality

1 18 December 2015 08:30PM

Philosophy and the practice of Bayesian statistics

This is a 2012 paper by Andrew Gelman and Cosma Rohilla Shalizi on what they view as a misuse of Bayesian statistics in scientific reasoning. I found this interesting because their definition of hypothetico-deductivism closely matches up with Eliezer Yudkowsky's definition of rationalization, and their definition of inductive inference closely matches up with his definition of rationality. The definitions:

Eliezer Yudkowsky:

Rationality - Starting from evidence, and then crunching probability flows, in order to output a probable conclusion.

Rationalization - Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion.

Andrew Gelman and Cosma Rohilla Shalizi:

Inductive Inference - An accretion of evidence is summarized by a posterior distribution, and scientific process is associated with the rise and fall in the posterior probabilities of various models.

Hypothetico-Deductivism - Scientists devise hypotheses, deduce implications for observations from them, and test those implications. Scientific hypotheses can be rejected (i.e., falsified), but never really established or accepted in the same way.
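To make the first definition concrete, here is a minimal sketch (my own toy example, not from the paper) of inductive inference in the Gelman–Shalizi sense: evidence accretes, and a posterior distribution tracks the rise and fall of candidate models. The three coin-bias "models" and the flip data are invented for illustration.

```python
flips = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]  # made-up data: 7 heads in 10 tosses
models = {"fair (p=0.5)": 0.5, "biased (p=0.6)": 0.6, "biased (p=0.7)": 0.7}
posterior = {name: 1.0 / len(models) for name in models}  # uniform prior

for heads in flips:
    # Bayes' rule: weight each model's posterior by the likelihood of this flip
    for name, p in models.items():
        posterior[name] *= p if heads else (1.0 - p)
    # renormalize so the posteriors again sum to 1
    total = sum(posterior.values())
    posterior = {name: w / total for name, w in posterior.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # the p=0.7 model leads on this data
```

As the flips accumulate, the posterior mass flows toward the model that best matches the observed frequency; on this particular sequence the p=0.7 model ends up in the lead, but no model is ever "accepted" outright, only weighted.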

Now, what's interesting about the paper is that, in contrast to Eliezer Yudkowsky's view, they argue that rationalization (hypothetico-deductivism) is the correct analytic method, and that rationality as Eliezer Yudkowsky defined it is wrong. They make the following argument:

Social-scientific data analysis is especially salient for our purposes because there is general agreement that, in this domain, all models in use are wrong – not merely falsifiable, but actually false. With enough data – and often only a fairly moderate amount – any analyst could reject any model now in use to any desired level of confidence. Model fitting is nonetheless a valuable activity, and indeed the crux of data analysis. To understand why this is so, we need to examine how models are built, fitted, used and checked, and the effects of misspecification on models.
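Their claim that "with enough data ... any analyst could reject any model now in use" can be made quantitative with a small back-of-the-envelope calculation (my own numbers, not theirs): when a model is even slightly misspecified, the expected evidence against it, measured by the log Bayes factor, grows linearly with sample size, so any rejection threshold is eventually crossed.

```python
import math

# Truth: a coin with p = 0.52; the model in use says p = 0.50 ("nearly right").
true_p, model_p = 0.52, 0.50

# Expected log Bayes factor (truth over model) per observation is the
# Kullback-Leibler divergence between the two Bernoulli distributions.
kl = (true_p * math.log(true_p / model_p)
      + (1 - true_p) * math.log((1 - true_p) / (1 - model_p)))

for n in (100, 10_000, 1_000_000):
    # Expected evidence against the misspecified model grows linearly in n.
    print(n, round(n * kl, 2))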

They also argue Popper made multiple errors, but that his fundamental view is closer to correct than Kuhn's, and that correct science is about attempting to falsify hypotheses. They simply disagree with how Popper went about doing it.

Another interesting issue to me is that if you look at the main post Against Rationalization, Adirian and Vladimir_Nesov both suggested that the two forms of analysis are acceptable, but TheAncientGeek was the only one who argued for rationalization over rationality, and his comment received multiple downvotes. This concept also appears to me to have been central to many parts of the sequences. Andrew Gelman and Eliezer Yudkowsky had a bloggingheads.tv conversation together, ~~but I'm not sure if this particular topic ever came up.~~

Thoughts?

Edit - Andrew Gelman and Eliezer Yudkowsky discuss this issue at the end of the bloggingheads video.  Click on "The difference between Eliezer and Nassim" for their take.  I also fixed a link.

## I believe it's doublethink

23 21 February 2012 10:30PM

This is my attempt to provide examples and a summarised view of the posts on "Against Doublethink" on the page How To Actually Change Your Mind.

### What You Should Believe

Let's assume I am sitting down with my friend John and we each have incomplete and potentially inaccurate maps of a local mountain. When John says "My map has a bridge at grid reference 234567", I should add a note to my map saying "John's map has a bridge at grid reference 234567", *not* actually add the bridge to my map.

The same is true of beliefs. If Sarah tells me "the sky is green" I should, assuming she is not lying, add to my set of beliefs "Sarah believes the sky is green". What happens too often is that we directly add "The sky is green" to our beliefs. It is an overactive optimisation that works in most cases but causes occasional problems.

Taking the analogy a step further we can decide to question John about why he has drawn the bridge on his map. Then, depending on the reason, we can choose to draw the bridge on our map or not.

We can give our beliefs the same treatment. Upon asking Sarah why she believed the sky is green, if she said "someone told me" and couldn't provide further information I wouldn't choose to believe it. If, however, she said "I have seen it for myself" then I may choose to believe it, depending on my priors.
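The phrase "depending on my priors" can be unpacked with Bayes' rule. The sketch below (the reliability numbers are my own invention, not from the post) shows why hearsay barely moves a strong prior against a green sky, while a first-hand report moves it a lot.

```python
def posterior_given_testimony(prior, p_say_if_true, p_say_if_false):
    """P(claim is true | Sarah asserts it), via Bayes' rule."""
    numerator = prior * p_say_if_true
    return numerator / (numerator + (1 - prior) * p_say_if_false)

# "Someone told me": hearsay is almost as likely to reach me whether or not
# the sky is green, so the likelihood ratio is weak.
weak = posterior_given_testimony(prior=0.001, p_say_if_true=0.6,
                                 p_say_if_false=0.05)

# "I have seen it for myself": a sincere eyewitness report is far less likely
# if the claim is false, so the same prior moves much further.
strong = posterior_given_testimony(prior=0.001, p_say_if_true=0.9,
                                   p_say_if_false=0.001)

print(round(weak, 4), round(strong, 4))
```

With these illustrative numbers, hearsay leaves me around 1% confident while the eyewitness report pushes me near 50%; the same testimony against a less extreme prior would, of course, land somewhere else entirely.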

### I Believe You Believe

The curious case is when someone says "I believe X". This can be meant a few ways:

1. I have low confidence in this belief. e.g. "I believe that my friend Bob's eyes are hazel, but I'm not sure".
2. I have this belief but have reasons to think you won't share it. e.g. "I believe she is attractive".
3. I have the fact 'I believe the sky is green' in my mental model of the world. e.g. "I believe god exists."

The first case I do not have a problem with. It means your probability distribution has not yet shown a clear winner, but you are giving me the answer that is in the lead at the moment. In this case I should add a note saying "John believes there is a bridge here, but he is not very confident in the belief".

I don't have a problem with the second case either. I can have the belief "Angelina Jolie is attractive", someone else can lack that belief, and we can both be rational. This is because we are using different criteria for attractiveness. Switching to a consistent definition of attractiveness dissolves the problem: the phrase "Angelina Jolie is regularly voted into the top 100 most attractive people in the world" doesn't require the prefix 'I believe...'.

The last case is even more curious. Let's assume that John (from our first example) says "I believe there is a bridge at grid reference 234567" but means it in the third sense. I should add a note to my map saying "John has the following note on his map: 'I believe there is a bridge at grid reference 234567'". You would hope that the reason he has that note is that there is actually a bridge on his map. Unfortunately, people are not that rational. You can have a cached belief that says "I believe X" even if you do not have "X" as a belief. By querying why they hold that belief, you should be able to work out whether you should believe it, or even whether they should.

To use the example from religion you can have the belief "I believe god exists" even if you do not have the belief "god exists".

### Recommendations

I'm going to put myself on the line and give some recommendations:

1. When we are told or recite a fact, try to remember why it was added. The reason will often be poor.
2. When telling others facts, tell them the reason you believe it, e.g. say "I think there is a bridge here because I overheard someone talking about it". This should help you weed out cached beliefs in yourself and give the other person a better basis for updating their own beliefs.
3. When being told something, ask them why they have the belief. It also helps if you recite it back to them as if you are trying to understand, for example: "I see. You think there is a bridge here. Why do you think that?".
4. When we hear "I believe" or "I think" try to classify the statement as one of the three options above.

## Cancer scientist meets amateur (This American Life)

1 15 November 2011 01:59AM

This American Life episode 450: "So Crazy It Just Might Work". The whole episode is good, but act one (6:48-42:27), about a trained scientist teaming up with an amateur on a cancer cure, is the part relevant to LW.

(Technical nit: It sounds to me like the reporter doesn't know the difference between sound and electromagnetism.)

## Do we have it too easy?

27 02 November 2011 01:53AM

I am worried that I have it too easy. I recently discovered LessWrong for myself, and it feels very exciting and very important and I am learning a lot, but how do I know that I am really on the right track? I have some achievements to show, but there are some worrisome signs too.

I need some background to explain what I mean. I was raised in an atheist/agnostic family, but some time in my early teens I gave a mysterious answer to a mysterious question, and... And long story short, influenced by everything I was reading at the time I became a theist. I wasn't religious in the sense that I never followed any established religion, but I had my own "theological model" (heavily influenced by theosophy and other western interpretations of eastern religions). I believed in god, and it was a very important part of my life (around the end of high school, beginning of college I started talking about it with my friends and was quite open and proud about it).

Snip 15-20 years. This summer I started lurking on LessWrong, reading mostly Eliezer's sequences. One morning, walking to the train station, thinking about something I read, my thoughts wandered to how all this affects my faith. And I noticed myself flinching away, and thought: Isn't this what Eliezer calls "flinching away"? I didn't resolve my doubts there and then, but there was no turning back, and a couple of days later I was an atheist. This is my first "achievement". The second is: when I got to the "free will" sequence, I stopped before reading any spoilers, gave myself a weekend, and I figured it out! (Not perfectly, but at least one part I figured out very clearly, and got important insights into the other part.) Which I would never have thought I would be able to do, because as it happens, this was the original mysterious question on which I got so confused as a teenager. (Another, smaller "achievement" I documented here.)

Maybe these are not too impressive, but they are not completely trivial either (actually, I am a bit proud of myself :)). But I get a distinct feeling that something is off. Take the atheism: I think one of the reasons I so easily let go of my precious belief was that I had something to replace it with. And this is very, very scary: I sometimes get the same feeling of amazing discovery reading Eliezer as when I was 13, and my mind just accepts it all unconditionally! I have to constantly remind myself that this is not what I should do with it!

Do not misunderstand, I am not afraid of becoming a part of some cult. (I have had some experience with more or less strongly cultish groups, and I didn't have a hard time seeing through them and not falling for them. So, I am not afraid. Maybe foolishly.)  What I am afraid of is that I will make the same mistake on a different level: I won't actually change my mind, won't learn what really matters. Because even if everything I read here turns out to be 100% accurate, it would be a mistake to "believe in it". Because as soon as I get to a real-world problem, I will just go astray again.

This comment is the closest I have seen here on LessWrong to my concerns. It also sheds some light on why this is happening. Eliezer describes the experience vividly enough that afterwards my mind behaves as if I had the experience too. Which is, of course, the whole point, but also one source of this problem. Because I didn't have the experience, it wasn't me who thought it through, so I don't have it in my bones. I will need much more to make the technique/conclusion a part of myself (and a lot of critical thinking, or else I am worse off and not better).  And no, Eliezer, I don't know how to make it less dark either.  Other than what is already quite clear: we have to be tested on our rationality. The skills have to be tested, or one won't be able to use them properly.  The "free will" challenge is very good, but only if one takes it. (I took it, because it was a crucial question for me.) And not everything can be tested like this. And it's not enough.

So, my question to more experienced LessWrongers: how did you cope with this (if you had such worries)?  Or am I even right on this (do I "worry" in the right direction)?

(Oh, and also, is this content appropriate for a "Main" post? Now that I have enough precious karma. :))

## "The True Rejection Challenge" - Thread 2

7 02 July 2011 11:49AM

The old thread (found here: http://lesswrong.com/lw/6dc/the_true_rejection_challenge/ ) was becoming very unwieldy and hard to check, so many people suggested we make a second one. I just realized that the only reason it didn't exist yet was something like the bystander effect, so I decided to just do this one.

An exercise:

Name something that you do not do but should/wish you did/are told you ought, or that you do less than is normally recommended.  (For instance, "exercise" or "eat vegetables".)

Make an exhaustive list of your sufficient conditions for avoiding this thing.  (If you suspect that your list may be non-exhaustive, mention that in your comment.)

Precommit that: If someone comes up with a way to do the thing which doesn't have any of your listed problems, you will at least try it.  It counts if you come up with this response yourself upon making your list.

(Based on: Is That Your True Rejection?)

Edit to add: Kindly stick to the spirit of the exercise; if you have no advice in line with the exercise, this is not the place to offer it.  Do not drift into confrontational or abusive demands that people adjust their restrictions to suit your cached suggestion, and do not offer unsolicited other-optimizing.

## List of compartmentalized people (who both win and fail at truth-seeking)

-9 13 May 2011 04:14PM

Following up on an impromptu list XiXiDu made of famous recent scientists & thinkers who also held quite odd beliefs, I've created a wiki article with that list & a few other people.

This Discussion is posted for feedback on a few points:

1. Is this a good idea in the first place? I feel vaguely uneasy, like it could be taken as a 'hit list' or a list of inviolable norms.
2. What's a better name? 'Irrationalists' is a bad name but the only half-way self-explanatory one I could think of at the moment.
3. Who's missing? There are only 8 people on the list right now.
4. Is it reasonable to limit the list temporally only to people who lived in the 20th century & later, and so had access to all the data and philosophy done then that we take for granted?
5. I added in a few 'See Alsos' that I could think of; are there more germane wiki articles? Especially LW articles? (I know Aumann in particular has been discussed occasionally by Eliezer - worth linking directly?)

15 26 February 2011 09:57AM

http://theferrett.livejournal.com/1587858.html

Excerpt:

We live in a culture so bound by what most people are willing to do that we often take them as hard limits - "I can't do more than that," we say. "I've done the best I can." But it really isn't. It's just the best we're willing to do for right then.

When I was running and got my side-stitch, I really thought that I'd put 100% into it. But the truth was that I hated running, and I hated exercise, and I was putting maybe 20% of myself into it. If I was being chased by a bear, suddenly I'd find new reserves within me. And though I hated math homework, and thought that the grudging half an hour I did was really balls-out for math homework, I'd forget how many hours I'd spend memorizing Pac-Man patterns.

After that, I realized where my real limits were - they were way up there. And maybe I could stop telling myself and others that I did my best. I didn't. Not even close. I did what I thought was reasonable.

Sometimes you don't want reasonable.

The thing about it is that you don't have to feel guilty about not giving it your all, all the time. That'd be crazy. If you started panhandling your friends to see the latest Rush concert, you'd be a mooch. But what's important is not to conflate "a reasonable effort" with the top end. Be honest. Know what percentage you're actually willing to give, and acknowledge that if it was that critical, you could do a lot of other, very creative, things to solve this problem. I don't ask you guys for money because I find it distasteful - but when my sister-in-law's life was at stake and I didn't have the cash, you bet your ass I begged.