Rationalization is Superior to Rationality
Philosophy and the practice of Bayesian statistics
This is a 2012 paper by Andrew Gelman and Cosma Rohilla Shalizi on what they view as a misuse of Bayesian statistics in scientific reasoning. I found this interesting because their definition of hypothetico-deductivism closely matches Eliezer Yudkowsky's definition of rationalization, and their definition of inductive inference closely matches his definition of rationality. The definitions:
Eliezer Yudkowsky:
Rationality - Starting from evidence, and then crunching probability flows, in order to output a probable conclusion.
Rationalization - Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion.
Andrew Gelman and Cosma Rohilla Shalizi:
Inductive Inference - An accretion of evidence is summarized by a posterior distribution, and scientific process is associated with the rise and fall in the posterior probabilities of various models.
Hypothetico-Deductivism - Scientists devise hypotheses, deduce implications for observations from them, and test those implications. Scientific hypotheses can be rejected (i.e., falsified), but never really established or accepted in the same way.
Now, what's interesting about the paper is that, in contrast to Eliezer Yudkowsky's view, they argue that rationalization (hypothetico-deductivism) is the correct analytic method, and that rationality, as Eliezer Yudkowsky defined it, is wrong. They make the following argument:
Social-scientific data analysis is especially salient for our purposes because there is general agreement that, in this domain, all models in use are wrong – not merely falsifiable, but actually false. With enough data – and often only a fairly moderate amount – any analyst could reject any model now in use to any desired level of confidence. Model fitting is nonetheless a valuable activity, and indeed the crux of data analysis. To understand why this is so, we need to examine how models are built, fitted, used and checked, and the effects of misspecification on models.
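Their claim that "any analyst could reject any model now in use to any desired level of confidence" given enough data can be illustrated with a toy simulation (a hypothetical sketch of my own, not from the paper): the data are drawn from a distribution whose true mean is 0.01, while the model under test asserts the mean is exactly 0. The model is only slightly false, yet a simple z-test rejects it ever more decisively as the sample size grows.

```python
import math
import random

random.seed(0)

def z_test_pvalue(n, true_mean=0.01):
    """Two-sided z-test of H0: mean = 0, on n draws from N(true_mean, 1).

    The model 'mean is exactly 0' is slightly false, so with enough
    data the test rejects it at any desired confidence level."""
    xs = [random.gauss(true_mean, 1) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)  # sample mean scaled by sqrt(n); known variance 1
    # p-value from the standard normal CDF, written via math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

for n in (100, 10_000, 1_000_000):
    print(n, z_test_pvalue(n))
```

With a hundred observations the misspecification is invisible; with a million, the p-value is effectively zero, even though the model is off by only a hundredth of a standard deviation.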
They also argue Popper made multiple errors, but that his fundamental view is closer to correct than Kuhn's, and that correct science is about attempting to falsify hypotheses. They simply disagree with how Popper went about doing it.
Another interesting issue to me is that if you look at the main post Against Rationalization, Adirian and Vladimir_Nesov both suggested that both forms of analysis are acceptable, but TheAncientGeek was the only one who argued for rationalization over rationality, and his comment received multiple downvotes. This concept also appears to me to be central to many parts of the Sequences. Andrew Gelman and Eliezer Yudkowsky had a bloggingheads.tv conversation together, ~~but I'm not sure if this particular topic ever came up.~~
Thoughts?
Edit - Andrew Gelman and Eliezer Yudkowsky discuss this issue at the end of the bloggingheads video. Click on "The difference between Eliezer and Nassim" for their take. I also fixed a link.
In Defense of the Fundamental Attribution Error
The Fundamental Attribution Error
Also known, more accurately, as "Correspondence Bias."
http://lesswrong.com/lw/hz/correspondence_bias/
The "more accurately" part is pretty important; bias -may- result in error, but need not -necessarily- do so, and in some cases may result in reduced error.
A Simple Example
Suppose I write a stupid article that makes no sense and rambles on without any coherent point. There might be a situational cause of this; maybe I'm tired. Correcting for correspondence bias means that more weight should be given to the situational explanation than the dispositional explanation, that I'm the sort of person who writes stupid articles that ramble on. The question becomes, however, whether or not this increases the accuracy of your assessment of me; does correcting for this bias make you, in fact, less wrong?
In this specific case, no, it doesn't. A person who belongs to the class of people who write stupid articles is more likely to write stupid articles than a person who doesn't belong to that class - I'd be surprised if I ever saw Gwern write anything that wasn't well-considered, well-structured, and well-cited. If somebody like Gwern or Eliezer wrote a really stupid article, we have sufficient evidence that he's not a member of that class of people, so the dispositional conclusion would be a poor one; the situational explanation is better - he's having some kind of off day. However, given an arbitrary stupid article written by somebody for whom we have no prior information, the distribution is substantially different. Our prior that "Article is bad" implies "the author is a bad writer of articles" is very different for a randomly chosen person X than for a well-known article author Y.
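The dependence on priors here can be made explicit with a small Bayes'-rule sketch. The priors and likelihoods below are made-up illustrative numbers, not measurements; the point is only that the same bad-article evidence yields opposite conclusions under different priors.

```python
def posterior_bad_writer(prior_bad,
                         p_bad_article_given_bad=0.8,
                         p_bad_article_given_good=0.1):
    """P(bad writer | one bad article), by Bayes' rule.

    Likelihoods are illustrative assumptions: bad writers produce
    bad articles 80% of the time, good writers 10% of the time."""
    num = p_bad_article_given_bad * prior_bad
    den = num + p_bad_article_given_good * (1 - prior_bad)
    return num / den

# Unknown author: a fairly uninformative prior on being a bad writer.
print(posterior_bad_writer(prior_bad=0.3))
# Well-known, consistently good author: a tiny prior.
print(posterior_bad_writer(prior_bad=0.02))
```

With the uninformative prior, one bad article pushes the posterior past 0.5 - the dispositional judgment is the better bet. With the strong prior built from a known track record, the posterior stays low, and the situational explanation wins.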
Getting to the Point
The FAE is putting emphasis on internal factors rather than external. It's jumping first to the conclusion that somebody who just swerved is a bad driver, rather than first considering the possibility that there was an object in the road they were avoiding, given only the evidence that they swerved. Whether or not the FAE is an error - whether it is more wrong - depends on whether or not the conclusion you jumped to was correct, and more importantly, whether, on average, that conclusion would be correct.
It's very easy to produce studies in which the FAE results in people making incorrect judgements. This is not, however, the same as the FAE resulting in an average of more incorrect judgements in the real world.
Correspondence Bias as Internal Rationalization
I'd suggest the major issue with correspondence bias is not, as commonly presented, incorrectly interpreting the behavior of other people - rather, the major issue is with incorrectly interpreting your own behavior. The error is not in how you interpret other peoples' behaviors, but in how you interpret your own.
Turning to Eliezer's example in the linked article, if you find yourself kicking vending machines, maybe the answer is that -you- are a naturally angry person, or, as I would prefer to phrase it, you have poor self-control. The "floating history" Eliezer refers to sounds more to me like rationalizations for poor behavior than anything approaching "good" reasons for expressing your anger through violence directed at inanimate objects. I noticed -many- of those rationalizations cropping up when I quit smoking - "Oh, I'm having a terrible day, I could just have one cigarette to take the edge off." I don't walk by a smoker and assume they had a terrible day, however, because those were -excuses- for a behavior that I shouldn't be engaging in.
It's possible, of course, that Eliezer's example was simply a poorly chosen one; the examples in studies certainly seem better, such as assuming the authors of articles held the positions they wrote about. But the examples used in those studies are also extraordinarily artificial, at least in individualistic countries, where it's assumed, and generally true, that people writing articles do have the freedom to write what they agree with, and infringements of this (say, in the context of a newspaper asking a columnist to change a review to be less hostile to an advertiser) are regarded very harshly.
Collectivist versus Individualist Countries
There's been some research done, comparing collectivist societies to individualist societies; collectivist societies don't present the same level of effect from the correspondence bias. A point to consider, however, is that in collectivist societies, the artificial scenarios used in studies are more "natural" - it's part of their society to adjust themselves to the circumstances, whereas individualist societies see circumstance as something that should be adapted to the individual. It's -not- an infringement, or unexpected, for the state-owned newspaper to require everything written to be pro-state.
Maybe the differing levels of effect are less a matter of "collectivist societies are more sensitive to environment" than of the heuristic being accurately calibrated in both cultures - just calibrated to different test cases.
Conclusion
I don't have anything conclusive to say here, merely a position: the Correspondence Bias is a bias that, on the whole, helps people arrive at more accurate, rather than less accurate, conclusions, and it should be corrected with care toward improving accuracy and correctness, rather than toward the mere elimination of bias.
How Not to Make Money
Sarcastic Practical Advice Series, #1: How Not to Make Money
I'm calling this a series because I would like it to be a series; feel free to write your own post on "how not to do something many people want to do" - especially you, future me.
I'm very good at not making money, and maybe this is a skill you have found yourself needing to perfect.
But worry not. Stop rationalizing! I'll teach you some of the craft before you can say all the palindromes in the Finnish language.
(1) Be one of those people who actually turn knowledge - general knowledge - into personally designed actions/policies. The kind of person who, upon learning that driving is more dangerous than being attacked by spiders, while experiencing the first-person evolved fear of spiders, understands that he should be as afraid of driving badly as he is of spiders, or much more so, and drives accordingly.
(2) Understand that there is no metaphysical Self, only a virtual center of narrative gravity (Read Dennett), whose manner of discounting time is hyperbolic (Read George Ainslie), weirdly self-representative (Read GEB), and basically a mess.
(3) Read Reasons and Persons, by Parfit, and really give up on your naïve intuitions about personal identity over time. Using (1), act accordingly, i.e. screw future retired you.
(4) Go through a university program in the humanities, so no one tempts you by throwing money at you after you graduate - this has happened to an academically oriented friend of mine who graduated as a Medical Doctor but actually wanted to be in the lab playing with brains. If you can make it into Greek Mythology or Iranian Literature, good for you; Philosophy is ok, as are the social sciences, as long as you do theory and don't get into politics or institutional design later on. If you go into psychology, you are dangerously near Human Resources, so be sure you are doing it for the reasons Pinker would do it: because you want to understand our internal computer, not to treat people.
(5) Have some cash: This seems obvious, but it's worth remembering - if you are a machine that discounts hyperbolically, you'd better be safe for the next two months.
(6) Study the research on happiness and money: Money doesn't buy happiness, and when it does, it's by buying things for others, regardless of price. Giving a bike, a Porsche, or a Starbucks coffee to your friends provides you the same amount of fuzzies. Using (1), act accordingly.
(7) Be curious: If you are the kind of person who knows by heart that the Finnish language is unusually prone to palindromes, you are well on your way to not making money. If you get really excited about space, good for you. If you are so moved by curiosity that you can't sleep before you finally figure something out, worry not - money ain't coming your way. Don't forget all those really cool books you want to read.
(8) Avoid being anhedonic: Anhedonia is one of the great enemies of those who don't want to make money. If everything feels more or less the same to you, there is great incentive to go after the gold: it won't harm you much, and it will afford you the number-one value of the anhedonic - a false sense of security, and the illusion that happiness lies somewhere ahead of you in the future. If you can be thrilled or excited by the latest Adam Sandler movie, if a double rainbow will make you cry like a baby even in a video, and if you watch this sax video with a young, healthy, fertile female more than once because it's a good video, rest assured, you'll be fine.
(9) What do you care what other people think?:
Feynman nailed this aspect of the no-money-making business. You may not have noticed, but everyone, especially your family, thinks you should make money. Graham says:
All parents tend to be more conservative for their kids than they would for themselves, simply because, as parents, they share risks more than rewards. If your eight year old son decides to climb a tall tree, or your teenage daughter decides to date the local bad boy, you won’t get a share in the excitement, but if your son falls, or your daughter gets pregnant, you’ll have to deal with the consequences. - How to do what you love.
It's not just parents; everyone gets more shares of your money than of your excitement. If this were not the case, Effective Altruists would be advocating roller coasters and volcano lairs with cat people, not high-income careers.
(10) Couchsurf and meet couchsurfers and world travelers: If you never did it, go around couchsurfing for a while. As it happens, due to many factors, travelling all the time - a dream of the majority - is cheaper than staying in one spot. Meeting world travelers like Mac Madison (1), Puneet Sahani (2), Frederico Balbiani (3), and Rand Hunt (4) made me realize, respectively, that: (1) it's possible to travel two-thirds of the time as a CS major; (2) Indian citizenship and zero money won't stop you; (3) not speaking English, or not wanting to work in the field of your degrees, doesn't stop you; (4) spending 90 dollars in 100 days is possible. You'll feel much less pressure to make money after meeting similar people and being one of them.
(11) Don't experience status anxiety: The world suffers from an intense affliction; Alain de Botton named it Status Anxiety. You are not just richer than most people alive today - you are unimaginably, unbelievably wealthy (in terms of resources you can use) in comparison to everyone that ever lived. But the point is, the less time you spend comparing, regardless of whom you are comparing with, the happier you feel.
(12) Be persuadable by intellectuals outside traditional science, like De Botton and Alan Watts, but not by really terrible, The Secret-style self-help.
(13) Consider money over-valued: In economics, the price of a thing is determined by its supply and demand. The interesting thing is that demand is not measured by how many people want something, and how badly, but by this multiplied by each person's wealth. If so many (wealthy) people value Rolex watches, they will be overpriced for you, especially if they are paying in luck, inheritance, or interest, and you are paying in work (though both use money as a medium).
Money is a medium of trade, how could it be over-valued?
Simple: there are many other mediums of trade (being nice, becoming more attractive, being a good listener, being in the "right place at the right time", knowledge, enviable skills, prestige, dominance, strength, signaling, risk - i.e. stealing, Vegas, or bitcoin - sex, time, energy). If you think these items are cheaper than money, you go for them as your medium of trade. And indeed they are cheaper than money, because everyone knows that money is valuable, and nearly no one has thought consciously about the trade value of those things.
(14) Fake it till you don't make it: My final advice would be to try out not spending money. Do it for a month (I did it for two); set a personal, unbearably low barrier according to your standards. Dine before going to dinner with friends - arriving by bike, of course. Carry water instead of buying it. Decline any social activity that would be somewhat costly and substitute for it some personal project, internet download, or analogous near-free alternative. Exercise outside, not in the gym. Take notes on how good your days were; you may find out, as Kingsley did, that "We act as though comfort and luxury were the chief requirements of life, when all that we need to make us happy is something to be enthusiastic about." Furthermore, with Barry Schwartz, you may find out that less is more: when you have fewer options of what to do, you gain not only happiness but extra capacity to use your psychological attention to actually do what you want to do. Do as Obama did - save your precious share of mindspace.
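The hyperbolic discounting mentioned in (2) and (5) can be sketched with Ainslie-style value curves, V = A / (1 + kD): a smaller-sooner reward beats a larger-later one when both are close, and loses when both are far away, which is exactly why present-you keeps screwing future retired you. The amounts, delays, and discount rate k below are made-up illustrative numbers.

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Ainslie-style hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

def prefers_smaller_sooner(t):
    """At t time-units before the small reward: $50 then, versus
    $100 five units later. Hypothetical amounts and k, for illustration."""
    small = hyperbolic_value(50, t)
    large = hyperbolic_value(100, t + 5)
    return small > large

print(prefers_smaller_sooner(0.1))   # right before the small reward: impulsive
print(prefers_smaller_sooner(10.0))  # far in advance: patient
```

Evaluated far in advance, the larger-later reward wins; right before the smaller reward becomes available, the preference reverses. An exponential discounter would never show this reversal, which is why the hyperbolic curve is the signature of the "mess" in (2).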
There, I hope you feel more fully equipped not to make money, should you ever need this hard earned, practical life-skill. You’re welcome.
Motivated skepticism: it's harder to avoid than I'd have thought
Yesterday I caught myself rationalizing. It was the first time I caught myself rationalizing before I had finished (verbalizing) the thought. But I finished the thought anyway, and even though I knew it was rationalization from the very start, I ended up believing it.
The original question is not very interesting, but here it is to illustrate the issue: I was talking with a friend about US TV series, and he mentioned that his wife insists on watching movies and shows in English without subtitles, to improve their English skills. (I also started watching American shows many, many years ago with the purpose of improving my language skills - in English, with English subtitles. And it did help immensely, but I never got rid of the subtitles even though originally I was planning to.)
So, my thought-out answer was the following: "How does she know that this helps more than watching with subtitles? Did she measure it somehow? Watching without subtitles mainly helps with listening comprehension, and I'm already good enough at that. Using subtitles, on the other hand, is always an opportunity to improve on the more obscure parts of English vocabulary. In almost every episode there are one or two very rarely used words - English vocabulary is just so enormous - and without subtitles you'd just skip over them..." Fully verbalized, it was something like this.
Now, the first part of that argument is a "fully general counterargument". And the rest of it, though it might be plausible, should be treated with great suspicion, since I know its roots are in motivated skepticism. At least my realization that this was rationalization stopped me from using the argument. But it is still quite hard to "unbelieve" it. Maybe I should have tried to stop myself from finishing the thought once I realized its nature? Or do you just have to pay even more attention, and it comes with practice? Of course, this is a trivial issue, completely unimportant. But why would I think it will be easier when the issue is important?