Rationality Quotes March 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Comments (326)
-- James Dean
I find this a useful quote to keep in mind when I'm experiencing mental states that I don't want to experience.
Scott Aaronson in reply to Max Tegmark replying to Scott's review of Max's book. He goes on:
(Emphasis mine.)
This sounds similar to the view that is sometimes called the fragility of deduction. It was why John Stuart Mill distrusted "long chains of logical reasoning" and according to Paul Samuelson it is why "Marshall treated such chains as if their truth content was subject to radioactive decay and leakage."
And that is why the long chains of logical reasoning used in the UFAI argument should not be regarded as terminating in conclusions of near certainty or high probability.
You could say that about anything.
Maybe, but it would not be very painful in many cases. In most cases, people who put forward highly conjunctive arguments don't put them forward as urgent, near-certainties which require immediate and copious funding. Moreover, most audiences have enough common sense to treat implications as lossy.
MIRI/LW presents an unusual set of circumstances which is worth pointing out.
-Neal Stephenson, Cryptonomicon
-- Francis Bacon, Novum Organum
-- John Stuart Mill
"Therefore, this kind of experiment can never convince me of the reality of Mrs Stewart's ESP; not because I assert Pf=0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible than Hf, and none of which is ruled out by the information available to me.
Indeed, the very evidence which the ESP'ers throw at us to convince us, has the opposite effect on our state of belief; issuing reports of sensational data defeats its own purpose. For if the prior probability for deception is greater than that of ESP, then the more improbable the alleged data are on the null hypothesis of no deception and no ESP, the more strongly we are led to believe, not in ESP, but in deception. For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader. As (5.15) shows, the reader's total prior probability for deception by all mechanisms must be pushed down below that of ESP."
ET Jaynes, Probability Theory (S 5.2.2)
I found this (and the preceding bit) noteworthy on two points; first in the obvious mathematical respect that explains the relationship between favored hypotheses and less favored hypotheses which are both supported by data;
Second, by the realization that researchers favoring ESP most likely fail to apprehend the hypothesis that they are actually testing, relative to their critics' alternatives. In the case in question, they collected 37,100 predictions, which seems a little excessive considering the data had essentially no persuasive power over skeptics.
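Jaynes's point has a simple quantitative structure, which can be illustrated with a toy calculation (all numbers below are hypothetical, chosen only to show the shape of the argument, not taken from the Stewart case): when the data are astronomically improbable under the null hypothesis, the posterior mass flows to whichever remaining hypothesis had the higher prior, and the ratio between deception and ESP is fixed by their priors, no matter how many predictions are collected.

```python
# Toy illustration of Jaynes's argument (all numbers hypothetical).
# Three hypotheses: chance ("null"), genuine ESP, and deception.
priors = {"null": 0.999, "esp": 1e-9, "deception": 1e-3}

# Likelihood of the sensational hit rate under each hypothesis:
# essentially impossible by chance, unsurprising under either alternative.
likelihoods = {"null": 1e-20, "esp": 0.5, "deception": 0.5}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: unnormalized[h] / total for h in priors}

# The data rule out chance, but the belief flows to deception, not ESP,
# because deception started with the higher prior.
assert posteriors["deception"] > 0.99
assert posteriors["esp"] < 1e-4
```

This is exactly why piling up more improbable data cannot help: it only sharpens the contrast with the null hypothesis, while the deception-to-ESP ratio is untouched until the prior for deception itself is pushed down.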
-- Napoleon Bonaparte.
Context: Aang ("A") is a classic Batman's Rule (never kill) hero, as a result of his upbringing in Air Nomad culture. It appears to him that he must kill someone in order to save the world. He is the only one who can do it, because he's currently the one and only avatar. Yangchen ("Y") is the last avatar to have also been an Air Nomad, and has probably faced similar dilemmas in the past. Aang can communicate with her spirit, but she's dead and can't do things directly anymore.
The story would have been better if Aang had listened to her advice, in my opinion.
May I make a general request to people posting quotes? Please include not just the author's name but sufficient information to enable a reader to find the relevant quote. This doesn't necessarily have to be full MLA format; but a title, journal or book name if from a print source, page number or URL, and date would be helpful. Hyperlinked URLs are excellent if available but do not substitute for the rest of this information since these threads will likely outlive the location of some of the sources.
Doing so enables the reader not just to get a brief hit of rationality but to say, "Gee, that's interesting. I'd like to learn more," and read further in the source.
In fact, why don't we add a fifth bullet point to the header:
That's what archive.org is for. (Okay, it's not perfectly reliable, but...)
If you want to avoid that problem, whenever you post a link you should submit it to archive.org or archive.is.
Edwin Lyngar at Salon
I don't think certainty is an emotion in the first place. The emotion that people who are certain feel is confidence.
Even then I think it's frequently better than anger.
Confidence in oneself, or confidence in someone or something external? The two feel quite different to me, subjectively -- something like a feeling of lightness and elevation vs. groundedness and solidity, although English doesn't have a very good vocabulary for this sort of thing.
Warren Buffett
"As I fear not a child with a weapon he cannot lift, I will never fear the mind of a man who does not think."
Words of Radiance, Brandon Sanderson, page 795
Both the metaphor and its literal application only make sense if "cannot" and "does not" mean "never", and they really don't.
While I'd never fear the mind of a man who literally is in a coma and doesn't think at all, I'd have plenty of reason to fear the mind of a man whose ability to think is merely limited. He can be a stupid moral reasoner and a clever killer at the same time.
In that case, though, you're afraid of the man's axe more than his mind.
I recall one Sherlock Holmes book, where Holmes said that he had a lot of trouble predicting the actions of idiots; with an intelligent man, Holmes could work out what actions he himself would take in a given situation, but an idiot could do anything.
Of course, this presumes that one knows the goals of said intelligent agent.
Bernard and Sir Humphrey are British government functionaries in the comedy show 'Yes Minister'
Bernard: If it's our job to carry out government policies, shouldn't we believe in them?
Sir Humphrey: Oh, what an extraordinary idea! I have served 11 governments in the past 30 years. If I'd believed in all their policies, I'd have been passionately committed to keeping out of the Common Market, and passionately committed to joining it. I'd have been utterly convinced of the rightness of nationalising steel and of denationalising it and renationalising it. Capital punishment? I'd have been a fervent retentionist and an ardent abolitionist. I'd have been a Keynesian and a Friedmanite, a grammar school preserver and destroyer, a nationalisation freak and a privatisation maniac, but above all, I would have been a stark-staring raving schizophrenic!
Jane Austen, Sense and Sensibility
It's pretty remarkable to detect yourself in that kind of mistake; most people are very good at finding confirming evidence for whatever judgments they've made about people, and ignoring any contrary indications.
Yes, this is a good point - we generally don't realise that we are (self-)deceived, so we can't even begin to think about where we went wrong.
Of course, Elinor Dashwood is something of an authorial stand-in, so it's not really surprising that she's incredibly wise and perspicacious like that.
Yes. This can happen, but what you describe is more common. Once a coworker was sure that I was an ultra-right-wing militiaman, based on one indirect, misleading bit of evidence, ignoring all else.
Merely declaring my general political alignment and support and opposition to various candidates was totally inadequate. I had to explicitly enumerate several political positions to get him to adjust, and even then he seized on the nuances to try to interpret it as my secretly being a right-wing nut.
I think those mistakes usually happen for an entirely different reason. New people remind us of ones we've already met, and we unconsciously "fill in the blanks" in what we know about the new person with what we know about the person we already know, or some kind of average-ish judgement about the group of comparable people we know.
-- Sri Aurobindo (1872-1950), Savitri - A Legend and a Symbol
"Consider the people who routinely disagree with you. See how confident they look while being dead wrong? That’s exactly how you look to them." - Scott Adams
"This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution." - Daniel Kahneman
"Luck plays a large role in every story of success; it is almost always easy to identify a small change in the story that would have turned a remarkable achievement into a mediocre outcome." - Daniel Kahneman
It's from a book: Thinking Fast and Slow.
Quotes from the Screwtape Letters have not been terribly well-received in this thread. So, perversely, I decided I had to take a turn:
-- The demon Screwtape, on how best to tempt a human being to destruction.
The existence of souls notwithstanding, Screwtape is clearly right: if you are charitable to almost everybody--except for those you see every day!--then you are not practicing the virtue of charity and are ill-served to imagine otherwise. You cannot fantasize good mental habits into being; they must be acted upon.
Who does more good with their life--the person who contributes a large amount of money to efficient charities while avoiding the people nearby, or the person who ignores anyone more than 100 miles away while being nice to his mother, his employer, and the man he meets in the train?
If he actually donates the money then the charity is not constrained to fantasy. By the miracle of the world banking network, people thousands of literal miles away can be brought as close as the sphere of action. Those concentric rings are measured in frequency and impactfulness of interaction, not physical distance.
What Screwtape is advocating is that he simply intend to donate the money once Givewell publishes a truly definitive report (which they never will). Or better, that he feel great compassion for people so many steps removed that he could not possibly do anything for them (perhaps the people of North Korea, who are beyond the reach of most charities due to government interdiction).
A tricky question.
The obvious, and trivially true, answer is that he who does both does more good than either. But that's not what you asked.
So. It can be hard to compare the two options when considering the actions of a single person, since the beneficiaries of the actions do not overlap. Therefore I shall employ a simple heuristic; I shall assume that the option which does the most good when one person does it is also the option that does the most good when everyone does it.
So, the first option; everyone (who can afford it) makes large donations to efficient charities, while everyone avoids those nearby and is unpleasant when forced to deal with someone else directly.
If I make a few assumptions about the effectiveness (and priorities) of the charities and the sum of the donations, I find myself considering a world where everyone is sufficiently fed, clothed, sheltered, medically cared for and educated. However, the fact that everyone is unpleasant to everyone else leads to everyone being grumpy, irritated, and mildly unhappy.
Considering the second option; charitable donations drastically decrease, but everyone is pleasant and helpful to everyone they meet face-to-face. In this possible world, there are people who go hungry, naked, homeless. But probably fewer than in our current world; because everyone they meet will be helpful, aiding if they can in their plight. And because everyone's pleasant and tries to uplift the mood of those they meet, a large majority of people consider themselves happy.
Yvain in these two old blog posts of his makes the case that it's not clear that a world with grumpy people is worse than a world with hungry people.
You are correct. It is by no means clear which is better.
This assumption seems trivially false to me, and despite being labeled as a mere 'heuristic', it is the crucial step in your argument. Can you explain why I should take it seriously?
Well, for most choices between "is this good?" and "is this bad?" the assumption is true. For example, is it good for me to drop my chocolate wrapper on the street instead of finding a rubbish bin? If I assume everyone were to do that, I get the idea of a street awash in chocolate wrappers, and I consider that reason enough to find a rubbish bin.
Furthermore, and more importantly, the aim here is not to produce an argument that one action is better than the other in a single, specific case; rather, it is to produce a general principle (whether it is generally better to be charitable to those nearby, or to those further away).
And if option A is generally better than option B, then I think it is very probable that universal application of A will remain better than universal application of B; and vice versa.
When you ask what it's like if everyone were to "do that", the answer you get is going to be determined by how you define "that". For instance, if everyone were to drop chocolate wrappers on the lawn of your annoying neighbor, you might be happy. So is it okay to drop the wrapper on your neighbor's lawn?
It's tempting to reply to this by saying "'doing the same thing' means removing all self-serving qualifiers, so the correct question is whether you would like it if people dropped wrappers wherever they wanted, not specifically on your neighbor's lawn". This reply doesn't work, because there are plenty of situations where you want the qualifier--for instance, putting criminals in jail when the qualifier "criminal" excludes yourself.
(And what's your stance on homosexuality? If everyone were to do that, humanity would be extinct.)
I do need to be careful to define "that" as a generally applicable rule. In this case, the generally applicable rule would be, is it okay to drop chocolate wrappers on the lawn of people one finds annoying?
So I need to consider the world in which everyone drops chocolate wrappers on the lawn of people they find annoying. Considering this, the chance of someone dropping a wrapper on my lawn becomes dependent on the probability that someone will find me annoying.
So, in short, I can put as many qualifiers on the rule as I like. However, I have to be careful to attach my qualifiers to the true reason for my formulation of the rule; I cannot select the rule "it is acceptable to drop chocolate wrappers on that exact specific lawn over there" without referencing the process by which I chose that exact specific lawn.
I can't attach a qualifier to a specific person; but I can attach a qualifier to a specific quality, like being annoying, when considering a proposal.
Yeah, that's been confusing: I meant this principle of charity.
Why the Hell would I want to practice the virtue of charity? If anything, I want to help people. And hating people from a foreign country could be an excellent way to do damage!
I'm sorry, my original post was not quite precise. I meant charity in the sense of the Principle of Charity, not charitable contributions. If you prefer, substitute "kind" for "charitable"; it's not quite the same but illustrates the point just as well.
Keep in mind, we're talking about the damage you do to yourself. Hating people you've never met is not a very efficient way to damage yourself. Much better is to hate people you know intimately and see every day. That way you can practice your vices efficiently, and will have as many opportunities as possible to act them out.
"Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed." - Daniel Kahneman
Anonymous commenter
-- Dan Geer
(rationality applicability: antifragility & disjunctive prediction vs. optimization for conjunctive prediction)
-The Wise Man in Darkside, a radio play by Tom Stoppard
I. This Is Not A Game.
II. Here And Now, You Are Alive.
-- Om and many other gods, Small Gods, Terry Pratchett
Funny. I came up with almost the exact same line:
This quote reminded me of a quote from an anime called Kaiji, albeit your quote is much more succinct.
More succinctly...
-- John Lennon, “Beautiful Boy (Darling Boy)”
This is not a drill. Therefore, make sure you have drills for the really important bits.
And bits for the really important drills.
And make sure that the bit is properly secured and the chuck key is removed before operating the drill.
And when someone tries to be the wall that stands in your way, you'll have something that will open a hole in them every time: your drill.
(In a related matter, if there's a wall in your way, smash it down. If there isn't a path, carve one yourself.)
--Kamina, in Tengen Toppa Gurren Lagann
BUT keep Chesterton's Fence in mind: if you don't know why there is a wall in your way, don't go blindly smashing it down. It might be there for a reason. First make absolutely sure you know why the wall exists in the first place; only then may you proceed with the smashing.
I like the sentiment, but you do realize that breaking walls has costs.
-- C.S. Lewis, The Screwtape Letters
I admit, I get horribly mind-killed whenever I realize I'm reading something by C.S. Lewis, especially anything from The Screwtape Letters. That's because years ago, the arguments in this book were used against me by a girl I was dating (me being non-religious) as a means to end our relationship; she had herself been convinced by her friends and family that we should break up.
That said, I was able to read this and appreciate it more clearly if I substituted the quote like so:
If we are attempting to spread good rationality around, would it be efficient to not try to convince people that rationality was "true", but instead attempt to promote good rationality by saying that rationality is "strong, stark, or courageous -- that it is the philosophy of the future"?
So you propose to spread rationality by encouraging irrationality?
Even assuming that this will work — that is, not just get people to buy into rationality (that part is simple) but actually become more rational, after this initial dose of irrational motivation — what do you suggest we do when our new recruits turn around and go "Hey, wait a tick; you guys got me into this through blatantly irrational arguments! You cynically and self-servingly pandered to my previously-held biases to get me on your side! You tricked me, you bastards!"? Grin and say "worked, didn't it"?
This is better than the other Screwtape quote, but - given the example of Ayn Rand - I think Lewis still gets causality backwards where smart Marxists were concerned. I think they started by being right about God and "materialism" when most people were insistently wrong (or didn't care about object-level truth.) This gave them an inflated view of their own intelligence and the explanatory power of Marxism.
While it is true that you shouldn't be a materialist just because it's fashionable, there's a fine line between saying "you shouldn't be a materialist just because it's fashionable" and "my opponents are just materialists because it's fashionable". The second is a straw man argument, and given that this is CS Lewis putting words in the mouth of Satan, I read this as the straw man argument. Needless to say, a straw man argument is not a good rationalist quote.
That sounds to me like you're assuming that Lewis wrote the book so that he could have the devil say strawmannish things, in order to mock the devil. Which is not the case at all - the demon writing the letters is much more similar to MoR!Quirrell, displaying a degree of rationality-mixed-with-cynicism which it uses to point out ways by which the lives of humans can be made miserable, or by which humans make their own lives miserable. Much of it can be read as a treatise on various biases and cognitive mistakes to avoid, made more compelling by their being explained by someone who wants those mistakes to be exploited for actively harming people.
I read that quote as saying that the Devil (or a demon) deceives people by making them believe those things, not that the Devil believes these things himself. That's how demons behave, they lie to people. This one lies to people about why one should be a materialist and the people fall for it. The point is not to mock the demon, who in the quote is acting as a liar rather than a materialist, but to mock materialists themselves by implying that they are materialists for spurious reasons.
Of course, Lewis has plausible deniability. One can always claim he's not attributing anything to materialists in general--you're supposed to infer that; it's not actually stated.
Edit: Also, remember when Lewis wrote that. 1942 wasn't like today, when it's possible to say you don't believe in the supernatural and (if you live in the right area) not suffer too many consequences except not ever being able to run for political office. Any materialist at the time who claimed he was courageous could easily be just responding to persecution, not claiming that that was his reason for being a materialist. Mocking materialists for that would be like mocking gay pride parades today on the grounds that pride is a sin and a form of arrogance--pride in a vacuum is, but pride in response to someone telling you you're shameful isn't.
I don't think he was mocking, but I do think he was correct. I claim that it's perfectly true that most materialists today are materialists for spurious, non-object-level reasons. The same goes for all other widespread philosophies. People in general are biased and also don't care about philosophical truth much.
I think the non-object-level reasons that the devil names are interesting.
I think few new atheists care about whether atheism is strong or courageous. They rather care about the fact that it's what the intelligent people believe and they also want to be intelligent.
I suspect that most members of the Democratic Party are Democrats for spurious reasons too. But a Republican who lists a bunch of human foibles and writes a scenario that specifically names Democrats as being subject to them is probably attacking Democrats, at least in passing, not just attacking human beings.
Don't let yourself be mindkilled. Arguments aren't soldiers.
Focus on the true things you can say about the world.
See filtered evidence. It is completely possible to mislead people by giving them only true information... but only those pieces of information which support the conclusion you want them to make.
If you had a perfect superhuman intelligence, perhaps you could give it a dozen pieces of information about why X is wrong and zero information about why Y is wrong, and yet the superintelligence might conclude: "Both X and Y are human political sides, so I will just take this generally as evidence that humans are often wrong, especially when discussing politics. Because humans are so often wrong, it is very likely that the human who is giving this information to me is blind to the flaws of one side (which in this specific case happens to be Y), so all this information is only very weak evidence for X being worse than Y."
But humans don't reason like this. Give them a dozen pieces of information about why X is wrong and zero information about why Y is wrong; in the next chapter give them a dozen pieces of information about why Y is good and zero information about why X is good... and they will consider this strong evidence that X is worse than Y. -- And Lewis most likely understands this.
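The contrast between the two updaters above can be sketched in a few lines (a hypothetical toy model: the function name, the one-bit-per-fact weighting, and the counts are all illustrative assumptions, not anything from Lewis or the thread): a naive reader who treats each true fact as independent evidence, ignoring how the facts were selected, ends up wherever the filter points.

```python
# Hypothetical toy model of a naive updater who ignores evidence filtering.
# Each (true) fact against a side is counted as one bit of log-odds,
# with no correction for the selection process that chose the facts.
def naive_log_odds(facts_against_x, facts_against_y, bits_per_fact=1.0):
    """Log-odds (base 2) that Y is better than X, for a reader who
    treats every presented fact as independent, unfiltered evidence."""
    return bits_per_fact * (facts_against_x - facts_against_y)

# A filtered presentation: a dozen true facts against X, none against Y.
assert naive_log_odds(12, 0) == 12.0   # the naive reader concludes Y >> X

# The same world, fairly sampled, might show both sides' flaws:
assert naive_log_odds(12, 11) == 1.0   # nearly even
```

The superintelligent reader in the passage above effectively sets the weight per fact close to zero once it notices the one-sided pattern; the human reader leaves it at full strength, which is exactly what makes filtered-but-true evidence misleading.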
I doubt that any LW member would take all of his information about the value of atheism from Lewis. If you let yourself be convinced that atheism is wrong by reading Lewis, then your belief in atheism was very weak in the first place.
I have a hard time imagining pushing anyone on LW into a crisis of faith about atheism from which he wouldn't come out with a better belief system than he started with. If someone discovers that he actually follows atheism because it's cool, and works through his issues, he might end up following atheism for better reasons.
I am tempted to reply to this with "May the Force be with you", but instead I'll ask "just what are you trying to say?" You just gave me a reply which consists entirely of slogans, with no hint as to how you think they apply.
I think the argument I made was fairly obvious, but let me break it down.
You care about who's attacking whom. If you are in that mindset, arguments are soldiers. You treat the argument that there are atheists who are atheists because it's cool to be an atheist as a foreign soldier that has to be fought. A foreign soldier that doesn't play according to the rules.
Those considerations don't matter if you want to decide whether there are atheists who are motivated by the coolness of being an atheist. If you care about truth, you want to have true beliefs about how much atheists are motivated by the coolness factor of atheism. It doesn't matter for this discussion whether that argument is fair. What matters is whether it's true.
The demon is not just lying at random - the demon is lying with the purpose of getting a certain reaction (in this case, getting the human to subscribe to the philosophy of materialism). The original quote is advice on how to use the human's cognitive biases against him, in order to better achieve that goal.
The point of the quote isn't materialism. That could be replaced with any other philosophy, quite easily. The point of the quote is that, for many people, subscribing to a philosophy isn't about whether that philosophy is true at all; it's more about whether that philosophy is popular, or cool, or daring.
The point isn't to mock the demon, or the materialist. The point is to highlight a common human cognitive mistake.
Your understanding of 1942 is amazingly flawed. No-one in the developed world was persecuted for being a materialist at that time, but plenty were for their religion. Moreover, the fashionable belief at the time was dialectical materialism, and part of the claim made for it, by dialectical materialists themselves, was that it was the philosophy of the future.
Well, my first thought was Bertrand Russell being fired from CUNY, which was around 1940, although that was mostly because of his beliefs about sex (which are still directly related to his disbelief in religion). Religion classes in public schools were legal until 1948, and compulsory school prayer was legal until 1963. "In God We Trust" was declared the national motto of the US in 1956.
Like Salemicus said, none of those things are persecutions. The closest of your examples is Bertrand Russell's firing, but even you admit that wasn't over his materialism.
By way of contrast, there were in fact places in the developed world during the 1930s-1940s where one could be persecuted for not being a materialist. And by persecuted, I mean religious people were being semi-systematically arrested and/or executed (not necessarily in that order).
Saying "people in this time period are persecuted for their religion" implicitly limits it to Western democracies unless you specifically are talking about something else. It's like claiming that "in the 1980's, women weren't allowed to vote". That's literally true, because there are countries where in the 1980's (or even today) women could not vote, but it's not what most people would mean by saying such a thing.
Furthermore, the existence of laws implies persecution. If school prayer is compulsory, that means that people in schools are punished for not praying or have to pray against their will for fear of punishment. That's what "compulsory" means.
(Besides, if you're going to interpret it that way, I could point out that in countries like Saudi Arabia, people could be killed for not believing in God, and that this wasn't any better in the 1930s in most of those countries.)
Lewis' point of reference is the UK, not the US. I don't know how much that changes the picture.
I think the US counts as part of "the developed world", however.
If someone is a materialist just because it's fashionable, that's trouble. Lewis may be wrong on whether or not the Church is 'true,' but I don't think Lewis is wrong on calling out compartmentalization and inconsistency rather than thinking about whether or not doctrines are true or false.
-- C.S. Lewis, The Screwtape Letters
-- Betty Medsger
Tears in my eyes... awesome beyond words. The fact that this wasn't fictional makes it brilliant. The fact that the burglars were acting against an over-reaching institution is just icing on the cake. It's been some time since I heard a story that warmed my heart so much.
How did they know they considered leaving a thank-you note? Did they confess afterwards?
Yes.
Well. Sort of. Not exactly.
They were stealing government files on domestic surveillance in order to leak them - the quote comes from the journalist who was their press contact (see the link beneath the quote.)
Eric Raymond
I agree with this. It's also a good quote. However, there is an important caveat: it is possible to get caught up in an echo chamber of equally crazy people. For extreme examples, consider the Lyndon LaRouche crowd, Scientology, or the latest resurgence of the Protocols of the Elders of Zion. So just because not everybody believes you're crazy does not imply that you are, in fact, not crazy.
I think it's important to distinguish between "crazy" and "irrational". Many crazy people are very rational. For instance, a fair portion of LW's users, including myself, have experienced some form of temporary or chronic mental illness; that's often exactly the impetus that gets someone to distrust their System 1 thinking and spend effort on deliberately becoming more rational.
--H. L. Mencken
Related:
-- Bertrand Russell, In Praise of Idleness
I once attempted to sum this up on another forum by saying, "Careerism is a subgoal stomp." Russell, of course, better expresses the broader point that the subgoal stomp of maximizing productivity has almost entirely replaced all discussion of what kind of terminal goals individuals and societies should have.
The problem with spending money -- and consumption in general -- is the opportunity cost. If by doing something you are not maximizing some value, then it would be better if you did something else.
The same criticism could be also made about production: if you work hard and make profit by creating value, but by doing something else you could create even more value, then it would be better if you did the other thing.
Perhaps the difference is that on the production side people at least try to maximize (of course with all the human irrationality involved), but on the consumption side we often forget to think about it. So we need to be reminded there more.
Not some value -- a lot of people are maximizing the wrong values.
One of the reasons why this whole thing is so complicated is that in reality people very rarely optimize for a single value. They optimize for a set of values which usually isn't too coherent and the weights for these values tend to fluctuate...
Jay A. Labinger
David Burns in "The Feeling Good Handbook" about the "acceptance paradox" (the book has been shown in controlled trials to be effective at improving the condition of depressed people), page 67
I feel like the claim made about the source is strong enough that it should have a link to a study.
It sort of does. It's in the foreword of the recent edition of the book.
In general the book is well known on LW and I'm just reiterating a fact that's already established by other people. At the moment the LW search finds 431 hits for the search "feeling good handbook".
David Burns in "The Feeling Good Handbook", page 69. (The book has been shown in controlled trials to be effective at improving the condition of depressed people.)
David Burns in "The Feeling Good Handbook", page 126. (The book has been shown in controlled trials to be effective at improving the condition of depressed people.)
Upvoted. I don't know if this is true; indeed, I suspect it is definitely partially false. But, I don't think it is entirely false. It's interesting and has made me think.
It's the position of someone at the heart of the evidence-based framework of Cognitive Behavior Therapy.
Even if you happen to disagree and think you know how emotions work better than the people with the credentials, I think it's still useful to know where you differ from them.
I read David Burns's book because it's the book most recommended on LW when it comes to dealing with emotions.
I'm not quoting someone from a New Age background, but someone who is actually trying to teach people to get rid of irrational beliefs by recognizing their mental distortions.
According to the research summarized by Yvain, Cognitive Behavioral Therapy does not seem to be more strongly supported as effective than Freudian psychoanalysis. Rather, it conducted the research which demonstrated its effectiveness relative to placebo earlier than other types of therapy, and thereby gained a reputation as evidence-based.
Not quite.
In Cognitive Behavior Therapy, there is a strong focus on measuring the depression level of the person who comes to the sessions. Cognitive therapists try different approaches to treating depression and believe in making decisions based on which interventions succeed in reducing a person's depression scores.
A Freudian would tell you that he doesn't really care about the score, but rather that the person just feels better and succeeds in dealing with his childhood traumas.
David Burns wants every patient to fill out a questionnaire at every session that measures how depressed the person is at that time. He wants psychologists who fail to reduce their clients' scores to have no excuse, because they are faced with hard numbers.
Given that this is Burns's approach to looking at the world, there is going to be less inferential distance in getting Burns understood by LessWrongers than if I had picked someone who could also produce results but who doesn't care about gathering academic evidence for what works.
Don't confuse people who produce results by looking at published evidence and who base their practice on that evidence with people who just produce results.
The gist of the research though, is that while historically Freudian therapists have taken this approach, more recently many Freudian therapists have actually started performing the same sort of research on the effects of their therapy that Cognitive Behavioral Therapists have, and received essentially the same results. It's not that there isn't evidence to support Cognitive Behavioral Therapy working, but rather, more recent evidence seems to suggest that when they do gather the relevant data, other forms of therapy appear to work equally well. So it appears that the things about Cognitive Behavioral Therapy that actually work are not things that are specific to Cognitive Behavioral Therapy, but rather, are general qualities of therapy provided by a trained practitioner.
-- Paul Graham, The Acceleration of Addictiveness
Thank you for linking to the piece where the quote was drawn. Great article!
BTW, here is Eliezer's article on the same topic.
-Venkatesh Rao
Could you link the source of the quote?
It's in chapter 16 of Liars and Outliers, which AFAIK has no url.
Found it with google
The other thing is that he probably can't know whether you can hide.
-- John R. Anderson, Lynne M. Reder & Herbert A. Simon: Applications and Misapplications of Cognitive Psychology to Mathematics Education
Yes, but on net primitive tribes (at least in the short-run) seem to be made worse off from contact with technologically advanced civilizations.
Sure, but that doesn't change the fact that they could learn a lot from us. Indeed, if that weren't so, they a) wouldn't be primitive and b) wouldn't be (especially and unusually) harmed by the contact.
Even after controlling for harmful or exploitative behavior by the advanced civilizations?
Yes because of germs.
And germs.
Are the ideas harmful?
Some religious views might be.
I'm pretty sure most uncontacted tribes had their own religious weirdness.
Depends which primitive tribes. Amerindians died from European diseases and Europeans died from African diseases.
Good point, but that doesn't fall under the "a lot to learn from" rubric. Your general point about contact is likely true.
Pretend I'm a fantastic in-person teacher but if you get near me you will die of some disease. Do you have a lot to learn from me?
As I said, your general point about net value of contact is correct.
So, it's positive-sum. But anyway, who cares what a primitive tribe learns? We, surely, are the center and purpose of the universe; our gain, however small or large, is the important thing.
Megan McArdle quoting or paraphrasing Jim Manzi.
[Edited in response to Kaj's comment.]
Some relevant links:
This is a higher rate than I'd expected. It implies that current policies in these three fields are not really thoroughly thought out, or at least not to the extent that I had expected. It seems that there is substantial room for improvement.
I would have expected perhaps one or two percent.
Remember, you expect 5% to give a statistically significant result just by chance...
That's only true of the programs which can be expected to produce no detriments, surely?
That's one possible explanation.
Another possible explanation is that there is a variety of powerful stakeholders in these fields and the new social programs are actually designed to benefit them and not whoever the programs claim to help.
Remember that programs will not even be tested unless there are good reasons to expect improvement over current protocol. Most programs that are explicitly considered are worse than those that are tested, and most possible programs are worse than those that are explicitly considered. Therefore we can expect that far, far fewer than ten percent of possible programs would yield significant improvements.
That is true. However, there is a second filtering process, after filtering by experts; and that is what I will refer to as filtering by experiment (i.e. we'll try this, and if it works we keep doing it, and if it doesn't we don't). Evolution is basically a mix of random mutation and filtering by experiment, and it shows that, given enough time, such a filter can be astonishingly effective. (That time can be drastically reduced by adding another filter - such as filtering-by-experts - before the filtering-by-experiment step)
The one-to-two percent expectation that I had was a subconscious expectation of the comparison of the effectiveness of the filtering-by-experts in comparison to the filtering-by-experiment over time. Investigating my reasoning more thoroughly, I think that what I had failed to appreciate is probably that there really hasn't been enough time for filtering-by-experiment to have as drastic an effect as I'd assumed; societies change enough over time that what was a good idea a thousand years ago is probably not going to be a good idea now. (Added to this, it likely takes more than a month to see whether such a social program actually is effective or not; so there hasn't really been time for all that many consecutive experiments, and there hasn't really been a properly designed worldwide experimental test model, either).
10% isn't that bad as long as you continue the programs that were found to succeed and stop the programs that were found to fail. Come up with 10 intelligent-sounding ideas, obtain expert endorsements, do 10 randomized controlled trials, get 1 significant improvement. Then repeat.
Unfortunately we don't really have the political system to do this.
But I have this great idea that will change that!
...Oh.
Unfortunately, governments are really bad at doing this.
True, but that doesn't mean we're laboring in the dark. It just means we've got our eyes closed.
Humans in general are very bad at this. The only reason capitalism works is that the losing experiments run out of money.
That's a very powerful reason.
It depends on how many completely ineffectual programs would demonstrate improvement versus current practices.
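The base-rate arithmetic in this subthread (a 5% false-positive rate under the conventional p < 0.05 threshold, combined with a low base rate of genuinely effective programs) can be sketched as a quick simulation. The base rate and statistical power below are hypothetical numbers chosen purely for illustration:

```python
import random

random.seed(0)

ALPHA = 0.05             # conventional significance threshold (false-positive rate)
POWER = 0.8              # assumed: chance a genuinely effective program reaches significance
TRUE_EFFECT_RATE = 0.05  # assumed: fraction of proposed programs that actually work
N_PROGRAMS = 10_000      # number of simulated randomized trials

significant = 0
for _ in range(N_PROGRAMS):
    works = random.random() < TRUE_EFFECT_RATE
    # An effective program is detected with probability POWER;
    # an ineffective one still shows "significant improvement" with probability ALPHA.
    if random.random() < (POWER if works else ALPHA):
        significant += 1

print(f"fraction significant: {significant / N_PROGRAMS:.3f}")
# expected around 0.05 * 0.8 + 0.95 * 0.05 ≈ 0.0875
```

Under these made-up numbers, over half of the "significant" results would be false positives, which is why it matters to keep only the programs whose results replicate.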
I think the quote is from Jim Manzi rather than Megan McArdle, given that McArdle starts the article with
and later on in the article it says
suggesting that the whole article after the first paragraph is a quote (or possibly paraphrase).
On the contrary, honesty, conscientiousness, being law-abiding, etc. have powerful reputational effects. This is easily seen by the converse; look, for example, at the effect a criminal record has on chance of getting a job.
This quote only gets any mileage by equivocating on the meaning of fair. What the quote is really saying is: "If you expect the world to fulfil even modest dreams just because you try not to be a jerk, expect disappointment." But said like that, it loses all its seemingly deep wisdom. In fact, of course, if you personally fulfilled even some modest dream of a large proportion of the people on earth, you would be wealthy beyond the dreams of lucre.
I see the quote as warning against a certain kind of naivety. I'm known as a trustworthy person and it's brought me many advantages - people have happily loaned me large sums of money, for example, and I've been employed in high-trust-requiring positions. But I have cooperated in Prisoner's Dilemma-type situations when I really should have realized the other guy was going to defect. In one case, he'd told me he was a narcissist and a Slytherin, and I still thought he'd keep our agreement. I lost a lot.
Most of the comment is great, but
this part seems like a Just World Fallacy. You can start a chain of cause and effect that will make billions of people a bit happier, and yet someone else may take the reward.
But I agree that on average making a lot of people happy is a good way to get wealthy.
It always struck me that "fair" is one of the most misused words we have. What we mean when we say "fairness" is a sense that socially-constructed games have fixed rules leading to predictable outcomes, when some notion of a social contract or other ethical framework is exercised. If you enter a game with no rules, what would it even mean to expect a fair reward for fair play?
— Hermann Weyl
(quoted in Science And Sanity, by Alfred Korzybski, of "the map is not the territory" fame)
Steve Sailer
Winston Churchill
George Lakoff Progressives Need to Use Language That Reflects Moral Values
-- The Book of Mormon (Alma 30.24-28)
Edit: I'm mildly surprised by the reactions to this quote. The thing I find interesting about it is that Joseph Smith was apparently sufficiently familiar with Voltairesque anti-Christian ideas that he could relay them coherently and with some gusto. This goes some way towards passing the ideological Turing test.
I'm hardly an expert on the Book of Mormon, but this quote surprised me so I googled it. It appears to be an accurate quote but is not fully attributed. As best I can make out, the speaker is the antichrist (or some such evil character; not sure on the exact mythology in play here).
Failure to note that means this quote gives either an incorrect view of the Book of Mormon, or of the significance of the text, or both.
When quoting fiction, I recommend identifying both the character and the author. E.g.
--Korihor in the Book of Mormon (Alma 30.24-28); Joseph Smith, 1830
Having said all that, it's still a damn good rationality quote.
I would think it's bad publicity for us to explicitly note a resemblance to antichrist-type characters.
Considering how much hating on religion there already is around here, I don't think there's much left to lose on that front.
Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.
Upvoted because that really is a failure mode worth keeping in mind, but I don't think it's responsible for the attitude towards religion around here; I think that's a plain old founder effect.
"Pick a side and stick with it, supporting your friends and bashing your enemies at every cost-effective opportunity" is the dominant strategy in factional politics much the same way tit-for-tat is dominant in iterative prisoner's dilemma. Generosity to strangers and mercy to enemies are so heavily encouraged because in the absence of that encouragement they're the rare, virtuous exception.
Yes, obviously we have to interact with outsiders. That's what makes them outsiders, rather than meaningless hypothetical aliens beyond our light-cone. The question is, should we be interacting with organized religion by trying to ally with, or at least avoid threatening, the people in charge? Or by threatening them so comprehensively that (figuratively speaking) we destroy their armies and take their cattle for our own?
The antichrist is a hypothetical figure who poses the greatest possible ideological threat, an exploit against which the overwhelming majority of Christianity's (worldly) resources and personnel cannot be secured. The popular theory is that this individual's public actions would trigger the ultimate 'evaporative cooling' event. Everybody who doesn't really believe, everybody who just checks "christian" on the census form and shows up to church for the social network and the pancakes, will stop doing so. In short, the sanity waterline would rise.
That's nice, but I'm Jewish ;-). Or in other words, the very nature of an "antichrist" pins you to opposing one kind of religion in specific, and also pins you to moral positions you probably don't want to take. It's the ultimate sin of privileging the hypothesis: you've assumed it's a Christian world you have to persuade away from their Christianity.
(In real life, I would argue the greatest utility to be gained from deconversions right now is in the Muslim world, where one currently finds the greatest amount of religious violence over the smallest differences. You could tell me to go become the Anti-Muhammad, but again, I'm already Jewish.)
Remember, the Antichrist is also puppy-kickingly evil. You don't hate puppies, do you? Then why are you signing up for a role that outright requires you to kick them?
Are you sure there's not some other evil villain you'd prefer to be?
This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.
Unless we're trying to appeal to contrarians.
Nassim Taleb
Really curious about the "supposedly" in this. Does Nassim not actually believe or endorse this view?
I read it as him endorsing the empirical proposition, but not the value proposition. That's the point of the second sentence: he wouldn't say "these are good friends to lose" if there wasn't the potential to lose friends.
Nassim Taleb
What does this mean?
Think of Steve Jobs vs. the business school professor who wrote a book about entrepreneurship.
My interpretation is that having an explanation for something is useless if you can't actually make it happen. And even if you don't fully understand how something works, it's good to be able to use it.
For example, I would much rather be able to use a computer than know how it works.
Also, if you can't do it, that calls into question whether your explanation is actually valid. Anyone can explain something, so long as they're not required to actually make the explanation useful.
So we could rephrase as: "If I really understand X's, I can build one, but if I kind of understand X's, I can at least use one"?
I interpret it as related to expert-at versus expert-on. If you assume that an expert-on is always an expert-at, then someone explaining something they can't do is clearly not an expert.
I'm not sure that assumption is true, though I could believe it's a useful rule of thumb.
I don't know, but it sounds similar to "It's smarter to be lucky than it's lucky to be smart."
-- José Manuel Rodriguez Delgado
--Scott Adams, Interview with Julia Galef, February 10, 2014
— Paul Krugman, "Sergeant Friday Was Not A Fox"
"Just looking at the data like a scientist" does not give you magic scientist powers. Models of the world are what allow you to predict it, without need for magic scientist vision.
Adams doesn't elaborate on this point, but I read him as saying, if you've actually measured things and taken data that goes to your point, then your model is more likely to be correct.
For example, suppose a model says that raising the minimum wage reduces employment. That's a pretty common model in economics and it can be backed up with a lot of math. However I would not find that alone convincing. On the other hand, if an economist goes out into the world and looks at what actually happened when the minimum wage was raised, that would be more convincing. If they can figure out a way to do an experiment in which, for example, 5 nearby towns raise their minimum wage, 5 keep it the same, and another 5 lower it, that would be even more convincing.
Another example: consider a model that says
Those three statements are reasonably well established and backed up by data. However, if you throw in a model that says dietary cholesterol causes in-body cholesterol, and in-body cholesterol causes heart disease, and therefore eating eggs reduces life expectancy, you've jumped way beyond what the data supports. On the other hand, if you compare levels of all-cause morbidity among people who eat eggs and people who don't, or, better yet, run a multiyear controlled experiment in which the only diet variation between groups is that some people eat eggs and others don't, the answers you get are far more likely to be correct.
Here's another one: you have lots of detailed calculations that say if you smash two protons together at .999999c relative velocity, and you do it a few million times, then you'll see certain particles show up in the debris with very precise probabilities. Only when you run the experiment, you discover that the fractions of different particles you see don't quite match what you expected because there's an additional resonance you didn't know about and didn't include in the model.
In other words, empirical data beats mere models. Models can be self-consistent and plausible, but not fully reflect the real world. Models that go beyond what the data says run the risk of assuming causal connections that don't exist (dietary cholesterol to in-body cholesterol) or missing factors outside the model (maybe eggs do increase the risk of heart disease but reduce the risk of cancer) that are more important.
Of course all these experiments are really hard to do, and take years of time and millions, even billions, of dollars, so often we muddle along with seriously flawed models instead. However we need to remember that models are just models, not data, and be reasonably skeptical of their recommendations. In particular, if we're about to do something really expensive and difficult like changing a nation's dietary preferences based on nothing more than a model, maybe we should step back and spend the money and the time needed to collect real data before we go full speed ahead.
Fair enough - political conditioning has caused me to assume that any non-specialist saying "don't trust models, just 'look at the data'," is the victim of some sort of anti-epistemology.
In context, it's less likely that that's the case, but I still think this quote is painting with much too wide a brush.
Prediction is going beyond the data, so a model that never goes beyond the data isn't going to be much use.
Climate change models incorporated data, so they are not purely theoretical like the economic model you mentioned.
I ... think he's talking about basic correlation, statistical analysis, that sort of thing?
(I enjoy Scott's writing, but I didn't upvote the grandparent.)
What's that from?
I made it up based on Eliezer's saying. I'm much too Gryffindor/Sunshine to call it world optimization.
My bad. Deleting.