Rationality Quotes March 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Comments (326)
-- Betty Medsger
Tears in my eyes... awesome beyond words. The fact that this wasn't fictional makes it brilliant. The fact that the burglars were acting against an over-reaching institution is just icing on the cake. It's been some time since I heard a story that warmed my heart so much.
Could you link the source of the quote?
It's in chapter 16 of Liars and Outliers, which AFAIK has no URL.
Found it with Google.
"Consider the people who routinely disagree with you. See how confident they look while being dead wrong? That’s exactly how you look to them." - Scott Adams
Megan McArdle quoting or paraphrasing Jim Manzi.
[Edited in response to Kaj's comment.]
This is a higher rate than I'd expected. It implies that current policies in these three fields are not really thoroughly thought out, or at least not to the extent that I had expected. It seems that there is substantial room for improvement.
I would have expected perhaps one or two percent.
Remember that programs will not even be tested unless there are good reasons to expect improvement over current protocol. Most programs that are explicitly considered are worse than those that are tested, and most possible programs are worse than those that are explicitly considered. Therefore we can expect that far, far fewer than ten percent of possible programs would yield significant improvements.
That is true. However, there is a second filtering process, after filtering by experts; and that is what I will refer to as filtering by experiment (i.e. we'll try this, and if it works we keep doing it, and if it doesn't we don't). Evolution is basically a mix of random mutation and filtering by experiment, and it shows that, given enough time, such a filter can be astonishingly effective. (That time can be drastically reduced by adding another filter - such as filtering-by-experts - before the filtering-by-experiment step)
The one-to-two percent expectation that I had was a subconscious expectation of the comparison of the effectiveness of the filtering-by-experts in comparison to the filtering-by-experiment over time. Investigating my reasoning more thoroughly, I think that what I had failed to appreciate is probably that there really hasn't been enough time for filtering-by-experiment to have as drastic an effect as I'd assumed; societies change enough over time that what was a good idea a thousand years ago is probably not going to be a good idea now. (Added to this, it likely takes more than a month to see whether such a social program actually is effective or not; so there hasn't really been time for all that many consecutive experiments, and there hasn't really been a properly designed worldwide experimental test model, either).
That's one possible explanation.
Another possible explanation is that there is a variety of powerful stakeholders in these fields and the new social programs are actually designed to benefit them and not whoever the programs claim to help.
Remember, you expect 5% to give a statistically significant result just by chance...
That's only true of the programs which can be expected to produce no detriments, surely?
10% isn't that bad as long as you continue the programs that were found to succeed and stop the programs that were found to fail. Come up with 10 intelligent-sounding ideas, obtain expert endorsements, do 10 randomized controlled trials, get 1 significant improvement. Then repeat.
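The 5%-by-chance baseline in this exchange can be illustrated with a small simulation (a toy sketch, not anything from the thread: the trial size, the random seed, and the 0.05 threshold are all invented for illustration):

```python
import random

random.seed(0)

def null_rct(n=50):
    """One simulated RCT in which the program truly does nothing:
    treatment and control outcomes come from the same distribution,
    so any 'significant' difference is a false positive."""
    treat = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(treat) / n - sum(control) / n
    se = (2 / n) ** 0.5           # standard error of the difference in means
    return abs(diff / se) > 1.96  # two-sided test at the 5% level

trials = 5000
false_positives = sum(null_rct() for _ in range(trials))
print(false_positives / trials)  # hovers around 0.05, by construction
```

With programs that genuinely do nothing, roughly one in twenty trials still clears the significance bar, which is why a 10% success rate across real trials has to be read against that baseline.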
Unfortunately we don't really have the political system to do this.
But I have this great idea that will change that!
...Oh.
Unfortunately, governments are really bad at doing this.
Humans in general are very bad at this. The only reason capitalism works is that the losing experiments run out of money.
That's a very powerful reason.
Some relevant links:
I think the quote is from Jim Manzi rather than Megan McArdle, given that McArdle starts the article with
and later on in the article it says
suggesting that the whole article after the first paragraph is a quote (or possibly paraphrase).
"This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution." - Daniel Kahneman
Context: Aang ("A") is a classic Batman's Rule (never kill) hero, as a result of his upbringing in Air Nomad culture. It appears to him that he must kill someone in order to save the world. He is the only one who can do it, because he's currently the one and only avatar. Yangchen ("Y") is the last avatar to have also been an Air Nomad, and has probably faced similar dilemmas in the past. Aang can communicate with her spirit, but she's dead and can't do things directly anymore.
The story would have been better if Aang had listened to her advice, in my opinion.
"Luck plays a large role in every story of success; it is almost always easy to identify a small change in the story that would have turned a remarkable achievement into a mediocre outcome." - Daniel Kahneman
-- Francis Bacon, Novum Organum
-- John R. Anderson, Lynne M. Reder & Herbert A. Simon: Applications and Misapplications of Cognitive Psychology to Mathematics Education
"Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed." - Daniel Kahneman
Yes, but on net primitive tribes (at least in the short-run) seem to be made worse off from contact with technologically advanced civilizations.
Sure, but that doesn't change the fact that they could learn a lot from us. Indeed, if that weren't so, they a) wouldn't be primitive and b) wouldn't be (especially and unusually) harmed by the contact.
Even after controlling for harmful or exploitative behavior by the advanced civilizations?
Yes because of germs.
Depends which primitive tribes. Amerindians died from European diseases and Europeans died from African diseases.
And germs.
Are the ideas harmful?
Some religious views might be.
I'm pretty sure most uncontacted tribes had their own religious weirdness.
So, it's positive-sum. But anyway, who cares what a primitive tribe learns? We, surely, are the center and purpose of the universe; our gain, however small or large, is the important thing.
-- Paul Graham, The Acceleration of Addictiveness
Thank you for linking to the piece from which the quote was drawn. Great article!
BTW, here is Eliezer's article on the same topic.
Warren Buffett
-- John Stuart Mill
— Hermann Weyl
(quoted in Science And Sanity, by Alfred Korzybski, of "the map is not the territory" fame)
-- Bertrand Russell, In Praise of Idleness
I once attempted to sum this up on another forum by saying, "Careerism is a subgoal stomp." Russell, of course, better expresses the broader point that the subgoal stomp of maximizing productivity has almost entirely replaced all discussion of what kind of terminal goals individuals and societies should have.
-- Dan Geer
(rationality applicability: antifragility & disjunctive prediction vs. optimization for conjunctive prediction)
The other thing is that he probably can't know whether you can hide.
-Venkatesh Rao
--H. L. Mencken
Related:
-Neal Stephenson, Cryptonomicon
-The Wise Man in Darkside, a radio play by Tom Stoppard
This quote reminded me of a quote from an anime called Kaiji, although yours is much more succinct.
More succinctly...
-- John Lennon, “Beautiful Boy (Darling Boy)”
I. This Is Not A Game.
II. Here And Now, You Are Alive.
-- Om and many other gods, Small Gods, Terry Pratchett
This is not a drill. Therefore, make sure you have drills for the really important bits.
And bits for the really important drills.
And make sure that the bit is properly secured and the chuck key is removed before operating the drill.
And when someone tries to be the wall that stands in your way, you'll have something that will open a hole in them every time: your drill.
(In a related matter, if there's a wall in your way, smash it down. If there isn't a path, carve one yourself.)
BUT keep Chesterton's Fence in mind: if you don't know why there is a wall in your way, don't go blindly smashing it down. It might be there for a reason. First make absolutely sure you know why the wall exists in the first place; only then may you proceed with the smashing.
--Kamina, in Tengen Toppa Gurren Lagann
I like the sentiment, but you do realize that breaking walls has costs.
Funny. I came up with almost the exact same line:
-- C.S. Lewis, The Screwtape Letters
This is better than the other Screwtape quote, but - given the example of Ayn Rand - I think Lewis still gets causality backwards where smart Marxists were concerned. I think they started by being right about God and "materialism" when most people were insistently wrong (or didn't care about object-level truth.) This gave them an inflated view of their own intelligence and the explanatory power of Marxism.
I admit, I get horribly mind-killed whenever I realize I'm reading something by C.S. Lewis, especially anything from The Screwtape Letters. That's because, years ago, the arguments in this book were used against me (me being non-religious) as a means to end our relationship, by a girl I was dating who had herself been convinced by her friends and family that we should break up.
That said, I was able to read this and appreciate it more clearly if I substituted the quote like so:
If we are attempting to spread good rationality around, would it be efficient to not try to convince people that rationality was "true", but instead attempt to promote good rationality by saying that rationality is "strong, stark, or courageous -- that it is the philosophy of the future"?
So you propose to spread rationality by encouraging irrationality?
Even assuming that this will work — that is, not just get people to buy into rationality (that part is simple) but actually become more rational, after this initial dose of irrational motivation — what do you suggest we do when our new recruits turn around and go "Hey, wait a tick; you guys got me into this through blatantly irrational arguments! You cynically and self-servingly pandered to my previously-held biases to get me on your side! You tricked me, you bastards!"? Grin and say "worked, didn't it"?
While it is true that you shouldn't be a materialist just because it's fashionable, there's a fine line between saying "you shouldn't be a materialist just because it's fashionable" and "my opponents are just materialists because it's fashionable". The second is a straw man argument, and given that this is CS Lewis putting words in the mouth of Satan, I read this as the straw man argument. Needless to say, a straw man argument is not a good rationalist quote.
That sounds to me like you're assuming that Lewis wrote the book so that he could put the devil to say strawmannish things, in order to mock the devil. Which is not the case at all - the demon writing the letters is much more similar to MoR!Quirrel, displaying a degree of rationality-mixed-with-cynicism which it uses to point out ways by which the lives of humans can be made miserable, or by which humans make their own lives miserable. Much of it can be read as a treatise on various biases and cognitive mistakes to avoid, made more compelling by them being explained by someone who wants those mistakes to be exploited for actively harming people.
I read that quote as saying that the Devil (or a demon) deceives people by making them believe those things, not that the Devil believes these things himself. That's how demons behave, they lie to people. This one lies to people about why one should be a materialist and the people fall for it. The point is not to mock the demon, who in the quote is acting as a liar rather than a materialist, but to mock materialists themselves by implying that they are materialists for spurious reasons.
Of course, Lewis has plausible deniability. One can always claim he's not attributing anything to materialists in general--you're supposed to infer that; it's not actually stated.
Edit: Also, remember when Lewis wrote that. 1942 wasn't like today, when it's possible to say you don't believe in the supernatural and (if you live in the right area) not suffer too many consequences except not ever being able to run for political office. Any materialist at the time who claimed he was courageous could easily be just responding to persecution, not claiming that that was his reason for being a materialist. Mocking materialists for that would be like mocking gay pride parades today on the grounds that pride is a sin and a form of arrogance--pride in a vacuum is, but pride in response to someone telling you you're shameful isn't.
The demon is not just lying at random - the demon is lying with the purpose of getting a certain reaction (in this case, getting the human to subscribe to the philosophy of materialism). The original quote is advice on how to use the human's cognitive biases against him, in order to better achieve that goal.
The point of the quote isn't materialism. That could be replaced with any other philosophy, quite easily. The point of the quote is that, for many people, subscribing to a philosophy isn't about whether that philosophy is true at all; it's more about whether that philosophy is popular, or cool, or daring.
The point isn't to mock the demon, or the materialist. The point is to highlight a common human cognitive mistake.
I don't think he was mocking, but I do think he was correct. I claim that it's perfectly true that most materialists today are materialists for spurious, non-object-level reasons. The same goes for all other widespread philosophies. People in general are biased and also don't care about philosophical truth much.
I think the non-object-level reasons that the devil names are interesting.
I think few new atheists care about whether atheism is strong or courageous. Rather, they care that it's what the intelligent people believe, and they want to be intelligent too.
I suspect that most members of the Democratic Party are Democrats for spurious reasons too. But a Republican who lists a bunch of human foibles and writes a scenario that specifically names Democrats as being subject to them is probably attacking Democrats, at least in passing, not just attacking human beings.
Don't let yourself be mindkilled. Arguments aren't soldiers.
Focus on the true things you can say about the world.
I am tempted to reply to this with "May the Force be with you", but instead I'll ask "just what are you trying to say?" You just gave me a reply which consists entirely of slogans, with no hint as to how you think they apply.
See filtered evidence. It is completely possible to mislead people by giving them only true information... but only those pieces of information which support the conclusion you want them to make.
If you had a perfect superhuman intelligence, perhaps you could give it a dozen pieces of information about why X is wrong and zero information about why Y is wrong, and yet the superintelligence might conclude: "Both X and Y are human political sides, so I will just take this generally as evidence that humans are often wrong, especially when discussing politics. Because humans are so often wrong, it is very likely that the human who is giving this information to me is blind to the flaws of one side (which in this specific case happens to be Y), so all this information is only very weak evidence for X being worse than Y."
But humans don't reason like this. Give them a dozen pieces of information about why X is wrong and zero information about why Y is wrong; in the next chapter give them a dozen pieces of information about why Y is good and zero information about why X is good... and they will consider this strong evidence that X is worse than Y. -- And Lewis most likely understands this.
Your understanding of 1942 is amazingly flawed. No-one in the developed world was persecuted for being a materialist at that time, but plenty were for their religion. Moreover, the fashionable belief at the time was dialectical materialism, and part of the claim made for it, by dialectical materialists themselves, was that it was the philosophy of the future.
Well, my first thought was Bertrand Russell being fired from CUNY, which was around 1940, although that was mostly because of his beliefs about sex (which are still directly related to his disbelief in religion). Religion classes in public schools were legal until 1948, and compulsory school prayer was legal until 1963. "In God We Trust" was declared the national motto of the US in 1956.
So given that none of these are examples of people being persecuted for their materialism, can I take it that you agree?
Like Salemicus said, no one of those things are persecutions. The closest of your examples is Bertrand Russell's firing, but even you admit that wasn't over his materialism.
By way of contrast, there were in fact places in the developed world during the 1930s-1940s where one could be persecuted for not being a materialist. And by persecuted, I mean religious people were being semi-systematically arrested and/or executed (not necessarily in that order).
Lewis' point of reference is the UK, not the US. I don't know how much that changes the picture.
I think the US counts as part of "the developed world", however.
Being proud of something that is actually shameful strikes me as particularly sinful.
How about being ashamed of something that is actually prideworthy?
If someone is a materialist just because it's fashionable, that's trouble. Lewis may be wrong on whether or not the Church is 'true,' but I don't think Lewis is wrong on calling out compartmentalization and inconsistency rather than thinking about whether or not doctrines are true or false.
May I make a general request to people posting quotes? Please include not just the author's name but sufficient information to enable a reader to find the relevant quote. This doesn't necessarily have to be full MLA format; but a title, journal or book name if from a print source, page number or URL, and date would be helpful. Hyperlinked URLs are excellent if available but do not substitute for the rest of this information since these threads will likely outlive the location of some of the sources.
Doing so enables the reader not just to get a brief hit of rationality but to say, "Gee, that's interesting. I'd like to learn more," and read further in the source.
In fact, why don't we add a fifth bullet point to the header:
That's what archive.org is for. (Okay, it's not perfectly reliable, but...)
If you want to avoid that problem, whenever you post a link you should submit it to archive.org or archive.is.
-- Napoleon Bonaparte.
David Burns on the "acceptance paradox" in The Feeling Good Handbook, p. 67. (The book has been shown in controlled trials to be effective at improving the condition of depressed people.)
Scott Aaronson in reply to Max Tegmark replying to Scott's review of Max's book. He goes on:
(Emphasis mine.)
This sounds similar to the view that is sometimes called the fragility of deduction. It was why John Stuart Mill distrusted "long chains of logical reasoning" and according to Paul Samuelson it is why "Marshall treated such chains as if their truth content was subject to radioactive decay and leakage."
And that is why the long chains of logical reasoning used in the UFAI argument should not be regarded as terminating in conclusions of near certainty or high probability.
You could say that about anything.
Maybe, but it would not be very painful in many cases. In most cases, people who put forward highly conjunctive arguments don't put them forward as urgent near-certainties which require immediate and copious funding. Moreover, most audiences have enough common sense to treat such implications as lossy.
MIRI/LW presents an unusual set of circumstances which is worth pointing out.
Jay A. Labinger
Bernard and Sir Humphrey are British government functionaries in the comedy show 'Yes Minister'
Bernard: If it's our job to carry out government policies, shouldn't we believe in them?
Sir Humphrey: Oh, what an extraordinary idea! I have served 11 governments in the past 30 years. If I'd believed in all their policies, I'd have been passionately committed to keeping out of the Common Market, and passionately committed to joining it. I'd have been utterly convinced of the rightness of nationalising steel and of denationalising it and renationalising it. Capital punishment? I'd have been a fervent retentionist and an ardent abolitionist. I'd have been a Keynesian and a Friedmanite, a grammar school preserver and destroyer, a nationalisation freak and a privatisation maniac, but above all, I would have been a stark-staring raving schizophrenic!
Anonymous commenter
awesomequotes4u.com
On the contrary, honesty, conscientiousness, being law-abiding, etc. have powerful reputational effects. This is easily seen by the converse; look, for example, at the effect a criminal record has on chance of getting a job.
This quote only gets any mileage by equivocating on the meaning of fair. What the quote is really saying is: "If you expect the world to fulfil even modest dreams just because you try not to be a jerk, expect disappointment." But said like that, it loses all its seemingly deep wisdom. In fact, of course, if you personally fulfilled even some modest dream of a large proportion of the people on earth, you would be wealthy beyond the dreams of lucre.
Most of the comment is great, but
this part seems like a Just World Fallacy. You can start a chain of cause and effect that will make billions of people a bit happier, and yet someone else may take the reward.
But I agree that on average making a lot of people happy is a good way to get wealthy.
I see the quote as warning against a certain kind of naivety. I'm known as a trustworthy person and it's brought me many advantages - people have happily loaned me large sums of money, for example, and I've been employed in high-trust-requiring positions. But I have cooperated in Prisoner's Dilemma-type situations when I really should have realized the other guy was going to defect. In one case, he'd told me he was a narcissist and a Slytherin, and I still thought he'd keep our agreement. I lost a lot.
It always struck me that "fair" is one of the most misused words we have. What we mean when we say "fairness" is a sense that socially-constructed games have fixed rules leading to predictable outcomes, when some notion of a social contract or other ethical framework is exercised. If you enter a game with no rules, what would it even mean to expect a fair reward for fair play?
Eric Raymond
I agree with this. It's also a good quote. However, there is an important caveat: it is possible to get caught up in an echo chamber of equally crazy people. For extreme examples, consider the Lyndon Larouche crowd, Scientology, or the latest resurgence of the Protocols of the Elders of Zion. So just because not everybody believes you're crazy does not imply that you are, in fact, not crazy.
I think it's important to distinguish between "crazy" and "irrational". Many crazy people are very rational. For instance, a fair portion of LW's users, including myself, have experienced some form of temporary or chronic mental illness; that's often exactly the impetus that gets someone to distrust their System 1 thinking and spend effort on deliberately becoming more rational.
"Therefore, this kind of experiment can never convince me of the reality of Mrs Stewart's ESP; not because I assert Pf=0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible than Hf, and none of which is ruled out by the information available to me.
Indeed, the very evidence which the ESP'ers throw at us to convince us, has the opposite effect on our state of belief; issuing reports of sensational data defeats its own purpose. For if the prior probability for deception is greater than that of ESP, then the more improbable the alleged data are on the null hypothesis of no deception and no ESP, the more strongly we are led to believe, not in ESP, but in deception. For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader. As (5.15) shows, the reader's total prior probability for deception by all mechanisms must be pushed down below that of ESP."
ET Jaynes, Probability Theory (S 5.2.2)
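Jaynes's argument is a straightforward Bayesian update over competing hypotheses. Here is a toy version (all the prior and likelihood numbers below are invented for illustration; the argument only requires that the prior for deception exceed the prior for ESP):

```python
# Three competing explanations for a run of sensational experimental data.
# Priors: deception is rare, but judged far more plausible than ESP.
priors = {"chance": 0.989, "deception": 0.010, "esp": 0.001}

# Likelihood of the reported data under each hypothesis: nearly impossible
# by chance alone, easy to produce by either a cheat or a genuine psychic.
likelihoods = {"chance": 1e-6, "deception": 0.5, "esp": 0.5}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: mass / total for h, mass in unnormalized.items()}

for h in ("chance", "deception", "esp"):
    print(f"{h}: {posteriors[h]:.4f}")
```

Making the data even more improbable under the chance hypothesis (shrinking its likelihood further) pushes the posterior mass toward deception, not ESP, which is exactly why sensational reports "defeat their own purpose".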
Edwin Lyngar at Salon
Jane Austen, Sense and Sensibility
I think those mistakes usually happen for an entirely different reason. New people remind us of ones we've already met, and we unconsciously "fill in the blanks" in what we know about the new person with what we know about the person we know, or some kind of average-ish judgement about the group of comparable people we know.
It's pretty remarkable to detect yourself in that kind of mistake; most people are very good at finding confirming evidence for whatever judgments they've made about people, and ignoring any contrary indications.
Yes, this is a good point - we generally don't realise that we are (self-)deceived, so we can't even begin to think about where we went wrong.
Of course, Elinor Dashwood is something of an authorial stand-in, so it's not really surprising that she's incredibly wise and perspicacious like that.
Winston Churchill
-- Sri Aurobindo (1872-1950), Savitri - A Legend and a Symbol
Quotes from the Screwtape Letters have not been terribly well-received in this thread. So, perversely, I decided I had to take a turn:
-- The demon Screwtape, on how best to tempt a human being to destruction.
The existence of souls notwithstanding, Screwtape is clearly right: if you are charitable to almost everybody--except for those you see every day!--then you are not practicing the virtue of charity and are ill-served to imagine otherwise. You cannot fantasize good mental habits into being; they must be acted upon.
Who does more good with their life--the person who contributes a large amount of money to efficient charities while avoiding the people nearby, or the person who ignores anyone more than 100 miles away while being nice to his mother, his employer, and the man he meets in the train?
If he actually donates the money then the charity is not constrained to fantasy. By the miracle of the world banking network, people thousands of literal miles away can be brought as close as the sphere of action. Those concentric rings are measured in frequency and impactfulness of interaction, not physical distance.
What Screwtape is advocating is that he simply intend to donate the money once GiveWell publishes a truly definitive report (which they never will). Or better, that he feel great compassion for people so many steps removed that he could not possibly do anything for them (perhaps the people of North Korea, who are beyond the reach of most charities due to government interdiction).
A tricky question.
The obvious, and trivially true, answer is that he who does both does more good than either. But that's not what you asked.
So. It can be hard to compare the two options when considering the actions of a single person, since the beneficiaries of the actions do not overlap. Therefore I shall employ a simple heuristic; I shall assume that the option which does the most good when one person does it is also the option that does the most good when everyone does it.
So, the first option; everyone (who can afford it) makes large donations to efficient charities, while everyone avoids those nearby and is unpleasant when forced to deal with someone else directly.
If I make a few assumptions about the effectiveness (and priorities) of the charities and the sum of the donations, I find myself considering a world where everyone is sufficiently fed, clothed, sheltered, medically cared for and educated. However, the fact that everyone is unpleasant to everyone else leads to everyone being grumpy, irritated, and mildly unhappy.
Considering the second option; charitable donations drastically decrease, but everyone is pleasant and helpful to everyone they meet face-to-face. In this possible world, there are people who go hungry, naked, homeless. But probably fewer than in our current world; because everyone they meet will be helpful, aiding if they can in their plight. And because everyone's pleasant and tries to uplift the mood of those they meet, a large majority of people consider themselves happy.
This assumption seems trivially false to me, and despite being labeled as a mere 'heuristic', it is the crucial step in your argument. Can you explain why I should take it seriously?
Well, for most choices between "is this good?" and "is this bad?" the assumption is true. For example, is it good for me to drop my chocolate wrapper on the street instead of finding a rubbish bin? If I assume everyone were to do that, I get the idea of a street awash in chocolate wrappers, and I consider that reason enough to find a rubbish bin.
Furthermore, and more importantly, the aim here is not to produce an argument that one action is better than the other in a single, specific case; rather, it is to produce a general principle (whether it is generally better to be charitable to those nearby, or to those further away).
And if option A is generally better than option B, then I think it is very probable that universal application of A will remain better than universal application of B; and vice versa.
When you ask what it's like if everyone were to "do that", the answer you get is going to be determined by how you define "that". For instance, if everyone were to drop chocolate wrappers on the lawn of your annoying neighbor, you might be happy. So is it okay to drop the wrapper on your neighbor's lawn?
It's tempting to reply to this by saying "'doing the same thing' means removing all self-serving qualifiers, so the correct question is whether you would like it if people dropped wrappers wherever they wanted, not specifically on your neighbor's lawn". This reply doesn't work, because there are plenty of situations where you want the qualifier--for instance, putting criminals in jail when the qualifier "criminal" excludes yourself.
(And what's your stance on homosexuality? If everyone were to do that, humanity would be extinct.)
I do need to be careful to define "that" as a generally applicable rule. In this case, the generally applicable rule would be, is it okay to drop chocolate wrappers on the lawn of people one finds annoying?
So I need to consider the world in which everyone drops chocolate wrappers on the lawn of people they find annoying. Considering this, the chances of someone dropping a wrapper on my lawn becomes dependent on the probability that someone will find me annoying.
So, in short, I can put as many qualifiers on the rule as I like. However, I have to be careful to attach my qualifiers to the true reason for my formulation of the rule; I cannot select the rule "it is acceptable to drop chocolate wrappers on that exact specific lawn over there" without referencing the process by which I chose that exact specific lawn.
I can't attach a qualifier to a specific person; but I can attach a qualifier to a specific quality, like being annoying, when considering a proposal.
In these two old blog posts, Yvain makes the case that it's not clear that a world with grumpy people is worse than a world with hungry people.
You are correct. It is by no means clear which is better.
Why the Hell would I want to practice the virtue of charity? If anything, I want to help people. And hating people from a foreign country could be an excellent way to do damage!
Except, with that attitude you won't. You'll sit around telling yourself how virtuous you are for liking people you've never met, while being a misanthrope to everyone you personally know. Furthermore, if (or when) you meet one of the foreign people you supposedly love, you'll wind up being a misanthrope to them as well.
Really? How does you, personally, hating people from a foreign country do damage?
And why would I care about that if my donations produce a giant net benefit? When did I even claim to love anyone?
I'm sorry, my original post was not quite precise. I meant charity in the sense of the Principle of Charity, not charitable contributions. If you prefer, substitute "kind" for "charitable"; it's not quite the same but illustrates the point just as well.
Keep in mind, we're talking about the damage you do to yourself. Hating people you've never met is not a very efficient way to damage yourself. Much better is to hate people you know intimately and see every day. That way you can practice your vices efficiently, and will have as many opportunities as possible to act them out.
-- George Lakoff, "Progressives Need to Use Language That Reflects Moral Values"
Steve Sailer
-- C.S. Lewis, The Screwtape Letters
-- David Burns, The Feeling Good Handbook, page 69 (the book has been shown in controlled trials to be effective at improving the condition of depressed people)
-- The Book of Mormon (Alma 30.24-28)
Edit: I'm mildly surprised by the reactions to this quote. The thing I find interesting about it is that Joseph Smith was apparently sufficiently familiar with Voltairesque anti-Christian ideas that he could relay them coherently and with some gusto. This goes some way towards passing the ideological Turing test.
I'm hardly an expert on the Book of Mormon, but this quote surprised me so I googled it. It appears to be an accurate quote but is not fully attributed. As best I can make out, the speaker is the antichrist (or some such evil character; not sure on the exact mythology in play here).
Failure to note that means this quote gives either an incorrect view of the Book of Mormon, or of the significance of the text, or both.
When quoting fiction, I recommend identifying both the character and the author. E.g.
--Korihor in The Book of Mormon (Alma 30.24-28); Joseph Smith, 1830
Having said all that, it's still a damn good rationality quote.
I would think it's bad publicity for us to explicitly note a resemblance to antichrist-type characters.
Unless we're trying to appeal to contrarians.
Considering how much hating on religion there already is around here, I don't think there's much left to lose on that front.
Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.
Upvoted because that really is a failure mode worth keeping in mind, but I don't think it's responsible for the attitude towards religion around here; I think that's a plain old founder effect.
This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.
Nassim Taleb
Nassim Taleb
What does this mean?
I interpret it as related to expert-at versus expert-on. If you assume that an expert-on is always an expert-at, then someone explaining something they can't do is clearly not an expert.
I'm not sure that assumption is true, though I could believe it's a useful rule of thumb.
My interpretation is that having an explanation for something is useless if you can't actually make it happen. And even if you don't fully understand how something works, it's good to be able to use it.
For example, I would much rather be able to use a computer than know how it works.
Also, if you can't do it, that calls into question whether your explanation is actually valid. Anyone can explain something, so long as they're not required to actually make the explanation useful.
So we could rephrase it as: "If I really understand Xs, I can build one, but if I kind of understand Xs, I can at least use one"?
-- David Burns, The Feeling Good Handbook, page 126 (the book has been shown in controlled trials to be effective at improving the condition of depressed people)
Upvoted. I don't know if this is true; indeed, I suspect it is definitely partially false. But, I don't think it is entirely false. It's interesting and has made me think.
--Scott Adams, Interview with Julia Galef, February 10, 2014
— Paul Krugman, "Sergeant Friday Was Not A Fox"
"Just looking at the data like a scientist" does not give you magic scientist powers. Models of the world are what allow you to predict it, without need for magic scientist vision.
Adams doesn't elaborate on this point, but I read him as saying that if you've actually measured things and taken data that bears on your point, then your model is more likely to be correct.
For example, suppose a model says that raising the minimum wage reduces employment. That's a pretty common model in economics and it can be backed up with a lot of math. However I would not find that alone convincing. On the other hand, if an economist goes out into the world and looks at what actually happened when the minimum wage was raised, that would be more convincing. If they can figure out a way to do an experiment in which, for example, 5 nearby towns raise their minimum wage, 5 keep it the same, and another 5 lower it, that would be even more convincing.
Another example: consider a model that says
Those three statements are reasonably well established and backed up by data. However, if you throw in a model that says dietary cholesterol raises in-body cholesterol, and in-body cholesterol causes heart disease, and therefore eating eggs reduces life expectancy, you've jumped way beyond what the data supports. On the other hand, if you compare the rates of all-cause mortality among people who eat eggs and people who don't or, better yet, do a multiyear controlled experiment in which the only diet variation between groups is that some people eat eggs and others don't, the answers you get are far more likely to be correct.
Here's another one: you have lots of detailed calculations that say if you smash two protons together at .999999c relative velocity, and you do it a few million times, then you'll see certain particles show up in the debris with very precise probabilities. Only when you run the experiment, you discover that the fractions of different particles you see don't quite match what you expected because there's an additional resonance you didn't know about and didn't include in the model.
In other words, empirical data beats mere models. Models can be self-consistent and plausible, but not fully reflect the real world. Models that go beyond what the data says run the risk of assuming causal connections that don't exist (dietary cholesterol to in-body cholesterol) or missing factors outside the model (maybe eggs do increase the risk of heart disease but reduce the risk of cancer) that are more important.
Of course all these experiments are really hard to do, and take years of time and millions, even billions, of dollars, so often we muddle along with seriously flawed models instead. However we need to remember that models are just models, not data, and be reasonably skeptical of their recommendations. In particular, if we're about to do something really expensive and difficult like changing a nation's dietary preferences based on nothing more than a model, maybe we should step back and spend the money and the time needed to collect real data before we go full speed ahead.
Fair enough - political conditioning has caused me to assume that any non-specialist saying "don't trust models, just 'look at the data'," is the victim of some sort of anti-epistemology.
In context, it's less likely that that's the case, but I still think this quote is painting with much too wide a brush.
Prediction is going beyond the data, so a model that never goes beyond the data isn't going to be much use.
Climate change models incorporated data, so they are not purely theoretical like the economic model you mentioned.
I ... think he's talking about basic correlation, statistical analysis, that sort of thing?
(I enjoy Scott's writing, but I didn't upvote the grandparent.)
Nassim Taleb
Nassim Taleb
"As I fear not a child with a weapon he cannot lift, I will never fear the mind of a man who does not think.’”
Words of Radiance, Brandon Sanderson, page 795
Both the metaphor and its literal application only make sense if "cannot" and "does not" mean "never", and they really don't.
While I'd never fear the mind of a man who is literally in a coma and doesn't think at all, I'd have plenty of reason to fear the mind of a man whose ability to think is merely limited. He can be a stupid moral reasoner and a clever killer at the same time.
I recall one Sherlock Holmes story in which Holmes said that he had a lot of trouble predicting the actions of idiots; with an intelligent man, Holmes could work out what actions he himself would take in a given situation, but an idiot could do anything.
Of course, this presumes that one knows the goals of said intelligent agent.
In that case, though, you're afraid of the man's axe more than his mind.
What's that from?
"The fragilista falls for the Soviet-Harvard delusion, the (unscientific) overestimation of the reach of scientific knowledge. Because of such delusion, he is what is called a naive rationalist, a rationalizer, or sometimes just a rationalist, in the sense that he believes that the reasons behind things are automatically accessible to him. And let us not confuse rationalizing with rational—the two are almost always exact opposites. Outside of physics, and generally in complex domains, the reasons behind things have had a tendency to make themselves less obvious to us, and even less to the fragilista." - Nassim Taleb
-- José Manuel Rodriguez Delgado
-- James Dean
I find this a useful quote to keep in mind when I'm experiencing mental states that I don't want to experience.