Rationality Quotes February 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
- No more than 5 quotes per person per monthly thread, please.
Comments (563)
-- Scott Sumner (talking about Italian politicians when the EU controls their monetary policy, but it generalizes)
This just prompted me to (hypothetically, for the sake of amusement) reinterpret many of Eliezer's actions as a psychological experiment wherein he has contrived exaggerated scenarios in order to test this empirically.
-- Chad Fowler (from The 4-Hour Body)
(Joseph Heath & Andrew Potter, The Rebel Sell)
Sun Tzu on establishing a causal chain from reality to your beliefs.
Dupe.
"We're even wrong about which mistakes we're making."
-Carl Winfeld
--Lawrence Watt-Evans, The Spriggan Mirror
-- C. S. Lewis, Out of the Silent Planet
Reminds me of this:
-- From the final screen of Call of Cthulhu: The Wasted Land
...Hooray for the phygists?
Well, there are lots of cultists running around trying to summon an Elder God. This will almost certainly end in disaster. The options we have to fight this are: a) We can try to stop all Elder-God-summoning related program activities or b) We can try to get there first and summon a Friendly Elder God.
Both a) and b) are almost impossibly difficult and I find it hard to decide which is less impossible.
I think you have the lesson entirely backward.
How so? A person convinced that any nuclear power plant risks a multi-megaton explosion would have some very weird ideas about how nuclear power plants should be built: they would deem moderated reactors impractical and a negative thermal coefficient of reactivity infeasible (or be simply unaware of the mechanisms that make stability achievable), and would build some fast-neutron reactor that relies on very rapid control rod movement for its stability. Meanwhile, normal engineering produced nuclear power plants that, imperfect as they may be, do not make a crater when they blow up.
To the extent that you already know that nuclear power plants are basically safe they clearly do not apply as an analogy here. Reasoning from them like this is an error.
Yes, but you can say that because you have the independent evidence that nuclear power plants are workable, beyond the mere say-so of a couple of scientists. You don't have that kind of evidence for AI safety.
Also, this:
... is not a given. What makes you think that the worst it would do is kill you, when killing is not the worst thing humans do to each other?
William Deresiewicz
The whole speech is worth reading as one giant rationality quote
Not bad, although it seems to equate originality with goodness a little too much.
Linus Pauling
It's necessary, but not sufficient.
The example in the comic is not a good one. Of the choices on the board, E being proportional to mc^2 is the only option where the units match. You only need to have that one idea to save yourself the trouble of having lots of other ideas.
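The units claim can be checked mechanically. A minimal sketch (hypothetical helper functions, representing a quantity's dimensions as exponents of the SI base units):

```python
# Represent a physical quantity's dimensions as exponents of SI base units.
def dims(kg=0, m=0, s=0):
    return {"kg": kg, "m": m, "s": s}

def mul(a, b):
    # Multiplying quantities adds their dimension exponents.
    return {k: a[k] + b[k] for k in a}

mass = dims(kg=1)
speed = dims(m=1, s=-1)
energy = dims(kg=1, m=2, s=-2)  # joule = kg * m^2 / s^2

# m * c^2 is the only candidate on the board whose dimensions match energy;
# e.g. m * c (kg*m/s) has the dimensions of momentum, not energy.
assert mul(mass, mul(speed, speed)) == energy
assert mul(mass, speed) != energy
```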
It's a joke, which I assume is intended for a mostly non-physicist audience.
We demand complete rigour from all forms of levity! The unexamined joke is not worth joking!
Mickey Mouse is dead Got kicked in the head Cause people got too serious They planned out what they said They couldn't take the fantasy They tried to accept reality Analyzed the laughs Cause pleasure comes in halves The purity of comedy They had to take it seriously Changed the words around Tried to make it look profound ...
-- Subhumans, "Mickey Mouse is Dead"
To prevent lines from being merged together, add two spaces at the end of each one.
That's so...typewriter.
Thanks.
Yes, but also being able to tell which of those ideas are good is even better.
From the alt-text in the above-linked comic:
Bryan Caplan
This sounds like a recipe for stagnation. A true friend is willing to encourage you to grow.
I think I parsed that quote less along the lines of 'dude, you hardly know any math and so I won't love you' and more along the lines of 'dude, you seem to have the same taste for movies and music and we can have a conversation -- I love (hanging out with) you'.
The former has an objective measure and thus one can speak of definite growth while the latter is subjective.
That's not what I mean. Suppose you have various negative personality traits that are negatively influencing your life (e.g. perhaps you are selfish or short-tempered). If you don't carefully cull the people around you, you might start noticing that many people react negatively to you, and you might start wondering why. If you determine that the problem is with you and not them, that's an opportunity for growth. If you only surround yourself with people who are willing, for whatever reason, to ignore your negative personality traits, then you've lost an opportunity to notice them.
Similarly, and this should be scary to anyone who cares about epistemic rationality, suppose you have various false beliefs and you decide that those beliefs are so important to your identity that people who don't also believe them can't possibly love you the way you are, so you only surround yourself with people who agree with them...
Sure, in such a case, I've optimized for my own 'social harmony'. We all do this to varying degrees anyway. Signalling, sub-cultures and all that blah. Note that the quote simply speaks of a process (selection) to maximize an end (social harmony, however that is defined). It doesn't say anything about whether such selection should be for false or true values (however these are defined).
"Love you just as you are" doesn't imply "hate for you to change".
After all, you are changing.
Okay, but P(doesn't want you to change | loves you just the way you are) is higher than P(doesn't want you to change | doesn't love you just the way you are), and in addition P(you won't change | you surround yourself with people who love you just the way you are) is higher than P(you won't change | you don't surround yourself with people who love you just the way you are).
This sounds almost horrifically dystopian, in a sort of Friendship is Optimal way.
I suppose it does, by as objective a measure as something like 'harmony' allows.
Karl Popper
There's a failure mode associated with this attitude that's worth watching out for: assuming that people who disagree with you are being irrational, and so not bothering to check whether you have arguments against what they say.
-- Seng-Ts'an
Does this mean something different than "Truth doesn't have a moral valence"?
Cause it seems like it is trying harder to sound deep than to sound insightful. Sigh - maybe I'm just jaded by various other trying-to-sound-deep-for-its-own-sake sayings. Aka seem deep vs. is deep issues.
My primary interpretation was "attaching yourself to arguments obstructs your ability to seek the truth." If you are interested in the truth, it does not matter if you or your interlocutor is wrong or right; it matters what the truth is.
Another interpretation is "is-thinking leads to accuracy, should-thinking leads to delusion."
A third interpretation is "moralistic thinking degrades morals." I don't consider that interpretation interesting enough to agree or disagree with it.
It doesn't seem to be clear whether Seng-Ts'an is talking about moral right and wrong, or the kind of "wrong" that is involved in "proving your opponent wrong" in debates. The first interpretation is just silly according to any philosophy that cares about ethics, but the second one does make a lot of sense.
This is probably a more plausible reading of the quote, but I think it is false. If I don't believe I am right, or at least making an important point (such as playing devil's advocate), I'm doubtful that my comments are relevant or helpful in figuring out what is true.
By contrast, your interpretation of the quote suggests that Professor Armstrong should be indifferent to whether particular x-risks that he has highlighted as "most dangerous" are actually the most dangerous x-risks.
Anyway, your second suggested reading is essentially my suggested reading, and I agree that your third suggested reading is not a very interesting assertion.
It may be worthwhile to consider the role of curiosity and questions.
The first interpretation sees 'right' and 'wrong' as the property of people, not ideas. Doing so is less helpful than seeing rightness as a property of ideas- the plain truth.
Thus, it suggests that the Professor should be indifferent to which x-risks he highlights as most dangerous, except for the criterion of danger. It would risk sorting his list incorrectly to confine himself by his opinion, his past statements on the issue, or those which avoid giving support to an enemy.
I was introduced to the poem by someone who was arguing against moralistic thinking, who knows much more about this sort of poetry than I do; I mention it for completeness, as it may have been the author's preferred interpretation.
Maybe it's a reference to the idea that you need something more important than The Truth, so that you keep testing/refining your answer when you think you've got to the truth.
I'm going to reply to the quote as if it means "Truth doesn't have a moral valence" and rebut that truth should be held more sacred than morals, rather than simply outside them. For example, if there are two cases, and case 1 leads to a morally "better" outcome than case 2 (in quotes because the word "better" is really a black box), but case 1 leads to hiding the truth (including hiding it from yourself), then I would have to think very carefully about it. In short, I abide by the rule "That which can be destroyed by the truth should be", but am wary that this breaks down practically in many situations. So when presented with a scenario where I would be tempted to break this principle for the "greater good" or the "morally better case", I would think long and hard about whether it is a rationalization, or whether I simply did not expend the mental effort to come up with a better third alternative.
Ozy Frantz - Brain Chemicals are not Fucking Magic
-- Martin Fowler
--Jovah's Angel by Sharon Shinn
Maybe it's just my most recent phys-chem lecture talking, but my instant response to that was 'truth is a state function'. Or perhaps 'perceived truth' and 'should be' are. (I.e., they shouldn't depend on the history preceding the current perceived truth.)
@slicknet
You may or may not have noticed, but most people are biased. Whether bias counts as "dumb", "ignorant" or "making mistakes" is left as an exercise for the reader.
With apologies for double-commenting: "Don't assume others are ignorant" is likely to be read by a lot of people (including myself at first) as "Aim high and don't be easily convinced of an inferential gap". Posts on underconfidence may also be relevant.
I would somewhat agree with this if the phrase "making mistakes" was removed. People generally have poor reasoning skills and make non-optimal choices >99% of the time. (Yes, I am including myself and you, the reader, in this generalization.)
In most situations there are multiple people other than yourself who each think the others are dumb, ignorant and making mistakes. Don't assume that the one you happen to be interacting with at the moment is right by default.
If we are in the business of making assumptions, there is no dichotomy, you can as well consider both hypotheticals. (Actually believing that either of these holds in general, or in any given case where you don't have sufficient information, would probably be dumb, ignorant, a mistake.)
This misses the point a bit due to an equivocation on "assume". In ordinary discourse, it usually means "assume for the purpose of action until you encounter contrary evidence". That's very different from the scientist's hypothetical assumptions that are made in order to figure out what follows from a hypothesis.
It's epistemically incorrect to adopt a belief "for the purpose of action", and permitting "contrary evidence" to correct the error doesn't make it a non-error.
I think what Creutzer means by "ordinary discourse" is everyday problems where you aren't always able to give a thought the time it deserves. When you don't even have five minutes by the clock to think about the problem rationally, it is better to rely on the heuristic "assume people are smart and some unknown context is causing problems" than on the heuristic "people who make mistakes are dumb". That said, heuristics are only good most of the time and may lead you to errors.
In this case it is still technically an error, but you are merely attempting to be "less wrong" about a case where you don't have time to be correct. Adopting the heuristic until you encounter contrary evidence (or have the time to think of better answers) follows closely the point of this website.
Using a heuristic doesn't require believing that it's flawless. You are in fact performing some action, but that is also possible in the absence of a careful understanding of its effect. There is no point in doing the additional damage of accepting a belief for reasons other than evidence of its correctness.
Exactly, thanks for the clarification.
I believe that this statement, while correct, misses the point of preemptive debiasing. Yvain said it better.
The original quote draws attention to the mistake of not giving enough attention to the hypothetical where something appears to be wrong/stupid, but upon further investigation turns out to be correct/interesting. However, it confuses the importance of the hypothetical with its probability, and endorses increasing its level of certainty. I pointed out this error in the formulation, but didn't restate the lesson of the quote (i.e. my point didn't include the lesson, only the flaw in its presentation, so naturally it "misses" the point of the lesson by not containing it).
Also, consider the possibility that it is you who is dumb, ignorant, and making mistakes.
I don't consider it, I assume it.
But "dumb" and "ignorant" are not points on a line, they are relative positions.
To quote this bloke at a climbing gym I used to frequent "We all suck at our own level".
Or better yet, assume nothing, and reserve judgement until you have more information.
You always assume things, whether you are aware of it or not. At least by making your assumptions explicit and conscious, you have a better chance of noticing when they are wrong. And assuming "that people are dumb, ignorant, and making mistakes" is a common default subconscious failure mode.
-Alex Tabarrok
It seems to me that the same would apply to any in-group. The reasoning runs more-or-less as follows:
It is us (not me personally, but a group with which I strongly identify) who are treating this person badly; since we are doing it, he must deserve it. Since he deserves it, he must be guilty. For if he did not deserve it, then I would be horrified at the actions of people I have always tried to emulate; and that, in turn, would mean that I had already given some support to an evil group, and had indeed put significant effort into being a part of that group and taking up its norms.
If the group is evil, or does evil actions, then I am evil by association.
And a good person does not want to reach that conclusion; therefore, the person being punished must be guilty. And thus, good people do evil things by not acknowledging evil being done in their name as what it is.
One amusing aspect is that assuming the person is justified in their belief that their church/country is ethical, the above is a valid inference.
Not necessarily. You don't punish people based on their likelihood of being guilty, but based on the severity of their crime.
If torture is used as a tool to gain information instead of as a punishment, it's even more questionable whether the likelihood of being guilty correlates with the severity of the torture. The fact that someone decides to torture to get more information suggests that they have an insufficient amount of information.
If there is a 50% chance that a person has information that can prevent a nuclear explosion, you can argue that it's ethical to torture them to get that information.
After the bomb has exploded and you know for certain who committed the crime, there's not much need to torture anyone.
An interrogator who tortures is more likely to get false confessions that implicate innocents. If he then goes and tortures those innocents, you see that people who torture are more likely to punish innocents than people who don't.
Even the first person who was tortured might be innocent or ignorant.
Yes, but that's beside the point I tried to make. Torturing in general produces a dynamic that makes you punish more innocent people.
I wouldn't be surprised if this has come up before:
―Kurt Vonnegut (attributed to Kilgore Trout), in Breakfast of Champions
Yep.
— Gaston Leroux
Only with very low probability.
and the human mind loves to find patterns even when the probabilities of the pattern being a rule are low. Coincidences are correlation.
Scott Adams
[Footnote to: "This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy". The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]
-- David Griffiths, Introduction to Elementary Particles (2008), p. 24
"For belief did not end with a public renunciation, a moment when one's brethren called one a heretic, and damned. Belief ended in solitude, and silence, the same way it began." -Robert V. S. Redick, The Night Of The Swarm
(I'm mid-way through the book, but perhaps I should instead say that I am mid-way through gur sryybjfuvc bs gur evat, juvpu unf sbe fbzr ernfba orra vafregrq vagb gur zvqqyr bs vg, pbzcyrgr jvgu eviraqryy, zvfgl zbhagnvaf, naq gur jvmneq qvfnccrnevat gb svtug n zbafgre).
-- David Brin
Klingon proverb.
Where is this from? I looked it up to see if the weird grammar was intended and couldn't find anything.
It's ... ahem ... non-canon. A different faction.
I thought it interesting that the near-inverse of a useful rationality quote can still be a useful rationality quote.
I don't think it's an inverse! The first one is saying you might not succeed in killing the person you're trying to kill and the second one is saying you might instead kill someone else that you don't want to kill! They're two properties of the same worst-case scenario. =]
I understood the second one as saying that that blind idiot with the knife might end up killing you, not necessarily intentionally, so be careful.
But also, if you're being a blind idiot waving your knife around, you could kill someone! So stop that. =]
So it's true what they say! The opposite of a Klingon proverb is also a Klingon proverb...
-- Randall Munroe
Definitely a double, but I can't link the others right now.
I thought that unlikely, because it's from last week's XKCD What If?
Maybe Randall has said it before (or borrowed it from someone else).
Earlier posting
OK thanks.
I don't know why I didn't see it - I tried searching the page for Icarus before posting :(
Well, that post was from the January thread. If you only Control-F'd this page, then it wouldn't have come up.
I searched on the entire quote. That's probably easier and more reliable than trying to pick out a keyword.
That seems unlikely; the quote above was only posted about three weeks ago and nothing about Icarus turns up in a search. Can anyone find a duplicate?
Two, in fact.
It was three, but I deleted mine.
Faramir, from The Lord of the Rings, on lost purposes and the thing that he protects
Except that a non-overwhelming love of a useful art may help you become better in the art, even though you would switch to another if it helped you optimize more.
another great quote for 2013
(Joseph Heath, The Efficient Society)
Heath is an excellent writer on economics/philosophy.
Francis Spufford, Red Plenty
Is it a good book? I was thinking of buying it, but I am very risk-averse when it comes to buying fiction.
I thought it was pretty good in its own way, although I expected (coming at it from Shalizi) much more math & science than it actually had.
I am only about one-third of the way through, but it is definitely a good book thus far.
I would not personally buy it, since I only purchase fiction that I am certain I will read more than once, but it is definitely worth reading.
I've just come across a fascinatingly compact observation by I. J. Good:
This is a beautifully simple recipe for a conflict of interest:
Considering absolute losses assuming failure (l for the adviser, L for the firm) and absolute gains conditioned on success (v and L's counterpart V respectively), an adviser is incentivized to give the wrong advice precisely when l/v > p/(1-p) > L/V, where p is the probability of success.
You can see this reflected in a lot of cases, because the gains to an adviser often don't scale anywhere near as fast as the gains to society or a firm. It's the Fearful Committee Formula.
Which is not nearly as common as the reverse, the Reckless Adviser Formula: when the personal loss to the adviser is so low and the potential personal gain so high that they recommend adoption even when the expected gain for the company is negative.
In general, this is referred to as the principal-agent problem.
Note that the adviser's ethical problem also exists if L/V > p/(1-p) > l/v.
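Both regimes can be checked numerically. A minimal sketch with hypothetical numbers (p is the probability of success; V and L are the firm's gain and loss; v and l are the adviser's personal gain and loss):

```python
def favors_adoption(gain, loss, p):
    # Adoption has positive expected value iff p*gain > (1-p)*loss,
    # i.e. iff the odds p/(1-p) exceed the ratio loss/gain.
    return p / (1 - p) > loss / gain

p = 0.5                 # odds p/(1-p) = 1

# Fearful Committee: l/v > p/(1-p) > L/V.
V, L = 10.0, 1.0        # firm: L/V = 0.1, adoption is worthwhile
v, l = 1.0, 10.0        # adviser: l/v = 10, rejection is personally safer
assert favors_adoption(V, L, p) and not favors_adoption(v, l, p)

# Reckless Adviser: L/V > p/(1-p) > l/v -- the adviser pushes a bad project.
V, L = 1.0, 10.0
v, l = 10.0, 1.0
assert not favors_adoption(V, L, p) and favors_adoption(v, l, p)
```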
Is the order also inverted in the original?
Fixed.
I. J. Good's original, which I've somewhat abridged, explicitly specifies that there are no competitors who cause visible losses/gains after the invention is rejected.
To clarify, this is a summary of what you've excluded in your quote, not a response to the other case where the ethical problem exists, correct?
It's a summary of what I excluded - I had actually misinterpreted, hence my quote indeed was not a valid reply! The other case is indeed real, sorry.
Name three?
The success of Market-Based Management / Koch Industries appears to be due at least in part to their focus on NPV at the managerial level. You get stories like (from memory, and thus subject to fuzz) the manager of a refining plant selling the land the plant was on to a casino that was moving to the area; he was rewarded for doing so because the land was more valuable to the casino than to the company, even after factoring in the time lost while the plant was shut down and relocated. The corporate culture (and pay incentive structure) rewarded that sort of lateral use of resources, whereas a culture which compartmentalized people and departments would have balked at the lost time and disruption.
(Sorry, I couldn't resist.)
Studies show that people who try to run behind a car frequently fail to keep up, while nobody who runs in front of a car fails more than once.
-- Noah Brand
I'd prefer if this quote ended with " ... and then I got done weeping and started working on my shoe budget," but oh wells.
That's really the entire point of the original quote that this quote is making fun of. The difference between the original and this one is that the author of the second has not updated his baseline expectation that he should have shoes, and that something is wrong if he doesn't.
Our baseline expectations determine what we consider a "loss", in the prospect theory sense, so if seeing someone else's problem helps you reset your baseline, it actually is a way to help you stop weeping and start working on the budget, as it were. What we call "getting perspective" on a situation is basically a name for updating your baseline expectation for how reality "ought to be" at the present moment.
(That isn't a perfect phrasing, because English doesn't really have distinct-enough words for different sorts of "oughts" or "shoulds". The kind I mean is the kind where reality feels awful or crushingly disappointing if it's not the way it "ought" to be, not the kind where you say that ideally, in a perfect world, things ought to be in thus and such a way, but you don't experience a bad feeling about it right now. It's a "near" sort of ought, not a "far" one. Believing the future should be a certain way doesn't cause this sort of problem, until the future actually arrives.)
“need”
Nope, the thing I'm talking about is closer to what the Buddhists would call an "attachment", and some Buddhist-influenced writers call an "addiction". (Others would call it a "desire", but IMO this is inaccurate: one can desire something without being attached to actually getting it.)
I agree that resetting your baseline is often important if you think that your lack of shoes is a soul-crushing awfulness. This quote is mainly arguing against the attitude that says "you have feet therefore your shoe problem is a non-problem, don't even bother feeling bad or working on it". It's comparatively very minor, but it should be fixed just like any other problem. This quote is arguing against resetting your baseline to the point where minor problems get no attention at all.
That may be, but the actual context of the quote it's arguing with is quite different, on a couple of fronts.
Harold Abbott, the author of the original 1934 couplet ("I had the blues because I had no shoes / Until upon the street, I met a man who had no feet"), wrote it to memorialize an encounter with a happy legless man, at a time when Abbott was dead broke and depressed. (Abbott was not actually lacking in shoes, nor the man only lacking in feet, but apparently in those days people took their couplet-writing seriously. ;-) )
Thing is, at the time he encountered the legless man (who smiled and said good morning), Abbott was actually walking to the bank in order to borrow money to go to Kansas City to look for a job. And not only did he not stop walking to the bank after the encounter, he decided to ask for twice as much money as he had originally intended to borrow. He had in fact raised his sights, rather than lowering them.
That is, the full story is not anything like, "other people have worse problems so STFU", but rather that your attitude is a choice, and there are probably people who have much worse circumstances than you, who nonetheless have a better attitude. Abbott wrote the couplet to put on his bathroom mirror, as an ongoing reminder to have a positive outlook and persist in the face of adversity.
Which is quite a different message than what Noah Brand's snarky quip would imply.
I think the problem that people are having with the quote is that it doesn't actually contain the full story, and when it is repeated outside that context, the meaning they get from parsing the words is "other people have worse problems so STFU", and it's not a good idea to go around repeating it if people are going to predictably lack the context and misinterpret it.
I guess I didn't quote the original article, and he was saying "I am pointing out this problem that is probably not as big or painful as this other problem, but can we please acknowledge its existence also?" And, as often happens with social issues, he was trying to preempt the inevitable "why would we care? we have it worse!" response.
I definitely agree that attitude is a choice! I wasn't quite aware of the original quote, but I would put it down as an instrumental rationality quote as well. 8) But it sounds like his shoelessness was a symptom of bigger/different problems?
I consider Noah Brand's quote a rationality quote because it's a reminder that problems require real solutions. Changing your attitude to be positive is useful, but changing your attitude to accept that something that sucks will continue to suck indefinitely is not the answer.
Yes, his business (a grocery store) had just failed, taking his entire life savings with it. (And the story doesn't actually say he was shoeless, anyway, just that the rhyme was something he posted on his mirror as a reminder of the encounter.)
Generally speaking, bigger problems tend to be cheaper to solve (i.e. solving them will yield more utilons per dollar); so if there is a painting in a museum that risks being sold, and there are people who risk dying from malaria, the existence of the latter is a good indication that worrying about the former isn't the most effective use of a given amount of resources. (“Concentrate on the high-order bits” -- Umesh Vazirani.) But in this particular case, that heuristic doesn't seem to work (unless I'm overestimating the cost of prosthetics).
This. If only people realized that unpleasant facts do not cancel each other out, and pointing out one unpleasant fact in addition to another should never ever make us feel better, because it only leaves us in a worse world than we started out in. Compute the actual utilities. It's such a common and avoidable error.
What's an actual utility?
In the example above: the fact that you have no shoes equates to negative utility for you. If you're a normal human being who is generally well-intentioned and wants people to have both feet and shoes for those feet, you would feel upset on seeing someone without feet, hence more negative utility. Your negative utility from having no shoes plus the negative utility from seeing someone with no feet can only amount to a more negative total score than the one obtained by considering your own lack of shoes alone. Even in the case where you're a complete egoist for whom others' misfortunes have absolutely no impact on your own personal happiness, if you sum them up you still end up with the same negative utility from having no shoes. Only if you're the kind of monster that rejoices in other people's suffering is it possible for your utility score to rise after seeing someone with no feet. Yet even people who aren't complete monsters seem to take comfort in the fact that someone else has it worse than them; this seems intuitive to most people, and counter-intuitive to others, i.e. me and the person who made the quote.
(Disclaimer: I haven't studied utilitarianism formally; probably I'm using more of an everyday definition of the word "utility", akin to "feel-good-ness" in a broad sense. The way I've thought about this problem stems purely from my intuitions.)
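The arithmetic in the comment above can be made explicit. A minimal sketch with hypothetical utility numbers: adding another person's misfortune as a term can only lower the total, never raise it, unless that term is positive (the "monster" case).

```python
# Hypothetical utilities; negative = bad.
my_lack_of_shoes = -5.0

def total_utility(others_misfortune_term):
    # Total felt utility = my own problem plus how I weigh the other person's.
    return my_lack_of_shoes + others_misfortune_term

# Sympathetic observer: someone else's footlessness adds more negative utility.
assert total_utility(-20.0) < my_lack_of_shoes
# Perfect egoist: the term is zero, so the total is unchanged, not improved.
assert total_utility(0.0) == my_lack_of_shoes
# Only a positive term (rejoicing in the suffering) could raise the total.
assert total_utility(+3.0) > my_lack_of_shoes
```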
I think both your comment and the quote are forgetting the instrumental purpose of crying and/or feeling bad.
Unfortunately, I've met a lot of people who forget the instrumental purposes of crying and/or feeling bad. =[
I can't say I see your point. Mind explaining?
My guess: The purpose of crying is to make people around you more likely to help you.
So if you don't have shoes, there is a chance that crying in public will make someone give you money to buy the shoes. But if there is a person without feet nearby, your chances become smaller, because people will redirect their limited altruist budgets to that other person. Your crying becomes less profitable.
... Alright, but... that's a separate point to make altogether. It's not a quote about making yourself as likely as possible to get others to help you, and, I would say, it doesn't have to be; it's a quote about how other people's negative experiences influence the way you feel about yours.
But if you look at it other way, then pointing out unpleasant facts about other people's condition (that don't apply to us) is equivalent to pointing out good facts about our condition, which should make us feel better, as it leaves us in a better world than we started out in.
That's exactly the kind of thinking the world needs less of, and the kind that I was trying to warn readers against in the parent comment. Why? Just why would a worse world for someone else make for a better world for you, if that someone is not your mortal enemy? It just makes for a worse world, period.
The point isn't that you're taking pleasure in their misfortune, it's that you're taking pleasure in your own fortune. "I'm so lucky for having X." If you don't do that, then any improvements in your standard of living or situation in general will end up having no impact on your happiness, since you just get used to them and take them for granted and don't even realize that you would have a million reasons to be happy. And then (in the most extreme case) you'll end up feeling miserable because of something completely trivial, because you're ignoring everything that could make you happy and the only things that can have any impact on your state of mind are negative ones.
Someone commented above about the instrumental value of crying and feeling bad, and you're actually pointing out the case where crying and feeling bad fail at being instrumental. Basically, I'm for whatever attitude that gets you to stop crying and start fixing some problem, and if resetting your baseline helps, it's fair game! It definitely works for me in some cases.
I think this quote is trying to argue against the attitude that problems that are minor compared to other problems don't deserve any attention at all. That everyone without shoes should just wrench themselves into happiness and go around being grateful, rather than acknowledging that they keep stepping on snails and pointy things, which sucks, and making productive steps toward acquiring shoes.
I remember reading something about plastic surgeons getting kind of looked down upon because they're not proper heroic doctors that handle real medical problems.
... I think I see where you're coming from -- by realizing we're not at the far end of the unhappiness scale (since we have a counterexample to that), we should calibrate our feelings about our situation accordingly, yes?
It's still not the way I view things; I'd like to say I prefer judging these things according to an absolute standard, but it's likely that that would be less true for me than I want it to be. To the extent that it doesn't hold true for me, I think it's better to take into consideration better states as well as worse ones. Saying "at least I don't have it as bad as X" just doesn't feel like enough; everybody who doesn't have it as bad as X could say it, and people in this category can vary widely in their levels of satisfaction, the more so the worse X has it. It's more complete to say "Yes, but I don't have it as good as Y either" or, better yet, "I have it better/worse than my need requires".
Yes, pretty much.
Yes, yes, but now you are going into far more depth than the original quote. The idea behind the quote seems to have been (at least as I read it): "Be happy that you have feet, having feet is not something you should take for granted." The quote says nothing more than that. (Well, not quite. The point it makes is not only meant to be reserved for feet specifically, but rather seems to be meant as a comment on anything people take for granted.)
I think people just accidentally conflate keeping problems in perspective with the idea that the existence of bigger problems makes the small problems negligible and therefore equivalent to non-problems.
I've seen this happen with positive things too; sometimes you won't mind repeatedly doing small favors for someone and they start acting like you not minding means the favor is equivalent to doing nothing from your perspective, which is frustrating when your small but non-zero effort goes unacknowledged.
It's sort of like approximating sinθ as 0 for small angles. ^_^
Yep. Most people seem to behave as though the choice between spending $5 and spending $10 is a much bigger deal than the choice between spending $120 and spending $125, but if anything it's the other way round, because in the latter case you'll be left with less money. (That heuristic does have a point for acausal reasons analogous to these insofar as you'll have to make the first kind of choice much more often than the second, but people will still behave the same way in one-off situations.)
Another possible motivation for that heuristic: something that's a good buy for $5 might well be a bad buy for $10, but something that's a good buy for $120 is probably still a good buy for $125. If I find that a cheap item's twice the cost I thought it was, that's more likely to force me to re-do a utilitarian calculation than if I find an expensive item is 4% pricier than I thought it was.
Yes, but OTOH if I'm about to buy something for $125 it isn't that unlikely that if I looked more carefully I could find someone else selling the same thing for $120, whereas if I'm about to buy something for $10 it's somewhat unlikely that anyone else would sell the same thing for $5 (so looking around would most likely be a waste of time), and I'd guess these two effects would more or less cancel out.
I can often get a $10 good/service for $5 or less if I'm willing to delay consumption or find another seller (e.g. buying used books, not seeing films as soon as they come out, getting food at a canteen or fast food place instead of a pub or restaurant, using buses instead of trains). I might be atypical.
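The arithmetic running through this sub-thread, with hypothetical prices and a hypothetical budget: the stake is the same $5 in both choices, so the effect on remaining money is identical, yet the proportional price change differs hugely.

```python
# Hypothetical prices illustrating the thread above.
budget = 200

cheap_a, cheap_b = 5, 10     # the "small" choice
dear_a, dear_b = 120, 125    # the "large" choice

# Money left after each option is symmetric between the two cases...
left_cheap = (budget - cheap_a, budget - cheap_b)  # (195, 190)
left_dear = (budget - dear_a, budget - dear_b)     # (80, 75)
assert (cheap_b - cheap_a) == (dear_b - dear_a) == 5

# ...but proportionally, the cheap item doubled in price (100%),
# while the dear one rose only about 4%.
assert (cheap_b - cheap_a) / cheap_a == 1.0
assert round((dear_b - dear_a) / dear_a, 3) == 0.042
```

The absolute view supports treating both choices the same; the proportional view (a sign the cheap deal may no longer be good, or that a cheaper seller exists) is what the replies above appeal to.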
"...And then I remembered status is positional, felt superior to the footless man, and stopped weeping."
Shoes aren't just about positional social status, are they? (I mean, the difference between a $20 pair of shoes and a $300 pair of shoes mostly is, but the difference between a $20 pair of shoes and no shoes at all isn't, is it?)
Klingon proverb.
Randall Munroe, on updating on other people's beliefs.
The "every single person I know, many of them levelheaded and afraid of heights, abruptly went crazy at exactly the same time" scenario should be given some credence in human society; there is such a thing as puberty. The definition of puberty being "every single person I know abruptly went crazy at exactly the same time, including me".
Dilbert dunnit first!
(Seeing that strip again reminds me of an explanation for why teenagers in the US tend to take more risks than adults. It's not because the teenagers irrationally underestimate risks but because they see bigger benefits to taking risks.)
See also this Will_Newsome comment. (I incorrectly remembered that it said something like “If all your friends jumped off a bridge, would you jump too?” “If all of them survived, I probably would.”)
Let me just put the text string ‘xkcd’ in here, because I was going to add this if nobody else had, and it's lucky that I found it first.
Oh, and there's more text in the comic than what's quoted, and it's good too, so read the comic everybody!
-- Doc Scratch, Homestuck
Doc Scratch isn't exactly the best source for rationality quotes; a guy who already knows the truth has little need to overcome flawed cognitive processes for arriving at it. Which isn't to say the guy doesn't say some relevant stuff:
One can do these two things, but not to the exclusion of alternatives. One can make statements which are confused or nonsensical, that are not even false.
In any case, a statement doesn't have inherent truth value outside the way it's interpreted by the people who hear it. The statement "If a tree falls in the forest, it does not make a sound" is true or false depending on the meanings understood by the audience and the person uttering it. It's entirely possible to convey false understandings by making statements which omit relevant information. To refuse to call a statement which is deliberately tailored to make its audience believe falsehoods a lie is to use the distinction in an unhelpful way.
This.
It borders on arguing about the meaning of words, so I find it useful to describe what I mean by "lying", i.e. "conveying information that adjusts someone else's worldview away from reality". Funnily enough, that excludes most lies-to-children.
At that point whoever I'm talking to will either point out that his definition differs, or even decide to go with mine henceforth, and either way we can start getting some real work done.
Of course, he was lying (arguably by omission); Doc Scratch was not merely reticent or uncooperative, but intentionally deceptive.
(Must resist urge to watch Cascade again ...)
I'm not certain what lesson on rationality I'm expected to glean from this, unless it's "model your opponents as agents, not as executors of cached scripts" -- and that seems both strongly dependent on the opponents you're facing and a little on the trivial side.
From this recent talk
/clicks link, watches
... I can barely understand a single word this guy is saying. Is it just me or is the audio in that video really bad? I don't suppose it was transcribed anywhere?
It's not just you. It was comprehensible but annoying for approximately the first 10 minutes, and then it became completely muddy. I hope there's a transcript somewhere.
Aubrey de Grey being an immortalist himself, I'm assuming the irony to be unintentional?
Haha, didn't occur to me until I read your comment, so there's one data point for you.
I'm confused. I thought that deathpigeon's quote was downvoted because it was anti-deathism and not rationality, but this quote is similar in that way and it has lots of upvotes. Was deathpigeon's quote actually downvoted because it incorrectly attributed a line to ASoIaF instead of Game of Thrones? Seriously?
Or perhaps there are more criteria (aesthetic, informational, other) by which these quotes may be judged than whether they are anti-death or not.
And that other quote is neither ASoIaF nor TV series, it's a misquotation.
I wouldn't think so, but I wasn't expecting five upvotes on my comment saying so, either. Maybe we really are that pedantic.
This is only incidentally anti-deathist, though; its substance has more to do with popular reactions to controversial ideas. Which doesn't seem all that shiningly rational to me either, but perhaps I'm missing something.
Or we all secretly love anti-deathist quotes, and only downvote them when they have no rationality content because we feel it's our duty, but when we see one that can be interpreted as slightly rationalist, we seize the excuse to upvote it. Or our liking for a quote based on its anti-deathism enhances our appreciation for its insight into rationality, via the affect heuristic.
I cannot express how true this is, at least not without a lot of swear words.
Syrio Forel, Game of Thrones based on A Song of Ice and Fire by George R R Martin
It doesn't matter that much, but I'm pretty sure that line is original to the HBO series, not to the books.
(Not my downvotes, incidentally, but I'd speculate they come from a desire to separate rationality from anti-deathism.)
It's not from the TV series either.
The TV series quote would be this: "There is only one God. And his name is Death. And there is only one thing we say to Death: 'Not today.'"
Basically the grandparent post seems to be just a quote from memory, combining bits and pieces from both places, accurate to neither.
I could've sworn it was from both of them, and, thus, from the books originally...
It's not from the books; more generally, there isn't anything in the books directly suggesting a connection between Syrio and the Faceless Men.
Thanks. Fixed it.
Couldn't find it in the Arya chapters of my copy. Wasn't looking terribly hard, though.
I remembered it vaguely, and found the more exact quote on the ASOIAF Quotes page on TvTropes since I didn't want to search through the Arya chapters to find the exact quote, though I was prepared to.
-- Screwtape, The Screwtape Letters by C.S. Lewis
I kind of wish people did use the future more, sometimes. For example, in Australia at the moment, neither major political party supports gay marriage. And beyond all the direct arguments for/against the concept, I can't help but wonder if they really expect, in 50 years' time, that we will live in a world of strictly heterosexual marriages. What are they possibly hoping to achieve? Maybe that reasoning isn't the best way to decide to actively do a thing, but it surely counts towards the cessation of resistance to a thing.
Being elected at some point in the next 3 years. They aren't trying to achieve anything related to homosexual marriages. They don't care.
Um, I know this is classic Hansonian "X is not about X" cynicism, but I doubt it's actually true of most politicians. Sure, the need to get elected skews their priorities, but they do have policy preferences, which they are willing to pursue at cost if necessary.
FWIW, 20 years ago (when my now-husband and I first got together) I expected that I would live in a world of strictly heterosexual marriages all my life.
That didn't incline me to cease my opposition to that world.
So I can empathize with someone who expects to live in a world of increasing marriage equality but doesn't allow that expectation to alter their opposition to that world.
Here are a few things that have at one time or another been considered "obviously inevitable":
The spread of enlightened dictatorship on the Prussian model.
The spread of eugenics.
The control of the world economy by "rational" central planners.
My point is that you appear to be overestimating how well you can predict the future.
I don't think you really believe this argument. In particular if the success of something you opposed seemed inevitable, you'd still oppose it.
What I think is happening is that you support the "inevitable" outcome but are getting frustrated that the opposition just won't go away like they're "supposed" to.
Oppose in the sense of "actively work to stop it", or oppose in the sense of "if asked about it, note that one dislikes it"? I dislike the increase of surveillance over the decades, but look: sensors get cheaper year by year. Computation gets cheaper year by year. I'm not happy to see more surveillance, but I see it as so close to inevitable, due to the dropping costs of the enabling technologies, that actively opposing it is a waste of time and effort.
To put it another way: in the original C. S. Lewis quote, Lewis includes in his own list of questions that he wants asked: "Is it possible?" I view most of the questions that Lewis disapproves of as just being ways of asking whether recent historical evidence makes something look possible or impossible in the near future. In my view, claims of historical inevitability are usually overstated, but occasionally (as in the cheaper-sensors example) I think there are situations where a fairly solid case for at least likely trends can be made.
-Joel Spolsky
And by the same author:
and
(because what counts after getting it out the door is how many people actually use it.)
That's Jeff Atwood. The quote is from Joel Spolsky. While the two both work together on Stack Exchange, they're different individuals.