- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
-- Eldest, by Christopher Paolini
(This is not a recommendation for the book series. The book has Science Elves, but they are not portrayed rationally or worldbuilt to any logical conclusion whatsoever. The context of this quote is, apparently, "science is good" cheerleading without any actual understanding of how science or rationality works.)
(I would love a rational version of Eragon by way of steelmanning the Science Elves. But then you'd probably need to explain why they haven't taken over the world.)
-- Eragon and Angela, Brisingr, by the same author
Someone who says something like the first sentence generally means something like "questions that are significant and in an area I am concerned with". They don't mean "I don't know exactly how many atoms are in the moon, and I find that painful" (unless they have severe OCD based around the moon), and to interpret it that way is to deliberately misinterpret what the speaker is saying so that you can sound profound.
But then, I've been on the Internet. This sort of thing is an endemic problem on the Internet, except that it's not always clear how much is deliberate misinterpretation and how much is people who just don't comprehend context and implication.
(Notice how I've had to add qualifiers like 'generally' and "except for (unlikely case)" just for preemptive defense against that sort of thing.)
If you don't have any open questions in that category, then you aren't really living as an intellectual.
In science, questions are like a hydra: after solving one problem you often have more questions than when you started.
Schwartz's article on the issue is quite illustrative. If you can't deal with the emotional effects of looking at an open question and having it stay open for months and years, you can't do science.
You won't contribute anything to the scientific world of ideas if you can only manage to be concerned with an open question for an hour rather than for months and years. Of course there are plenty of people in the real world who don't face questions with curiosity but who are pained when dealing with them. To me that seems like a dull life to live, because they don't concern themselves with living an intellectual life.
I'm not sure that's a critical part of any definition of the word "intellectual".
It's not sufficient to be an intellectual, but if you don't care about questions that can't be solved in short amounts of time, because staying with them is very uncomfortable for you, you won't have a deep understanding of anything. You might memorise the teacher's password in many domains, but that's not what being an intellectual is about.
A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms, that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such a way he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.
What shall we say of him? Surely this, that he was verily guilty of the death of those men. It is admitted that he did sincerely believe in the soundness of his ship, but the sincerity of his conviction can in nowise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.
An interesting quote. It essentially puts forward the "reasonable person" legal theory. But that's not what's interesting about it.
The shipowner is pronounced "verily guilty" solely on the basis of his thought processes. He had doubts, he extinguished them, and that's what makes him guilty. We don't know whether the ship was actually seaworthy -- only that the shipowner had doubts. If he were an optimistic fellow and never even had these doubts in the first place, would he still be guilty? We don't know what happened to the ship -- only that it disappeared. If the ship met a hurricane that no vessel of that era could survive, would the shipowner still be guilty? And, flipping the scenario, if solely by improbable luck the ship had arrived unscathed at its destination, would the shipowner still be guilty?
I realize your questions may be rhetorical, but I'm going to attempt an answer anyways, because it illustrates a point:
The morality of the shipowner's actions does not depend on the realized outcomes: it can only depend on his prior beliefs about the probability of the outcomes, and on the utility function he uses to evaluate them. If we insisted on making morality conditional on the future, causality would be broken: it would be impossible for any ethical agent to use such an ethics as a decision theory.
The problem here is that the shipowner's "sincerely held beliefs" are not identical to his genuine extrapolated prior. It is not stated in the text, but I think he is able to convince himself of "the soundness of the ship" only by ignoring degrees of belief: if he were a proper Bayesian, he would have realized that having "doubts" and not updating your beliefs is not logically consistent.
In any decision theory that is usable by agents making decisions in real time, the morality of his action is determined either at the time he allowed the ship to sail, or at the time he allowed his prior to get corrupted. I personally believe the latter. This quotation illustrates why I see rationality as a moral obligation, even when it feels like a memetic plague.
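The Bayesian point can be made concrete with a toy calculation. All the numbers below are invented for illustration; the only claim is structural: once the shipowner observes evidence that is more likely under "unseaworthy" than under "seaworthy", coherence forces his confidence downward, whether he likes it or not.

```python
# Prior confidence that the ship is seaworthy, before any doubts arise.
# (Invented number, for illustration only.)
p_seaworthy = 0.9

# Likelihoods: how probable is the observed evidence (old hull, many
# past repairs, suggested doubts) under each hypothesis?  Also invented.
p_evidence_given_seaworthy = 0.2
p_evidence_given_unseaworthy = 0.8

# Bayes' rule: having observed the evidence, the shipowner cannot
# coherently keep his 0.9 prior -- he must update on it.
numerator = p_evidence_given_seaworthy * p_seaworthy
denominator = numerator + p_evidence_given_unseaworthy * (1 - p_seaworthy)
posterior = numerator / denominator

print(round(posterior, 2))  # 0.69 -- noticeably less confident than 0.9
```

Suppressing the doubts amounts to setting both likelihoods equal, i.e. pretending the evidence carries no information.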
Part of the scenario is that the ship is in fact not seaworthy, and went down on account of it. Part is that the shipowner knew it was not safe and suppressed his doubts. These are the actus reus and the mens rea that are generally required for there to be a crime. These are legal concepts, but I think they can reasonably be applied to ethics as well. Intentions and consequences both matter.
If the emigrants do not die, he is not guilty of their deaths. He is still morally at fault for sending to sea a ship he knew was unseaworthy. His action, in reckless disregard for their lives, can quite reasonably be judged a crime.
This is not the whole story. In the quote
you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us say, blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.
In your "optimistic fellow" scenario, the shipowner would be as blameworthy, but in that case, the blame would attach to his failure to give serious consideration to the doubts that had been expressed to him.
And going beyond what is in the passage, in my view, he would be equally blameworthy if the ship had survived the voyage! Shitty decision-making is shitty decision-making, regardless of outcome. (This is part of why I avoided the word "guilt" -- too outcome-dependent.)
The next passage confirms that this is the author's interpretation as well:
And clearly what he is guilty of (or if you prefer, blameworthy) is rationalizing away doubts that he was obligated to act on. Given the evidence available to him, he should have believed the ship might sink, and he should have acted on that belief (either to collect more information which might change it, or to fix the ship). Even if he'd gotten lucky, he would have acted in a way that, had he been updating on evidence reasonably, he would have believed would lead to the deaths of innocents.
The Ethics of Belief is an argument that it is a moral obligation to seek accuracy in beliefs, to be uncertain when the evidence does not justify certainty, to avoid rationalization, and to help other people in the same endeavor. One of his key points is that 'real' beliefs are necessarily entangled with reality. I am actually surprised he isn't quoted here more.
Gordon Freeman, Freeman's Mind
-- CornChowdah, on reddit
Yay for personal finance, boo for ethics, which is liable to become a mere bully pulpit for teachers' own views.
Thinking back to my own religious high school education, I realize that the ethics component (never called out as such, but woven into the curriculum at every level) was indeed important, not so much because of the specific rules they taught and didn't teach as because it taught me that ethics and morals were something to think about and discuss.
Then again, this was a Jesuit school; and Jesuit education has a reputation for being somewhat more Socratic and questioning than the typical deontological viewpoint of many schools.
But in any case, yay for personal finance.
It might be possible (and useful) to design an ethics curriculum that helps students to think more clearly about their own views, though, without giving their teachers much of an excuse to preach.
One of the key concepts in Common Law is that of the reasonable man. Re-reading A.P. Herbert, it struck me how his famously insulting description of the reasonable man bears a deep resemblance to that of the ideal rationalist:
A.P. Herbert, [Uncommon Law](http://en.wikipedia.org/wiki/Uncommon_Law). Emphasis mine.
I imagine that something of a similar sentiment animates much of popular hostility to LessWrong-style rationalism.
I'm not convinced. I know a few folks who know about LW and actively dislike it; when I try to find out what it is they dislike about it, I've heard things like —
I wonder how these people who dislike LW feel about geeks/nerds in general.
Most of them are geeks/nerds in general, or at least have seen themselves as such at some point in their lives.
Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don't really deserve the insights that LW provides.
There, that's 8 out of 10 bullet points. I couldn't get the "manipulation" one in because "something sinister" is underspecified; as to the "censorship" one, well, I didn't want to mention the... thing... (ooh, meta! Gonna give myself partial credit for that one.)
Ab, V qba'g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg'f whfg n wbxr.
That was pretty subtle, actually. You had my blood boiling at the end of the first paragraph and I was about to downvote. Luckily I decided to read the rest.
That makes me more curious; I have the feeling there's quite a bit of anti-geek/nerd sentiment among geeks/nerds, not just non-nerds.
(Not sure how to write the above sentence in a way that doesn't sound like an implicit demand for more information! I recognize you might be unable or unwilling to elaborate on this.)
Your theory may have some value. But let's note that I don't know what it means to cross an instrument 'a/c Payee only', and I'll wager most other people don't know. Do you think most UK citizens did in 1935?
The use of the word "instrument" makes the phrase more obscure than it needs to be, but it refers to the word "cheque" earlier in the sentence. I suspect most modern British people probably don't know what it means, but most will have noticed that all the cheques in a chequebook have "A/C Payee only" written vertically across the middle - or at least those old enough to have used cheques will! But people in 1935 would have most likely known what it meant, because 1) in those days cheques were extremely widespread (no credit or debit cards) and 2) unlike today, cheques were frequently written by hand on a standard piece of paper (although chequebooks did exist). The very fact that the phrase was used by a popular author writing for a mass audience (the cases were originally published in Punch and The Evening Standard) should incline you in that direction anyway.
Note incidentally that Herbert's most famous case is most likely The Negotiable Cow.
I don't know for sure, but judging from context I'd say it's probably instructions as to the disposition of a check -- like endorsing one and writing "For deposit only" on the back before depositing it into the bank, as a guarantee against fraud.
Granted, in these days of automatic scanning and electronic funds transfer that's starting to look a little cobwebby itself.
J.S. Mill
-- Boaz Keysar and Albert Costa, Our Moral Tongue, New York Times, June 20, 2014
This quote implies a connection from "people react less strongly to emotional expressions in a foreign language" to "dilemmas in a foreign language don't touch the very core of our moral being". Furthermore, it connects or equates being more willing to sacrifice one person for five and "touch[ing] the core of our moral being" less. All rational people should object to the first implication, and most should object to the second one. This is a profoundly anti-rational quote, not a rationality quote.
I think you're reading a lot into that one sentence. I assumed that just to mean "there should not be inconsistencies due to irrelevant aspects like the language of delivery". Followed by a sound explanation for the unexpected inconsistency in terms of system 1 / system 2 thinking.
(The final paragraph of the article begins with "Our research does not show which choice is the right one.")
I disagree with Jiro and Salemicus. Learning about how human brains work is entirely relevant to rationality.
Someone who characterized the results the way they characterize them in this quote has learned some facts, but failed on the analysis.
It's like a quote which says "(correct mathematical result) proves that God has a direct hand in the creation of the world". That wouldn't be a rationality quote just because they really did learn a correct mathematical result.
I agree with Jiro, this appears to be an anti-rationality quote. The most straightforward interpretation of the data is that people didn't understand the question as well when posed in a foreign language.
Chalk this one up not to emotion, but to deontology.
It's also possible that asking in a different language causes subjects to think of the people in the dilemma as "not members of their tribe".
It's possible that they understood the question, but hearing it in a foreign language meant cognitive strain, which meant they were already working in System 2. That's my read, anyway.
Given to totally fluent second-language speakers, I bet the effect vanishes.
-- C. S. Lewis, A Grief Observed
Thomas Babington Macaulay, History of England
Frankly, the whole passage Steve Sailer quotes at the link is worth reading.
For those (I have some reason to think there are some) who would rather avoid giving Steve Sailer attention or clicks, or who would like more context than he provides, you can find the relevant chapter at Project Gutenberg along with the rest of volume 3 of Macaulay's History. (The other volumes are Gutenbergificated too, of course.) Macaulay's chapters are of substantial length; if you want just that section, search for "none of these sights" after following the link.
Nassim Taleb
I don't really get this. It seems like both types of prediction matter quite a bit.
The only way I can interpret it that makes sense to me is something like:
Is he giving advice about making correct predictions given that you just randomly feel like predicting stuff? Or is he giving advice about how to predict things you actually care about?
The latter. Specifically predicting high impact events.
"... Is it wrong to hold on to that kind of hope?"
[having poisoned her] "I have not come for what you hoped to do. I've come for what you did."
Given that you've said in another thread that you consider "blame" an incoherent concept, I don't understand what you think this quote means.
Steve Sailer
This tells me that the order of events is important, and not the actual dates themselves. It is true that, if I want to claim that X caused Y, I need to know that X happened before Y; but it does not make any difference whether they both happened in 1752 or 1923.
Great. I have approximately 6000 years worth of events here, happening across multiple continents, with overlapping events on every scale imaginable from "in this one village" to "world war." If you can keep the relationships between all those things in your memory consistently using no index value, go for it. If not, I might recommend something like a numerical system that puts those 6000 years in order.
I would not recommend putting "0" at a relatively arbitrary point several thousand years after the events in question have started.
I do agree that an index value is a very useful and intuitive-to-humans way to represent the order of events, especially given the sheer number of events that have taken place through history. However, I do think it's important to note that the index value is only present as a representation of the order of events (and of the distance between them, which, as other commentators have indicated, is also important) and has no intrinsic value in and of itself beyond that.
The time between them also matters. If X happened a year before Y, it is more plausible that X caused Y than if X happened a century before Y.
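The order-plus-interval idea can be sketched in a few lines. The 50-year cutoff and the event names are arbitrary illustrations, not a real causal test; the point is just that a numeric date index gives you both ordering and distance for free.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    year: int  # the index value: only order and distance matter

def could_have_caused(x: Event, y: Event, max_gap: int = 50) -> bool:
    """X can only have caused Y if X came first; a smaller gap makes a
    causal link more plausible (modeled here as a crude cutoff)."""
    return x.year < y.year and (y.year - x.year) <= max_gap

massacre = Event("Boston Massacre", 1770)
revolution = Event("American Revolution begins", 1775)
print(could_have_caused(massacre, revolution))  # True: right order, small gap
print(could_have_caused(revolution, massacre))  # False: wrong order
```

Whether "0" sits at an arbitrary point, as the grandparent notes, changes nothing here: only differences and comparisons of the index are ever used.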
Dates are a very convenient way of specifying the temporal order of many different events.
Agree with the general point, though I think people complaining about dates in history are referring to the kind of history that is "taught" in schools, in which you have to e.g. memorize that the Boston Massacre happened on March 5, 1770 to get the right answer on the test. You don't need that level of precision to form a working mental model of history.
You do need to know dates at close to that granularity if you're trying to build a detailed model of an event like a war or revolution. Knowing that the attack on Pearl Harbor and the Battle of Hong Kong both happened in 1941 tells you something; knowing that the former happened on 7 December 1941 and the latter started on 8 December tells you quite a bit more.
On the other hand, the details of wars and revolutions are probably the least useful part of history as a discipline. Motivations, schools of thought, technology, and the details of everyday life in a period will all get you further, unless you're specifically studying military strategy, and relatively few of us are.
A particularly stark example may be the exact dates of bombing of Hiroshima, Nagasaki, and official surrender. Helps deal with theories such as "they had to drop a bomb on Nagasaki because Japan didn't surrender".
Be careful. That sounds reasonable until you also learn that the Japanese war leadership didn't even debate Hiroshima or Nagasaki for more than a brief status update after they happened, yet talk of surrender and the actual declaration immediately followed the Soviet declaration of war and the landing of troops in Manchuria and on Sakhalin. Japan, it seems, wanted to avoid the German post-war fate of a divided people.
The general problem with causation in history is that you often don't know what you don't know. (It's a tangential point, I know.)
I'm not necessarily saying this is wrong, but I don't think it can be shown to be significantly more accurate than the "bomb ended the war" theory by looking at dates alone. The Soviet declaration of war happened on 8 August, two days after Hiroshima. Their invasion of Manchuria started on 9 August, hours before the Nagasaki bomb was dropped, and most sources say that the upper echelons of the Japanese government decided to surrender within a day of those events. However, their surrender wasn't broadcast until 15 August, and by then the Soviets had opened several more fronts. (That is, that's when Emperor Hirohito publicized his acceptance of the Allies' surrender terms. It wasn't formalized until 2 September, after Allied occupation had begun.)
Dates aside, though, it's fascinating to read about the exact role the Soviets played in the end of the Pacific War. Stalin seems to have gotten away with some spectacularly Machiavellian moves.
That was my point. It can be shown to be significantly more accurate, but not by looking at the dates alone.
Or that the interval between X and Y is spacelike, and neither is in the other's forward light cone... :)
Some day the light speed delay might become an issue in historical investigations, but not quite yet :) Even then in the statement "if you claim that X caused Y, the minimum you need to know is that X came before Y, not afterwards" the term "before" implies that one event is in the causal future of the other.
Reminds me of Expecting Short Inferential Distances.
In the Great Learning (大學) by Confucius, translated by James Legge
Interestingly, I found this in a piece about cancer treatment. A possibly underused but apt application of Fluid Analogies.
A conversation between me and my 7-year-old cousin:
Her: "do you believe in God?"
Me: "I don't, do you?"
Her: "I used to but, then I never really saw any proof, like miracles or good people getting saved from mean people and stuff. But I do believe in the Tooth Fairy, because every time I put a tooth under my pillow, I get money out in the morning."
Definitely getting her HPMOR for her 10th birthday :)
Steve Sailer
Alternatively:
Paul Graham
Paul Graham's quote is about a way to fight the trend Sailer describes; unfortunately, that trend frequently ends up winning.
From a surprisingly insightful comic commenting on the whole notion of "saving the planet".
This framing is marginally saner, but the weird panicky eschatology of pop-environmentalism is still present. Apparently the author thinks that using up too many resources, or perhaps global warming, currently represent human extinction level threats?
Andrew Gelman
I would like this quote more if instead of “has a positive utility for getting” it said “wants to get”.
The context is specifically a description of the theory of utility and how it is inconsistent with the preferences people actually exhibit.
Penny Arcade takes on the question of the economic value of a sacred thing. Script:
Gabe: Can you believe Notch is gonna sell Minecraft to MS?
Tycho: Yes! I can!
Gabe: Minecraft is, like, his baby though!
Tycho: I would sell an actual baby for two billion dollars.
Tycho: I would sell my baby to the Devil. Then, I would enter my Golden Sarcophagus and begin the ritual.
Scott Adams
True or false, try as I might I really can't see how this is a rationality quote. It is simply a pithy and marginally funny statement about one topic.
I think it's time to add one new rule to the list, right at the top:
Can anyone say that in fewer words?
Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to is proof that they are already in love. An added social construct is a perfectly reasonable option to make it harder to change your mind.
The point of the quote is that it tends to make it harder to stay in love. Which is the opposite of what people want when they get married.
The idea that marriage is purely about love is a recent one.
Adams' lifestyle might work for a certain kind of wealthy high IQ rootless cosmopolitan but not for the other 95% of the world.
If this is a criticism, it's wide of the mark.
Note his disclaimer about "the best economic arrangement". And he certainly speaks about the US only.
And it speaks volumes that he views it as an "economic arrangement", like he's channeling Bryan Caplan.
I don't understand.
It looks to me as if Adams's whole point is that marriage isn't supposed to be primarily an economic arrangement, it's supposed to be an institution that provides couples with a stable context for loving one another, raising children, etc., but in fact (so he says) the only way in which it works well is economically, and in any other respect it's a failure.
It's as if I wrote "Smith's new book makes a very good doorstop, but in all other respects I have to say it seems to me an abject failure". Would you say it speaks volumes that I view Smith's book as a doorstop? Surely my criticism only makes sense because I think a book is meant to be other things besides a doorstop.
What if he wanted to make them stay in love?
Then he would let them work out a custom solution free of societal expectations, I suspect. Besides, an average romantic relationship rarely survives more than a few years, unless both parties put a lot of effort into "making it work", and there is no reason beyond prevailing social mores (and economic benefits, of course) to make it last longer than it otherwise would.
Just to clarify, you figure the optimal relationship pattern (in the absence of societal expectations, economic benefits, and I guess childrearing) is serial monogamy? (Maybe the monogamy is assuming too much as well?)
Certainly serial monogamy works for many people, since this is the current default outside marriage. I would not call it "optimal", it seems more like a decent compromise, and it certainly does not work for everyone. My suspicion is that those happy in a life-long exclusive relationship are a minority, as are polyamorists and such.
I expect domestic partnerships to slowly diverge from the legal and traditional definition of marriage. It does not have to be about just two people, about sex, or about child raising. If 3 single moms decide to live together until their kids grow up, or 5 college students share a house for the duration of their studies, they should be able to draw up a domestic partnership contract which qualifies them for the same assistance, tax breaks and next-of-kin rights married couples get. Of course, this is a long way away still.
To my mind, the giving of tax breaks etc. to married folks occurs because (rightly or wrongly) politicians have wanted to encourage marriage.
I agree that in principle there is nothing wrong with 3 single moms or 5 college students forming some sort of domestic partnership contract, but why give them the tax breaks? Do college kids living with each other instead of separately create some sort of social benefit that "we" the people might want to encourage? Why not just treat this like any other contract?
Apart from this, I think the social aspect of marriage is being neglected. Marriage for most people is not primarily about joint tax filing, but rather about publicly making a commitment to each other, and to their community, to follow certain norms in their relationship (e.g., monogamy; the specific norms vary by community). This is necessary because the community "thinks" pair bonding and childrearing are important/sacred/weighty things. In other words, "married" is a sort of honorific.
Needless to say, society does not think 5 college students sharing a house is an important/sacred/weighty thing that needs to be honoured.
This thick layer of social expectations is totally absent for the kind of arm's-length domestic partnership contract you propose, which makes me wonder why anybody would either want to call it marriage or frame it as being an alternative to marriage.
I recommend reading the whole Scott Adams post from which the quote came. The quote makes little sense standing by itself; it makes more sense within its context.
-- Max Tegmark, Our Mathematical Universe, Chapter 8. The Level III Multiverse, "The Joys of Getting Scooped"
Skeletor is Love
Steven Pinker, The New Republic 9/4/14
The rest of the article is also well worth the read.
Jane Austen, Sense and Sensibility.
Ambivalent about this one.
I like the idea of rational argument as a sign of intellectual respect, but I don't like things that are so easy to use as fully general debate stoppers, especially when they have a built-in status element.
But note that Elinor doesn't use it as a debate stopper, or to put down or belittle Ferrars. She simply chooses not to engage with his arguments, and agrees with him.
(I haven't read the book)
The way I usually come in contact with something like this is afterwards, when Elinor and her tribe are talking about those irrational greens, and how it's better to not even engage with them. They're just dumb/evil, you know, not like us.
Even without that part, this avoids opportunities for clearing up misunderstandings.
(anecdotally: some time ago a friend was telling me about discussions that are "just not worth having", and gave as an example "that time when we were talking about abortion and you said that X, I knew there was just no point in going any further". Turns out she had misunderstood me completely, and I had actually meant Y, with which she agrees. Glad we could clear that up - more than a year later, completely by accident. Which makes me wonder how many more of those misunderstandings are out there)
Katara: Do you think we'll really find airbenders?
Sokka: You want me to be like you, or totally honest?
Katara: Are you saying I'm a liar?
Sokka: I'm saying you're an optimist. Same thing, basically.
-Avatar: The Last Airbender
Kris Gunnars, Business Insider
A search brings up http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=101.30 .
This seems to contradict the claim that "Sometimes there isn’t even any actual fruit in there, just chemicals that taste like fruit," since it would have to say "contains less than 1% juice" or not be described as juice at all.
Mostly correct, but only very loosely related to rationality.
Vitamins also are good stuff but they aren't taken out (or when they are they usually are put back in, AFAIK).
This Amazon.com review.
Steven Pinker
What about: "using the education system to collect forced labor as a 'lesson' in altruism teaches selfishness and fails at altruism"?
D.C. Dennett, Intuition Pumps and Other Tools for Thinking. Dennett himself is summarising Anatol Rapoport.
I don't see what to do about gaps in arguments. Gaps aren't random. There are little gaps where the original authors have chosen to spend their limited word count on other, more delicate, parts of their argument, confident that charitable readers will be happy to fill the small gaps themselves in the obvious ways. There are big gaps where the authors have gone the other way, tiptoeing around the weakest points in their argument. Perhaps they hope no-one else will notice. Perhaps they are in denial. Perhaps there are issues with the clarity of the logical structure that make it easy to whiz by the gap without noticing it.
The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test, because you are making arguments that your opponents don't make. Worse, big gaps are seldom accidental. They are there because they are hard to fill. Indeed, it might be the difficulty of filling the gap that made you join the other side of the debate in the first place. What if your best effort to fill the gap is thin and unconvincing?
Example: Some people oppose the repeal of the prohibition of cannabis because "consumption will increase". When you try to make this argument clear you end up distinguishing between good-use and bad-use. There is the relax-on-a-Friday-night-after-work kind of use, which is widely accepted in the case of alcohol and can be termed good-use. And there is the behaviour that gets called "pissing your talent away" when it is beer-based. That is bad-use.
When you try to bring clarity to the argument you have to replace "consumption will increase" by "bad-use will increase a lot and good-use will increase a little, leading to a net reduction in aggregate welfare." But the original "consumption will increase" was obviously true, while the clearer "bad+++, good+, net--" is less compelling.
The original argument had a gap (just why is an increase in consumption bad?). Writing more clearly exposes the gap. Your target will not say "Thanks for exposing the gap, I wish I'd put it that way." But it is not an easy gap to fill convincingly. Your target is unlikely to appreciate your efforts on behalf of his case.
With regards to your example, you try to fix the gap between "consumption will increase" and "that will be a bad thing as a whole" by claiming little good use and much bad use. But I don't think that's the strongest way to bridge that gap.
Rather, I'd suggest that the good use has negligible positive utility - just another way to relax on a Friday night, when there are already plenty of ways to relax on a Friday night, so how much utility does adding another one really give you? - while bad use has significant negative utility (here I may take the chance to sketch the verbal image of a bright young doctor dropping out of university due to bad use). Then I can claim that even if good-use increases by a few orders of magnitude more than bad-use, the net result is nonetheless negative, because bad use is just that terrible; that the negative effects of a single bad-user outweigh the positive effects of a thousand good-users.
As to your main point - what to do when your best effort to fill the gap is thin and unconvincing - the simplest solution would appear to be to go back to the person proposing the position you are critically commenting on (or someone else who shares his views on the subject) and simply ask. Or to look through his writings and see whether or not he addresses precisely that point. Or to go to a friend (preferably also an intelligent debater) and ask for his best effort to fill the gap, in the hope that it will be a better one.
Entirely within the example, not pertaining to rationality per se, and I'm not sure you even hold the position you were arguing about:
1) good use is not restricted to relaxing on a Friday. It also includes effective pain relief with minimal and sometimes helpful side-effects. Medical marijuana use may be used as a cover for recreational use but it is also very real in itself.
2) a young doctor dropping out of university is a disutility comparable to, and perhaps smaller than, getting sent to prison. You'd have to get a lot of doctors dropping out to make legalization worse than the way things stand now.
My actual position on the medical marijuana issue is best summarised as "I don't know enough to have developed a firm opinion either way". This also means that I don't really know enough to properly debate on the issue, unfortunately.
Though, looking it up, I see there's a bill currently going through parliament in my part of the world that - if it passes - would legalise it for medicinal use.
Have you read “Marijuana: Much More Than You Wanted To Know” on Slate Star Codex?
So, you go back to the person you're going to argue against, before you start the argument, and ask them about the big gap in their original position? That seems like it could carry the risk of kicking off the argument a little early.
I think the idea was, 'when you've gotten to this point, that's when your pre-discussion period is over, and it is time to begin asking questions'.
And yes, it is often a good idea to ask questions before taking a position!
"Pardon me, sir, but I don't quite understand how you went from Step A to Step C. Do you think you could possibly explain it in a little more detail?"
Accompanied, of course, by a very polite "Thank you" if they make the attempt to do so. Unless someone is going to vehemently lash out at any attempt to politely discuss his position, he's likely to either at least make an attempt (whether by providing a new explanation or directing you to the location of a pre-written one), or to plead lack of time (in which case you're no worse off than before).
Most of the time, he'll have some sort of explanation, that he considered inappropriate to include in the original statement (either because it is "obvious", or because the explanation is rather long and distracting and is beyond the scope of the original essay). Mind you, his explanation might be even more thin and unconvincing than the best you could come up with...
-- Cryptonomicon by Neal Stephenson
Neal Stephenson is good as a sci-fi writer, but I think he's almost as good as an ethnographer of nerds. Pretty much everything he writes has something like this in it, and most of it is spot-on.
On the other hand, he does occasionally succumb to a sort of mild geek-supremacist streak (best observed in Anathem, unless you're one of the six people besides me who were obsessed enough to read In The Beginning... Was The Command Line).
It's a well-known essay. It even has a Wikipedia article.
I just re-read, well, re-skimmed it. Ah, the nostalgia. It's very dated now. 15 years on, its prediction that proprietary operating systems would lose out to free software has completely failed to come true. Linux still ticks over, great for running servers and signalling hacker cred, but if it's so great, why isn't everyone using it? At most it's one of three major platforms: Windows, OSX, and Linux. Or two out of five if you add iOS and Android (which is based on Linux). OS domination by Linux is no closer, and although there's a billion people using Android devices, command lines are not part of their experience.
Stephenson wrote his essay (and I read it) before Apple switched to Unix in the form of OSX, but you can't really say that OSX is Unix plus a GUI, rather OSX is an operating system that includes a Unix interface. In other words, exactly what Stephenson asked for:
BeOS failed, and OSX appeared three years after Stephenson's essay. I wonder what he thinks of them now—both OSX and In the Beginning.
That's a debatable point :-)
UNIX can be defined in many ways -- historically (what did the codebase evolve from), philosophically, technically (monolithic kernel, etc.), practically (availability and free access to the usual toolchains), etc.
I don't like OSX and Apple in general because I really don't like walled gardens and Apple operates on the "my way or the highway" principle. I generally run Windows for Office, Photoshop, games, etc. and Linux, nowadays usually Ubuntu, for heavy lifting. I am also a big fan of VMs which make a lot of things very convenient and, in particular, free you from having to make the big choice of the OS.
FYI: The 'you can't run this untrusted code' dialog is easy to get around.
I suspect I would be able to bludgeon OSX into submission but I don't see any reasons why I should bother. I don't have to work with Macs and am content not to.
Can't speak for Lumifer, but I was more annoyed by the fact that (the version I got of) OSX doesn't ship with a working developer toolchain, and that getting one requires either jumping through Apple's hoops and signing up for a paid developer account, or doing a lot of sketchy stuff to the guts of the OS. This on a POSIX-compliant system! Cygwin is less of a pain, and it's purely a bolt-on framework.
(ETA: This is probably an exaggeration or an unusual problem; see below.)
It was particularly frustrating in my case because of versioning issues, but those wouldn't have applied to most people. Or to me if I'd been prompt, which I hadn't.
You do not need to pay to get the developer tools. I have never paid for a compiler*, and I develop frequently.
*(other than LabView, which I didn't personally pay for but my labs did, and is definitely not part of XCode)
After some Googling, it seems that version problems may have been more central than I'd recalled. Xcode is free and includes command-line tools, but looking at it brings up vague memories of incompatibility with my OS at the time. The Apple developer website allows direct download of those tools but also requires a paid signup. And apparently trying to invoke gcc or the like from the command line should have brought up an update option, but that definitely didn't happen. Perhaps it wasn't an option in an OS build as old as mine, although it wouldn't have been older than 2009 or 2010. (I eventually just threw up my hands and installed an Ubuntu virt through Parallels.)
So, probably less severe than I'd thought, but the basic problem remains: violating Apple's assumptions is a bit like being a gazelle wending your way back to a familiar watering hole only to get splattered by a Hummer howling down the six-lane highway that's since been built in front of it.
You can get it through the app store, which means you need an account with Apple, but you do not need to pay to get this account. It really is free.
I would note that violating any operating system's assumptions makes bad things happen.
Yeah, I bought a hard copy in a non-technical bookstore. "Six people" was a joke based on its, er, specialized audience compared to the lines of Snow Crash; in terms of absolute numbers it's probably less obscure than, say, Zodiac.
If memory serves, Stephenson came out in favor of OSX a couple years after its release, comparing it to BeOS in the context of his essay. I can't find the cite now, though. Speaking for myself, I find OSX's ability to transition more-or-less seamlessly between GUI and command-line modes appealing, but its walled developer garden unspeakably annoying.
With some googling, I found this, a version of ITBWTCL annotated (by someone else) five years later, including a quote from Stephenson, saying that the essay "is now badly obsolete and probably needs a thorough revision". The quote is quoted in many places, but the only link I turned up for it on his own website was dead (not on the Wayback Machine either).
I think everyone who belongs to a certain age group and runs Linux has read In the Beginning was the Command Line. And yes, that's me admitting to having read it, and kinda believed the arguments at one point.
Of course I read In the Beginning was the Command Line. The supply of writing from witty bearded men talking to you about cool things isn't infinite, you know.
You say that like it's a bad thing.
You say that like it's a bad thing.
Randall Munroe on communicating with humans
Related: When (Not) To Use Probabilities:
For the opposite claim: If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics:
I tend to side with Yvain on this one, at least so long as your argument isn't going to be judged by its appearance. Specifically on the LHC thing, I think making up the 1 in 1000 makes it possible to substantively argue about the risks in a way that "there's a chance" doesn't.
A detailed reading provides room for these to coexist. Compare:
with
I'd agree with Randall Munroe more wholeheartedly if he had said “added a couple of zeros” instead.
-- Cryptonomicon by Neal Stephenson
This quote seems to me like it touches a common fallacy: that making "confident" probability estimates (close to 0 or 1) is the same as being a "confident" person. In fact, they're ontologically distinct.
Was the context one where Waterhouse was proving a conditional, "if axioms A, B, C, then theorem Z", or one where he was trying to establish Z as a truth about the world, and therefore also had the burden of showing that axioms A, B, C were supported by experimental evidence?
The View from Hell from an article recommended by asd.
That or the extent of the human capacity for pareidolia on waking.
The easy way to make a convincing simulation is to disable the inner critic.
The inner critic that is disabled during regular dreaming turns back on during lucid dreaming. People who have them seem to be quite impressed by lucid dreams.
You still can't focus on stable details.
You can with training. It is a lot like training visualization: In the beginning, the easiest things to visualize are complex moving shapes (say a tree with wind going through it), but if you try for a couple of hours, you can get all the way down to simple geometric shapes.
I downvoted this and another comment further up for not being about anything but nerd pandering, which I feel is just ego-boosting noise. Not the type of content I want to see on here.
Well, if you think the quote doesn't say significantly more than "nerds are great" you are right to downvote it.
Contrast:
-- Feynman
One might even FTFY the first quote as:
"We see what we see for adaptive reasons, because it is the truth."
This part:
is contradicted by the context of the whole article. The article is in praise of insight porn (the writer's own words for it) as the cognitive experience of choice for nerds (the writer's word for them, in whom he includes himself and for whom he is writing) while explicitly considering its actual truth to be of little importance. He praises the experience of reading Julian Jaynes and in the same breath dismisses Jaynes' actual claims as "batshit insane and obviously wrong".
In other words, "Nerds ... want to see what's really going on" is, like the whole article, a statement of insight porn, uttered for the feeling of truthy insight it gives, "not because it is the truth".
How useful is this to someone who actually wants "to see what's really going on"?
-- David Malki !
People who often misunderstand others: 6% of geniuses, 94% of garden-variety nonsense-spouters.
I know that. People are so lame. Not me though. I am one of the genius ones.
Nassim N. Taleb
Opportunity costs?
I would say it should be the one with best expected returns. But I guess Taleb thinks the possibility of a very bad black swan overrides everything else - or at least that's what I gathered from his recent crusade against GMOs.
True, but not as easy to follow as Taleb's advice. In the extreme we could replace every piece of advice with "maximize your utility".
Not quite, as most people are risk-averse and care about the width of the distribution of the expected returns, not only about its mean.
If you measure "returns" in utility (rather than dollars, root mean squared error, lives, whatever) then the definition of utility (and in particular the typical pattern of decreasing marginal utility) takes care of risk aversion. But since nobody measures returns in utility your advice is good.
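A toy calculation makes the point concrete. This is only an illustrative sketch - the square-root utility function and the dollar amounts are my own assumptions, not anything Taleb proposes - but it shows how decreasing marginal utility makes a risky gamble worth less than its expected dollar value, with no explicit "risk penalty" written down anywhere:

```python
import math

def utility(wealth):
    """Concave utility: each extra dollar adds less than the one before."""
    return math.sqrt(wealth)

# A 50/50 gamble: win nothing, or win $10,000.
gamble = [(0.5, 0.0), (0.5, 10_000.0)]

expected_dollars = sum(p * x for p, x in gamble)           # 5000.0
expected_utility = sum(p * utility(x) for p, x in gamble)  # 50.0

# The sure amount with the same utility as the gamble (inverse of sqrt):
certainty_equivalent = expected_utility ** 2               # 2500.0

# A risk-averse agent values the gamble at $2500, well below its $5000 mean.
assert certainty_equivalent < expected_dollars
```

Maximizing expected *utility* under a concave utility function automatically penalizes wide distributions, which is the sense in which the definition of utility "takes care of" risk aversion.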
His point is that the upside is bounded much more than the downside.
This is not always true (as Taleb himself points out in The Black Swan): in investing, the worst that can happen is that you lose all of your principal; the best that can happen is unbounded.
Yes, but my point is that this is also true for, say, leaving the house to have fun.
Yogi Berra, on Timeless Decision Theory.
If only I cared about who goes to my funeral.
--Megan McArdle
Hmmm... let's try filling something else in there.
"I don't understand how anyone could support ISIS/Bosnian genocide/North Darfur."
While I think a person is indeed more effective at life for being able to perform the cognitive contortions necessary to bend their way into the mindset of a murderous totalitarian (without actually believing what they're understanding), I don't consider normal people lacking for their failure to understand refined murderous evil of the particularly uncommon kind -- any more than I expect them to understand the appeal of furry fandom (which I feel a bit guilty for picking out as the canonical Ridiculously Uncommon Weird Thing).
You don't have to share a taste for, or approval of "...refined murderous evil of the particularly uncommon kind..." It can be explained as a reaction to events or conditions, and history is full of examples. HOWEVER. We have this language that we share, and it signifies. I understand that a rapist has mental instability and other mental health issues that cause him to act not in accordance with common perceptions of minimum human decency. But I can't say out loud, "I understand why some men rape women." It's an example of a truth that is too dangerous to say because emotions prevent others from hearing it.
I like this and agree that usually or at least often the people making these "I don't understand how anyone could ..." statements aren't interested in actually understanding the people they disagree with. But I also liked Ozy's comment:
We could charitably translate "I don't understand how anyone could X" as "I notice that my model of people who X is so bad, that if I tried to explain it, I would probably generate a strawman".
Hacker School has a set of "social rules [...] designed to curtail specific behavior we've found to be destructive to a supportive, productive, and fun learning environment." One of them is "no feigning surprise":
I think this is a good rule and when I find out someone doesn't know something that I think they "should" already know, I instead try to react as in xkcd 1053 (or by chalking it up to a momentary maladaptive brain activity change on their part, or by admitting that it's probably not that important that they know this thing). But I think "feigning surprise" is a bad name, because when I'm in this situation, I'm never pretending to be surprised in order to demonstrate how smart I am, I am always genuinely surprised. (Surprise means my model of the world is about to get better. Yay!)
I don't think that sort of surprise is necessarily feigned. However, I do think it's usually better if that surprise isn't mentioned.
I am imagining the following exchange:
"I don't understand how anyone could believe X!"
"Great, the first step to understanding is noticing that you don't understand. Now, let me show you why X is true..."
I suspect that most people saying the first line would not take well to hearing the second.
I suspect the same, but still think "I can't understand why anyone would believe X" is probably better than "people who believe X or say they believe X only do so because they hate [children / freedom / poor people / rich people / black people / white people / this great country of ours / etc.]"
Or add a fourth layer: I think that I will rise in status by publicly signalling to my Facebook friends: "I lack the ability or willingness to attempt even a basic understanding of the people who disagree with me."
People do lots of silly things to signal commitment; the silliness is part of the point. This is a reason initiation rituals are often humiliating, and why members of minor religions often wear distinctive clothing or hairstyles. (I think I got this from this podcast interview with Larry Iannaccone.)
I think posts like the ones to which McArdle is referring, and the beliefs underlying them, are further examples of signaling attire. "I'm so committed, I'm even blind to whatever could be motivating the other side."
A related podcast is with Arnold Kling on his e-book (which I enjoyed) The Three Languages of Politics. It's about (duh) politics--specifically, American politics--but it also contains an interesting and helpful discussion on seeing things from others' point of view, and explicitly points to commitment-signaling (and its relation to beliefs) as a reason people fail to see eye to eye.
While I agree with your actual point, I note with amusement that what's worse is the people who claim they do understand: "I understand that you want to own a gun because it's a penis-substitute", "I understand that you don't want me to own a gun because you live in a fantasy world where there's no crime", "I understand that you're talking about my beauty because you think you own me", "I understand that you complain about people talking about your beauty as a way of boasting about how beautiful you are."... None of these explanations are anywhere near true.
It would be a sign of wisdom if someone actually did post "I'm stupid: I can hardly ever understand the viewpoint of anyone who disagrees with me."
Ah, but would it be, though?
It probably would. Usually a person who writes something like this is looking for an explanation.
It would probably be some kind of weird signalling game. On the other hand, posting "I don't understand how etc etc, please, somebody explain to me the reasoning behind it" would be a good strategy to start a debate and open an avenue to "convert" others.
Now repeat the same statement, only instead of abortions and carbon taxes, substitute the words "believe in homeopathy". (Creationism also works.)
People do say that--yet it doesn't mean any of the things the quote claims it means (at least not in a nontrivial sense).
Then what does it mean in those cases? Because the only ones I can think of are the three Megan described.
If you mean "I can't imagine how anyone could be so stupid as to believe in homeopathy/creationism", which is my best guess for what you mean, that's a special case of the second meaning.
"I don't understand how someone could believe X" typically means that the speaker doesn't understand how someone could believe in X based on good reasoning. Understanding how stupidity led someone to believe X doesn't count.
Normal conversation cannot be parsed literally. It is literally true that understanding how someone incorrectly believes X is a subclass of understanding how someone believes in X; but it's not what those words typically connote.
-- David Russo
I don't understand what he wanted to say by this. Could somebody explain?
Instead of giving your employees a $100/month raise, give them a $1200 bonus once a year. It's the same money, but it will make them happier, because they will keep noticing it for years.
It'll also be easier to reduce a bonus (because of poor performance on the part of the employee or company) than it will be to reduce a salary.
I say give them smaller raises more frequently. After the first annual bonus, it becomes expected.
Intermittent reward for the win.
http://en.wikipedia.org/wiki/Hedonic_treadmill
Basically what Lumifer said.
It speaks to anchoring and evaluating incentives relative to an expected level.
Basically, receiving a raise is seen as a good thing because you are getting more money than a month ago (anchor). But after a while you will be getting the same amount of money as a month ago (the anchor has moved) so there is no cause for joy.
While you are getting a raise you might be more motivated to work. However, after a while your new salary just becomes your salary, and you would need a new raise to get additional motivation.
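The anchoring story can be put into a deliberately crude toy model (entirely my own construction, just to illustrate the mechanism): suppose felt joy each month is the pay received above the current anchor, and the anchor then adapts to whatever was last paid.

```python
def felt_joy(pay_stream, anchor):
    """Sum of month-by-month joy, where joy is pay above the current
    anchor and the anchor then adapts to last month's pay."""
    total = 0
    for pay in pay_stream:
        total += max(pay - anchor, 0)
        anchor = pay  # hedonic adaptation: expectations catch up
    return total

BASE = 5000  # hypothetical monthly salary before the change

# Two years of a $100/month raise vs. two years with a $1200 annual bonus.
raise_stream = [BASE + 100] * 24
bonus_stream = ([BASE] * 11 + [BASE + 1200]) * 2

assert sum(raise_stream) == sum(bonus_stream)  # same total money paid out

felt_joy(raise_stream, BASE)  # 100: noticed once, then the anchor catches up
felt_joy(bonus_stream, BASE)  # 2400: the anchor resets between bonuses
```

Under this model the anchor neutralizes the raise after a single month, while each annual bonus is experienced in full; that is the hedonic-treadmill point, not a claim about real compensation design.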
-- Scott Lynch, "The Lies of Locke Lamora", page 150.
If I remember the book correctly, this part comes from a scene where Locke Lamora is attempting to pull a double con on the speaking character, impersonating both the merchant and a spy/internal security agent (Salvara) investigating the merchant. So while the don's character acts "rationally" here, he is doing so while being deceived because of his assumptions - showing the very same error again.
-- Freeman Dyson
Airplanes may not work on fusion or weigh millions of tons, but still, substituting a few words in I could say similar things about airplanes. Or electrical grids. Or smallpox vaccination. But nobody does.
Hypothesis: he has an emotional reaction to the way nuclear weapons are used--he thinks that is arrogant--and he's letting those emotions bleed into his reaction to nuclear weapons themselves.
Are you sure? I looked for just a bit and found
http://inventors.about.com/od/wstartinventors/a/Quotes-Wright-Brothers.htm
I imagine that if inventors have bombastic things to say about the things they invent, they frequently keep those thoughts to themselves to avoid sounding arrogant (e.g. I don't think it would have gone over well if Edison had started referring to himself as "Edison, the man who lit the world of the night").
I meant that nobody accuses people awed by airplanes of being arrogant; I didn't mean that nobody is awed by airplanes.
(BTW, I wouldn't be surprised if Edison did say something similar; he was notorious for self-promotion.)
-- The Righteous Mind Ch 3, Jonathan Haidt
I wonder if anyone who needs to make important judgments a lot makes an actual effort to maintain affective hygiene. It seems like a really good idea, but poor signalling.
Don't go before a hungry judge.
~Jennifer Diane "Chatoyance" Reitz, Friendship Is Optimal: Caelum Est Conterrens
A couple of those (specifically lines 2, 5, and 11) should probably be "I'm" rather than "I am" to preserve the rhythm.
I disagree with you on 5; it works better as I am than I'm.
EDIT: Also, 9 works better as "I'm"
Really? Huh. I'm counting from "I am the playing..." = 1, and I really can't read line 5 with "I am" so it scans - I keep stumbling over "animal".
Perceiving magic is precisely the same thing as perceiving the limits of your own understanding.
-Jaron Lanier, Who Owns the Future?, (e-reader does not provide page number)
You could also be perceiving something way, way past the limits of your own understanding, or alternately perceiving something which would be well within the limits of your understanding if you were looking at it from a different angle.
The percept of magic, given its possible hallucination or implantation, is not necessarily an instance of limited understanding; certainly not in the relevant sense here, at least.
That doesn't seem quite true... if I'm confused while reading a textbook, I may be perceiving the limits of my understanding but not perceiving magic.
Agreed. I think what Lanier should have said is that a perception of magic is a subset of the things one doesn't understand, rather than claiming that they are equal. Bugs that I am currently hunting but haven't nailed down are things I don't understand, but they certainly don't seem magical.
At least you hope not.