- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
~Jennifer Diane "Chatoyance" Reitz, Friendship Is Optimal: Caelum Est Conterrens
A couple of those (specifically lines 2, 5, and 11) should probably be "I'm" rather than "I am" to preserve the rhythm.
I disagree with you on 5; it works better as I am than I'm.
EDIT: Also, 9 works better as "I'm"
Really? Huh. I'm counting from "I am the playing..." = 1, and I really can't read line 5 with "I am" so it scans - I keep stumbling over "animal".
I'm counting the same way. With stress in italics,
sounds much better to me than
I should probably note that I read most of the lines with an approximately syllable-sized pause before 'but', and the animal line without that pause. The poem feels to me like it's written mainly in dactylls with some trochees and a final stressed syllable on each line.
Compare with
While I'm at this, how I read lines 9-11 as written
Which definitely break up the rhythm of the first half entirely, which is probably intentional, but line 9 in particular is awkward, which I didn't catch on the first pass. If I were trying to keep that rhythm, I'd read it this way:
And be unhappy that "What'ver" is no longer reasonable English, even for poetry.
Perhaps you want whate'er? It sounds a bit archaic, but not wrong.
I don't know much about historical stress patterns, but when I pronounce "whate'er", the stress moves to the second syllable (wut-air), which doesn't improve things.
-- The Righteous Mind Ch 3, Jonathan Haidt
I wonder if anyone who needs to make important judgments a lot makes an actual effort to maintain affective hygiene. It seems like a really good idea, but it makes for poor signalling.
Don't go before a hungry judge.
Perceiving magic is precisely the same thing as perceiving the limits of your own understanding.
-Jaron Lanier, Who Owns the Future?, (e-reader does not provide page number)
That doesn't seem quite true... if I'm confused while reading a textbook, I may be perceiving the limits of my understanding but not perceiving magic.
Agreed. I think what Lanier should have said is that perceptions of magic are a subset of things one doesn't understand, rather than claiming that the two are equal. Bugs that I am currently hunting but haven't nailed down are things I don't understand, but they certainly don't seem magical.
At least you hope not.
The percept of magic, given its possible hallucination or implantation, is not necessarily an instance of limited understanding; certainly not in the relevant sense here, at least.
You could also be perceiving something way way past the limits of your own understanding, or alternately perceiving something which would be well within the limits of your understanding if you were looking at it from a different angle
--Megan McArdle
Or, (4), "I keep asking, but they won't say"....
Does that happen?
It does to me. Have you tried getting sense out of an NRx or HBD'er?
Haven't tried it myself, but it seems to work for Scott Alexander
NRx are so bad at communicating their position in language anyone can understand that they refer to Scott's Anti-Reactionary FAQ to explain it. This is the guy who steelmanned Gene "Timecube" Ray. He has superpowers.
“Reactionary Philosophy In An Enormous, Planet-Sized Nutshell” is where he explains what the NR position is, and “The Anti-Reactionary FAQ” is where he explains why he disagrees with it. The former is what neoreactionaries have linked to to explain their position.
Yes. That's why I'm somewhat surprised he seems to interpret “reptilian aliens” literally.
Yes, what they say frequently makes a lot more sense than the mainstream position on the issue in question.
I completely disagree. Their grasp of politics is largely based on meta-contrarianism, and has failed to "snap back" into basing one's views on a positive program whose goodness and rationality can be argued for with evidence.
Huh? HBD'ers are making observations about the world; they do not have a "positive program". As for NRx, they do have a positive program and do use evidence to argue for it; see the NRx thread and the various blogs linked there for some examples.
There's no reason to use those nonstandard abbreviations. Neither of them is in Urban Dictionary.
NRx probably means neoreaction, but it doesn't make it into the first 10 Google results. "HBD.er" in that spelling seems to be wrong; "HBD'er" is what turns up when you Google it.
Bracket neoreaction for the time being. I get that you disagree with HBD positions, but do you literally have trouble comprehending their meaning?
Now repeat the same statement, only instead of abortions and carbon taxes, substitute the words "believe in homeopathy". (Creationism also works.)
People do say that--yet it doesn't mean any of the things the quote claims it means (at least not in a nontrivial sense).
Then what does it mean in those cases? Because the only ones I can think of are the three Megan described.
If you mean "I can't imagine how anyone could be so stupid as to believe in homeopathy/creationism", which is my best guess for what you mean, that's a special case of the second meaning.
"I don't understand how someone could believe X" typically means that the speaker doesn't understand how someone could believe in X based on good reasoning. Understanding how stupidity led someone to believe X doesn't count.
Normal conversation cannot be parsed literally. It is literally true that understanding how someone incorrectly believes X is a subclass of understanding how someone believes in X; but it's not what those words typically connote.
I don't think that applies here. Your addition "based on good reasoning" is not a non-literal meaning, but a filling in of omitted detail. Gricean implicature is not non-literality, and the addition does not take the example outside McArdle's analysis.
As always, confusion is a property of the confused person, not of the thing outside themselves that they are confused about. If a person says they cannot understand how anyone could etc., that is, indeed, literally true. That person cannot understand the phenomenon; that is their problem. Yet their intended implication, which McArdle is pointing out does not follow, is that all of the problem is in the other person. Even if the other person is in error, how can one engage with them from the position of "I cannot understand how etc."? The words are an act of disengagement, behind a smokescreen that McArdle blows away.
Sure it is. The qualifier changes the meaning of the statement. By definition, if the sentence lacks the qualifier but is to be interpreted as if it has one, it is to be interpreted differently than its literal words. Having to be interpreted as containing detail that is not explicitly written is a type of non-literalness.
No, it's not. I understand how someone can believe in creationism: they either misunderstand science (probably due to religious bias) or don't actually believe science works at all when it conflicts with religion. Saying "I don't understand how someone can believe in creationism" is literally false--I do understand how.
What it means is "I don't understand how someone can correctly believe in creationism." I understand how someone can believe in creationism, but my understanding involves the believer making mistakes. The statement communicates that I don't know of a reason other than making mistakes, not that I don't know any reason at all.
Because "I don't understand how" is synonymous, in ordinary conversation, with "the other person appears to be in error." It does not mean that I literally don't understand, but rather that I understand it as an error, so it is irrelevant that literally not understanding is an act of disengagement.
Non-literality isn't a get-out-of-your-words-free card. There is a clear difference between saying "you appear to be in error" and "I can't understand how anyone could think that", and the difference is clearly expressed by the literal meanings of those words.
And to explicate "I don't understand etc." with "Of course I do understand how you could think that, it's because you're ignorant or stupid" is not an improvement.
Non-literalness is a get-out-of-your-words-free card when the words are normally used in conversation, by English speakers in general, to mean something non-literal. Yes, if you just invented the non-literal meaning yourself, there are limits to how far from the literal meaning you can be and still expect to be understood, but these limits do not apply when the non-literal meaning is already established usage.
The original quote gives the intended meaning as "I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to..." In other words, the original rationality quote explicitly excludes the possibility of "I understand you believe it because you're ignorant or stupid". It misinterprets the statement as literally claiming that you don't understand in any way whatsoever.
The point is that the quote is a bad rationality quote because it makes a misinterpretation. Whether the statement that it misinterprets is itself a good thing to say is irrelevant to the question of whether it is being misinterpreted.
Established by whom? You are the one claiming that
These two expressions mean very different things. Notice that I am claiming that you are in error, but not saying, figuratively or literally, that I cannot understand how you could possibly think that.
That is not how figurative language works. I could expand on that at length, but I don't think it's worth it at this point.
"A is synonymous with B" doesn't mean "every time someone said B, they also said A". "You've made more mistakes than a zebra has stripes" is also synonymous with "you're in error" and you clearly didn't say that, either.
(Of course, "is synonymous with" means "makes the same assertion about the main topic", not "is identical in all ways".)
Now I just thought of this, so maybe I'm wrong, but I don't think "I don't understand how someone can think X" is really meant as any sort of piece of reasonable logic, or a substitution for one. I suspect this is merely the sort of stuff people come up with when made to think about it.
Rather, "I don't understand how..." is an appeal to the built-in expectation that things make obvious sense. If I want to claim "what you're saying is nontribal and I have nothing to do with it", stating that you're not making sense to me works whether or not I can actually follow your reasoning, since if you really weren't making sense to me despite minimal effort on my part, that would imply bad things about you and what you're saying. It's a rejection that makes no sense if you think about it, but it's not meant to be thought about; it's really closer to "la la la I am not listening to you".
Am I making sense?
Yes.
This is close, but I don't think it captures everything. I used the examples of creationism and homeopathy because they are unusual examples where there isn't room for reasonable disagreement. Every person who believes in one of those does so because of bias, ignorance, or error. This disentangles the question of "what is meant by the statement" and "why would anyone want to say what is meant by the statement".
You have correctly identified why, for most topics, someone would want to say such a thing. Normally, "there's no room for reasonable disagreement; you're just wrong" is indeed used as a tribal membership indicator. But the statement doesn't mean "what you're saying is nontribal", it's just that legitimate, nontribal, reasons to say "you are just wrong" are rare.
Well that's true for every false belief anyone has. So what's so special about those examples?
You say "there isn't room for reasonable disagreement", which taken literally is just another way of phrasing "I don't understand how anyone could believe X". In any case, could you expand on what you mean by "not room for reasonable disagreement" since in context it appears to mean "all the tribes present agree with it".
You're being literal again. Every person who believes in one of those primarily does so because of major bias, ignorance, or error. You can't just distrust a single source you should have trusted, or make a single bad calculation, and end up believing in creationism or homeopathy. Your belief-finding process has to contain fundamental flaws for that.
And "it has three sides" is just another way of phrasing "it is a triangle", but I can still explain what a triangle is by describing it as something with three sides. If it wasn't synonymous, it wouldn't be an explanation.
(Actually, it's not quite synonymous, for the same reason that the original statement wasn't correct: if you're taking it literally, "I don't understand how anyone could believe X" excludes cases where you understand that someone makes a mistake, and "there isn't room for reasonable disagreement" includes such cases.)
You can describe anything which is believed by some people and not others in terms of tribes believing it. But not all such descriptions are equally useful; if the tribes fall into categories, it is better to specify the categories.
Most people who say "I don't understand how someone could believe X" would fail a reverse Turing test for that position. They often literally don't understand how someone comes to believe X.
While I agree with your actual point, I note with amusement that what's worse is the people who claim they do understand: "I understand that you want to own a gun because it's a penis-substitute", "I understand that you don't want me to own a gun because you live in a fantasy world where there's no crime", "I understand that you're talking about my beauty because you think you own me", "I understand that you complain about people talking about your beauty as a way of boasting about how beautiful you are."... None of these explanations are anywhere near true.
It would be a sign of wisdom if someone actually did post "I'm stupid: I can hardly ever understand the viewpoint of anyone who disagrees with me."
Ah, but would it be, though?
It would probably be some kind of weird signalling game. On the other hand, posting "I don't understand how etc. etc.; please, somebody explain to me the reasoning behind it" would be a good strategy to start debating and open an avenue to "convert" others.
It probably would. Usually a person who writes something like this is looking for an explanation.
People do lots of silly things to signal commitment; the silliness is part of the point. This is a reason initiation rituals are often humiliating, and why members of minor religions often wear distinctive clothing or hairstyles. (I think I got this from this podcast interview with Larry Iannaccone.)
I think posts like the ones to which McArdle is referring, and the beliefs underlying them, are further examples of signaling attire. "I'm so committed, I'm even blind to whatever could be motivating the other side."
A related podcast is with Arnold Kling on his e-book (which I enjoyed) The Three Languages of Politics. It's about (duh) politics--specifically, American politics--but it also contains an interesting and helpful discussion on seeing things from others' point of view, and explicitly points to commitment-signaling (and its relation to beliefs) as a reason people fail to see eye to eye.
Or add a fourth layer: I think that I will rise in status by publicly signalling to my Facebook friends: "I lack the ability or willingness to attempt even a basic understanding of the people who disagree with me."
I like this and agree that usually or at least often the people making these "I don't understand how anyone could ..." statements aren't interested in actually understanding the people they disagree with. But I also liked Ozy's comment:
Hacker School has a set of "social rules [...] designed to curtail specific behavior we've found to be destructive to a supportive, productive, and fun learning environment." One of them is "no feigning surprise":
I think this is a good rule and when I find out someone doesn't know something that I think they "should" already know, I instead try to react as in xkcd 1053 (or by chalking it up to a momentary maladaptive brain activity change on their part, or by admitting that it's probably not that important that they know this thing). But I think "feigning surprise" is a bad name, because when I'm in this situation, I'm never pretending to be surprised in order to demonstrate how smart I am, I am always genuinely surprised. (Surprise means my model of the world is about to get better. Yay!)
I don't think that sort of surprise is necessarily feigned. However, I do think it's usually better if that surprise isn't mentioned.
I am imagining the following exchange:
"I don't understand how anyone could believe X!"
"Great, the first step to understanding is noticing that you don't understand. Now, let me show you why X is true..."
I suspect that most people saying the first line would not take well to hearing the second.
I suspect the same, but still think "I can't understand why anyone would believe X" is probably better than "people who believe X or say they believe X only do so because they hate [children / freedom / poor people / rich people / black people / white people / this great country of ours / etc.]"
We could charitably translate "I don't understand how anyone could X" as "I notice that my model of people who X is so bad, that if I tried to explain it, I would probably generate a strawman".
Hmmm... let's try filling something else in there.
"I don't understand how anyone could support ISIS/Bosnian genocide/North Darfur."
While I think a person is indeed more effective at life for being able to perform the cognitive contortions necessary to bend their way into the mindset of a murderous totalitarian (without actually believing what they're understanding), I don't consider normal people lacking for their failure to understand refined murderous evil of the particularly uncommon kind -- any more than I expect them to understand the appeal of furry fandom (which I feel a bit guilty for picking out as the canonical Ridiculously Uncommon Weird Thing).
-- David Russo
I don't understand what he wanted to say by this. Could somebody explain?
When you get a raise, you might be more motivated to work. However, after a while your new salary just becomes your normal salary, and you would need another raise to get additional motivation.
It speaks to anchoring and evaluating incentives relative to an expected level.
Basically, receiving a raise is seen as a good thing because you are getting more money than a month ago (anchor). But after a while you will be getting the same amount of money as a month ago (the anchor has moved) so there is no cause for joy.
http://en.wikipedia.org/wiki/Hedonic_treadmill
Basically what Lumifer said.
Instead of giving your employees a $100 monthly raise, give them a $1200 bonus once a year. It's the same money, but it will make them happier, because they will keep noticing it for years.
It'll also be easier to reduce a bonus (because of poor performance on the part of the employee or company) than it will be to reduce a salary.
-- Freeman Dyson
Airplanes may not work on fusion or weigh millions of tons, but still, substituting a few words in I could say similar things about airplanes. Or electrical grids. Or smallpox vaccination. But nobody does.
Hypothesis: he has an emotional reaction to the way nuclear weapons are used--he thinks that is arrogant--and he's letting those emotions bleed into his reaction to nuclear weapons themselves.
Are you sure? I looked for just a bit and found
http://inventors.about.com/od/wstartinventors/a/Quotes-Wright-Brothers.htm
I imagine that if inventors have bombastic things to say about the things they invent, they frequently keep those thoughts to themselves to avoid sounding arrogant (e.g., I don't think it would have gone over well if Edison had started referring to himself as "Edison, the man who lit the world of the night").
I meant that nobody accuses people awed by airplanes of being arrogant; I didn't mean that nobody is awed by airplanes.
(BTW, I wouldn't be surprised if Edison did say something similar; he was notorious for self-promotion.)
-- Scott Lynch, "The Lies of Locke Lamora", page 150.
If I remember the book correctly, this part comes from a scene where Locke Lamora is attempting to pull a double con on the speaking character by impersonating both the merchant and a spy/internal security agent (Salvara) investigating the merchant. So while the don's character acts "rationally" here, he is doing so while being deceived because of his assumptions, showing the very same error again.
Yogi Berra, on Timeless Decision Theory.
If only I cared about who goes to my funeral.
Nassim N. Taleb
Opportunity costs?
I would say it should be the one with best expected returns. But I guess Taleb thinks the possibility of a very bad black swan overrides everything else - or at least that's what I gathered from his recent crusade against GMOs.
His point is that the upside is bounded much more than the downside.
Yes, but my point is that this is also true for, say, leaving the house to have fun.
This is not always true (as Taleb himself points out in The Black Swan): in investing, the worst that can happen is you lose all of your principal; the best that can happen is unbounded.
True, but not as easy to follow as Taleb's advice. In the extreme we could replace every piece of advice with "maximize your utility".
Not quite, as most people are risk-averse and care about the width of the distribution of expected returns, not only about its mean.
If you measure "returns" in utility (rather than dollars, root mean squared error, lives, whatever) then the definition of utility (and in particular the typical pattern of decreasing marginal utility) takes care of risk aversion. But since nobody measures returns in utility your advice is good.
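The point that a concave utility function absorbs risk aversion can be made concrete with made-up numbers. The wealth levels, probabilities, and the choice of log utility below are all illustrative assumptions, not anything from Taleb or the thread: two gambles have the same expected dollar return, but the wider one has lower expected log utility.

```python
import math

def expected_utility(outcomes, utility=math.log):
    """Outcomes is a list of (probability, wealth) pairs."""
    return sum(p * utility(w) for p, w in outcomes)

wealth = 100.0
safe = [(1.0, wealth + 10)]                        # certain +10
risky = [(0.5, wealth + 110), (0.5, wealth - 90)]  # 50/50: +110 or -90

# Same expected dollar return (110 in both cases)...
ev_safe = sum(p * w for p, w in safe)
ev_risky = sum(p * w for p, w in risky)
assert abs(ev_safe - ev_risky) < 1e-9

# ...but log utility (decreasing marginal utility) prefers the
# narrow distribution -- risk aversion falls out of the curvature.
print(expected_utility(safe) > expected_utility(risky))  # True
```

Measured in dollars, the two gambles are tied; measured in log utility, the certain option wins, which is what "decreasing marginal utility takes care of risk aversion" means in practice.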
What? He's crusading against GMOs? Can you give me some references?
I like his writing a lot, but I remember noting the snide way he dismissed doctors who "couldn't imagine" that there could be medicinal benefit to mother's milk, as if they were arrogant fools.
My source was his tweets. Sorry I can't give anything concrete right now, but "Taleb GMO" apparently gets a lot of hits on Google. I didn't really dive into it, but as I understood it he takes the precautionary principle (the burden of proof of safety is on GMOs, not of danger on opponents) and adds that nobody can ever really know the risks, so the burden of proof hasn't been and can't be met.
"They're arrogant fools" seems to be Taleb's charming way of saying "they don't agree with me".
I like him too. I loved The Black Swan and Fooled by Randomness back when I read them. But I realized I didn't quite grok his epistemology a while back, when I found him debating religion with Dennett, Harris and Hitchens. Or rather, debating against them, for religion, as a Christian, as far as I can tell based on a version of "science can't know everything". (www.youtube.com/watch?v=-hnqo4_X7PE)
I've been meaning to ask Less Wrong about Taleb for a while, because this just seems kookish to me, but it's entirely possible that I just don't get something.
I think that Taleb has one really good insight -- the Black Swan book -- and then he decided to become a fashionable French philosopher...
"Can't know" misses the point. "Doesn't know" is much closer to what Taleb is talking about.
Robin Hanson recently wrote a post against being a rationalist. The core of Nassim's argument is to focus your skepticism where it matters. The cost of mistakenly being a Christian is low. The cost of mistakenly believing that your retirement portfolio is secure is high. According to Taleb, people like the New Atheists should spend more of their time on those beliefs that actually matter.
It's also worth noting that the New Atheists aren't skeptics in the sense that they believe it's hard to know things. Their books are full of statements of certainty. Taleb, on the other hand, is a skeptic in that sense.
For him religion also isn't primarily about believing in God but about following certain rituals. He doesn't believe in cutting Chelstrons fence with Ockham's razor.
That's not self-evident to me at all.
It's not self-evident, but the New Atheists don't make a good argument that it has a high cost. Atheist scientists in good standing like Roy Baumeister say that being religious helps with willpower.
Being a Mormon correlates with certain characteristics, and therefore Mormons sometimes recognize other Mormons. Scientific investigation found that they use markers of health to do so, and those markers wouldn't otherwise be obvious candidates for identifying Mormons.
There's some data that being religious correlates with longevity.
Of course those things aren't strong evidence that being religious is beneficial, but that's where Chesterton's fence comes into play for Taleb. He was born Christian so he stays Christian.
While my given name is Christian, I wasn't raised a Christian and haven't believed in God at any point in my life, and the evidence doesn't get me to start being a Christian, but I do understand Taleb's position. Taleb doesn't argue that atheists should become Christians either.
(If there is something called "Chelston's Fence" (which my searches did not turn up), apologies.)
Chesterton's Fence isn't about inertia specifically, but about suspecting that other people had reasons for their past actions even though you currently can't see any, and finding out those reasons before countering their actions. In Christianity's case the reasons seem obvious enough (one of the main ones: trust in a line of authority figures going back to antiquity + antiquity's incompetence at understanding the universe) that Chesterton's Fence is not very applicable. Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
Sorry for the typo.
If Christianity lowered the willpower of its members, then it would be at a disadvantage in memetic competition against other worldviews that increase willpower.
Predicting complex systems like memetic competition between different memes over the span of centuries is very hard. In cognitive psychology, experiments frequently invalidate basic intuitions about the human mind.
Trust bootstrapping is certainly one of the functions of religion, but it's not clear that's bad. Bootstrapping trust is generally a hard problem. Trust makes people cooperate. If I remember right, Taleb makes the point somewhere that the word "believe" derives from a word that means trust.
As far as "antiquity's incompetence at understanding the universe" goes, understanding the universe is very important to people like the New Atheists, but for Taleb it's not the main thing religion is about. For him it's about practically following a bunch of rituals, such as being at church every Sunday.
I often see this argument from religions themselves or similar sources, not from those opposed to religion. Not this specific argument, but this type of argument: the idea of using the etymology of a word to prove something about the concept represented by the word. As we know or should know, a word's etymology may not necessarily have much of a connection to what it means or how it is used today. ("Malaria" means "bad air" because of the belief that it was caused by that; "terrific" originally meant something that terrifies.)
Also consider that by conservation of expected evidence if the etymology of the word is evidence for your point, if that etymology were to turn out to be false, that would be evidence against your point. Would you consider it to be evidence against your point if somehow that etymology were to be shown false?
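Conservation of expected evidence is just an algebraic identity on Bayes' rule: the prior must equal the probability-weighted average of the posteriors, so if one outcome would raise your credence, the other must lower it. A toy check with invented probabilities (none of these numbers come from the discussion):

```python
# Made-up numbers for illustration only.
p_e = 0.3              # probability the etymology claim checks out
p_h_given_e = 0.6      # posterior in the hypothesis if it does
p_h_given_not_e = 0.2  # posterior if it doesn't

# Conservation of expected evidence:
# P(H) = P(H|E) * P(E) + P(H|~E) * P(~E) = 0.6*0.3 + 0.2*0.7 = 0.32
prior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)

# If learning E raises P(H) above the prior, learning ~E must
# drop it below the prior -- the evidence can't point the same
# way no matter how it comes out.
assert p_h_given_e > prior > p_h_given_not_e
```

Whatever posteriors you assign, they must bracket the prior in this way; an "argument" that would count as support however the etymology turned out is not evidence at all.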
I feel like it should be pointed out that being kookish and being a source of valuable insight are not incompatible.
-- David Malki !
I know that. People are so lame. Not me though. I am one of the genius ones.
People who often misunderstand others: 6% of geniuses, 94% of garden-variety nonsense-spouters.
The View from Hell from an article recommended by asd.
Contrast:
-- Feynman
One might even FTFY the first quote as:
"We see what we see for adaptive reasons, because it is the truth."
This part:
is contradicted by the context of the whole article. The article is in praise of insight porn (the writer's own words for it) as the cognitive experience of choice for nerds (the writer's word for them, in whom he includes himself and for whom he is writing) while explicitly considering its actual truth to be of little importance. He praises the experience of reading Julian Jaynes and in the same breath dismisses Jaynes' actual claims as "batshit insane and obviously wrong".
In other words, "Nerds ... want to see what's really going on" is, like the whole article, a statement of insight porn, uttered for the feeling of truthy insight it gives, "not because it is the truth".
How useful is this to someone who actually wants "to see what's really going on"?
It's a useful sketch of a type of experience. The experience is given a name. Armed with that name, you can choose to avoid it or not.
Insight porn, in other words?
I downvoted this and another comment further up for not being about anything but nerd pandering, which I feel is just ego-boosting noise. Not the type of content I want to see on here.
Well, if you think the quote doesn't say significantly more than "nerds are great" you are right to downvote it.
I think the comment in this thread would have been equally relevant and possibly better without the last sentence, but I don't see how the Cryptonomicon quote (which I assume to be the one you meant?) is nerd-pandering, since it doesn't imply value judgments about being or identifying as a nerd.
The Cryptonomicon quote was great; I was talking about its child comment.
The easy way to make a convincing simulation is to disable the inner critic.
The inner critic that is disabled during regular dreaming turns back on during lucid dreaming. People who have them seem to be quite impressed by lucid dreams.
You still can't focus on stable details.
You can with training. It is a lot like training visualization: In the beginning, the easiest things to visualize are complex moving shapes (say a tree with wind going through it), but if you try for a couple of hours, you can get all the way down to simple geometric shapes.
That or the extent of the human capacity for pareidolia on waking.
Randall Munroe on communicating with humans
Related: When (Not) To Use Probabilities:
For the opposite claim: If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics:
I tend to side with Yvain on this one, at least so long as your argument isn't going to be judged by its appearance. Specifically on the LHC thing, I think making up the 1 in 1000 makes it possible to substantively argue about the risks in a way that "there's a chance" doesn't.
A detailed reading provides room for these to coexist. Compare:
with
I'd agree with Randall Munroe more wholeheartedly if he had said “added a couple of zeros” instead.
-- Cryptonomicon by Neal Stephenson
Was the context one where Waterhouse was proving a conditional, "if axioms A, B, C, then theorem Z", or one where where he was trying to establish Z as a truth about the world, and therefore also had the burden of showing that axioms A, B, C were supported by experimental evidence?
This quote seems to me like it touches on a common fallacy: that making "confident" probability estimates (close to 0 or 1) is the same as being a "confident" person. In fact, they're ontologically distinct.
-- Cryptonomicon by Neal Stephenson
Neal Stephenson is good as a sci-fi writer, but I think he's almost as good as an ethnographer of nerds. Pretty much everything he writes has something like this in it, and most of it is spot-on.
On the other hand, he does occasionally succumb to a sort of mild geek-supremacist streak (best observed in Anathem, unless you're one of the six people besides me who were obsessed enough to read In The Beginning... Was The Command Line).
You say that like it's a bad thing.
You say that like it's a bad thing.
Of course I read In the Beginning was the Command Line. The supply of writing from witty bearded men talking to you about cool things isn't infinite, you know.
I think everyone who belongs to a certain age group and runs Linux has read In the Beginning was the Command Line. And yes, that's me admitting to having read it, and kinda believed the arguments at one point.
It's a well-known essay. It even has a Wikipedia article.
I just re-read, well, re-skimmed it. Ah, the nostalgia. It's very dated now. 15 years on, its prediction that proprietary operating systems would lose out to free software has completely failed to come true. Linux still ticks over, great for running servers and signalling hacker cred, but if it's so great, why isn't everyone using it? At most it's one of three major platforms: Windows, OSX, and Linux. Or two out of five if you add iOS and Android (which is based on Linux). OS domination by Linux is no closer, and although there's a billion people using Android devices, command lines are not part of their experience.
Stephenson wrote his essay (and I read it) before Apple switched to Unix in the form of OSX, but you can't really say that OSX is Unix plus a GUI; rather, OSX is an operating system that includes a Unix interface. In other words, exactly what Stephenson asked for:
BeOS failed, and OSX appeared three years after Stephenson's essay. I wonder what he thinks of them now—both OSX and In the Beginning.
Yeah, I bought a hard copy in a non-technical bookstore. "Six people" was a joke based on its, er, specialized audience compared to the likes of Snow Crash; in terms of absolute numbers it's probably less obscure than, say, Zodiac.
If memory serves, Stephenson came out in favor of OSX a couple years after its release, comparing it to BeOS in the context of his essay. I can't find the cite now, though. Speaking for myself, I find OSX's ability to transition more-or-less seamlessly between GUI and command-line modes appealing, but its walled developer garden unspeakably annoying.
With some googling, I found this, a version of ITBWTCL annotated (by someone else) five years later, including a quote from Stephenson, saying that the essay "is now badly obsolete and probably needs a thorough revision". That line is quoted in many places, but the only link I turned up for it on his own website was dead (not on the Wayback Machine either).
That's a debatable point :-)
UNIX can be defined in many ways -- historically (what did the codebase evolve from), philosophically, technically (monolithic kernel, etc.), practically (availability and free access to the usual toolchains), etc.
I don't like OSX and Apple in general because I really don't like walled gardens and Apple operates on the "my way or the highway" principle. I generally run Windows for Office, Photoshop, games, etc. and Linux, nowadays usually Ubuntu, for heavy lifting. I am also a big fan of VMs which make a lot of things very convenient and, in particular, free you from having to make the big choice of the OS.
FYI: The 'you can't run this untrusted code' dialog is easy to get around.
Can't speak for Lumifer, but I was more annoyed by the fact that (the version I got of) OSX doesn't ship with a working developer toolchain, and that getting one requires either jumping through Apple's hoops and signing up for a paid developer account, or doing a lot of sketchy stuff to the guts of the OS. This on a POSIX-compliant system! Cygwin is less of a pain, and it's purely a bolt-on framework.
(ETA: This is probably an exaggeration or an unusual problem; see below.)
It was particularly frustrating in my case because of versioning issues, but those wouldn't have applied to most people. Or to me if I'd been prompt, which I hadn't.
You do not need to pay to get the developer tools. I have never paid for a compiler*, and I develop frequently.
*(other than LabView, which I didn't personally pay for but my labs did, and is definitely not part of XCode)
After some Googling, it seems that version problems may have been more central than I'd recalled. Xcode is free and includes command-line tools, but looking at it brings up vague memories of incompatibility with my OS at the time. The Apple developer website allows direct download of those tools but also requires a paid signup. And apparently trying to invoke gcc or the like from the command line should have brought up an update option, but that definitely didn't happen. Perhaps it wasn't an option in an OS build as old as mine, although it wouldn't have been older than 2009 or 2010. (I eventually just threw up my hands and installed an Ubuntu virt through Parallels.)
So, probably less severe than I'd thought, but the basic problem remains: violating Apple's assumptions is a bit like being a gazelle wending your way back to a familiar watering hole only to get splattered by a Hummer howling down the six-lane highway that's since been built in front of it.
You can get it through the app store, which means you need an account with Apple, but you do not need to pay to get this account. It really is free.
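For what it's worth, a quick way to tell whether the tools ever made it onto the machine is just to probe for a compiler from the shell. This is only a sketch (the exact install prompt and behavior of `xcode-select --install` vary by OS X version, and the paths below are typical rather than guaranteed):

```shell
#!/bin/sh
# Check whether a C compiler is already available on the PATH.
# On OS X, if it isn't, `xcode-select --install` should offer the
# free command-line tools without requiring a paid account.
if command -v clang >/dev/null 2>&1; then
  echo "compiler found: $(command -v clang)"
else
  echo "no compiler found; on OS X, try: xcode-select --install"
fi
```

Either branch just prints a status line, so it's safe to run anywhere; it doesn't install anything by itself.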
I would note that violating any operating system's assumptions makes bad things happen.
I suspect I would be able to bludgeon OSX into submission but I don't see any reasons why I should bother. I don't have to work with Macs and am content not to.
D.C. Dennett, Intuition Pumps and Other Tools for Thinking. Dennett himself is summarising Anatol Rapoport.
I don't see what to do about gaps in arguments. Gaps aren't random. There are little gaps where the original authors have chosen to use their limited word count on other, more delicate, parts of their argument, confident that charitable readers will be happy to fill the small gaps themselves in the obvious ways. There are big gaps where the authors have gone the other way, tiptoeing around the weakest points in their argument. Perhaps they hope no-one else will notice. Perhaps they are in denial. Perhaps there are issues with the clarity of the logical structure that make it easy to whiz by the gap without noticing it.
The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that your opponents don't make. Worse, big gaps are seldom accidental. They are there because they are hard to fill. Indeed it might be the difficulty of filling the gap that made you join the other side of the debate in the first place. What if your best effort to fill the gap is thin and unconvincing?
Example: Some people oppose the repeal of the prohibition of cannabis because "consumption will increase". When you try to make this argument clear you end up distinguishing between good-use and bad-use. There is the relax-on-a-Friday-night-after-work kind of use which is widely accepted in the case of alcohol and can be termed good-use. There is the behaviour that gets called "pissing your talent away" when it is beer-based. That is bad-use.
When you try to bring clarity to the argument you have to replace "consumption will increase" by "bad-use will increase a lot and good-use will increase a little, leading to a net reduction in aggregate welfare." But the original "consumption will increase" was obviously true, while the clearer "bad+++, good+, net--" is less compelling.
The original argument had a gap (just why is an increase in consumption bad?). Writing more clearly exposes the gap. Your target will not say "Thanks for exposing the gap, I wish I'd put it that way." But it is not an easy gap to fill convincingly. Your target is unlikely to appreciate your efforts on behalf of his case.
With regards to your example, you try to fix the gap between "consumption will increase" and "that will be a bad thing as a whole" by claiming little good use and much bad use. But I don't think that's the strongest way to bridge that gap.
Rather, I'd suggest that the good use has negligible positive utility - just another way to relax on a Friday night, when there are already plenty of ways to relax on a Friday night, so how much utility does adding another one really give you? - while bad use has significant negative utility (here I may take the chance to sketch the verbal image of a bright young doctor dropping out of university due to bad use). Then I can claim that even if good-use increases by a few orders of magnitude more than bad-use, the net result is nonetheless negative, because bad use is just that terrible; that the negative effects of a single bad-user outweigh the positive effects of a thousand good-users.
As to your main point - what to do when your best effort to fill the gap is thin and unconvincing - the simplest solution would appear to be to go back to the person proposing the position that you are critically commenting on (or someone else who shares his views on the subject), and simply ask. Or to go and look through his writings, and see whether or not he addresses precisely that point. Or to go to a friend (preferably also an intelligent debater) and ask for his best effort to fill the gap, in the hope that it will be a better one.
So, you go back to the person you're going to argue against, before you start the argument, and ask them about the big gap in their original position? That seems like it could carry the risk of kicking off the argument a little early.
"Pardon me, sir, but I don't quite understand how you went from Step A to Step C. Do you think you could possibly explain it in a little more detail?"
Accompanied, of course, by a very polite "Thank you" if they make the attempt to do so. Unless someone is going to vehemently lash out at any attempt to politely discuss his position, he's likely to either at least make an attempt (whether by providing a new explanation or directing you to the location of a pre-written one), or to plead lack of time (in which case you're no worse off than before).
Most of the time, he'll have some sort of explanation, that he considered inappropriate to include in the original statement (either because it is "obvious", or because the explanation is rather long and distracting and is beyond the scope of the original essay). Mind you, his explanation might be even more thin and unconvincing than the best you could come up with...
I think the idea was, 'when you've gotten to this point, that's when your pre-discussion period is over, and it is time to begin asking questions'.
And yes, it is often a good idea to ask questions before taking a position!
Entirely within the example, not pertaining to rationality per se, and I'm not sure you even hold the position you were arguing about:
1) good use is not restricted to relaxing on a Friday. It also includes effective pain relief with minimal and sometimes helpful side-effects. Medical marijuana may be used as a cover for recreational use, but it is also very real in itself.
2) a young doctor dropping out of university carries disutility comparable to, and perhaps less than, getting sent to prison. You'd have to get a lot of doctors dropping out to make legalization worse than the way things stand now.
My actual position on the medical marijuana issue is best summarised as "I don't know enough to have developed a firm opinion either way". This also means that I don't really know enough to properly debate on the issue, unfortunately.
Though, looking it up, I see there's a bill currently going through parliament in my part of the world that - if it passes - would legalise it for medicinal use.
Have you read “Marijuana: Much More Than You Wanted To Know” on Slate Star Codex?
Steven Pinker
What about: "using the education system to collect forced labor as a 'lesson' in altruism teaches selfishness and fails at altruism"?
This Amazon.com review.
Kris Gunnars, Business Insider
Mostly correct, but only very loosely related to rationality.
Vitamins also are good stuff but they aren't taken out (or when they are they usually are put back in, AFAIK).
Rationality involves having accurate beliefs. If lots of people share a mistaken belief that causes them to take harmful actions then pointing out this mistake is rationality-enhancing.
The way giving someone a fish is fishing skill-enhancing, I'd guess...
Well, not quite. This particular mistake has a general lesson of ‘what you know about what foods are healthy may be wrong’ and an even more general one ‘beware the affect heuristic’, but there probably are more effective ways to teach the latter.
But the quote isn't attempting to teach a general lesson, it's attempting to improve one particular part of peoples' mental maps. If lots of people have an error in their map, and this error causes many of them to make a bad decision, then pointing out this error is rationality-enhancing.
A search brings up http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=101.30 .
This seems to contradict the claim that "Sometimes there isn’t even any actual fruit in there, just chemicals that taste like fruit," since it would have to say "contains less than 1% juice" or not be described as juice at all.
Katara: Do you think we'll really find airbenders?
Sokka: You want me to be like you, or totally honest?
Katara: Are you saying I'm a liar?
Sokka: I'm saying you're an optimist. Same thing, basically.
-Avatar: The Last Airbender
Jane Austen, Sense and Sensibility.
Ambivalent about this one.
I like the idea of rational argument as a sign of intellectual respect, but I don't like things that are so easy to use as fully general debate stoppers, especially when they have a built-in status element.
But note that Elinor doesn't use it as a debate stopper, or to put down or belittle Ferrars. She simply chooses not to engage with his arguments, and agrees with him.
(I haven't read the book)
The way I usually come in contact with something like this is afterwards, when Elinor and her tribe are talking about those irrational greens, and how it's better to not even engage with them. They're just dumb/evil, you know, not like us.
Even without that part, this avoids opportunities for clearing up misunderstandings.
(anecdotally: some time ago a friend was telling me about discussions that are "just not worth having", and gave as an example "that time when we were talking about abortion and you said that X, I knew there was just no point in going any further". Turns out she had misunderstood me completely, and I actually had meant Y, with which she agrees. Glad we could clear that up - more than a year later, completely by accident. Which makes me wonder how many more of those misunderstandings are out there)
I see the point, but on the other hand it leads to "Lie back and think of England" situations...
Somehow I doubt that this argument is meant to be limitless in strength. It's more of a 'don't feed the trolls' guidance.
Steven Pinker, The New Republic 9/4/14
The rest of the article is also well worth the read.
Skeletor is Love
-- Max Tegmark, Our Mathematical Universe, Chapter 8. The Level III Multiverse, "The Joys of Getting Scooped"
Penny Arcade takes on the question of the economic value of a sacred thing. Script:
Gabe: Can you believe Notch is gonna sell Minecraft to MS?
Tycho: Yes! I can!
Gabe: Minecraft is, like, his baby though!
Tycho: I would sell an actual baby for two billion dollars.
Tycho: I would sell my baby to the Devil. Then, I would enter my Golden Sarcophagus and begin the ritual.
Andrew Gelman
I would like this quote more if instead of “has a positive utility for getting” it said “wants to get”.
Scott Adams
What if he wanted to make them stay in love?
Then he would let them work out a custom solution free of societal expectations, I suspect. Besides, an average romantic relationship rarely survives more than a few years, unless both parties put a lot of effort into "making it work", and there is no reason beyond prevailing social mores (and economic benefits, of course) to make it last longer than it otherwise would.
Just to clarify, you figure the optimal relationship pattern (in the absence of societal expectations, economic benefits, and I guess childrearing) is serial monogamy? (Maybe the monogamy is assuming too much as well?)
I recommend reading the whole Scott Adams post from which the quote came. The quote makes little sense standing by itself, it makes more sense within its context.
Certainly serial monogamy works for many people, since this is the current default outside marriage. I would not call it "optimal", it seems more like a decent compromise, and it certainly does not work for everyone. My suspicion is that those happy in a life-long exclusive relationship are a minority, as are polyamorists and such.
I expect domestic partnerships to slowly diverge from the legal and traditional definition of marriage. It does not have to be about just two people, about sex, or about child raising. If 3 single moms decide to live together until their kids grow up, or 5 college students share a house for the duration of their studies, they should be able to draw up a domestic partnership contract which qualifies them for the same assistance, tax breaks and next-of-kin rights married couples get. Of course, this is a long way away still.
The idea that marriage is purely about love is a recent one.
Adams' lifestyle might work for a certain kind of wealthy high IQ rootless cosmopolitan but not for the other 95% of the world.
If this is a criticism, it's wide of the mark.
Note his disclaimer about "the best economic arrangement". And he certainly speaks about the US only.
And it speaks volumes that he views it as an "economic arrangement", like he's channeling Bryan Caplan.
Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to is proof that they are already in love. An added social construct is a perfectly reasonable option to make it harder to change your mind.
The point of the quote is that it tends to make it harder to stay in love. Which is the opposite of what people want when they get married.
True or false, I'm trying but I really can't see how this is a rationality quote. It is simply a pithy and marginally funny statement about one topic.
I think it's time to add one new rule to the list, right at the top:
Can anyone say that in fewer words?
This is how:
The rest of the logic in the link I gave is even more interesting (and "rational").
Making one's point in a memorable way is a rationality technique.
As for your rule, it appears to me so subjective as to be completely useless. Where one person sees "what to believe", another sees "how to think".
Steve Sailer
-AC Grayling
A conversation between me and my 7-year-old cousin:
Her: "do you believe in God?"
Me: "I don't, do you?"
Her: "I used to, but then I never really saw any proof, like miracles or good people getting saved from mean people and stuff. But I do believe in the Tooth Fairy, because every time I put a tooth under my pillow, I get money out in the morning."
From a surprisingly insightful comic commenting on the whole notion of "saving the planet".
In the Great Learning (大學) by Confucius, translated by James Legge
Interestingly, I found this in a piece about cancer treatment. A possibly underused but apt application of Fluid Analogies.
Reminds me of Expecting Short Inferential Distances.
Steve Sailer
Or that the interval between X and Y is spacelike, and neither is in the other's forward light cone... :)
"Dateless history" can be interesting without being accurate or informative. As long as I don't use it to inform my opinions on the modern world either way, it can be just as amusing and useful as a piece of fiction.
Agree with the general point, though I think people complaining about dates in history are referring to the kind of history that is "taught" in schools, in which you have to e.g. memorize that the Boston Massacre happened on March 5, 1770 to get the right answer on the test. You don't need that level of precision to form a working mental model of history.
You do need to know dates at close to that granularity if you're trying to build a detailed model of an event like a war or revolution. Knowing that the attack on Pearl Harbor and the Battle of Hong Kong both happened in 1941 tells you something; knowing that the former happened on 7 December 1941 and the latter started on 8 December tells you quite a bit more.
On the other hand, the details of wars and revolutions are probably the least useful part of history as a discipline. Motivations, schools of thought, technology, and the details of everyday life in a period will all get you further, unless you're specifically studying military strategy, and relatively few of us are.
A particularly stark example may be the exact dates of the bombings of Hiroshima and Nagasaki and of the official surrender. They help deal with theories such as "they had to drop a bomb on Nagasaki because Japan didn't surrender".