- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
-- Eldest, by Christopher Paolini
(This is not a recommendation for the book series. The book has Science Elves, but they are not thought of rationally or worldbuilt to any logical conclusion whatsoever. The context of this quote is apparently a "science is good" professing/cheering without any actual understanding of how science or rationality works.)
(I would love a rational version of Eragon by way of steelmanning the Science Elves. But then you'd probably need to explain why they haven't taken over the world.)
That's not true. Logic doesn't protect you from GIGO (garbage-in-garbage-out). Actually knowing something about the subject one is interacting with is very important.
-- Eragon and Angela, Brisingr, by the same author
Someone who says something like the first sentence generally means something like "questions that are significant and in an area I am concerned with". They don't mean "I don't know exactly how many atoms are in the moon, and I find that painful" (unless they have severe OCD based around the moon), and to interpret it that way is to deliberately misinterpret what the speaker is saying so that you can sound profound.
But then, I've been on the Internet. This sort of thing is an endemic problem on the Internet, except that it's not always clear how much is deliberate misinterpretation and how much is people who just don't comprehend context and implication.
(Notice how I've had to add qualifiers like 'generally' and "except for (unlikely case)" just for preemptive defense against that sort of thing.)
I just liked seeing the usually-untouchable hero called out on his completely empty boast of how tirelessly curious and inquiring he was.
If you don't have any open questions in that category, then you aren't really living as an intellectual.
In science, questions are like a hydra: after solving a scientific problem, you often have more questions than you had when you started.
Schwartz's article on the issue is quite illustrative. If you can't deal with the emotional effects that come with looking at an open question and having it stay open for months and years, you can't do science.
You won't contribute anything to the scientific world of ideas if you can only manage to stay engaged with an open question for an hour rather than for months and years. Of course, there are plenty of people in the real world who don't face questions with curiosity but who are in pain when dealing with them. To me that seems like a dull life to live, because such people don't concern themselves with living an intellectual life.
"You must spend every waking hour in mortal agony, for life is full of unanswerable questions." carries the connotation that someone cannot answer large numbers of every day questions, not that they can't answer a few questions in specialized areas.
But the original statement about unanswered questions being painful, in context, does connote that they are referring to a few questions in specialized areas.
In this case it illustrates how the character in question couldn't really imagine living a life without unanswered questions. Given that it's a Science Elf that fits.
For him daily life is about deep questions.
"Unanswered questions" connotes different things in the two different places, though. In one place it connotes "all unanswered questions of whatever kind" and in another it connotes "important unanswered questions". The "cleverness" of the quote relies on confusing the two.
Important depends on whether you care about something. If you have a scientific mindset, then you care about a lot of questions and want answers to them.
But you don't care about the huge number of questions that would be needed to make that response on target.
I'm not sure that's a critical part of any definition of the word "intellectual".
It's not sufficient for being an intellectual, but if you don't care about questions that can't be solved in a short amount of time, because sitting with them is very uncomfortable for you, you won't have a deep understanding of anything. You might memorise the teacher's password in many domains, but that's not what being an intellectual is about.
A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms, that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such a way he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.
What shall we say of him? Surely this, that he was verily guilty of the death of those men. It is admitted that he did sincerely believe in the soundness of his ship, but the sincerity of his conviction can in nowise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.
An interesting quote. It essentially puts forward the "reasonable person" legal theory. But that's not what's interesting about it.
The shipowner is pronounced "verily guilty" solely on the basis of his thought processes. He had doubts, he extinguished them, and that's what makes him guilty. We don't know whether the ship was actually seaworthy -- only that the shipowner had doubts. If he were an optimistic fellow and never even had these doubts in the first place, would he still be guilty? We don't know what happened to the ship -- only that it disappeared. If the ship met a hurricane that no vessel of that era could survive, would the shipowner still be guilty? And, flipping the scenario, if solely by improbable luck the wreck of the ship did arrive unscathed to its destination, would the shipowner still be guilty?
Part of the scenario is that the ship is in fact not seaworthy, and went down on account of it. Part is that the shipowner knew it was not safe and suppressed his doubts. These are the actus reus and the mens rea that are generally required for there to be a crime. These are legal concepts, but I think they can reasonably be applied to ethics as well. Intentions and consequences both matter.
If the emigrants do not die, he is not guilty of their deaths. He is still morally at fault for sending to sea a ship he knew was unseaworthy. His inaction in reckless disregard for their lives can quite reasonably be judged a crime.
That is just not true. The author of the quote certainly knew how to say "the ship was not seaworthy" and "the ship sank because it was not seaworthy". The author said no such things.
You are mistaken. Suppressing your own doubts is not actus reus -- you need an action in physical reality. And, legally, there is a LOT of difference between an act and an omission, failing to act.
The author said:
and more, which you have already read. This is clear enough to me.
In this case, an inaction.
In general there is, but not when the person has a duty to perform an action, knows it is required, knows the consequences of not doing it, and does not. That is the situation presented.
I realize your questions may be rhetorical, but I'm going to attempt an answer anyways, because it illustrates a point:
The morality of the shipowner's actions does not depend on the realized outcomes: it can only depend on his prior beliefs about the probability of the outcomes, and on the utility function that he uses to evaluate them. If we insist on making morality conditional on the future, causality is broken: it will be impossible for any ethical agent to make use of such an ethics as a decision theory.
The problem here is that the shipowner's "sincerely held beliefs" are not identical to his genuine extrapolated prior. It is not stated in the text, but I think he is able to convince himself of "the soundness of the ship" only by ignoring degrees of belief: if he were a proper Bayesian, he would have realized that having "doubts" and not updating your beliefs is not logically consistent.
In any decision theory that is usable by agents making decisions in real time, the morality of his action is determined either at the time he allowed the ship to sail, or at the time he allowed his prior to get corrupted. I personally believe the latter. This quotation illustrates why I see rationality as a moral obligation, even when it feels like a memetic plague.
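The updating failure described above can be made concrete with a toy Bayesian calculation. All the numbers here are illustrative assumptions, not anything from Clifford's text; the point is only that evidence like "credible doubts were raised" forces the posterior upward, so dismissing the doubts without updating is incoherent:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Made-up illustrative numbers:
prior_unseaworthy = 0.10          # base rate for an old, much-repaired ship
p_doubts_if_unseaworthy = 0.80    # doubts are usually voiced about bad ships
p_doubts_if_seaworthy = 0.20      # but are sometimes voiced about sound ones

posterior = bayes_update(prior_unseaworthy,
                         p_doubts_if_unseaworthy,
                         p_doubts_if_seaworthy)
print(f"P(unseaworthy | doubts raised) = {posterior:.2f}")
```

With these assumptions the probability of unseaworthiness roughly triples after the doubts are raised. The shipowner's "sincere conviction" corresponds to setting the posterior back below the prior, which no consistent update can do.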
I am not sure -- I see your point, but completely ignoring the actual outcome seems iffy to me. There are, of course, many different ways of judging morality and, empirically, a lot of them do care about realized outcomes.
I don't know what a "genuine extrapolated prior" is.
Well, behaving according to the "reasonable person" standard is a legal obligation :-)
That's because we live in a world where people's inner states are not apparent, perhaps not even to themselves. So we revert to (a) what would a reasonable person believe, (b) what actually happened. The latter is unfortunate in that it condemns many who are merely morally unlucky and acquits many who are merely morally lucky, but that's life. The actual bad outcomes serve as "blameable moments". What can I say - it's not great, but better than speculating on other people's psychological states.
In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory of the ethics of belief; as it is I think it correct but not entirely actionable.
That which would be arrived at by a reasonable person (not necessarily a Bayesian calculator, but somebody not actually self-deceptive) updating on the same evidence.
A related issue is sincerity; Clifford says the shipowner is sincere in his beliefs, but I tend to think in such cases there is usually a belief/alief mismatch.
I love this passage from Clifford and I can't believe it wasn't posted here before. By the way, William James mounted a critique of Clifford's views in an address you can read here; I encourage you to do so as James presents some cases that are interesting to think about if you (like me) largely agree with Clifford.
That's not self-evident to me. First, in this particular case as you yourself note, "Clifford says the shipowner is sincere in his belief". Second, in general, what are you going to do about, basically, stupid people who quite sincerely do not anticipate the consequences of their actions?
That would be a posterior, not a prior.
I think Clifford was wrong to say the shipowner was sincere in his belief. In the situation he describes, the belief is insincere - indeed such situations define what I think "insincere belief" ought to mean.
Good question. Ought implies can, so in extreme cases I'd consider that to diminish their culpability. For less extreme cases - heh, I had never thought about it before, but I think the "reasonable man" standard is implicitly IQ-normalized. :)
Sure.
This is called fighting the hypothetical.
While that may be so, the Clifford approach relies on subpoenaed mental states and not on any external standard (including the one called "reasonable person").
I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.
This is not the whole story. In the quote
you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us say, blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.
In your "optimistic fellow" scenario, the shipowner would be as blameworthy, but in that case, the blame would attach to his failure to give serious consideration to the doubts that had been expressed to him.
And going beyond what is in the passage, in my view, he would be equally blameworthy if the ship had survived the voyage! Shitty decision-making is shitty decision-making, regardless of outcome. (This is part of why I avoided the word "guilt" -- too outcome-dependent.)
Pretty much everyone does that almost all the time. So, is everyone blameworthy?
Of course, if everyone is blameworthy then no one is.
I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of that actual state of the world. I'll make a weaker claim -- when I'm engaging conscious effort in trying to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.
That's a pretty good example of the Fallacy of Gray right there.
How do you know?
Especially since falsely holding that belief would be an example.
Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.
The next passage confirms that this is the author's interpretation as well:
And clearly what he is guilty of (or if you prefer, blameworthy) is rationalizing away doubts that he was obligated to act on. Given the evidence available to him, he should have believed the ship might sink, and he should have acted on that belief (either to collect more information which might change it, or to fix the ship). Even if he'd gotten lucky, he would have acted in a way that, had he been updating on evidence reasonably, he would have believed would lead to the deaths of innocents.
The Ethics of Belief is an argument that it is a moral obligation to seek accuracy in beliefs, to be uncertain when the evidence does not justify certainty, to avoid rationalization, and to help other people in the same endeavor. One of his key points is that 'real' beliefs are necessarily entangled with reality. I am actually surprised he isn't quoted here more.
It's not quite clear to me that the judgments being made here are solely about the owner's thought processes, though I agree that facts about behavior and thought processes are intermingled in this narrative in such a way as to make it unclear what conclusions are based on which facts.
Still... the owner had doubts suggested about the ship's seaworthiness, we're told, and this presumably is a fact about events in the world. The generally agreed-upon credibility of the sources of those suggestions is presumably also something that could be investigated without access to the owner's thoughts. Further, we can confirm that the owner didn't overhaul the ship, for example, nor retain the services of trained inspectors to determine the ship's seaworthiness (or, at least, we have no evidence that he did so, in situations where evidence would be expected if he had).
All of those are facts about behavior. Are those behaviors sufficient to hold the owner liable for the death of the sailors? Perhaps not; perhaps without the benefit of narrative omniscience we'd give the owner the benefit of the doubt. But... so what? In this case, we are being given additional data. In this case we know the owner's thought process, through the miracle of narrative.
You seem to be trying to suggest, through implication and leading questions, that using that additional information in making a judgment in this case is dangerous... perhaps because we might then be tempted to make judgments in real-world cases as if we knew the owner's thoughts, which we don't.
And, well, I agree that to make judgments in real-world cases as if we knew someone's thoughts is problematic... though sometimes not doing so is also problematic.
Anyway, to answer your question: given the data provided above I consider the shipowner negligent, regardless of whether the ship arrived safely at its destination, or whether it was destroyed by some force no ship could survive.
Do you disagree?
In the absence of applicable regulations, I think a veil of ignorance of sorts can help here. Would the shipowner make the same decision were he or his family among the emigrants? What if there was some precious irreplaceable cargo on it? What if it was regular cargo but not fully insured? If the decision without the veil is significantly different from the one with it, then one can consider him "verily guilty", without worrying about his thoughts overmuch.
Well, yes, I agree, but I'm not sure how that helps.
We're now replacing facts about his thoughts (which the story provides us) with speculations about what he might have done in various possible worlds (which seem reasonably easy to infer, either from what we're told about his thoughts, or from our experience with human nature, but are hardly directly observable).
How does this improve matters?
I don't think they are pure speculations. This is not the shipowner's first launch, so the speculations over possible worlds can be approximated by observations over past decisions.
(nods) As I say, reasonably easy to infer.
But I guess I'm still in the same place: this narrative is telling us the shipowner's thoughts.
I'm judging the shipowner accordingly.
That being said, if we insist on instead judging a similar case where we lack that knowledge... yeah, I dunno. What conclusion would you arrive at from a Rawlsian analysis and does it differ from a common-sense imputation of motive? I mean, in general, "someone credibly suggested the ship might be unseaworthy and Sam took no steps to investigate that possibility" sounds like negligence to me even in the absence of Rawlsian analysis.
No, I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
Keep in mind that this parable was written specifically to make you come to this conclusion :-)
But yes, I disagree. I consider the data above to be insufficient to come to any conclusions about negligence.
Mental processes inside someone's mind actually happen in physical reality.
Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that that careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.
So, I disagreed with this claim the first time you made it, since the grounds cited combine both facts about the shipowner's thoughts and facts about physical reality (which I listed). You evidently find that objection so uncompelling as to not even be worth addressing, but I don't understand why. If you choose to unpack your reasons, I'd be interested.
But, again: even if it's true, so what? If we have access to the mental processes inside someone's mind, as we do in this example, why shouldn't we use that data in determining guilt?
I read the story as asserting three facts about the physical reality: the ship was old, the ship was not overhauled, the ship sank in the middle of the ocean. I don't think these facts lead to the conclusion of negligence.
But we don't. We're talking about the world in which we live. I would presume that the morality in the world of telepaths would be quite different. Don't do this.
When judging this story, we do.
We know what was going on in this shipowner's mind, because the story tells us.
I'm not generalizing. I'm making a claim about my judgment of this specific case, based on the facts we're given about it, which include facts about the shipowner's thoughts.
What's wrong with that?
As I said initially... I can see arguing that if we allow ourselves to judge this (fictional) situation based on the facts presented, we might then be tempted to judge other (importantly different) situations as if we knew analogous facts, when we don't. And I agree that doing so would be silly.
But to ignore the data we're given in this case because in a similar real-world situation we wouldn't have that data seems equally silly.
Gordon Freeman, Freeman's Mind
-- CornChowdah, on reddit
Yay for personal finance, boo for ethics, which is liable to become a mere bully pulpit for teachers' own views.
Thinking back to my own religious high school education, I realize that the ethics component (though never called out as such, it was woven into the curriculum at every level) was indeed important: not so much because of the specific rules they taught and didn't teach, as simply in teaching me that ethics and morals were something to think about and discuss.
Then again, this was a Jesuit school; and Jesuit education has a reputation for being somewhat more Socratic and questioning than the typical deontological viewpoint of many schools.
But in any case, yay for personal finance.
It might be possible (and useful) to design an ethics curriculum that helps students to think more clearly about their own views, though, without giving their teachers much of an excuse to preach.
One of the key concepts in Common Law is that of the reasonable man. Re-reading A.P. Herbert, it struck me how his famously insulting description of the reasonable man bears a deep resemblance to that of the ideal rationalist:
A.P. Herbert, [Uncommon Law](http://en.wikipedia.org/wiki/Uncommon_Law). Emphasis mine.
I imagine that something of a similar sentiment animates much of popular hostility to LessWrong-style rationalism.
I'm not convinced. I know a few folks who know about LW and actively dislike it; when I try to find out what it is they dislike about it, I've heard things like —
I wonder how these people who dislike LW feel about geeks/nerds in general.
Most of them are geeks/nerds in general, or at least have seen themselves as such at some point in their lives.
That makes me more curious; I have the feeling there's quite a bit of anti-geek/nerd sentiment among geeks/nerds, not just non-nerds.
(Not sure how to write the above sentence in a way that doesn't sound like an implicit demand for more information! I recognize you might be unable or unwilling to elaborate on this.)
Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don't really deserve the insights that LW provides.
There, that's 8 out of 10 bullet points. I couldn't get the "manipulation" one in because "something sinister" is underspecified; as to the "censorship" one, well, I didn't want to mention the... thing... (ooh, meta! Gonna give myself partial credit for that one.)
Ab, V qba'g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg'f whfg n wbxr.
That was pretty subtle, actually. You had my blood boiling at the end of the first paragraph and I was about to downvote. Luckily I decided to read the rest.
Your theory may have some value. But let's note that I don't know what it means to cross an instrument 'a/c Payee only', and I'll wager most other people don't know. Do you think most UK citizens did in 1935?
The use of the word "instrument" makes the phrase more obscure than it needs to be, but it refers to the word "cheque" earlier in the sentence. I suspect most modern British people probably don't know what it means, but most will have noticed that all the cheques in a chequebook have "A/C Payee only" written vertically across the middle - or at least those old enough to have used cheques will! But people in 1935 would have most likely known what it meant, because 1) in those days cheques were extremely widespread (no credit or debit cards) and 2) unlike today, cheques were frequently written by hand on a standard piece of paper (although chequebooks did exist). The very fact that the phrase was used by a popular author writing for a mass audience (the cases were originally published in Punch and The Evening Standard) should incline you in that direction anyway.
Note incidentally that Herbert's most famous case is most likely The Negotiable Cow.
Just fyi, my checks don't say anything like that, and the closest I can find on Google Images just says, "Account Payee."
I don't know for sure, but judging from context I'd say it's probably instructions as to the disposition of a check -- like endorsing one and writing "For deposit only" on the back before depositing it into the bank, as a guarantee against fraud.
Granted, in these days of automatic scanning and electronic funds transfer that's starting to look a little cobwebby itself.
J.S. Mill
-- Boaz Keysar and Albert Costa, Our Moral Tongue, New York Times, June 20, 2014
I disagree with Jiro and Salemicus. Learning about how human brains work is entirely relevant to rationality.
Someone who characterized the results the way they characterize them in this quote has learned some facts, but failed on the analysis.
It's like a quote which says "(correct mathematical result) proves that God has a direct hand in the creation of the world". That wouldn't be a rationality quote just because they really did learn a correct mathematical result.
There are a lot of senses of 'quote' which I agree this does not fit well, but in the 'excerpt from an interesting article' sense I think it is, well, interesting.
I agree with Jiro, this appears to be an anti-rationality quote. The most straightforward interpretation of the data is that people didn't understand the question as well when posed in a foreign language.
Chalk this one up not to emotion, but to deontology.
It's also possible that asking in a different language causes subjects to think of the people in the dilemma as "not members of their tribe".
It's possible that they understood the question, but hearing it in a foreign language meant cognitive strain, which meant they were already working in System 2. That's my read, anyway.
Given to totally fluent second-language speakers, I bet the effect vanishes.
This quote implies a connection from "people react less strongly to emotional expressions in a foreign language" to "dilemmas in a foreign language don't touch the very core of our moral being". Furthermore, it connects or equates being more willing to sacrifice one person for five and "touch[ing] the core of our moral being" less. All rational people should object to the first implication, and most should object to the second one. This is a profoundly anti-rational quote, not a rationality quote.
What do we suppose is meant by 'the very core of our moral being'? If people react differently depending on language, isn't that evidence that there is a connection? Or at least that the moral core is doing something different?
I think you're reading a lot into that one sentence. I assumed that just to mean "there should not be inconsistencies due to irrelevant aspects like the language of delivery". Followed by a sound explanation for the unexpected inconsistency in terms of system 1 / system 2 thinking.
(The final paragraph of the article begins with "Our research does not show which choice is the right one.")
Leftover Soup
Duplicate (May 2013).
-- C. S. Lewis, A Grief Observed
Thomas Babington Macaulay, History of England
Frankly, the whole passage Steve Sailer quotes at the link is worth reading.
For those (I have some reason to think there are some) who would rather avoid giving Steve Sailer attention or clicks, or who would like more context than he provides, you can find the relevant chapter at Project Gutenberg along with the rest of volume 3 of Macaulay's History. (The other volumes are Gutenbergificated too, of course.) Macaulay's chapters are of substantial length; if you want just that section, search for "none of these sights" after following the link.
Nassim Taleb
I don't really get this. It seems like both types of prediction matter quite a bit.
The only way I can interpret it that makes sense to me is something like:
Is he giving advice about making correct predictions given that you just randomly feel like predicting stuff? Or is he giving advice about how to predict things you actually care about?
The latter. Specifically predicting high impact events.
People predicted the housing bubble collapse using Taleb's reasoning.
Did someone other than Eugene Nier downvote this? If so, how is the parent not a concrete example of "how to predict things you actually care about"?
It didn't really bear on whether one should try to predict events of uncertain occurrence.
The parent is a concrete example of selection (or survivor) bias. Picking post factum one case which turned out to be right (and ignoring unknown but possibly large number of cases which turned out to be wrong and faded into the dark pit of obscurity) does not help you predict anything.
Consider a forecast: the stock market will crash. No idea when, but at some point it will. Is it a safe prediction to make? Yes, it is. Is it a useful prediction? No, it is not.
Taleb's advice is good for burnishing one's reputation as a psychic. It's not so good for making actionable forecasts.
To recall a well-known remark by Paul Samuelson,
ETA: So the guy sold his Washington DC condo in 2004? That looks to have been a pretty poor decision.
Did you just link to the change in the housing market over the past year? Washington Post:
My link:
Let's subtract the $1000 he paid for the best argument against the existence of a housing bubble. On the face of it, you appear to be arguing with a man who made $39,000 cash by betting on the obvious - though the actual number may of course be less.
Your general argument seems to deny the usefulness of hedging.
There is a multi-year graph of real estate prices on that web page, if you click the "Max" button you will get a plot of prices from August 2004 till today.
No, my general argument denies the usefulness of forecasts which don't provide time estimates other than "at some point in the future".
Let me offer you three more examples of such forecasts:
The website does work when I enable cookies, and it says he sold his apartment for much more than the median price. I think it also supports the claim that after buying a house, he had a profit left of roughly 10 percent of that house's value (the amount of equity he supposedly said he wouldn't mind losing post-purchase).
Your general argument seems to misrepresent Taleb. Again, we have here a case of someone doing pretty well by focusing on the predictions you can make. (His profit was likely sub-optimal, but that sounds like an example of a prediction you can't make.) And hedging can indeed protect you against the events you keep weirdly suggesting are useless to think about.
If I may point you to the first paragraph of this post..?
I don't believe I said anything at all about what's useful or useless to think about.
Of course it's a useful bloody prediction! It means you shouldn't put yourself in a position where any stock market crash will kill you or drastically lower your standard of living.
LOL. In that case I have a lot of useful predictions to make:
...I can easily continue...
P.S. Maybe you should mention your advice to all the financial gurus on LW who insist that the only place to put your money into is an equities index fund X-D
"... Is it wrong to hold on to that kind of hope?"
[having poisoned her] "I have not come for what you hoped to do. I've come for what you did."
Given that you've said in another thread that you consider "blame" an incoherent concept, I don't understand what you think this quote means.
That people will judge your morality by your actions without regard to your intentions. I don't claim that V is particularly rational, but he embodies (exaggerated versions of) traits that real people have. Our moral decisions have consequences in how we are treated.
This is what most people mean by "blame".
Blame is not the action of treating someone differently because of their moral choices, it's the rationale for doing so. I think the rationale is incoherent, but the actions still exist.
Possibly in the eyes of the future, if there is one, we'll all look like brain-damaged children who aren't morally to blame for much of anything. Our actions still have consequences (for example, they might determine whether humanity has a future).
Steve Sailer
This tells me that the order of events is important, and not the actual dates themselves. It is true that, if I want to claim that X caused Y, I need to know that X happened before Y; but it does not make any difference whether they both happened in 1752 or 1923.
Great. I have approximately 6000 years worth of events here, happening across multiple continents, with overlapping events on every scale imaginable from "in this one village" to "world war." If you can keep the relationships between all those things in your memory consistently using no index value, go for it. If not, I might recommend something like a numerical system that puts those 6000 years in order.
I would not recommend putting "0" at a relatively arbitrary point several thousand years after the events in question have started.
I do agree that an index value is a very useful and intuitive-to-humans way to represent the order of events, especially given the sheer number of events that have taken place through history. However, I do think it's important to note that the index value is only present as a representation of the order of events (and of the distance between them, which, as other commentators have indicated, is also important) and has no intrinsic value in and of itself beyond that.
It's not just the order but the distance that matters. If you want to say that X caused Y, but X happened a thousand years before Y, chances are that you're at the very least ignoring a lot of additional causes.
In the end, I think, dates are important. It's only the arbitrary positioning of a starting date (e.g. Christian vs. Jewish vs. Chinese calendar) that genuinely doesn't matter; but even that much is useful for us to talk about historical events. I.e. it doesn't really matter where we put year 0, but it matters that we agree to put it somewhere. (Ideally we would have put it somewhat further back in time, maybe nearer the beginning of recorded history, so we didn't have to routinely do BCE/CE conversions in our heads, but that ship has sailed.)
The time between them also matters. If X happened a year before Y it is more plausible that X caused Y then if X happened a century before Y.
Dates are a very convenient way of specifying the temporal order of many different events.
Agree with the general point, though I think people complaining about dates in history are referring to the kind of history that is "taught" in schools, in which you have to e.g. memorize that the Boston Massacre happened on March 5, 1770 to get the right answer on the test. You don't need that level of precision to form a working mental model of history.
You do need to know dates at close to that granularity if you're trying to build a detailed model of an event like a war or revolution. Knowing that the attack on Pearl Harbor and the Battle of Hong Kong both happened in 1941 tells you something; knowing that the former happened on 7 December 1941 and the latter started on 8 December tells you quite a bit more.
On the other hand, the details of wars and revolutions are probably the least useful part of history as a discipline. Motivations, schools of thought, technology, and the details of everyday life in a period will all get you further, unless you're specifically studying military strategy, and relatively few of us are.
A particularly stark example may be the exact dates of bombing of Hiroshima, Nagasaki, and official surrender. Helps deal with theories such as "they had to drop a bomb on Nagasaki because Japan didn't surrender".
Be careful. That sounds reasonable until you also learn that the Japanese war leadership didn't even debate Hiroshima or Nagasaki for more than a brief status update after they happened, yet talk of surrender, and the actual declaration, immediately followed the Soviet declaration of war and the landing of troops in Manchuria and on Sakhalin. Japan, it seems, wanted to avoid the German post-war fate of a divided people.
The general problem with causation in history is that you often don't know what you don't know. (It's a tangential point, I know.)
I'm not necessarily saying this is wrong, but I don't think it can be shown to be significantly more accurate than the "bomb ended the war" theory by looking at dates alone. The Soviet declaration of war happened on 8 August, two days after Hiroshima. Their invasion of Manchuria started on 9 August, hours before the Nagasaki bomb was dropped, and most sources say that the upper echelons of the Japanese government decided to surrender within a day of those events. However, their surrender wasn't broadcast until 15 August, and by then the Soviets had opened several more fronts. (That is, that's when Emperor Hirohito publicized his acceptance of the Allies' surrender terms. It wasn't formalized until 2 September, after Allied occupation had begun.)
Dates aside, though, it's fascinating to read about the exact role the Soviets played in the end of the Pacific War. Stalin seems to have gotten away with some spectacularly Machiavellian moves.
That was my point. It can be shown to be significantly more accurate, but not by looking at the dates alone.
"Dateless history" can be interesting without being accurate or informative. As long as I don't use it to inform my opinions on the modern world either way, it can be just as amusing and useful as a piece of fiction.
Or that the interval between X and Y is spacelike, and neither is in the other's forward light cone... :)
Some day the light speed delay might become an issue in historical investigations, but not quite yet :) Even then in the statement "if you claim that X caused Y, the minimum you need to know is that X came before Y, not afterwards" the term "before" implies that one event is in the causal future of the other.
Reminds me of Expecting Short Inferential Distances.
In the Great Learning (大學) by Confucius, translated by James Legge
Interestingly, I found this in a piece about cancer treatment. A possibly underused but apt application of Fluid Analogies.
From a surprisingly insightful comic commenting on the whole notion of "saving the planet".
This framing is marginally saner, but the weird panicky eschatology of pop-environmentalism is still present. Apparently the author thinks that using up too many resources, or perhaps global warming, currently represent human extinction level threats?
A conversation between me and my 7-year-old cousin:
Her: "do you believe in God?"
Me: "I don't, do you?"
Her: "I used to, but then I never really saw any proof, like miracles or good people getting saved from mean people and stuff. But I do believe in the Tooth Fairy, because every time I put a tooth under my pillow, I get money out in the morning."
Interesting that she seems to mentally classify God and the tooth fairy in the same category.
Well, she's only 7.
I'm not sure what you mean. I personally have a mental category of "mythical beings that don't exist but some people believe exist", which includes God, the tooth fairy, Santa, unicorns, etc. This girl appears to have the same mental category, even though she believes in God but doesn't believe in the tooth fairy.
Definitely getting her HPMOR for her 10th birthday :)
-AC Grayling
Steve Sailer
Alternatively:
Paul Graham
Paul Graham's quote is about a way to fight the trend Sailer describes; unfortunately, that trend frequently ends up winning.
Scott Adams
True or false, I'm trying but I really can't see how this is a rationality quote. It is simply a pithy and marginally funny statement about one topic.
I think it's time to add one new rule to the list, right at the top:
Can anyone say that in fewer words?
This is how:
The rest of the logic in the link I gave is even more interesting (and "rational").
Making one's point in a memorable way is a rationality technique.
As for your rule, it appears to me so subjective as to be completely useless. Where one person sees "what to believe", another sees "how to think".
Assume, for the sake of argument, that the statement is correct.
This quote does not expose a fallacy, that is, an error in reasoning. There is nothing in this quote to indicate the rationality shortcoming that causes people to believe the incorrect statement. Rather, this exposes an error of fact. The rationality question is why people come to believe errors of fact and how we can avoid that.
You may be reading the sunk cost fallacy into this quote, or it may be in an unquoted part of the original article, but I don't see it here. If the rest of the article better elucidates rationality techniques that led Adams to come to this conclusion, then likely the wrong extract from the article was selected to quote.
Making one's point in a memorable (including humorous) way may be an instrumental rationality technique. That is, it helps to convince other people of your beliefs. However in my experience it is a very bad epistemic rationality technique. In particular it tends to overweight the opinions of people like Adams who are very talented at being funny, while underweighting the opinions of genuine experts in a field, who are somewhat dry and not nearly as amusing.
Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to is proof that they are already in love. An added social construct is a perfectly reasonable option to make it harder to change your mind.
The point of the quote is that it tends to make it harder to stay in love. Which is the opposite of what people want when they get married.
The idea that marriage is purely about love is a recent one.
Adams' lifestyle might work for a certain kind of wealthy high IQ rootless cosmopolitan but not for the other 95% of the world.
If this is a criticism, it's wide of the mark.
Note his disclaimer about "the best economic arrangement". And he certainly speaks about the US only.
And it speaks volumes that he views it as an "economic arrangement", like he's channeling Bryan Caplan.
I don't understand.
It looks to me as if Adams's whole point is that marriage isn't supposed to be primarily an economic arrangement, it's supposed to be an institution that provides couples with a stable context for loving one another, raising children, etc., but in fact (so he says) the only way in which it works well is economically, and in any other respect it's a failure.
It's as if I wrote "Smith's new book makes a very good doorstop, but in all other respects I have to say it seems to me an abject failure". Would you say it speaks volumes that I view Smith's book as a doorstop? Surely my criticism only makes sense because I think a book is meant to be other things besides a doorstop.
What if he wanted to make them stay in love?
Then he would let them work out a custom solution free of societal expectations, I suspect. Besides, an average romantic relationship rarely survives more than a few years, unless both parties put a lot of effort into "making it work", and there is no reason beyond prevailing social mores (and economic benefits, of course) to make it last longer than it otherwise would.
Just to clarify, you figure the optimal relationship pattern (in the absence of societal expectations, economic benefits, and I guess childrearing) is serial monogamy? (Maybe the monogamy is assuming too much as well?)
Certainly serial monogamy works for many people, since this is the current default outside marriage. I would not call it "optimal", it seems more like a decent compromise, and it certainly does not work for everyone. My suspicion is that those happy in a life-long exclusive relationship are a minority, as are polyamorists and such.
I expect domestic partnerships to slowly diverge from the legal and traditional definition of marriage. It does not have to be about just two people, about sex, or about child raising. If 3 single moms decide to live together until their kids grow up, or 5 college students share a house for the duration of their studies, they should be able to draw up a domestic partnership contract which qualifies them for the same assistance, tax breaks and next-of-kin rights married couples get. Of course, this is a long way away still.
To my mind, the giving of tax breaks etc. to married folks occurs because (rightly or wrongly) politicians have wanted to encourage marriage.
I agree that in principle there is nothing wrong with 3 single moms or 5 college students forming some sort of domestic partnership contract, but why give them the tax breaks? Do college kids living with each other instead of separately create some sort of social benefit that "we" the people might want to encourage? Why not just treat this like any other contract?
Apart from this, I think the social aspect of marriage is being neglected. Marriage for most people is not primarily about joint tax filing, but rather about publicly making a commitment to each other, and to their community, to follow certain norms in their relationship (e.g., monogamy; the specific norms vary by community). This is necessary because the community "thinks" pair bonding and childrearing are important/sacred/weighty things. In other words, "married" is a sort of honorific.
Needless to say, society does not think 5 college students sharing a house is an important/sacred/weighty thing that needs to be honoured.
This thick layer of social expectations is totally absent for the kind of arm's-length domestic partnership contract you propose, which makes me wonder why anybody would either want to call it marriage or frame it as being an alternative to marriage.
It reduces the demand for real estate, which lowers its price. Of course this is a pecuniary externality so the benefit to tenants is exactly counterbalanced by the harm to landlords, but given that landlords are usually much wealthier than tenants...
Yes and the social benefit is already captured by the roommates in the form of paying less rent.
I don't think anyone suggested that?
Some marriages are of convenience, and the honorific sense doesn't apply as well to people who don't fit the romantic ideal of marriage.
I could make exactly the same argument about divorce-able marriage and wonder why would anyone call this get-out-whenever-you-want-to arrangement "marriage" :-D
The point is, the "thick layer of social expectations" is not immutable.
Agreed, no fault divorce laws were a huge mistake.
From which point of view?
If traditional marriage is a sparrow, then marriage with no-fault divorce is a penguin, and 5 college kids sharing a house is a centipede. Type specimen, non-type specimen, wrong category.
Social expectations are mutable, yes - what of it? Do you think it's desirable or inevitable that marriage just become a fancy historical legal term for income splitting on one's tax return? Do you think sharing a house in college is going to be, or ought to be, hallowed and encouraged?
I recommend reading the whole Scott Adams post from which the quote came. The quote makes little sense standing by itself, it makes more sense within its context.
Andrew Gelman
I would like this quote more if instead of “has a positive utility for getting” it said “wants to get”.
The context is specifically a description of the theory of utility and how it is inconsistent with the preferences people actually exhibit.
Penny Arcade takes on the question of the economic value of a sacred thing. Script:
Gabe: Can you believe Notch is gonna sell Minecraft to MS?
Tycho: Yes! I can!
Gabe: Minecraft is, like, his baby though!
Tycho: I would sell an actual baby for two billion dollars.
Tycho: I would sell my baby to the Devil. Then, I would enter my Golden Sarcophagus and begin the ritual.
-- Max Tegmark, Our Mathematical Universe, Chapter 8. The Level III Multiverse, "The Joys of Getting Scooped"
Skeletor is Love
Steven Pinker, The New Republic 9/4/14
The rest of the article is also well worth the read.
Jane Austen, Sense and Sensibility.
I see the point, but on the other hand it leads to "Lie back and think of England" situations...
Somehow I doubt that this argument is meant to be limitless in strength. It's more of a 'don't feed the trolls' guidance.
Exactly.
Ferrars is arguing - at great length! - that there is just as much space in a small cottage as in a much larger house. He is plainly ridiculous. Elinor sees that there is no point trying to correct him or engage someone so foolish in reasonable conversation, but she is far too well-bred to mock or insult him. So she does the correct thing in this situation, and agrees with his nonsense until it blows over.
She's certainly not going to take his advice, and knock down a stately home to build a cottage.
Ambivalent about this one.
I like the idea of rational argument as a sign of intellectual respect, but I don't like things that are so easy to use as fully general debate stoppers, especially when they have a built-in status element.
But note that Elinor doesn't use it as a debate stopper, or to put down or belittle Ferrars. She simply chooses not to engage with his arguments, and agrees with him.
(I haven't read the book)
The way I usually come in contact with something like this is afterwards, when Elinor and her tribe are talking about those irrational greens, and how it's better to not even engage with them. They're just dumb/evil, you know, not like us.
Even without that part, this avoids opportunities for clearing up misunderstandings.
(anecdotally: some time ago a friend was telling me about discussions that are "just not worth having", and gave as an example "that time when we were talking about abortion and you said that X, I knew there was just no point in going any further". Turns out she had misunderstood me completely, and I actually had meant Y, with which she agrees. Glad we could clear that up - more than a year later, completely by accident. Which makes me wonder how many more of those misunderstandings are out there)
Katara: Do you think we'll really find airbenders?
Sokka: You want me to be like you, or totally honest?
Katara: Are you saying I'm a liar?
Sokka: I'm saying you're an optimist. Same thing, basically.
-Avatar: The Last Airbender
Kris Gunnars, Business Insider
A search brings up http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=101.30 .
This seems to contradict the claim that "Sometimes there isn’t even any actual fruit in there, just chemicals that taste like fruit," since it would have to say "contains less than 1% juice" or not be described as juice at all.
Mostly correct, but only very loosely related to rationality.
Vitamins also are good stuff but they aren't taken out (or when they are they usually are put back in, AFAIK).
Rationality involves having accurate beliefs. If lots of people share a mistaken belief that causes them to take harmful actions then pointing out this mistake is rationality-enhancing.
The way giving someone a fish is fishing skill-enhancing, I'd guess...
Well, not quite. This particular mistake has a general lesson of ‘what you know about what foods are healthy may be wrong’ and an even more general one ‘beware the affect heuristic’, but there probably are more effective ways to teach the latter.
But the quote isn't attempting to teach a general lesson, it's attempting to improve one particular part of peoples' mental maps. If lots of people have an error in their map, and this error causes many of them to make a bad decision, then pointing out this error is rationality-enhancing.
No, that makes it a useful factoid. I don't consider my personal rationality enhanced whenever I learn a new fact, even if it is useful, unless it will reliably improve my ability to distinguish true beliefs from false ones in the future.
This Amazon.com review.
Steven Pinker
What about: "using the education system to collect forced labor as a 'lesson' in altruism teaches selfishness and fails at altruism"?
I have to ask, do people ever really believe that these sorts of thing are actually about helping people? I seem to recall my own ragpicking was pitched mainly in terms of how it would help my CV to have done some volunteering. That said, I can't tell if I'm just falling to hindsight bias and reinterpreting past events in favour of my current understanding of altruism, which is why I'm asking.
Makes me wonder how things would look if schools had a lesson on effective altruism a few times a year. Surely not everyone would agree, but the waterline might raise a little.
D.C. Dennett, Intuition Pumps and Other Tools for Thinking. Dennett himself is summarising Anatol Rapoport.
I don't see what to do about gaps in arguments. Gaps aren't random. There are little gaps where the original authors have chosen to use their limited word count on other, more delicate, parts of their argument, confident that charitable readers will be happy to fill the small gaps themselves in the obvious ways. There are big gaps where the authors have gone the other way, tip toeing around the weakest points in their argument. Perhaps they hope no-one else will notice. Perhaps they are in denial. Perhaps there are issues with the clarity of the logical structure that make it easy to whiz by the gap without noticing it.
The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that your opponents don't make. Worse, big gaps are seldom accidental. They are there because they are hard to fill. Indeed it might be the difficulty of filling the gap that made you join the other side of the debate in the first place. What if your best effort to fill the gap is thin and unconvincing?
Example: Some people oppose the repeal of the prohibition of cannabis because "consumption will increase". When you try to make this argument clear you end up distinguishing between good-use and bad-use. There is the relax-on-a-Friday-night-after-work kind of use which is widely accepted in the case of alcohol and can be termed good-use. There is the behaviour that gets called "pissing your talent away" when it is beer-based. That is bad-use.
When you try to bring clarity to the argument you have to replace "consumption will increase" by "bad-use will increase a lot and good-use will increase a little, leading to a net reduction in aggregate welfare." But the original "consumption will increase" was obviously true, while the clearer "bad+++, good+, net--" is less compelling.
The original argument had a gap (just why is an increase in consumption bad?). Writing more clearly exposes the gap. Your target will not say "Thanks for exposing the gap, I wish I'd put it that way." But it is not an easy gap to fill convincingly. Your target is unlikely to appreciate your efforts on behalf of his case.
Quote: "The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that your opponents don't make."
Just no. An argument is an argument. It is complete or not. If there is a gap in the argument, in most cases there are two eventualities: (a) the leap is a true one assuming what others would find obvious, or (b) either an honest error in the argument or an attempt to cover up a flaw in the argument.
If there is a way to "fill in" the argument that is the only way it could be filled in, you are justified in doing so, while pointing out that you are doing so. If either of the (b) cases hold, however, you must still point them out, in order to maintain your own credibility. Especially if you are refuting an argument, the gap should be addressed and not glossed over.
You might treat the (b) situations differently, perhaps politely pointing out that the original author made an error there, or perhaps not-so-politely pointing out that something is amiss. But you still address the issue. If you do not, the onus is now on you, because you have then "adopted" that incomplete or erroneous argument.
For example: your own example argument has a rather huge and glaring hole in it: "bad-use will increase a lot and good-use will increase a little". However, history and modern examples both show this to be false: in the real world, decriminalization has increased bad-use only slightly if at all, and good-use more. (See the paper "The Portugal Experiment" for one good example.)
Was there any problem there with my treatment of this rather gaping "gap" in your argument?
With regards to your example, you try to fix the gap between "consumption will increase" and "that will be a bad thing as a whole" by claiming little good use and much bad use. But I don't think that's the strongest way to bridge that gap.
Rather, I'd suggest that the good use has negligible positive utility - just another way to relax on a Friday night, when there are already plenty of ways to relax on a Friday night, so how much utility does adding another one really give you? - while bad use has significant negative utility (here I may take the chance to sketch the verbal image of a bright young doctor dropping out of university due to bad use). Then I can claim that even if good-use increases by a few orders of magnitude more than bad-use, the net result is nonetheless negative, because bad use is just that terrible; that the negative effects of a single bad-user outweigh the positive effects of a thousand good-users.
As to your main point - what to do when your best effort to fill the gap is thin and unconvincing - the simplest solution would appear to be to go back to the person proposing the position that you are critically commenting about (or someone else who shares his views on the subject), and simply ask. Or to go and look through his writings, and see whether or not he addresses precisely that point. Or to go to a friend (preferably also an intelligent debater) and ask for his best effort to fill the gap, in the hope that it will be a better effort.
Entirely within the example, not pertaining to rationality per se, and I'm not sure you even hold the position you were arguing about:
1) good use is not restricted to relaxing on a Friday. It also includes effective pain relief with minimal and sometimes helpful side-effects. Medical marijuana use may be used as a cover for recreational use but it is also very real in itself.
2) a young doctor dropping out of university is a disutility comparable to, and perhaps lesser than, getting sent to prison. You'd have to get a lot of doctors dropping out to make legalization worse than the way things stand now.
My actual position on the medical marijuana issue is best summarised as "I don't know enough to have developed a firm opinion either way". This also means that I don't really know enough to properly debate on the issue, unfortunately.
Though, looking it up, I see there's a bill currently going through parliament in my part of the world that - if it passes - would legalise it for medicinal use.
Have you read “Marijuana: Much More Than You Wanted To Know” on Slate Star Codex?
No, I have not.
So, you go back to the person you're going to argue against, before you start the argument, and ask them about the big gap in their original position? That seems like it could carry the risk of kicking off the argument a little early.
I think the idea was, 'when you've gotten to this point, that's when your pre-discussion period is over, and it is time to begin asking questions'.
And yes, it is often a good idea to ask questions before taking a position!
"Pardon me, sir, but I don't quite understand how you went from Step A to Step C. Do you think you could possibly explain it in a little more detail?"
Accompanied, of course, by a very polite "Thank you" if they make the attempt to do so. Unless someone is going to vehemently lash out at any attempt to politely discuss his position, he's likely to either at least make an attempt (whether by providing a new explanation or directing you to the location of a pre-written one), or to plead lack of time (in which case you're no worse off than before).
Most of the time, he'll have some sort of explanation, that he considered inappropriate to include in the original statement (either because it is "obvious", or because the explanation is rather long and distracting and is beyond the scope of the original essay). Mind you, his explanation might be even more thin and unconvincing than the best you could come up with...
-- Cryptonomicon by Neal Stephenson