Rationality Quotes October 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
Comments (298)
--Teller (source)
My experience has been that when people try to understand what went into a magic trick, they usually come up with explanations more complex than the true mechanism. Oftentimes a trick can be done either through an obvious but laborious method, or through an easy method, and people don't realize that the latter exists. (For instance, people posit elaborate mirror setups, or "moving the hand quicker than the eye", or armies of confederates, when in fact simple misdirection, forcing, palming, etc. suffice.)
--Michael Lewis' profile of Barack Obama
Possibly also explaining this trend in the world of academia.
I'm assuming many are already aware of this, but he's talking about decision fatigue here.
Hastie & Dawes, Rational Choice in an Uncertain World, pp. 67-8.
Related:
Dawes, in JUU:HB p. 392.
-Seth Godin
Spoken like a true cat.
I'm going to adopt a different social strategy and not be the obnoxiously nosy guy with no boundaries. Some things I'm curious about really aren't my business, and actively seeking to uncover information that people try to keep secret is usually a personal (and often legal) violation. The terms 'industrial espionage' and 'stalking' both spring to mind.
Curiosity didn't kill the cat. The redneck with the gun killed it for trespassing.
Jem and Tessa, Clockwork Angel by Cassandra Clare
-- Paul Graham
Thanks. That article (link) is very relevant to me after a discussion I just had on LW. Good advice, too, as far as I can tell.
Karl Popper, The Open Society and its Enemies
--Bertrand Russell (Google Books attributes this to In Praise of Idleness and Other Essays, p. 133)
Upvoted for entertainment value, but could someone enlighten me on the rationality value?
Belief in belief in the wild?
--Eminem, "The Real Slim Shady"
Eminem seeks his comparative advantage and avoids self-handicapping.
I wonder how many other Rationality Quotes we can find in rap lyrics...
Terry Pratchett, Wintersmith
Greg Egan, Diaspora
--Eric Hoffer, on Near/Far
Invertible fact alert!
It's a lot easier to hate Creationists than to hate my landlady.
Mad libs:
It is a lot easier to <strong emotion> <vaguely defined group> than to <same strong emotion> <actual acquaintance>.
And sometimes it's true with s/easier/harder/. ("feel compassion for".) Hence invertibility.
Well, yes, but the invertibility is conditional.
Compassion is easier with a concrete person for a target. As is... I don't know, there are probably others (respect? romantic love? loyalty?).
Hate is easier with a diffuse target. As is, say, idolizing love, disgust, contempt, superiority, etc.
The invertibility isn't in that you can flip "harder" to "easier" and then have it make just as much sense. You have to change the emotion too, which signifies that there is a categorization of emotions: useful!
If you insist that this is invertible wisdom, then I must say you are misapplying the heuristic.
Depends. A klansman may find it easy to hate "niggers" but much harder to hate his black neighbour. A literary critic who values her tolerance may find it difficult to hate an abstract group but can passionately hate her mother-in-law. I am not sure whether the difference stems from there being two different types of hate, or only from different causes of the same sort of hate.
It is easier to <far-mode emotion> <vaguely defined group> than to <same far-mode emotion> <specific person>.
It is harder to <near-mode emotion> <vaguely defined group> than to <same near-mode emotion> <specific person>.
I don't think hate is necessarily easier with a diffuse target. People hold personal grudges well. There's also the fact that there are sometimes legitimate reasons to hate specific people, but there are basically never legitimate reasons to hate entire groups of people.
Can you summarize your understanding of legitimate reasons for hate?
I'm not asking for examples, but rather for the principles that those examples would exemplify.
It is easier to control how you relate to a theoretical group than a concrete individual. If you believe it is proper to hate Creationists, you can do so with little difficulty. If you change your mind and think it is better to pity them, you can do that.
But if your landlady has actually helped or hurt you, and you know a strong emotional response isn't actually called for, you're going to have a very hard time not liking or hating her.
Linus van Pelt
-Dana Scully, The X-Files, Season 1, Episode 17
--Terry Pratchett, Hogfather
Pretty sure the lies are out there too. I think I prefer Scully.
The quote can be said to mean that reality ("out there") doesn't lie -- falsehoods are in the map, not in the territory. But truth is what corresponds to reality...
Other people's maps are part of my territory.
Quibble: "Your" territory?
This point is also relevant to Eliezer's post on truth as correspondence. A belief can start unentangled with reality, but once people talk about it, the belief itself becomes part of the territory.
Yes, this.
Other people's expressions of verbal symbols that are not even part of their map are also part of the territory.
-- G. K. Chesterton, "The Appetite of Tyranny", arguing against pretending to be wise
--Ambrose Bierce, The Devil's Dictionary
Two WAITWs don't make a right.
In this quotation, Chesterton writes against people who compare war to vigilante justice. But his argument is not that this is a poor comparison, but that instead the analogy doesn't go far enough. So, he compounds the error of his opponents with an error of his own.
There's also some scenario slippage -- in the peacenik argument, the citizen "avenges" himself, but by the time Chesterton gets to him, the dead man was just "standing there within reach of the hatchet." That alone gives you a hint about what kind of hearing the accused is likely to get in Chesterton's court.
The international equivalent is not a police and justice system, it's vigilante justice. Doing nothing is not much worse than killing the attacker, being killed by the attacker's friends who believe the victim had started it, and starting a vendetta. How do you arrest a state? Ask the UN for permission to carpet-bomb it?
Under the assumption that a lesser power is unable to punish injustice done by a greater power, the three possible alternatives at any level of power are "Injustice is dealt with by a greater power", "Injustice is dealt with by peers", and "Injustice is dealt with by nobody". The first system sounds nice, except that infinite regression is impossible, and so eventually you end up at the greatest level of power, choosing between systems two and three. In that case, system two seems preferable, "vigilante" connotations notwithstanding.
cogent analysis
This sounds like it ought to mean something, but every time I try to think what it might be I fail. Is it just clever?
-- The Lord of the Rings: The Two Towers (extended edition)
What Faramir says contains wisdom, but so do Frodo's words. The enemy is trying to destroy the world with some kind of epic high fantasy apocalypse. Frodo does not terminally value the death (heh) of specific foot soldiers. They may be noble and virtuous and their deaths a tragic waste. But Frodo has something to protect and also has badass allies who return from the (mostly) dead with a wardrobe change. But he doesn't have enough power to give himself a Batman-like self-handicap of using non-lethal force. Killing those who get in his way (but lamenting the necessity) is the right thing for him to do, and so yes, people would do well not to hinder him.
Agreed. Though of course, I don't really see Faramir as disagreeing -- it was, after all, the Rangers of Ithilien who ambushed the Haradrim and killed the soldier they're talking about.
I'm a little bit proud that I don't know who all these people are.
downvoted. You're saying you don't know anything about the context provided by a story that is apparently of interest to (at least) several readers here, and you're proud of not sharing the context. Doesn't seem like something to crow about without first finding out if the content is frivolous.
No I wasn't. I could give you an analysis of likely outcomes of a battle between Mirkwood and Lorien archers depending on terrain. It isn't often that my knowledge of utterly useless details of fantasy stories is outclassed. I may as well enjoy the experience.
I'd ding you for having confessed to being proud of your ignorance, except that what you confessed ignorance of was not, technically speaking, a fact.
I'm never quite sure what to think about being proud of not knowing a fact. On one hand, knowledge itself almost certainly has positive value, even if that value is very small. On the other hand, making the effort to acquire very low-usefulness knowledge generally has negative expected utility, so I can understand categorizing a particular body of knowledge as "not worth it."
Of course, pride is really about signaling, so it makes sense to look at what sort of signal one's pride is sending. If someone seems particularly knowledgeable about a low-status topic, such as celebrity gossip, I judge them negatively for it. I assume most people do this, though with different lists of which topics are low-status (or am I just projecting?).
Ultimately, I think the questions to consider are: 1. As an individual, does prideful ignorance of a topic you consider not worthwhile send a signal you want to send, and 2. As a community, is this the sort of signal we want to encourage?
Noah Smith
--Will Wilkinson
That comment did move Intrade shares by around 10 percentage points, I think, though I'm only going on personal before-and-after comparisons. The good Will may have picked the wrong time to criticize his instincts.
So? That just means that some of the people who trade on intrade also made the mistake Will alludes to.
Nate Silver's model also moved toward Obama, so it's probably reflecting something real to some extent.
But the gains have already been cancelled by Romney's better performance in the first debate. You could spin this in two ways. On one hand, you could argue that the "47%" comment did move the polls, and that ceteris paribus it would have significantly reduced Romney's chances of winning. On the other hand, you could say that ceteris should not be expected to be paribus; polls are expected to shift back and forth, and regress to the mean (where "the mean" is dictated by the fundamentals--incumbency, state of the economy, etc.), and that if 47% and the debate hadn't happened, other similar things would have.
-- Mark Schone
A customer-facing skills course I went on, many years ago, used the word "tape" to describe pervasive habits of speech.
--Kruschke 2010, Doing Bayesian Data Analysis, pp. 56-57
-Steven Kaas (via)
It always irritates me slightly that Holmes says "whatever remains, however improbable, must be the truth", when multiple incompatible hypotheses will remain.
My Holmes says, "When you have eliminated the possible, you must expand your conception of what is possible."
You have an inequality symbol missing at the end of the quote (between i and j). That made it slightly difficult for me to parse it on my first read-through ("Why does it say 'for all i, j' when the only index in the expression is 'i'?").
I don't know if you know, but just in case you (or someone else) don't: There is no inequality symbol on the computer keyboard, so he used a typical programmer's inequality symbol which is "!=". So yes, it is not easily readable (i! is a bad combination...) but totally correct.
The symbol wasn't there when I wrote my comment. It was edited in afterwards.
The way to handle that is whitespace:
i != 0. (I was once teased for my tendency to put whitespace in computer code around all operators which would be spaced in typeset mathematical formulas.) EDIT: I also use italics for variables, boldface for vectors, etc. when handwriting. Whenever I get a new pen I immediately check whether it's practical to do boldface with it.
A space between variable & operator would help.
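For what it's worth, the spacing point is easy to see in actual code. A minimal sketch (Python chosen purely for illustration; the original comment's formula wasn't in any particular language):

```python
i = 5

# Crowded: "i!=0" invites misreading "i!" as a factorial.
crowded = i!=0

# Spaced: the operator stands apart from its operands, matching
# how the comparison would be typeset in a mathematical formula.
spaced = i != 0

# The two parse identically; only human readability differs.
assert crowded == spaced
```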
Of course, an infinitesimal prior dominating the posterior pdf might also be a hint that your model needs adjustment.
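That domination is easy to see numerically. A quick sketch with a binary hypothesis and made-up values (the specific numbers are invented for illustration):

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    # Bayes' rule for a binary hypothesis H vs. not-H.
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Even data favoring H by roughly 1000:1 cannot rescue a hypothesis
# assigned a vanishingly small prior: the posterior stays negligible.
p = posterior(prior=1e-12, likelihood_h=0.999, likelihood_not_h=0.001)
assert p < 1e-8
```

If no realistic amount of evidence can ever move the posterior, that is a sign the prior (or the model) deserves a second look.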
From the stories I expected the world to be sad
And it was.
And I expected it to be wonderful.
It was.
I just didn't expect it to be so big.
-- xkcd: Click and Drag
--Randall Munroe, A Mole of Moles
It's called "show, don't tell".
Did Munroe add that? It's incorrect. There are lots of situations in which it's reasonable to calculate while throwing away an occasional factor of 2.2.
Nate Silver
From the introduction to The Signal and the Noise: Why So Many Predictions Fail - But Some Don't, section entitled "The Prediction Solution".
--Ta-Nehisi Coates, "A Muscular Empathy"
Richard Posner, Catastrophe: Risk and Response
Well they're maybe a little more admirable than some other types of worker. Let's not go overboard here.
Yet a policymaker for science must either be a scientist (ish), or a Pointy-Haired Boss.
Plenty of dogberts get in on the action as well.
-Franz Kafka (quoted in Joy of Clojure)
-Antonio Machado
Translation:
-- mme_n_b
Jeff Bezos
-- Harry Potter and the Natural 20
This is a clever little exchange, and I'm generally all about munchkinry as a rationalist's tool. But as a lawyer, this specific example bothers me because it relies on and reinforces a common misunderstanding about law -- the idea that courts interpret legal documents by giving words a strict or literal meaning, rather than their ordinary meaning. The maxim that "all text must be interpreted in context" is so widespread in the law as to be a cliche, but law in fiction rarely acknowledges this concept.
So in the example above, courts would never say "well, you did 'attend' this school on one occasion, and the law doesn't say you have to 'attend' more than once, so yeah, you're off the hook." They would say "sorry, but the clear meaning of 'attend school' in this context is 'regular attendance,' because everyone who isn't specifically trying to munchkin the system understands that these words refer to that concept." Lawyers and judges actually understand the notion of words not having fixed meanings better than is generally understood.
Yes, but the setting in question is a D&D universe and many things work differently, rules-in-general most certainly included.
Well, a great many D&D players / DMs would argue that Jay_Schweikert's explanation applies equally well to the rules of role-playing games.
Not the fun ones.
Ah, fair enough. I suppose the title of the work and the idea of an actual course on Munchkinry should have been clues about the setting.
In Italy, IIRC, some kind of rule explicitly specifies the maximum number of days, and the maximum number of consecutive days, a school child can be absent (except for health reasons). Otherwise, would going to school four days a week count as “attending”? Natural language's fuzziness is a feature in normal usage, but a bug if you have a law and you need to decide how to handle borderline cases.
Well, that was a fun way to spend my Saturday. I haven't had a fanfic monopolize my time this much since Friendship Is Optimal.
Best part so far:
Émile Zola
Eric Schwitzgebel
what is left are the data points that align with your narrative about yourself.
Indeed, which together with the quote implies "you" = "your narrative about yourself". See also Dennett's "The Self as a Center of Narrative Gravity".
I resent this attitude. People often assume that I don't care about the things that I forget. Really, I am tired of a whole host of prejudices against people with poor memories. People assume that I am just like them, and that if I fail to remember something they would have remembered, it was deliberate.
Nevertheless; for any given person, the more he cares about something, the less likely he is to forget about it.
Can we just agree that English doesn't have a working definition for "self", and that different definitions are helpful in different contexts? I don't think there's anything profound in proposing definitions for words that fuzzy.
Yagyu Munenori, The Life-Giving Sword (translated by William Scott Wilson).
Commentary: I see this in the martial/kinesthetic context as acting without conscious censorship of your action, using the skills you have learned through conscious censorship; and similarly in a LW context of approaching questions in Near Mode, without consciously adjusting for bias.
Neil deGrasse Tyson, “Atheist or Agnostic?”
...
--Richard Dawkins on the ontological argument for theism, from The God Delusion, pages 81-82.
That sounds like the sort of thing you'd say if you'd never heard of mathematics.
And that sounds like the sort of thing you might say if you were unaware of countless examples of analytic-synthetic distinction in actually applying math (say, which geometry do you live in right now? And what axioms did you deduce it from, exactly?).
He has a point. It isn't obvious that Dawkins' objection doesn't apply to math. The ontological argument probably has more real-world assumptions used in it than does arithmetic.
Do people who've never heard of mathematics often say such things?
A good article on Slate.com by Daniel Engber
This thread needs a mention of this saying: "Correlation correlates with causation because causation causes correlation." (I don't know if anyone knows who came up with this.)
xkcd said it better:
I found the article rather confused. He begins by criticising the slogan as over-used, but by the end says that we do need to distinguish correlation from causation and the problem with the slogan is that it's just a slogan. His history of the idea ends in the 1940s, and he appears completely unaware of the work that has been done on this issue by Judea Pearl and others over the last twenty years -- unaware that there is indeed more, much more, than just a slogan. Even the basic idea of performing interventions to detect causality is missing. The same superficiality applies to the other issue he covers, of distinguishing statistical significance from importance.
I'd post a comment at the Slate article to that effect, but the comment button doesn't seem to do anything.
ETA: Googling /correlation causation/ doesn't easily bring the modern work to light either. The first hit is the Wikipedia article on the slogan, which actually does have a reference to Pearl, but only in passing. Second is the xkcd about correlation waggling its eyebrows suggestively, third is another superficial article on stats.org, fourth is a link to the Slate article, and fifth is the Slate article itself. Further down is RationalWiki's take on it, which briefly mentions interventions as the way to detect causality but I think not prominently enough. One has to get to the Wikipedia page on causality to find the meat of the matter.
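For readers who haven't met the interventionist idea, here is a minimal self-contained simulation (my own sketch, not from the article or from Pearl's work): a hidden common cause makes X and Y correlate observationally, while intervening on X reveals that X does not cause Y.

```python
import random

random.seed(0)

def observe(n=10000):
    # Z is a hidden common cause of X and Y; X does NOT cause Y.
    xs, ys = [], []
    for _ in range(n):
        z = random.random()
        xs.append(z + random.gauss(0, 0.1))
        ys.append(z + random.gauss(0, 0.1))
    return xs, ys

def intervene(n=10000):
    # do(X = x): X is set by fiat, severing the Z -> X edge.
    xs, ys = [], []
    for _ in range(n):
        z = random.random()
        xs.append(random.random())  # X chosen independently of Z
        ys.append(z + random.gauss(0, 0.1))
    return xs, ys

def corr(xs, ys):
    # Pearson correlation, computed by hand to stay dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

# Observationally, X and Y look strongly associated...
assert corr(*observe()) > 0.8
# ...but under intervention the association vanishes: X does not cause Y.
assert abs(corr(*intervene())) < 0.1
```

This is the basic move the slogan gestures at but never makes precise: correlation is a property of passive observation, while causation is about what happens when you reach in and set a variable yourself.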
I have a lot of sympathy for the article, though I agree it's not very focused. In my experience, "correlation does not imply causation" is mostly used as some sort of magical talisman in discussion, wheeled out by people who don't really understand it in the hope that it may do something.
I've been considering writing a discussion post on similar rhetorical talismans, but I'm not sure how on-topic it would end up being.
I would like to see an article which advised you on how you could:
I think I have a pretty good idea of when I'm doing it. It's a similar sensation to guessing the teacher's password; that 'I don't really understand this, but I'm going to try it anyway to see if it works' feeling.
--Frank Herbert, The Tactful Saboteur
A good heuristic. Barack Obama limits his wardrobe choices, Feynman decides to just always order chocolate ice cream for dessert. Leaves more time and energy for important stuff.
When I was a kid, removing my niggling and nagging choices, distractions, and petty inabilities sounded grand. It kinda backfired at first because I started over-planning the details of my daily activities, like ya do. And anything I actually took an interest in, to quell my confusion and streamline my time, drew people towards me for my arcane skills.
Is there any honor in hiding your abilities (when it's not your job) so people don't ask for help with simple stuff?
I was... uh... the family IT guy. My dad still needs the computer's power button pointed out to him.
Place a notebook next to the computer. When you tell someone how to do something, tell them to write it down, every step, in the notebook. Tell them to write it down so that they will be able to understand it later. Next time they ask you the same question, refer them to the notebook. If this fails to help, consider insisting on some minor cost (such as 'buy me a chocolate' - nothing expensive, more an irritant than anything else, merely a cost for the sake of having a cost) for reiterating anything that has been written in the notebook.
It may or may not help, but if it doesn't help, then at least you'll get a certain amount of chocolate out of it.
Richard Feynman
(Partially quoted here, but never given in a Rationality Quotes thread before.)
I recently read Surely You're Joking, are there other good Feynman autobiographies, or other scientist autobiographies, that I should check out? I don't want anything that gets too technical, but neither do I want things that are totally descriptive and biographical. I want insights into the overall way that good scientists think, but I also want to avoid specifics and technical concepts insofar as that is possible.
I think there's a sequel to Surely You're Joking, but I'm not sure what it's called.
What Do You Care What Other People Think?
Scott Adams
While I don't ever feel that way, I understand that many people have such internal verbal or non-verbal conversations with one or more other "selves". These are also common in fiction, probably in part as a literary device, but also probably as a reflection of the author's mind. Hmm, maybe it is worth a poll.
Lucky him - his internal persons are friends.
-- Robert H. Thouless, Straight and Crooked Thinking
Pierre Simon, Marquis de Laplace, "A Philosophical Essay On Probabilities", quoted here. (Hat tip.)
James Stephens
Is bravery a mental state (or something) that conquers fear, or is it bravery to conquer fear (e.g. because you're curious)?
What testable predictions does this make, and have they been tested? The typical interactions of various emotions with each other are something we should be able to find out, but I'm not sure whether the quote is even meant to make a claim about reality.
Purely anecdotal: I was a lot more frightened of spiders before I got a book about them out of the library and read it. They are pretty interesting little creatures. Mind you, I live where there are no actually dangerous ones.
Speculation: To the extent that a lot of fear is fear of the unknown, and curiosity attracts us to the unknown so we can know more of it, I can see how curiosity would help reduce fear.
Testable prediction: someone who reports a fear of something they don't know much about will report less fear of it after they can be encouraged to express and follow up some amount of curiosity about it. It's conditional on such curiosity actually existing. Possibly extreme phobias shut off any attempts to discuss or think about the object of fear.
Curiosity probably even helps with well-founded fears of things like bears, hydrofluoric acid or blue-glowing bits of metal. I'm content not to conquer well-founded fears I think.
Another anecdote: I was much less bothered by thunder (admittedly, I was distressed rather than panicked) when I found out that there were people who made a hobby of recording thunder. This caused me to listen to it rather than just be upset by the loud noise.
This could be partially due to the idea of safety margins. If I meet something that I know very little about, I give it a wide safety margin, and get nervous if I am closer to it than my safety margin allows; once I know more about it, I can narrow down my safety margins considerably.
For example, if I know very little about snakes except that some are poisonous, and I happen to find a house snake in my house, my reaction would be one of caution (or, if I come across it very suddenly, even of fear) - with the aim of extricating myself from the situation unbitten. However, if I know enough about snakes to identify the snake as a member of a nonvenomous species, which apparently make reasonable pets, and that their first reaction to danger is to flee, then I would be a lot less afraid of the snake in question (though a cobra would still trigger a panic-run-away response).
Experience trumps brilliance.
— Thomas Sowell
This belief seems to me very convenient for the brilliant, implying that they got where they are by hard work and properly deserve everything they have. Of course brilliant people also have to put in hard work, but their return on investment is much higher than many other contenders who may have put in even more work for lower total returns. Just-world hypothesis; life is not this fair. And while I do go about preaching the virtue of Hufflepuff, I also go about saying that people should try to Huffle where they have comparative advantage.
My reading of the quote is that empiricism is superior to rationalism (the old philosophical schools, not the sort we discuss here). If I have a proof that my bridge will hold a thousand pounds, and it breaks under a hundred, then the experiment trumps the proof.
In practical terms, though, experience does frequently trump brilliance. This does not mean this is a good thing to have happened, only that it does. Experience makes one more likely to be good at competition.
Genius is one percent inspiration, ninety-nine percent perspiration.
-- Thomas Edison
Lacking sufficient inspiration, I shall reduce my perspiration until recommended ratio is met.
Have you considered LSD, for the inspiration? I mean, if the sources don't matter, just the ratio...
Unfortunately, this will produce only a very small quantity of genius.
Yes, but it's the best you can do sometimes. And the excess sweat would otherwise be wasted.
Not necessarily. You can always apply your excess perspiration to someone else's excess inspiration (and then claim 99% of any resultant profits - assuming that you provide all the perspiration, of course).
Anecdotally, I seem to observe more excess inspiration than excess perspiration, so I don't think that excess inspiration will be hard to find.
Hmm. Corollary:
Lacking sufficient perspiration, I shall reduce my inspiration until recommended ratio is met.
Eh. Doesn't sound quite as awesome.
No, it doesn't, but might be almost equally wise. Just as it doesn't make sense to keep working hard without something worth working hard on, it probably doesn't make sense to keep trying to come up with brilliant ideas if you're already so awash in brilliant ideas that you can't implement them all.
Caveat 1: If you can find better inspiration into which to direct your limited supply of perspiration, and don't further deplete your capacity for perspiration in the process, it may still be a good idea to go for more inspiration.
Caveat 2: If you have a good way to sell your excess inspiration or buy more perspiration, and you have a strong comparative advantage in inspiration, you may want to do that, but selling inspiration is hard, as is buying good quality perspiration.
I think that Caveat #1 is extremely important here. Considering the amount of perspiration needed to turn inspiration into genius, it's probably best to spend a bit of extra time searching for the best possible inspiration to which to direct your available supply of perspiration.
A true genius would do nothing and then steal the results of other people's inspiration and perspiration.
OWAIT
— Kozma Prutkov
(translation mine)
Frankenweenie
The new one, or the original?
Definitely in the new one. I haven't seen the original.
Milan Cirkovic
So... the formal FAI theory will only be developed after an AI fooms? Makes perfect sense to me... We are all doomed!!
Arthur Schopenhauer
That made me think of this:
The world stands out on either side
No wider than the heart is wide;
Above the world is stretched the sky -
No higher than the soul is high.
Edna St. Vincent Millay
That follows in a fairly straightforward way from the central theme of his main work, The World as Will and Representation, which is that the world is, well, the title spoiled it.
People, even regular people, are never just any one person with one set of attributes. It's not that simple. We're all at the mercy of the limbic system, clouds of electricity drifting through the brain. Every man is broken into twenty-four-hour fractions, and then again within those twenty-four hours. It's a daily pantomime, one man yielding control to the next: a backstage crowded with old hacks clamoring for their turn in the spotlight. Every week, every day. The angry man hands the baton over to the sulking man, and in turn to the sex addict, the introvert, the conversationalist. Every man is a mob, a chain gang of idiots.
This is the tragedy of life. Because for a few minutes of every day, every man becomes a genius. Moments of clarity, insight, whatever you want to call them. The clouds part, the planets get in a neat little line, and everything becomes obvious. I should quit smoking, maybe, or here's how I could make a fast million, or such and such is the key to eternal happiness. That's the miserable truth. For a few moments, the secrets of the universe are opened to us. Life is a cheap parlor trick.
But then the genius, the savant, has to hand over the controls to the next guy down the pike, most likely the guy who just wants to eat potato chips, and insight and brilliance and salvation are all entrusted to a moron or a hedonist or a narcoleptic.
The only way out of this mess, of course, is to take steps to ensure that you control the idiots that you become. To take your chain gang, hand in hand, and lead them.
From Memento Mori by Jonathan Nolan
-- Tom Murphy
What would be a better way to teach young children about the nuances of the scientific method? This isn't meant as a snarky reply. I'm reasonably confident that Tom Murphy is onto something here, and I doubt most elementary school science fairs are optimized for conveying scientific principles with as much nuance as possible.
But it's not clear to me what sort of process would be much better, and even upon reading the full post, the closest he comes to addressing this point is "don't interpret failure to prove the hypothesis as failure of the project." Good advice to be sure, but it doesn't really go to the "dynamic interplay" that he characterizes as so important. Maybe instruct that experiments should occur in multiple rounds, and that participants will be judged in large part by how they incorporate results from previous rounds into later ones? That would probably be better, although I imagine you'd start brushing up pretty quickly against basic time and energy constraints -- how many elementary schools would be willing and able to keep students participating in year-long science projects?
That's not to say we shouldn't explore options here, but it might be that, especially for young children, traditional one-off science fairs do a decent enough job of teaching the very basic idea that beliefs are tested by experiment. Maybe that's not so bad, akin to why Mythbusters is a net positive for science.
Well, doing experiments to test which of several plausible hypotheses is more accurate, rather than those where you can easily guess what's going to happen beforehand, would be a start. (Testing whether light can travel through the dark? Seriously, WTF?)
That is a large part of the reason why we have problems like the file drawer effect and data dredging.
I don't think that thinking categorically and mechanically would be feasible or productive.
It's a reality that we have to think messily in order to solve problems quickly, even if that efficiency also causes biases.
However, we should at least be aware of what the proper way to do it would be.
Yeah. But I think there are different levels of propriety, and that is what the quote is getting at. We should mention that the ideal form of science would look very rigid and modular and be without bias. Then, we should talk about how actual science inevitably involves biases and errors, and that these biases are sometimes, to a limited extent, compensated for by increased efficiency. Then, we should talk about how to minimize biases while maximizing the efficiency of our thought processes.
Level One: Ideal
Level Two: Reality
Level Three: Pragmatic Ideal
A class or book on Level Three would be very useful to me, and I'm not aware of any. Anyone have suggestions? Less Wrong seems to cover Level One very well, and Level Two is obvious to anyone who is a human being, but Level Three is what I would really like to work on.
—George Polya, How to Solve It
Fred de Martines, a pork farmer who does direct marketing
"Anything that real people do in the world is by definition interesting. By 'interesting', I mean worthy of the kind of investigation that puts curiosity and honesty well before judgment. Judgment may come, but only after you’ve done some work." - Timothy Burke
Trinity: "You always told me to stay off the freeway." Morpheus: "Yes, that's true." Trinity: "You said it was suicide." Morpheus: "Then let us hope that I was wrong."
— The Matrix Reloaded
I think you must have made a mistake. This film doesn't exist.
Hypothetical quotes are the best kind of quotes...
Frédéric Bastiat.
Prior to WW2, Germany was the biggest trading partner of France.
Irrelevant. The quote is not "If goods do cross borders, armies won't."
And of course, one of the historical peaks of globalization and European integration was reached in 1914.
Libertarian quote, or rationality quote?
Traditional Aphorism
I disagree. We're obligated to do things to the best of our ability based on the knowledge we have. If those decisions have bad outcomes, that doesn't mean our actions weren't justified. Otherwise, you displace moral judgement from the here and now into inaccessible ideas about what will have turned out to be the case.
I guess there is a slight ambiguity in the way Nicholas Humphrey uses the word 'right' in the sentence: "none of this would give you a right to administer the poison". I doubt he is making a moral statement. What he is pointing out is that your beliefs will have to be judged by reality. Your beliefs do not affect the fact that what you are administering is poison.
In fact, he points out that having incorrect beliefs might make you morally less culpable. But it doesn't make you right.
No, we're obligated to make sure we have enough knowledge and to gather more knowledge if we don't. If you believe that you don't have the time and/or resources to do this, that's also a decision with moral consequences.
In other words, it's not enough to merely try to make the correct decision.
The possibility that more information will change your recommended course of action is one that has to be weighed against the costs of acquiring more information, not a moral imperative. One can always find oneself in a situation where the evidence is stacked to deceive one. That doesn't mean that before you put on your socks in the morning you ought to perform an exhaustive check to make sure that your sock drawer hasn't been rigged to blow up the White House when opened.
You use only the resources you have, including your judgement, including your metajudgement.
Somebody should start a sister site, Less Culpable. It might be More Useful.
What does having a 'right' mean in this context? Is Humphrey trying to say that other observers who know that the vial contains poison aren't obliged to allow the confused parent to administer the poison? I suppose that would be a reasonable point to make. If he is only talking in the sense of degree of blame assigned to the confused parent then his claim is more ethically questionable.
Terry Pratchett, The Last Hero
Found here.
It seems to be a misquotation of this.
-- Slavoj Zizek, The Plague of Fantasies
"A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken."
--Eliezer
Good quote, of course, but it's against one of the rules:
Out of curiosity, does that rule extend to, say, material originally posted on Yudkowsky's personal site and later re-used or quoted as a source in a LW/OB article/post/comment? Is that a gray area?
Yes. It's also slightly gray to post quotes from other prominent Lesswrongians.
Where I make my 'slightly gray' evaluation based on whether the quote is sufficiently badass to make it worth stretching the spirit of the thread. Sometimes they are. It's when the quotes aren't even all that good that I'd discourage it.
When did this rule come about? As recently as six months ago it was considered normal to quote EY as long as it wasn't from LW.
I figured the intent of the rule was "don't turn quotes threads into LW ingroup circlejerks", so the idea's to not do any quotes from e.g. the people in the "Top contributors" sidebar, no matter where they showed up. Do other people have other interpretations for the rule?
I'm surprised by this. I never noticed this "considered normal".
I'm pretty sure gray areas aren't rules. The actual non-gray rule is listed in the OP.
Hmm. So we're weighing badass-ness (as in wedrifid's comment (sister to this one)) against the "don't post quotes that are already part of the general LessWrong gestalt" (in whatever capacity that exists) valuation, in such cases?
http://www.slate.com/articles/health_and_science/human_evolution/2012/10/evolution_of_anxiety_humans_were_prey_for_predators_such_as_hyenas_snakes.2.html