Rationality Quotes February 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
- No more than 5 quotes per person per monthly thread, please.
Comments (563)
—Mike Sinnett, Boeing's 787 chief project engineer
Isn't the point of the article that Boeing may not have actually done at least the first two steps (design cell not to fail, prevent failure of a cell from causing battery problems)?
I am confused.
It's the point of the problem, anyway.
Sinnett is probably a very good designer, but the battery design was outsourced.
On scientists trying to photograph an atom's shadow:
Luke McKinney - 6 Microscopic Images That Will Blow Your Mind
I've just come across a fascinatingly compact observation by I. J. Good:
This is a beautifully simple recipe for a conflict of interest:
Considering absolute losses assuming failure and absolute gains conditioned on success, an adviser is incentivized to give the wrong advice, precisely when:
You can see this reflected in a lot of cases because the gains to an advisor often don't scale anywhere near as fast as the gains to society or a firm. It's the Fearful Committee Formula.
Which is not nearly as common as the reverse, the Reckless Adviser Formula, when the personal loss to the adviser is so low and the potential personal gain is so high, they recommend adoption even when the expected gain for the company is negative.
In general, this is referred to as the principal-agent problem.
Note that the adviser's ethical problem also exists if L/V > p/(1-p) > l/v.
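The two formulas in this thread can be sketched numerically. This is a minimal sketch assuming the standard reading of the ratios quoted here (the adviser weighs his personal stakes v and l, the firm weighs V and L, and each favors adoption exactly when p/(1-p) exceeds its own loss/gain ratio); all the numbers below are hypothetical:

```python
def favors(p, gain, loss):
    # Expected value p*gain - (1-p)*loss is positive exactly when
    # p/(1-p) > loss/gain, so this is the actor's adoption criterion.
    return p * gain > (1 - p) * loss

p = 0.4  # probability the project succeeds (hypothetical throughout)

# Fearful Committee: l/v > p/(1-p) > L/V.
# Firm gains V=100 / loses L=30; adviser's bonus v=1, but failure costs l=10.
print(favors(p, 100, 30), favors(p, 1, 10))    # True False: firm yes, adviser no

# Reckless Adviser: L/V > p/(1-p) > l/v.
# Firm gains V=10 / loses L=100; adviser gains v=10, loses only l=1.
print(favors(p, 10, 100), favors(p, 10, 1))    # False True: firm no, adviser yes
```

The conflict of interest is precisely the region where the two inequalities disagree; when p/(1-p) falls between l/v and L/V, the adviser's private criterion and the firm's criterion give opposite answers.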
Is the order also inverted in the original?
Name three?
The success of Market-Based Management / Koch Industries appears to be due at least in part to their focus on NPV at the managerial level. You get stories like (from memory, and thus subject to fuzz) the manager of a refining plant selling the land the plant was on to a casino which was moving to the area, which he was rewarded for doing because the land the plant was on was more valuable to the casino than the company, even after factoring in the time lost because the plant was shut down and relocated. The corporate culture (and pay incentive structure) rewarded that sort of lateral use of resources, whereas a culture which compartmentalized people and departments would have balked at the lost time and disruption.
Men in Black on guessing the teacher's password:
Zed: You're all here because you are the best of the best. Marines, air force, navy SEALs, army rangers, NYPD. And we're looking for one of you. Just one.
[...]
Edwards: Maybe you already answered this, but, why exactly are we here?
Zed: [noticing a recruit raising his hand] Son?
Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We're here because you are looking for the best of the best of the best, sir! [throws Edwards a contemptible glance]
[Edwards laughs]
Zed: What's so funny, Edwards?
Edwards: Boy, Captain America over here! "The best of the best of the best, sir!" "With honors." Yeah, he's just really excited and he has no clue why we're here. That's just, that's very funny to me.
The scene in question.
That whole testing sequence is one of the best examples in film of how to distinguish what's expected of you from what's actually a good idea.
(Or in that specific case, what seems to be expected of you.)
-Joel Spolsky
-- Steve Jobs
(The Organization Formerly Known as SIAI had this problem until relatively recently. Eliezer worked, but he never published anything.)
And they ship the characters the fans want.
If your service is down, it has no features.
And no bugs.
Well, there is one pretty major bug: That your service is not doing anything at all!
It's a feature.
It has all the bugs. All of them.
(Well, not really. For instance, it doesn't have any security holes.)
If it bears any resemblance to a product at all, your own admin-level access constitutes a potential security hole.
Faramir, from Lord of the Rings, on lost purposes and the thing that he protects
Except that a non-overwhelming love of a useful art may help you become better in the art, even though you would switch to another if it helped you optimize more.
-- Milton Friedman
-- Bertold Brecht
(I'm always amused when people of opposite political views express similar thoughts on society.)
Also:
This solution only works if you are in the special position of being able to make institutional design changes that can't be undone by potential future enemies. Otherwise, whose "right things" will happen depends on who is currently in charge of institutional design (think gerrymandering).
Then try to make it politically profitable to help sustain those changes you make. Make it so painfully obvious that the only reason to remove those changes would be for one's unethical gain that no politician would ever do so. The problem then though, is that people end up just not caring enough.
What you're describing is exactly the position of being able to make institutional design changes that can't be undone by potential future enemies. This position is "special" not only because the task is very difficult, but also because you have to be the first to think of it.
From a participant at the January CFAR workshop. I don't remember who. This struck me as an excellent description of what rationalists seek.
People often seem to get these mixed up, resulting in "You want useful beliefs and accurate emotions."
Not sure what an "accurate emotion" would mean; it feels like some sort of domain error (e.g. a blue sound).
An accurate emotion = "I'm angry because I should be angry because she is being really, really mean to me."
A useful emotion = "Showing empathy towards someone being mean to me will minimize the cost to me of others' hostility."
Where's that 'should' coming from? (Or are you just explaining the concept rather than endorsing it?)
I mean in the way most (non-LW) people would interpret it, so explaining not endorsing.
Contrasting "accurate beliefs and useful emotions" with "useful beliefs and accurate emotions" would probably make a good exercise for a novice rationalist.
Why not both useful beliefs and useful emotions?
Why privilege beliefs?
This is addressed by several Sequence posts, e.g. Why truth? And..., Dark Side Epistemology, and Focus Your Uncertainty.
Beliefs shoulder the burden of having to reflect the territory, while emotions don't. (Although many people seem to have beliefs that could be secretly encoding heuristics that, if they thought about it, they could just be executing anyway, e.g. believing that people are nice could be secretly encoding a heuristic to be nice to people, which you could just do anyway. This is one kind of not-really-anticipation-controlling belief that doesn't seem to be addressed by the Sequences.)
"Beliefs shoulder the burden of having to reflect the territory, while emotions don't."
This is how I have come to think of beliefs. It's like refactoring code. You should do it when you spot regularities you can eke efficiency out of. But you should do this only if it does not make the code unwieldy or unnatural, and only if it does not make the code fragile. Beliefs should be the same thing. When your rules of thumb seem to respect some regularity in reality, I'm perfectly happy to call that "truth". So long as that does not break my tools.
If useful doesn't equal accurate then you have biased your map.
The most useful beliefs to have are almost always accurate ones so in almost all situations useful=accurate. But most people have an innate desire to bias their map in a way that harms them over the long-run. Restated, most people have harmful emotional urges that do their damage by causing them to have inaccurate maps that "feel" useful but really are not. Drilling into yourself the value of having an accurate map in part by changing your emotions to make accuracy a short-term emotional urge will cause you to ultimately have more useful beliefs than if you have the short-term emotional urge of having useful beliefs.
A Bayesian super-intelligence could go for both useful beliefs and emotions. But given the limitations of the human brain I'm better off programming the emotional part of mine to look for accuracy in beliefs rather than usefulness.
It's perhaps worth noting that EY seems to have taken instead the "accurate beliefs and accurate emotions" tack in e.g. The Twelve Virtues of Rationality. Or at least that seems to be what's implied.
I mean, I suspect "accurate beliefs and useful emotions" really is the way to go; but this is something that -- if it really is a sort of consensus here -- we need to be much more explicit about, IMO. At the moment there seems to be little about that in the sequences / core articles, or at least little about it that's explicit (I'm going from memory in making that statement).
Agreed. The idea that I should be paying attention to and then hacking my emotions is not something I learned from the Sequences but from the CFAR workshop. In general, though, the Sequences are more concerned with epistemic than instrumental rationality, and emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).
Randall Munroe, on updating on other people's beliefs.
Dilbert dunnit first!
(Seeing that strip again reminds me of an explanation for why teenagers in the US tend to take more risks than adults. It's not because the teenagers irrationally underestimate risks but because they see bigger benefits to taking risks.)
Let me just put the text string ‘xkcd’ in here, because I was going to add this if nobody else had, and it's lucky that I found it first.
Oh, and there's more text in the comic than what's quoted, and it's good too, so read the comic everybody!
See also this Will_Newsome comment. (I incorrectly remembered that it said something like “If all your friends jumped off a bridge, would you jump too?” “If all of them survived, I probably would.”)
Devine and Cohen, Absolute Zero Gravity, p. 96.
It's an interesting story, but it might not be as silly as it sounds if one considers "ease of explanation" as a metric for how much credence one's model assigns to a given scenario. (Yes, I agree this is a hackneyed way of modeling stuff.)
Unfortunately, this seems to be the default way humans do things.
Well, the world is a complicated place and we have limited working memory, so our models can only be so good without the use of external tools. In practice, I think looking for reasons why something is true, then looking for reasons why it isn't true, has been a useful rationality technique for me. Maybe because I'm more motivated to think of creative, sometimes-valid arguments when I'm rationalizing one way or the other.
So, uh, what's the explanation?
The story appears to be apocryphal. I've heard many versions of it associated with various famous scientists. The source quoted is a collection of jokes, with very low veracity. Additionally, there are no independent versions of the story anywhere on Google. By the way, the quoted date of Sommerfeld's death is also incorrect. I wonder if there even were (unpowered) ceiling fans in Munich's trolleys during that time.
Good point. Effects that don't exist don't need to be explained.
I'm not much of an engineer, but based on my understanding of their design from the description given, I can't see how they would even contribute to their alleged purpose.
Insultingly Stupid Movie Physics' review of The Core
The remark included the following as a footnote:
See also the extra panel (hover onto the red button) in yesterday's SMBC comic.
... I had not known about red buttons on SMBC.
roll d20... success on 'resist re-binge' check.
32 people in the same ten-block radius simultaneously dying of malfunctioning pacemakers seems so tremendously unlikely, I can't imagine how one could even locate that as an explanation in a matter of seconds.
Also from the review:
William Deseriewicz
The whole speech is worth reading as one giant rationality quote
Not bad, although it seems to equate originality with goodness a little too much.
-- John C Wright
That reminds me of http://xkcd.com/690/.
Also:
-- Raymond Arritt
(Quoting this before dinner is making me hungry.)
Wikipedia may ultimately have to do one of two things, or both:
1) Provide better structure for alternate versions of contested ideas
2) Construct a practically effective demarcation between strictly factual domains, and anything more interpretive.
Such a demarcation will always be challenged; I don't see any way around that, but I'd also insist that it's necessary for our sanity. Suppose it were possible, maybe using a browser with links to a database, to try to "brand" (or give the underwriter's seal of approval to) those pages that provided straightforward factual assertions, unretouched photographs, and scans of original source texts (such as all newspapers of which a copy still exists), and to promote the idea that the respectability of any interpretive or ethical claim consists very largely in its groundedness in showing links to the "smells like a fact" zone.
Several versions with explicit labeling of which viewpoint it represents would be a huge step in improving general information retrieval. Hypertext in general was obviously a huge leap, but the problem of presenting the evolution of a school of thought on a particular subject has not been solved satisfactorily IMO. Path dependence of various things is still among the information we regularly do not record/throw away. We should not be reliant upon brilliant synthesists taking interest in each subject and writing a well organized history.
Been making a game of looking for rationality quotes in the Super Bowl.
"It's only weird if it doesn't work" --Bud Light Commercial
Only a rationality quote out of context, though, since the ad is about superstitious rituals among sports fans. My automatic mental reply is "well that doesn't work"
Well, but in the universe of the commercials, it clearly did, so long as you went to the appropriate expert.
S. T. Rev
-- Geoff Anders (paraphrased)
Did he mean if they're someone else's fault then you have to fix the person?
Yep.
Linus Pauling
Yes, but also being able to tell which of those ideas are good is even better.
From the alt-text in the above-linked comic:
The example in the comic is not a good one. Of the choices on the board, E being proportional to mc^2 is the only option where the units match. You only need to have that one idea to save yourself the trouble of having lots of other ideas.
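The unit-matching argument can be made mechanical. Here's a minimal sketch (the dimension-tuple encoding is my own, not anything from the comic): track each quantity as exponents of (mass, length, time) and check which power of c makes m·c^n an energy.

```python
MASS = (1, 0, 0)      # dimension exponents: (mass, length, time)
SPEED = (0, 1, -1)    # c has dimensions of length/time
ENERGY = (1, 2, -2)   # joule = kg * m^2 / s^2

def times(a, b):
    # Multiplying quantities adds their dimension exponents.
    return tuple(x + y for x, y in zip(a, b))

def power(a, n):
    # Raising a quantity to a power scales its dimension exponents.
    return tuple(x * n for x in a)

# Of the candidates E = m*c^n, only n = 2 has the dimensions of energy.
matches = [n for n in range(1, 5) if times(MASS, power(SPEED, n)) == ENERGY]
print(matches)  # [2]
```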
-- Noah Brand
I'd prefer if this quote ended with " ... and then I got done weeping and started working on my shoe budget," but oh wells.
Generally speaking, bigger problems tend to be cheaper to solve (i.e. solving them will yield more utilons per dollar); so if there is a painting in a museum that risks being sold, and there are people that risk dying from malaria, the existence of the latter is a good indication that worrying about the former isn't the most effective use of a given amount of resources. (“Concentrate on the high-order bits” -- Umesh Vazirani.) But in this particular case, that heuristic doesn't seem to work (unless I'm overestimating the cost of prosthetics).
"...And then I remembered status is positional, felt superior to the footless man, and stopped weeping."
Shoes aren't just about positional social status, are they? (I mean, the difference between a $20 pair of shoes and a $300 pair of shoes mostly is, but the difference between a $20 pair of shoes and no shoes at all isn't, is it?)
This. If only people realized that unpleasant facts do not cancel each other out, and pointing out one unpleasant fact in addition to another should never ever make us feel better, because it only leaves us in a worse world than we started out in. Compute the actual utilities. It's such a common and avoidable error.
I think both your comment and the quote are forgetting the instrumental purpose of crying and/or feeling bad.
I can't say I see your point. Mind explaining?
My guess: The purpose of crying is to make people around you more likely to help you.
So if you don't have shoes, there is a chance that crying in public will make someone give you money to buy the shoes. But if there is a person without feet nearby, your chances become smaller, because people will redirect their limited altruist budgets to that other person. Your crying becomes less profitable.
I think people just accidentally conflate keeping problems in perspective with the idea that the existence of bigger problems makes the small problems negligible and therefore equivalent to non-problems.
I've seen this happen with positive things too; sometimes you won't mind repeatedly doing small favors for someone and they start acting like you not minding means the favor is equivalent to doing nothing from your perspective, which is frustrating when your small but non-zero effort goes unacknowledged.
It's sort of like approximating sinθ as 0 for small angles. ^_^
—Yagyū Munenori, The Life-Giving Sword
-Alex Tabarrok
One amusing aspect is that assuming the person is justified in their belief that their church/country is ethical, the above is a valid inference.
Not necessarily. You don't punish people based on their likelihood of being guilty but based on severity of their crime.
If torture is used as a tool to gain information instead of being used to punish, it's even more questionable whether the likelihood of being guilty correlates with the severity of the torture. The fact that someone decides to torture to get more information suggests that they have an insufficient amount of information.
If there's a 50% chance that a person has information that can prevent a nuclear explosion, you can argue that it's ethical to torture to get that information.
After the bomb has exploded and you know for certain who did the crime, there's not much need to torture anyone.
An interrogator who tortures is more likely to get false confessions that implicate innocents. If he then goes and tortures those innocents, you see that people who torture are more likely to punish innocents than people who don't.
--Jovah's Angel by Sharon Shinn
--Tom Chivers
I agree subject to the specification that each such observation must look substantially more like the absence of a duck than a duck. There are many things we see which are not ducks in particular locations. My shoe doesn't look like a duck in my closet, but it also doesn't look like the absence of a duck in my closet. Or to put it another way, my sock looks exactly like it should look if there's no duck in my closet, but it also looks exactly like it should look if there is a duck in my closet.
If your sock does not have feathers or duck-shit on it, then it is somewhat more likely that it has not been sat on by a duck.
Insufficiently more likely. I've been around ducks many times without that happening to my socks. Log of the likelihood ratio would be close to zero.
You originally were talking about a duck in your closet, which isn't the same as thing as being around ducks.
The discussion reminds me of this, which makes the point that, while correlation is not causation, if there's no correlation, there almost certainly isn't causation.
This is completely wrong, though not many people seem to understand that yet.
For example, the voltage across a capacitor is uncorrelated with the current through it; and another poster has pointed out the example of the thermostat, a topic I've also written about on occasion.
It's a fundamental principle of causal inference that you cannot get causal conclusions from wholly acausal premises and data. (See Judea Pearl, passim.) This applies just as much to negative conclusions as positive. Absence of correlation cannot on its own be taken as evidence of absence of causation.
It depends. While true when the signal is periodic, it is not so in general. A spike of current through the capacitor results in a voltage change. Trivially, if the voltage is an exponential, V = V0·exp(-at), then so is the current, I = C·dV/dt = -aC·V0·exp(-at), with 100% correlation between the two on a given interval.
As for Milton's thermostat, only the perfect one is uncorrelated (the better the control system, the less the correlation), and no control system without complete future knowledge of inputs is perfect. Of course, if the control system is good enough, in practice the correlation will drown in the noise. That's why there is so little good evidence that fiscal (or monetary) policy works.
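A quick numerical sketch of the capacitor example (all values illustrative): a sinusoidal voltage drives a current I = C·dV/dt that is 90 degrees out of phase, so over a full period the sample correlation is essentially zero even though the causal link from V to I is exact, while the exponential case gives a perfect linear relation and hence correlation -1.

```python
import numpy as np

C = 1e-6                              # capacitance in farads (illustrative)
t = np.linspace(0, 1, 100_001)        # one full period of a 1 Hz signal
V = np.sin(2 * np.pi * t)             # sinusoidal voltage across the capacitor
I = C * np.gradient(V, t)             # I = C dV/dt, numerically

r_sine = np.corrcoef(V, I)[0, 1]
print(abs(r_sine) < 0.01)             # True: causation without correlation

# Exponential decay: V = V0*exp(-a*t) gives I = -a*C*V, a perfect
# linear relation, hence correlation -1.
V_exp = np.exp(-3 * t)
I_exp = C * np.gradient(V_exp, t)
r_exp = np.corrcoef(V_exp, I_exp)[0, 1]
print(r_exp < -0.99)                  # True: perfectly (anti)correlated
```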
Yes, this is completely wrong. There is frequently no correlation but strong causation due to effect cancellation (homeostasis, etc.)
Here's a recent paper making this point in the context of mediation analysis in social science (I could post many more):
http://www.quantpsy.org/pubs/rucker_preacher_tormala_petty_2011.pdf
Nancy, I don't mean to jump on you specifically here, but this does seem to me to be a special instance of a general online forum disease, where people {prefer to use | view as authoritative} online sources of information (blogs, wikipedia, even tvtropes, etc.) vs mainstream sources (books, academic papers, professionals). Vinge calls it "the net of a million lies" for a reason!
Not disagreeing, but just wanted to mention the useful lesson that there are some cases of causation without correlation. For example, the fuel burned by a furnace is uncorrelated with the temperature inside a home. (See: Milton Friedman's thermostat.)
thefolksong
The first response that comes to my mind is "because if the butterfly were trying that hard to escape the kid, it would fly above the kid's reach, and the kid would give up." When I look at the scene, I see a kid chasing a butterfly, and a butterfly too stupid to realize it should flee instead of simply dodging.
Animals on the intelligence levels of butterflies (which, keep in mind, have specific mating flight patterns they use to tell other members of their species apart from things like ribbons and stray flower petals,) don't seem to even have retreat instincts, just avoidance instincts. They can't recognize persistent pursuit. A fly won't hesitate to land on a person who has been trying to swat it for minutes on end.
Because you're a human, not a butterfly. It seems like an animal that used a cognitive filter that defaulted to the latter case would take a pretty severe fitness hit.
Three things, in no particular order:
I seem to recall that, in some obscure language, each noun has an agency level and in a sentence the most agenty noun is the subject by default, unless the verb is specially inflected to show otherwise: for example, “[dog] [bite] [man]” would mean ‘a man bit a dog’, regardless of word order, because the noun “[man]” has higher agency than “[dog]”.
Would you sooner see a tiger chasing a man, or a man running away from a tiger? If the former, it's not just the fact that butterflies are not human, it's the fact that the butterflies are small.
I think that, at least in the case of the tiger, it would also depend on whether the two of them are moving towards the left side or the right side of my visual field. I heard that in The Great Wave off Kanagawa the boats are intended to look more agenty than the wave, but to Western people it will typically look the other way round (due to Western languages being written from left to right), and for a Westerner to get the right effect they'd have to look at the picture in a mirror. (It works for me, at least.)
Is this visual field orientation issue really Western vs Eastern? If so, has it evaporated lately?
One of the media that most lends itself to testing this notion is video games, since there is almost always an agent, and often a preferred direction to gameplay. In some cases, there is a lot of free movement but when you enter a new zone/approach a boss, it generally goes one way rather than the other.
Eastern games favoring left-to-right over right-to-left: Super Mario Brothers, Ninja Gaiden, Megaman, Ghosts and Goblins, Double Dragon, TMNT, River City Ransom, Sonic the Hedgehog, Gradius/Lifeforce, UN Squadron, Rygar, Contra, Codename: Viper, Faxanadu (at least, the beginning, which is all I saw), Excitebike, Zelda 2, Act Raiser, Wizards and Warriors, and Cave Story.
On the other side, Final Fantasy combat generally puts the party on the right side, facing left. That's pretty leftward-oriented for sure. And very slightly - more slightly than any of the above - Metroid. Whenever you find a major powerup, you approach it from the right. You enter Tourian (the last area) from the right, and approach all 3 full bosses from the right. Those two are all I can think of with any sort of leftward bias at all.
In the west, the only games I can think of that favor right-to-left over left-to-right are Choplifter and Solaris; also, we get slightly-leftward readings on the Atari game of The Empire Strikes Back (you go left to meet the attack, but the primary agents are the attacking walkers, which are going right, and you need to keep up with them) and Pitfall (it seems mainly designed for players going right... which meant it was easier to turn around and go left; however, I'm sure the designer did this intentionally).
In absolute terms, and even more so as a fraction of the list, that's more leftward bias than the eastern games show.
... Now my head hurts. And man, going to a boarding school at a young age really exposed me to a lot of games.
Huh, I just tried that, and it works for me too. When you mirror it, it looks like they're going into the wave instead of fleeing from it. The effect is really strong; I wondered if it would still work when I knew about it, but it does.
BTW, does anyone get different effects from the emoticons :-/ and :-\ or it's just me?
V erpragyl qvfpbirerq gung, juvyr gurl fhccbfrq gb or flabalzbhf (ba Snprobbx gurl eraqre gb gur fnzr cvp), gb zr gur sbezre srryf zber yvxr “crecyrkvgl, pbashfvba” (naq gung'f ubj V trarenyyl hfr vg), jurernf gur ynggre srryf zber yvxr “qvfnccebiny” (naq V bayl fnj gung orpnhfr zl cubar unf :-\ ohg abg :-/ nzbat gur cer-pbzcbfrq rzbgvpbaf, fb V cvpxrq gur sbezre ohg vg qvqa'g ybbx evtug gb zr).
[Edited to move the question to the front and rot-13 the rest as per Nesov's suggestion.]
You shouldn't prime the audience before asking a question like that.
Don't good hunters have good mental models of their prey? I mean I get that you're thinking that it wouldn't help to feel sympathy for animals of other species. But it would help in many cases to have empathy, and to see things from the other animal's perspective.
--Sam Harris
This reminds me of
which I believe is a paraphrasing of something Jonathan Swift said, but I'm not sure. Anyone have the original?
I don't think this is empirically true, though. Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.
Then you show me some statistics, and I change my mind.
In general, I think a supermajority of our starting opinions (priors, essentially) are held for reasons that would not pass muster as 'rational,' even if we were being generous with that word. This is partly because we have to internalize a lot of things in our youth and we can't afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn't mean we're incapable of having our minds changed.
The chance of this working depends greatly on how significant the contested fact is to your identity. You may be willing to believe abstractly that crime rates are down and public safety is up after being shown statistics to that effect -- but I predict that (for example) a parent who'd previously been worried about child abductions after hearing several highly publicized news stories, and who'd already adopted and vigorously defended childrearing policies consistent with this fear, would be much less likely to update their policies after seeing an analogous set of statistics.
I agree, but I think part of the process of having your mind changed is the understanding that you came to believe those internalized things in a haphazard way. And you might be resisting that understanding because of the reasons @Nornagest mentions -- you've invested into them or incorporated them into your identity, for example. I think I'm more inclined to change the quote to
to make it slightly more useful in practice, because often changing the person's mind will require not only knowing the more accurate facts or proper reasoning, but also knowing why the person is attached to his old position -- and people generally don't reveal that until they're ready to change their mind on their own.
Oops, I guess I wasn't sure where to put this comment.
You put them into a social environment where the high status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing the amount that they value logic and evidence.
If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.
Take all their stuff. Tell them that they have no evidence that it's theirs and no logical arguments that they should be allowed to keep it.
They beat you up. People who haven't specialized in logic and evidence have not therefore been idle.
Shoot them?
I think you just independently invented the holy war.
This is from the Sam Harris vs. William Lane Craig debate, starting around the 44 minute mark. IIRC, Luke's old website has a review of this particular debate.
Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they'll figure it out by themselves. No one really believes God will protect them from harm...
I have some friends who do... At least insofar as things like "I don't have to worry about finances because God is watching over me, so I won't bother trying to keep a balanced budget." Then again, being financially irresponsible (a behaviour I find extremely hard to understand and sympathize with) seems to be common-ish, and not just among people who think God will take care of their problems.
Why not? Thinking about money is work. It involves numbers.
Moreover, it often involves a great deal of stress. Small wonder that many people try to avoid that stress by just not thinking about how they spend money.
I think that's mostly because money is too abstract, and as long as you get by you don't even realize what you've lost. Survival is much more real.
Sadly, that only works on a natural-selection basis, so the ethics boards forbid us from doing this. If they never see anyone actually failing to survive, they won't change their behavior.
Can't make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.
Ozy Frantz - Brain Chemicals are not Fucking Magic
Klingon proverb.
--Gabe Newell during a talk. The whole talk is worthwhile if you're interested in institutional design or Valve.
What's the percent chance that I'm doing it wrong?
The whole quote:
The problems you face might not require a serious approach; without more information, I can't say.
S. T. Rev
-Yevgeny Yevtushenko
From this recent talk
I cannot express how true this is, at least not without a lot of swear words.
Aubrey de Grey being an immortalist himself, I'm assuming the irony to be unintentional?
Haha, didn't occur to me until I read your comment, so there's one data point for you.
W. H. Auden, "The More Loving One"
(Joseph Heath & Andrew Potter, The Rebel Sell)
-- Screwtape, The Screwtape Letters by C.S. Lewis
-- Magnificent Sasquatch
-- C. S. Lewis, Out of the Silent Planet
ShittingtonUK
The publisher selected that design. The author's involvement almost always ends with the manuscript.
-- Lawrence Watt-Evans
You don't "judge" a book by its cover; you use the cover as additional evidence to more accurately predict what's in the book. Knowing what the publisher wants you to assume about the book is preferable to not knowing.
(Except when it's a novel and the text on the back cover spoils events from the middle of the book or later, which I would have preferred not to read until the right time.)
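The "cover as evidence" framing above is just Bayes' rule. Here's a minimal sketch; all of the numbers are invented for illustration (none come from the comment or any study):

```python
# Treating a book's cover as Bayesian evidence about its contents.
# The priors and likelihoods below are made up for illustration.

def posterior(prior, p_ev_given_h, p_ev_given_not_h):
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_ev_given_h * prior
    return numerator / (numerator + p_ev_given_not_h * (1 - prior))

# Suppose 30% of books on a shelf are worth reading, 60% of good books
# get an appealing cover, but only 20% of bad ones do.
p = posterior(prior=0.30, p_ev_given_h=0.60, p_ev_given_not_h=0.20)
print(round(p, 4))  # 0.5625: an appealing cover raises P(good) from 0.30
```

Note that when the cover is uninformative (both likelihoods equal), the posterior equals the prior, which is the precise sense in which such a cover tells you nothing about the book.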
Spoilers matter less than you think.
According to a single unreplicated study (a counter-intuitive one, and therefore more likely to make headlines).
Gah! Spoiler!
I don't like the study setup there. One read-through of spoiled versus one read-through of unspoiled material lets you compare the participants' hedonic ratings of dramatic irony versus mystery, and it's quite reasonable that the former would be equally or more enjoyable. But unlike in the study, in real life unspoiled material can be read twice: the first time for the mystery, then the second time for the dramatic irony. With spoiled material you only get the latter.
Those error bars look large enough that I could still be right about myself even without being a total freak.
No, they selected them to sell more copies by hijacking the easier-to-press buttons of your nervous system.
There's something to that, but it's not as if Varian's Microeconomic Analysis is going to have the cover of Spice and Wolf 1.
On the other hand, the method of judging a book's contents by its cover clearly has holes in it considering Spice and Wolf 1 has the cover of Spice and Wolf 1.
Probably true for some books, but as someone who buys thousands of books a year, my impression is that covers are very likely to reveal who the publisher thinks the readers will be (hence a lot of covers say "stay away" to me), and just occasionally they can show a startling streak of originality. E.g., the board designs (there may be no dustjacket) on Dave Eggers' books are uniquely artistic in my opinion, and since he has been seriously into graphics, I don't think it's any accident. You might think "maybe this book is written by a bold and original person" and IMHO you'd be right. Also, the cover design of The Curious Incident of the Dog in the Night-Time by Mark Haddon sent a message on my wavelength, and it was not misleading (for me).
(Joseph Heath, The Efficient Society)
Heath is an excellent writer on economics/philosophy.
-- Chad Fowler (from The 4-Hour Body)
"We're even wrong about which mistakes we're making."
-Carl Winfeld
-- Randall Munroe
Definitely a double, but I can't link the others right now.
@slicknet
With apologies for double-commenting: "Don't assume others are ignorant" is likely to be read by a lot of people (including myself at first) as "Aim high and don't be easily convinced of an inferential gap". Posts on underconfidence may also be relevant.
If we are in the business of making assumptions, there is no dichotomy; you may as well consider both hypotheticals. (Actually believing that either of these holds in general, or in any given case where you don't have sufficient information, would probably be dumb, ignorant, a mistake.)
Also, consider the possibility that it is you who is dumb, ignorant, and making mistakes.
The Last Psychiatrist (http://thelastpsychiatrist.com/2009/06/delaying_gratification.html)
-- Scott Sumner (talking about Italian politicians when the EU controls their monetary policy, but it generalizes)
Francis Spufford, Red Plenty
Introduction to Learn Python The Hard Way, by Zed A. Shaw
If anyone feels even remotely inspired to click through and actually learn Python, do it. It's been the most productive thing I've done on the internet.
This makes me wonder how much my writing skills would improve if I retyped excellently written essays for a while.
Benjamin Franklin's method of learning to write well is summarized here. His version:
I would expect the answer to be "not much, compared to writing and publishing horrible, horrible fanfiction".
I'd like to see a study result on that.
In Art History class I learned that a common way for great artists to learn to paint was by copying the work of the masters. I then asked the art teacher why it was a rule that we couldn't copy famous historical paintings. I can't remember her exact answer, but the times I haven't followed her advice and have copied a great painting, I seem to have learned more. But again, I'd like to see a study result.
— Gaston Leroux
Only with very low probability.
and the human mind loves to find patterns even when the probability of the pattern being a rule is low. Coincidences are correlation.
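A quick toy simulation of that point (my own example, with arbitrary parameters): streaks, which feel like meaningful patterns, show up in pure coin-flip noise far more often than intuition suggests.

```python
import random

def has_streak(flips, length=5):
    """True if the sequence contains a run of `length` identical outcomes."""
    run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

# Fraction of random 100-flip sequences containing a 5-long streak.
random.seed(0)
trials = 10_000
hits = sum(
    has_streak([random.randint(0, 1) for _ in range(100)])
    for _ in range(trials)
)
print(hits / trials)  # the large majority of sequences contain one
```

The streak is pure noise, but a mind primed for patterns will happily read five-in-a-row as a rule.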
[Footnote to: "This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy". The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]
-David Griffiths, Introduction to Elementary Particles (2008), p. 24
-Luc de Clapiers
Joke: a tourist was driving around lost in the countryside in Ireland among the 1 lane roads and hill farms divided by ancient stone fences, and he asks a sheep farmer how to get to Dublin, to which he replies:
"Well ... if I was going to Dublin, I wouldn't start from here."
Moral, as I see it anyway: While the heuristic "to get to Y, start from X instead of where you are" has some value (often cutting a hard problem into two simpler ones), ultimately we all must start from where we are.
Satoshi Kanazawa
While I pretty much agree with the quote, it doesn't give anyone who isn't already convinced many good reasons to believe it. Less an unusually rational statement and more an empiricist applause light, in other words.
In any case, a scientific conclusion needn't be inherently offensive for closer examination to be recommended: if most researchers' backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn't merely a good political sop but is actually good science in its own right. Of course, dealing with this properly would involve hard work and numbers and wouldn't involve decrying all but the worst studies as bad science when you've read no more than the abstract.
Unfortunately, since the people deciding which papers to take a closer look at tend to have the same biases as most scientists, the papers that actually get examined closely are the ones going against common biases.
I hate to find myself in the position of playing apologist for this mentality, but I believe the party line is that most of the relevant biases are instilled by mass culture and present at some level even in most people trying to combat them, never mind scientists who oppose them in a kind of vague way but mostly have better things to do with their lives.
In light of the Implicit Association Test this doesn't even seem all that far-fetched to me. The question is to what extent it warrants being paranoid about experimental design, and that's where I find myself begging to differ.
This seems to imply that science is somehow free from motivated cognition — people looking for evidence to support their biases. Since other fields of human reason are not, it would be astonishing if science were.
(Bear in mind, I use "science" mostly as the name of a social institution — the scientific community, replete with journals, grants and funding sources, tenure, and all — and not as a name for an idealized form of pure knowledge-seeking.)
I take the quote to be normative rather than descriptive. Science is not free from motivated cognition, but that's a bug, not a feature.
I'd take an issue with "undesirable", the way I understand it. For example, the conclusion that traveling FTL is impossible without major scientific breakthroughs was quite undesirable to those who want to reach for the stars. Similarly with "dangerous": the discovery of nuclear energy was quite dangerous.
If travelling faster than light is possible,
I desire to believe that travelling faster than light is possible;
If travelling faster than light is impossible,
I desire to believe that travelling faster than light is impossible;
Let me not become attached to beliefs I may not want.
I think it's pretty clear that scientific conclusions can be dangerous in the sense that telling everybody about them is dangerous. For example, the possibility of nuclear weapons. On the other hand, there should probably be an ethical injunction against deciding what kind of science other people get to do. (But in return maybe scientists themselves should think more carefully about whether what they're doing is going to kill the human race or not.)
-- From the final screen of Call of Cthulhu: The Wasted Land
-- Time Braid
(Sorry, I couldn't resist.)
Studies show that people who try to run behind a car frequently fail to keep up, while nobody who runs in front of a car fails more than once.
Karl Popper
There's a failure mode associated with this attitude worth watching out for, which is assuming that people who disagree with you are being irrational and so not bothering to check if you have arguments against what they say.
Sun Tzu on establishing a causal chain from reality to your beliefs.
Dupe.
— Herbert Butterfield, The Whig Interpretation of History
Eckhart Tolle, as quoted by Owen Cook in The Blueprint Decoded