Rationality Quotes February 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
- No more than 5 quotes per person per monthly thread, please.
Comments (563)
--Tom Chivers
I agree, subject to the specification that each such observation must look substantially more like the absence of a duck than a duck. There are many things we see which are not ducks in particular locations. My shoe doesn't look like a duck in my closet, but it also doesn't look like the absence of a duck in my closet. Or to put it another way, my sock looks exactly like it should look if there's no duck in my closet, but it also looks exactly like it should look if there is a duck in my closet.
If your sock does not have feathers or duck-shit on it, then it is somewhat more likely that it has not been sat on by a duck.
Insufficiently more likely. I've been around ducks many times without that happening to my socks. Log of the likelihood ratio would be close to zero.
You originally were talking about a duck in your closet, which isn't the same thing as being around ducks.
The discussion reminds me of this, which makes the point that, while correlation is not causation, if there's no correlation, there almost certainly isn't causation.
Not disagreeing, but just wanted to mention the useful lesson that there are some cases of causation without correlation. For example, the fuel burned by a furnace is uncorrelated with the temperature inside a home. (See: Milton Friedman's thermostat.)
This is completely wrong, though not many people seem to understand that yet.
For example, the voltage across a capacitor is uncorrelated with the current through it; and another poster has pointed out the example of the thermostat, a topic I've also written about on occasion.
It's a fundamental principle of causal inference that you cannot get causal conclusions from wholly acausal premises and data. (See Judea Pearl, passim.) This applies just as much to negative conclusions as positive. Absence of correlation cannot on its own be taken as evidence of absence of causation.
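The thermostat and capacitor examples are easy to check by simulation. Below is a minimal sketch (all numbers invented) of a perfectly functioning furnace controller: fuel burned causally sets the indoor temperature, yet fuel and indoor temperature come out uncorrelated, while fuel tracks the outdoor weather instead.

```python
import random

random.seed(0)
target = 21.0  # desired indoor temperature, deg C

# Outdoor temperature varies; the controller burns exactly enough fuel
# to offset heat loss, and holds the indoor temperature nearly constant.
outdoor = [random.gauss(5.0, 8.0) for _ in range(10_000)]
fuel = [max(0.0, target - t) for t in outdoor]               # fuel offsets heat loss
indoor = [target + random.gauss(0.0, 0.1) for _ in outdoor]  # tiny control noise

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(corr(fuel, indoor))   # ~0: no correlation, despite direct causation
print(corr(fuel, outdoor))  # ~-1: fuel tracks the weather instead
```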
-- Geoff Anders (paraphrased)
Satoshi Kanazawa
I'd take issue with "undesirable", the way I understand it. For example, the conclusion that traveling FTL is impossible without major scientific breakthroughs was quite undesirable to those who want to reach for the stars. Similarly with "dangerous": the discovery of nuclear energy was quite dangerous.
If travelling faster than light is possible,
I desire to believe that travelling faster than light is possible;
If travelling faster than light is impossible,
I desire to believe that travelling faster than light is impossible;
Let me not become attached to beliefs I may not want.
Something not (currently) possible can still be desirable.
FTL being impossible is undesirable if you want to go to the stars.
The conclusion that "FTL is impossible" is undesirable if and only iff FTL is possible.
The two conditions are very different.
Shouldn't it read
"FTL is impossible" is undesirable if and only if FTL is possible."
as it stands it reads "FTL is impossible" is undesirable if and only if and only if (iff) FTL is possible.
They are indeed. You seem to have added a level of indirection not present in the original statement. One statement is about this world, the other is about possible worlds.
I think it's pretty clear that scientific conclusions can be dangerous in the sense that telling everybody about them is dangerous. For example, the possibility of nuclear weapons. On the other hand, there should probably be an ethical injunction against deciding what kind of science other people get to do. (But in return maybe scientists themselves should think more carefully about whether what they're doing is going to kill the human race or not.)
I think nuclear weapons have a chance of killing a large number of people but are very unlikely to kill the human race.
While I pretty much agree with the quote, it doesn't provide anyone that isn't already convinced with many good reasons to believe it. Less of an unusually rational statement and more of an empiricist applause light, in other words.
In any case, a scientific conclusion needn't be inherently offensive for closer examination to be recommended: if most researchers' backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn't merely a good political sop but is actually good science in its own right. Of course, dealing with this properly would involve hard work and numbers and wouldn't involve decrying all but the worst studies as bad science when you've read no more than the abstract.
Unfortunately, since the people deciding which papers to take a closer look at tend to have the same biases as most scientists, the papers that actually get examined closely are the ones going against common biases.
I hate to find myself in the position of playing apologist for this mentality, but I believe the party line is that most of the relevant biases are instilled by mass culture and present at some level even in most people trying to combat them, never mind scientists who oppose them in a kind of vague way but mostly have better things to do with their lives.
In light of the Implicit Association Test this doesn't even seem all that far-fetched to me. The question is to what extent it warrants being paranoid about experimental design, and that's where I find myself begging to differ.
This seems to imply that science is somehow free from motivated cognition — people looking for evidence to support their biases. Since other fields of human reason are not, it would be astonishing if science were.
(Bear in mind, I use "science" mostly as the name of a social institution — the scientific community, replete with journals, grants and funding sources, tenure, and all — and not as a name for an idealized form of pure knowledge-seeking.)
I take the quote to be normative rather than descriptive. Science is not free from motivated cognition, but that's a bug, not a feature.
Sure, but I often see this sort of argument used against concerns about bias in (claimed) scientific conclusions. I'd rather people didn't treat science as privileged against bias, and the quote above seems to encourage that.
Is Newton's theory of gravity true or false? It's neither. For some problems the theory provides a good model that allows us to make good predictions about the world around us. For other problems it produces bad predictions.
The same is true for nearly every scientific model. There are problems where it's useful to use the model. There are problems where it isn't.
There are also factual statements in science. Claiming that true and false are the only possible adjectives to describe them is also highly problematic. Instead of true and false, likely and unlikely are much better words. In hard science most scientific conclusions come with p values. The author doesn't try to declare them true or false but declares them to be very likely.
It's also interesting that the person who made this claim isn't working in the hard sciences. He seems to be an evolutionary psychologist based at the London School of Economics. In the Wikipedia article that describes him, he's quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That's a non-scientific racist position. He published some material that's widely considered racist in Psychology Today. I don't see why "racist" is not a valid word to describe his conclusions.
Huh, what definition of "racist" are you using here? Would you describe von Neumann's proposal for a pre-emptive nuclear strike on the USSR as "racist"?
I'm not sure what you mean by "racist", however is your claim supposed to be that this somehow implies that the conclusion is false/less likely? You may want to practice repeating the Litany of Tarski.
It's basically about putting a low value on the lives of non-white civilians. In addition, "I would do to foreigners what Ann Coulter would do to them" is also a pretty straightforward way to signal racism.
I haven't argued that fact. I'm advocating for having a broad range of words with multidimensional meanings.
I see no reason to treat someone who makes wrong claims about race, and whose personal beliefs cluster with racist beliefs in his nonscientific statements, the same way as someone who just makes wrong statements about the boiling point of some new synthetic chemical.
Rather than using the ambiguous word "racist", one could say specifically that Kanazawa is an advocate of genocide.
So would you call the bombings of civilians during WWII "racist"?
So you would agree that there are some statements that are both "racist" and true.
What do you mean by "wrong"? If you mean "wrong" in the sense of "false", you've yet to present any evidence that any of Satoshi Kanazawa's claims are wrong.
What happens if you apply the same epistemological standards to claims that someone is racist that you apply to claims from science?
A scientist can have an inclination towards--for example--racist ideas. You can't just call this a kind of being wrong, because depending on the truth of what they're studying, this can make them right more often or less often.
So racist scientists are possible, and racist scientific practice is possible. I think 'racist' is an appropriate label for the conclusions drawn with that practice, correct or incorrect.
Though, I think being racist is a property of a whole group of conclusions drawn by scientists with a particular bias. It's not an inherent property of any of the conclusions; another researcher with completely different biases wouldn't be racist for independently rediscovering one of them.
It's a useful descriptor because a body of conclusions drawn by racist scientists, right or wrong, is going to be different in important ways from one drawn by non-racist scientists. It doesn't reduce to "larger fraction correct" or "larger fraction incorrect" because it depends on if they're working on a problem where racists are more or less likely to be correct.
From a participant at the January CFAR workshop. I don't remember who. This struck me as an excellent description of what rationalists seek.
People often seem to get these mixed up, resulting in "You want useful beliefs and accurate emotions."
Contrasting "accurate beliefs and useful emotions" with "useful beliefs and accurate emotions" would probably make a good exercise for a novice rationalist.
Not sure what an "accurate emotion" would mean; it feels like some sort of domain error (e.g. a blue sound).
An accurate emotion = "I'm angry because I should be angry because she is being really, really mean to me."
A useful emotion = "Showing empathy towards someone being mean to me will minimize the cost to me of others' hostility."
Where's that 'should' coming from? (Or are you just explaining the concept rather than endorsing it?)
I mean in the way most (non-LW) people would interpret it, so explaining not endorsing.
Why not both useful beliefs and useful emotions?
Why privilege beliefs?
If useful doesn't equal accurate then you have biased your map.
The most useful beliefs to have are almost always accurate ones, so in almost all situations useful = accurate. But most people have an innate desire to bias their map in a way that harms them over the long run. Restated, most people have harmful emotional urges that do their damage by causing them to have inaccurate maps that "feel" useful but really are not. Drilling into yourself the value of having an accurate map, in part by changing your emotions to make accuracy a short-term emotional urge, will ultimately give you more useful beliefs than having the short-term emotional urge of having useful beliefs.
A Bayesian super-intelligence could go for both useful beliefs and emotions. But given the limitations of the human brain I'm better off programming the emotional part of mine to look for accuracy in beliefs rather than usefulness.
Good point about beliefs possibly only "feeling" useful. But that applies to accuracy as well. Privileging accuracy can also lead you to overstate its usefulness. In fact, I find it's often better to not even have beliefs at all. Rather than trying to contort my beliefs to be useful, a bunch of non map-based heuristics gets the job done handily. Remember, the map-territory distinction is itself but a useful meta-heuristic.
This is addressed by several Sequence posts, e.g. Why truth? And..., Dark Side Epistemology, and Focus Your Uncertainty.
Beliefs shoulder the burden of having to reflect the territory, while emotions don't. (Although many people seem to have beliefs that could be secretly encoding heuristics that, if they thought about it, they could just be executing anyway, e.g. believing that people are nice could be secretly encoding a heuristic to be nice to people, which you could just do anyway. This is one kind of not-really-anticipation-controlling belief that doesn't seem to be addressed by the Sequences.)
"Beliefs shoulder the burden of having to reflect the territory, while emotions don't." Superb point that. And thanks for the links.
"Beliefs shoulder the burden of having to reflect the territory, while emotions don't."
This is how I have come to think of beliefs. It's like refactoring code. You should do it when you spot regularities you can eke efficiency out of. But you should do this only if it does not make the code unwieldy or unnatural, and only if it does not make the code fragile. Beliefs should be the same thing. When your rules of thumb seem to respect some regularity in reality, I'm perfectly happy to call that "truth". So long as that does not break my tools.
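To make the refactoring analogy concrete, here's a toy sketch (hypothetical code, not from the comment): two rules of thumb share a regularity, and adopting a "belief" corresponds to factoring that regularity out, which pays off only while the regularity holds.

```python
# Before: two independent heuristics that happen to share a regularity.
def price_with_tax_food(price):
    return price * 1.08

def price_with_tax_books(price):
    return price * 1.08

# After: the shared regularity is factored out into one "belief"
# ("the tax rate is 8%"). More compact and efficient, but now a single
# point of failure if the regularity breaks (e.g. the two categories
# start being taxed differently), just as an over-general belief can be.
TAX_RATE = 0.08

def price_with_tax(price):
    return price * (1 + TAX_RATE)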
It's perhaps worth noting that EY seems to have taken instead the "accurate beliefs and accurate emotions" tack in e.g. The Twelve Virtues of Rationality. Or at least that seems to be what's implied.
I mean, I suspect "accurate beliefs and useful emotions" really is the way to go; but this is something that -- if it really is a sort of consensus here -- we need to be much more explicit about, IMO. At the moment there seems to be little about that in the sequences / core articles, or at least little about it that's explicit (I'm going from memory in making that statement).
Indeed, accurate emotions appear a better description. Consider: killing someone might free up many opportunities, and would only have the consequence of bettering many lives; the useful emotion would be happiness at the opportunity to forever end that person's continued generation and spread of negative utility. Regardless of whether the accurate emotion might yield the same result, I'd trust the decisions of they who emote accurately, for though I know not whither hacking for emotional usefulness leads, a change of values for the disutility of others I strongly suspect.
Agreed. The idea that I should be paying attention to and then hacking my emotions is not something I learned from the Sequences but from the CFAR workshop. In general, though, the Sequences are more concerned with epistemic than instrumental rationality, and emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).
Emotion-hacking seems far more important in epistemic rationality, as your understanding of the world is the setting in which you use instrumental rationality, and your "lens" (which presumably encompasses your emotions) is the key hurdle (assuming you are otherwise rational) preventing you from achieving the objectivity necessary to form true beliefs about the world.
I suppose I should distinguish between two kinds of emotion-hacking: hacking your emotional responses to thoughts, and hacking your emotional responses to behaviors. The former is an epistemic technique and the latter is an instrumental technique. Both are quite useful.
W. H. Auden, "The More Loving One"
The only interpretation I've been able to read into this is that the speaker wants to become more emotionally accepting of death. Am I missing something?
That interpretation didn't even occur to me, possibly because I read the whole poem instead of the bit I quoted (and maybe I quoted the wrong bit). Here is the whole thing (it's short). I always feel a bit awkward arguing about how I interpreted a poem, so maybe this will resolve the issue?
(Incidentally, am I the only one mildly annoyed by how people seem to think of "rationality quotes" as "anti-deathism quotes"? The position may be rational, but it is not remotely related to rationality.)
Thank you, that was helpful. I don't see the deathist tones anymore. Now it reads a bit more like 'If I happened to find myself in a world without stars I think I'd adapt,' which reminds me a bit of the Litany of Gendlin and the importance of facing reality. It makes more sense to have it here now.
This is true, and now I have to go back and look at all the anti-deathist quotes I upvoted and examine them more closely for content directly related to rationality. Damn.
You're not the only one. We should be doing more firewalling the optimal from the rational in general.
Introduction to Learn Python The Hard Way, by Zed A. Shaw
If anyone feels even remotely inspired to click through and actually learn Python, do it. It's been the most productive thing I've done on the internet.
The Last Psychiatrist (http://thelastpsychiatrist.com/2009/06/delaying_gratification.html)
--Gabe Newell during a talk. The whole talk is worthwhile if you're interested in institutional design or Valve.
What's the percent chance that I'm doing it wrong?
78.544%.
The whole quote:
The problems you face might not require a serious approach; without more information, I can't say.
Devine and Cohen, Absolute Zero Gravity, p. 96.
— Herbert Butterfield, The Whig Interpretation of History
--Sam Harris
Take all their stuff. Tell them that they have no evidence that it's theirs and no logical arguments that they should be allowed to keep it.
They beat you up. People who haven't specialized in logic and evidence have not therefore been idle.
Shoot them?
I think you just independently invented the holy war.
You can find out what persuades them and give them that.
And in some instances that would likely be what we call logic or evidence.
You usually can't get someone with a spider phobia to drop their phobia by trying to convince them with logic or evidence. On the other hand, there are psychological strategies to help them get rid of the phobia.
I think cognitive behavioural therapy for phobias, which seems to work pretty well in a large number of cases, actually relies on helping people see that their fear is irrational.
As someone with a phobia, I can tell you from experience that realizing your fear is irrational doesn't actually make the fear go away. Sometimes it even makes you feel more guilty for having it in the first place. Realizing it's irrational just helps you develop coping strategies for acting normal when you're freaking out in public.
Oh sure, I can definitely believe that. Maybe a better choice of wording above would have been "internalise" rather than "see", which would rather negate my point, I guess. Or maybe it works differently for some people. I don't have any experience with phobias or CBT myself.
It's alief vs. belief. It's one thing to see that, in theory, almost all spiders are harmless. It's another to remain calm in the presence of a spider if you've had a history of being terrified of them.
Desensitization is a process of teaching a person how to calm themselves, and then exposing them to things which are just a little like spiders (a picture of a cartoon spider, perhaps, or the word spider). When they can calm themselves around that, they're exposed to something a little more like a spider, and learn to be calm around that.
The alief system can learn, but it's not necessarily a verbal process.
Even when it is verbal, as when someone learns to identify various sorts of irrational thoughts, it's much slower than understanding an argument.
Right; that's the "behavioural" part of cognitive behavioural therapy, right? But the "cognitive" part is an explicit, verbal process.
Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they'll figure it out by themselves. No one really believes God will protect them from harm...
Sadly, that only works on a natural-selection basis, so the ethics boards forbid us from doing this. If they never see anyone actually failing to survive, they won't change their behavior.
Can't make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.
I have some friends who do... At least insofar as things like "I don't have to worry about finances because God is watching over me, so I won't bother trying to keep a balanced budget." Then again, being financially irresponsible (a behaviour I find extremely hard to understand and sympathize with) seems to be common-ish, and not just among people who think God will take care of their problems.
I think that's mostly because money is too abstract, and as long as you get by you don't even realize what you've lost. Survival is much more real.
Why not? Thinking about money is work. It involves numbers.
Moreover, it often involves a great deal of stress. Small wonder that many people try to avoid that stress by just not thinking about how they spend money.
Well... as something completely and obviously deterministic (the amount of money you have at the end of the month is the amount you had at the beginning of the month, plus the amount you've earned, minus the amount you've spent, for a sufficiently broad definition of “earn” and “spend”), that's about the last situation in which I'd expect people to rely on God. With stuff which is largely affected by factors you cannot control directly (e.g. your health) I would be much less surprised.
Once you have those figures, it is deterministic; however, at the start of the month, those figures are not yet determined. One might win a small prize in a lottery; the price of some staple might unexpectedly increase or decrease; an aunt may or may not send an expensive gift; a minor traffic accident may or may not happen, requiring immediate expensive repairs.
So there are factors that you cannot control that affect your finances.
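The identity in the grandparent is exact once the month's figures are known, even though (as this comment notes) the figures themselves are uncertain in advance. A toy sketch, with all numbers invented:

```python
import random

random.seed(1)
balance = 1000.0
for month in range(12):
    earned = 2500.0 + random.choice([0.0, 0.0, 50.0])       # occasional windfall
    spent = 2300.0 + random.choice([0.0, 0.0, 0.0, 600.0])  # occasional car repair
    # The identity itself is exactly deterministic given the figures:
    balance = balance + earned - spent
print(round(balance, 2))
```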
"Praying for healing" was quite a common occurrence at my friend's church. I didn't pick that as an example because's it's a lot less straightforward. Praying for healing probably does appear to help sometimes (placebo effect), and it's hard enough for people who don't believe in God to be rational about health–there aren't just factor you cannot control, there are plenty of factors we don't understand.
There hasn't been a lot of money spent researching it, but meta-analyses of the studies that have been conducted show that on average there is no placebo effect.
That's really interesting...I had not heard that. Thanks for the info!
Does this cause you to doubt the veracity of the claim in the parent, or to update towards your model of what people rely on God for being wrong? I guess it should probably be both, to some extent. It's just not really clear from your post which you're doing.
Mostly the latter, as per Hanlon's razor.
If you threaten someone's survival, they are likely to get emotional. That's not the best mental state in which to apply logic.
Suicide bombers don't suddenly start believing in reason just before they are sent out to kill themselves.
Soldiers in trenches who fear for their lives on the other hand do often start to pray. Maybe there are a few atheists in foxholes, but that state seems to promote religiousness.
Does it promote religiousness or attract the religious?
You put them into a social environment where the high-status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing how much they value logic and evidence.
How would this encourage them to actually value logic and evidence instead of just appearing to do so?
Couple of attempts:
The hard sciences
Professions with a professional code of ethics, and consequences for violating it.
Maybe the idea could gain popularity from a survival-island type reality program in which contestants have to measure the height of trees without climbing them, calculate the diameter of the earth, or demonstrate the existence of electrons (in order of increasing difficulty).
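The tree-height task (the easy one) needs nothing but a tape measure and trigonometry. A sketch with made-up inputs:

```python
from math import tan, radians

def tree_height(distance_m, elevation_deg, eye_height_m=1.6):
    """Tree height from the ground distance to the trunk and the angle
    of elevation to the treetop, measured from eye level."""
    return distance_m * tan(radians(elevation_deg)) + eye_height_m

print(tree_height(30.0, 40.0))  # ~26.8 m: a 40-degree sightline from 30 m away
```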
It's not a question of encouragement. Humans tend to want to be like the high-status folk that they look up to.
Want to be like or appear to be like? I'm not convinced people can be relied on to make the distinction, much less choose the "correct" one.
Or do they want to be like those folks appear to be like?
People tend to conform to their peers' values.
This is from the Sam Harris vs. William Lane Craig debate, starting around the 44 minute mark. IIRC, Luke's old website has a review of this particular debate.
This reminds me of
which I believe is a paraphrasing of something Jonathan Swift said, but I'm not sure. Anyone have the original?
I don't think this is empirically true, though. Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.
Then you show me some statistics, and I change my mind.
In general, I think a supermajority of our starting opinions (priors, essentially) are held for reasons that would not pass muster as 'rational,' even if we were being generous with that word. This is partly because we have to internalize a lot of things in our youth and we can't afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn't mean we're incapable of having our minds changed.
The chance of this working depends greatly on how significant the contested fact is to your identity. You may be willing to believe abstractly that crime rates are down and public safety is up after being shown statistics to that effect -- but I predict that (for example) a parent who'd previously been worried about child abductions after hearing several highly publicized news stories, and who'd already adopted and vigorously defended childrearing policies consistent with this fear, would be much less likely to update their policies after seeing an analogous set of statistics.
I agree, but I think part of the process of having your mind changed is the understanding that you came to believe those internalized things in a haphazard way. And you might be resisting that understanding because of the reasons @Nornagest mentions -- you've invested into them or incorporated them into your identity, for example. I think I'm more inclined to change the quote to
to make it slightly more useful in practice, because often changing the person's mind will require not only knowing the more accurate facts or proper reasoning, but also knowing why the person is attached to his old position -- and people generally don't reveal that until they're ready to change their mind on their own.
Oops, I guess I wasn't sure where to put this comment.
If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.
-Yevgeny Yevtushenko
ShittingtonUK
No, they selected them to sell more copies by hijacking the easier-to-press buttons of your nervous system.
There's something to that, but it's not as if Varian's Microeconomic Analysis is going to have the cover of Spice and Wolf 1.
On the other hand, the method of judging a book's contents by its cover clearly has holes in it considering Spice and Wolf 1 has the cover of Spice and Wolf 1.
Deliberate non sequitur alert: I'm often attracted to a cover that has holes in it. E.g. The Curious Incident of the Dog in the Night-Time.
Probably purely true for some books, but as someone who buys thousands of books a year, my impression is that covers are very likely to reveal who the publisher thinks the readers will be (hence a lot of covers say "stay away" to me), and just occasionally they can show a startling streak of originality. E.g. the board designs (there may be no dustjacket) on Dave Eggers' books are uniquely artistic in my opinion, and since he has been seriously into graphics, I don't think that's any accident. You might think "Maybe this book is written by a bold and original person," and IMHO you'd be right. Also, the cover design of The Curious Incident of the Dog in the Night-Time by Mark Haddon sent a message on my wavelength, and it was not misleading (for me).
The publisher selected that design. The author's involvement almost always ends with the manuscript.
-- Lawrence Watt-Evans
You don't "judge" a book by its cover; you use the cover as additional evidence to more accurately predict what's in the book. Knowing what the publisher wants you to assume about the book is preferable to not knowing.
(Except when it's a novel and the text on the back cover spoils events from the middle of the book or later, which I would have preferred not to read until the right time.)
Spoilers matter less than you think.
According to a single counter-intuitive (and therefore more likely to make headlines), unreplicated study.
Gah! Spoiler!
Those error bars look large enough that I could still be right about myself even without being a total freak.
Really? 11 of the 12 stories got rated higher when spoiled, which is decent evidence against the nil hypothesis (spoilers have zero effect on hedonic ratings) regardless of the error bars' size. Under the nil hypothesis, each story has a 50/50 chance of being rated higher when spoiled, giving a probability of (¹²C₁₁ × 0.5¹¹ × 0.5¹) + (¹²C₁₂ × 0.5¹² × 0.5⁰) = 0.0032 that ≥11 stories get a higher rating when spoiled. So the nil hypothesis gets rejected with a p-value of 0.0063 (the probability's doubled to make the test two-tailed), and presumably the results are still stronger evidence against a spoilers-are-bad hypothesis.
This, of course, doesn't account for unseen confounders, inter-individual variation in hedonic spoiler effects, publication bias, or the sample (79% female and taken from "the psychology subject pool at the University of California, San Diego") being unrepresentative of people in general. So you're still not necessarily a total freak!
You can't just ignore the error bars like that. In 8 of the 12 cases, the error bars overlap, which means there's a decent chance that those comparisons could have gone either way, even assuming the sample mean is exactly correct. A spoilers-are-good hypothesis still has to bear the weight of this element of chance.
As a rough estimate: I'd say we can be sure that 4 stories are definitely better spoilered (>2 sd's apart); out of the ones 1..2 sd's apart, maybe 3 are actually better spoilered; and out of the remainder, they could've gone either way. So we have maybe 9 out of 12 stories that are better with spoilers, which gives a probability of 14.5% if we do the same two-tailed test on the same null hypothesis.
I don't necessarily want you to trust the numbers above, because I basically eyeballed everything; however, it gives an idea of why error bars matter.
Ignoring the error bars does throw away potentially useful information, and this does break the rules of Bayes Club. But this makes the test a conservative one (Wikipedia: "it has very general applicability but may lack the statistical power of other tests"), which just makes the rejection of the nil hypothesis all the more convincing.
If I'm interpreting this correctly, "the error bars overlap" means that the heights of two adjacent bars are within ≈2 standard errors of each other. In that case, overlapping error bars doesn't necessarily indicate a decent chance that the comparisons could go either way; a 2 std. error difference is quite a big one.
But this is an invalid application of the test. The sign test already allows for the possibility that each pairwise comparison can have the wrong sign. Making your own adjustments to the numbers before feeding them into the test is an overcorrection. (Indeed, if "we can be sure that 4 stories are definitely better spoilered", there's no need to statistically test the nil hypothesis because we already have definite evidence that it is false!)
This reminds me of a nice advantage of the sign test. One needn't worry about squinting at error bars; it suffices to be able to see which of each pair of solid bars is longer!
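The arithmetic in this exchange is easy to check directly. A short sketch reproducing both two-tailed sign tests (12 pairs, with the 11-of-12 count as reported and the 9-of-12 count as eyeballed above):

```python
from math import comb

def sign_test_two_tailed(n, k):
    """Two-tailed sign test: P(at least k of n successes) under p = 0.5,
    doubled to cover both tails."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return 2 * tail

print(sign_test_two_tailed(12, 11))  # ~0.0063, as computed above
print(sign_test_two_tailed(12, 9))   # ~0.146, the "14.5%" rough estimate
```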
Okay, if all you're testing is that "there exist stories for which spoilers make reading more fun" then yes, you're done at that point. As far as I'm concerned, it's obvious that such stories exist for either direction; the conclusion "spoilers are good" or "spoilers are bad" follows if one type of story dominates.
Yeah, it doesn't seem likely given that study that works are liked in average less when spoiled; but what I meant is that probably there are certain individuals who like works less when spoiled. (Imagine Alice said something to the effect that she prefers chocolate ice cream to vanilla ice cream, and Bob said that it's not actually the case that vanilla tastes worse than chocolate, citing a study in which for 11 out of 12 ice cream brands their vanilla ice cream is liked more in average than their chocolate ice cream -- though in most cases the difference between the averages is not much bigger than each standard deviation; even if the study was conducted among a demographic that does include Alice, that still wouldn't necessarily mean Alice is mistaken, lying, or particularly unusual, would it?)
Just so. These are the sort of "inter-individual variation in hedonic spoiler effects" I had in mind earlier.
Edit: to elaborate a bit, it was the "error bars look large enough" bit of your earlier comment that triggered my sceptical "Really?" reaction. Apart from that bit I agree(d) with you!
Edit 2: aha, I probably did misunderstand you earlier. I originally interpreted your error bars comment as a comment on the statistical significance of the pairwise differences in bar length, but I guess you were actually ballparking the population standard deviation of spoiler effect from the sample size and the standard errors of the means.
Huh. For some reason I had read that as "intra-individual". Whatever happened to the "assume people are saying something reasonable" module in my brain?
Yep.
thefolksong
Because you're a human, not a butterfly. It seems like an animal that used a cognitive filter that defaulted to the latter case would take a pretty severe fitness hit.
Don't good hunters have good mental models of their prey? I mean I get that you're thinking that it wouldn't help to feel sympathy for animals of other species. But it would help in many cases to have empathy, and to see things from the other animal's perspective.
S. T. Rev
Do we know anything about executive function failures other than AD(H)D?
In most cases 'executive dysfunction' covers the same territory as 'adult ADHD', but it can also be the outcome of some kinds of brain damage.
http://en.wikipedia.org/wiki/Executive_dysfunction
Insultingly Stupid Movie Physics' review of The Core
The remark included the following as a footnote:
32 people in the same ten-block radius simultaneously dying of malfunctioning pacemakers seems so tremendously unlikely, I can't imagine how one could even locate that as an explanation in a matter of seconds.
Also from the review:
Unless the 32 people used the same, or very similar, pacemakers, and somebody forgot to say that.
Still sounds extremely unlikely. If a model of car has a particular design flaw, you'll expect to hear a lot of reports of that model suffering the same malfunction, but you wouldn't expect to hear that dozens of units within a certain radius suffered the same malfunction simultaneously. You'd need to subject them all to some sort of outside interference at the same time for that sort of occurrence to be plausible, and an event of that scale ought to leave evidence beyond its effect on all the pacemakers in the vicinity.
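A back-of-envelope calculation makes "tremendously unlikely" concrete. Assuming (purely illustratively) that each pacemaker fails spontaneously about once per ten device-years, the chance of 32 independent failures landing in the same one-second window is beyond astronomical:

```python
# All figures are assumptions for illustration, not from the review.
seconds_per_year = 365.25 * 24 * 3600
p_per_second = 1 / (10 * seconds_per_year)  # ~3.2e-9 per device per second

# 32 particular, independent devices all failing in the same second:
p_all = p_per_second ** 32
print(f"{p_all:.2e}")  # ~1e-272
```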
On scientists trying to photograph an atom's shadow:
Luke McKinney - 6 Microscopic Images That Will Blow Your Mind
-- Magnificent Sasquatch
—Mike Sinnett, Boeing's 787 chief project engineer
Isn't the point of the article that Boeing may not have actually done at least the first two steps (design cell not to fail, prevent failure of a cell from causing battery problems)?
I am confused.
-- Milton Friedman
-- Bertolt Brecht
(I'm always amused when people of opposite political views express similar thoughts on society.)
Also:
I think the Brecht quote is somewhat misleading. The problem is not that not enough people want/demand goodness, the problem is that it is too easy to profit by cheating without getting caught.
-- John C Wright
That reminds me of http://xkcd.com/690/.
Also:
-- Raymond Arritt
(Quoting this before dinner is making me hungry.)
Wikipedia may ultimately have to do one of two things, or both:
1) Provide better structure for alternate versions of contested ideas
2) Construct a practically effective demarcation between strictly factual domains, and anything more interpretive.
Such a demarcation will always be challenged; I don't see any way around that, but I'd also insist that it's necessary for our sanity. Suppose it were possible, maybe using a browser with links to a database, to try to "brand" (or give the underwriters' seal of approval to) those pages that provided straightforward factual assertions, unretouched photographs, and scans of original source texts (such as all newspapers of which a copy still exists), and to promote the idea that the respectability of any interpretive or ethical claim consists very largely in its groundedness, shown by links to the "smells like a fact" zone.
Men in Black on guessing the teacher's password:
Zed: You're all here because you are the best of the best. Marines, air force, navy SEALs, army rangers, NYPD. And we're looking for one of you. Just one.
[...]
Edwards: Maybe you already answered this, but, why exactly are we here?
Zed: [noticing a recruit raising his hand] Son?
Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We're here because you are looking for the best of the best of the best, sir! [throws Edwards a contemptuous glance]
[Edwards laughs]
Zed: What's so funny, Edwards?
Edwards: Boy, Captain America over here! "The best of the best of the best, sir!" "With honors." Yeah, he's just really excited and he has no clue why we're here. That's just, that's very funny to me.
-- Time Braid
Eckhart Tolle, as quoted by Owen Cook in The Blueprint Decoded
S. T. Rev
Joke: a tourist was driving around lost in the countryside in Ireland, among the one-lane roads and hill farms divided by ancient stone fences, and asked a sheep farmer how to get to Dublin, to which the farmer replied:
"Well ... if I was going to Dublin, I wouldn't start from here."
Moral, as I see it anyway: While the heuristic "to get to Y, start from X instead of where you are" has some value (often cutting a hard problem into two simpler ones), ultimately we all must start from where we are.
—Yagyū Munenori, The Life-Giving Sword
Been making a game of looking for rationality quotes in the Super Bowl.
"It's only weird if it doesn't work" --Bud Light Commercial
Only a rationality quote out of context, though, since the ad is about superstitious rituals among sports fans. My automatic mental reply is "well, that doesn't work."
-Luc de Clapiers
-Joel Spolsky
If your service is down, it has no features.
-- Screwtape, The Screwtape Letters by C.S. Lewis