Rationality Quotes February 2013

2 Post author: arundelo 05 February 2013 10:20PM

Another monthly installment of the rationality quotes thread. The usual rules apply:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
  • No more than 5 quotes per person per monthly thread, please.

Comments (563)

Comment author: arundelo 01 February 2013 05:00:17PM 25 points [-]

Eventually you just have to admit that if it looks like the absence of a duck, walks like the absence of a duck, and quacks like the absence of a duck, the duck is probably absent.

--Tom Chivers

Comment author: Eliezer_Yudkowsky 01 February 2013 11:13:04PM 15 points [-]

I agree subject to the specification that each such observation must look substantially more like the absence of a duck than a duck. There are many things we see which are not ducks in particular locations. My shoe doesn't look like a duck in my closet, but it also doesn't look like the absence of a duck in my closet. Or to put it another way, my sock looks exactly like it should look if there's no duck in my closet, but it also looks exactly like it should look if there is a duck in my closet.

Comment author: fubarobfusco 02 February 2013 04:18:29AM 4 points [-]

If your sock does not have feathers or duck-shit on it, then it is somewhat more likely that it has not been sat on by a duck.

Comment author: Eliezer_Yudkowsky 02 February 2013 05:26:10AM 6 points [-]

Insufficiently more likely. I've been around ducks many times without that happening to my socks. Log of the likelihood ratio would be close to zero.
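
The arithmetic behind this comment can be sketched in a few lines. The probabilities below are made-up illustrations, not figures from the thread: the point is only that when an observation is about equally likely under both hypotheses, the log of the likelihood ratio lands near zero.

```python
import math

# Hypothetical numbers: how often a sock stays clean near a duck
# vs. in a duck-free closet. Both values are assumptions for illustration.
p_clean_given_duck = 0.95     # assumed: ducks usually leave nearby socks alone
p_clean_given_no_duck = 0.99  # assumed: socks are almost always clean anyway

# Log of the likelihood ratio: the evidential weight of seeing a clean sock.
log_lr = math.log10(p_clean_given_no_duck / p_clean_given_duck)
print(log_lr)  # ~0.018: close to zero, so a clean sock is almost no evidence
```

With numbers this close, the clean sock shifts the odds of "no duck" by less than a factor of 1.05, which is what "log of the likelihood ratio would be close to zero" cashes out to.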

Comment author: NancyLebovitz 03 February 2013 04:26:59PM *  3 points [-]

You originally were talking about a duck in your closet, which isn't the same thing as being around ducks.

The discussion reminds me of this, which makes the point that, while correlation is not causation, if there's no correlation, there almost certainly isn't causation.

Comment author: simplicio 04 February 2013 11:44:31PM 7 points [-]

Not disagreeing, but just wanted to mention the useful lesson that there are some cases of causation without correlation. For example, the fuel burned by a furnace is uncorrelated with the temperature inside a home. (See: Milton Friedman's thermostat.)
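
The thermostat example can be simulated in a few lines. This is a minimal sketch under idealized assumptions (perfect control, made-up temperature ranges): fuel burned causally determines the indoor temperature, yet the indoor temperature never varies, so it has no correlation with anything.

```python
import random
import statistics

random.seed(0)

# A perfect thermostat: fuel burned exactly offsets the outside cold,
# so indoor temperature is pinned at the setpoint.
setpoint = 20.0
outside = [random.uniform(-10.0, 10.0) for _ in range(1000)]
fuel = [setpoint - t for t in outside]  # fuel causally maintains indoor temp
indoor = [setpoint] * len(outside)      # constant, by successful control

# Indoor temperature has zero variance, so its correlation with fuel is
# zero/undefined even though fuel is precisely what holds it in place.
print(statistics.pstdev(indoor))      # 0.0
print(statistics.pstdev(fuel) > 0.0)  # True: fuel varies a lot
```

The better the controller, the more thoroughly the causal link is hidden from the correlations.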

Comment author: RichardKennaway 05 February 2013 08:37:54AM *  14 points [-]

if there's no correlation, there almost certainly isn't causation.

This is completely wrong, though not many people seem to understand that yet.

For example, the voltage across a capacitor is uncorrelated with the current through it; and another poster has pointed out the example of the thermostat, a topic I've also written about on occasion.

It's a fundamental principle of causal inference that you cannot get causal conclusions from wholly acausal premises and data. (See Judea Pearl, passim.) This applies just as much to negative conclusions as positive. Absence of correlation cannot on its own be taken as evidence of absence of causation.

Comment author: Qiaochu_Yuan 01 February 2013 06:08:33PM 34 points [-]

Things that are your fault are good because they can be fixed. If they're someone else's fault, you have to fix them, and that's much harder.

-- Geoff Anders (paraphrased)

Comment author: James_Miller 01 February 2013 07:35:27PM 9 points [-]

No scientific conclusions can ever be good or bad, desirable or undesirable, sexist, racist, offensive, reactionary or dangerous; they can only be true or false. No other adjectives apply.

Satoshi Kanazawa

Comment author: shminux 01 February 2013 07:41:48PM 8 points [-]

I'd take issue with "undesirable", the way I understand it. For example, the conclusion that traveling FTL is impossible without major scientific breakthroughs was quite undesirable to those who want to reach for the stars. Similarly with "dangerous": the discovery of nuclear energy was quite dangerous.

Comment author: [deleted] 02 February 2013 06:20:39PM *  5 points [-]

If travelling faster than light is possible,
I desire to believe that travelling faster than light is possible;
If travelling faster than light is impossible,
I desire to believe that travelling faster than light is impossible;
Let me not become attached to beliefs I may not want.

Comment author: shminux 02 February 2013 07:15:05PM 1 point [-]

Something not (currently) possible can still be desirable.

Comment author: Larks 02 February 2013 07:59:34PM 11 points [-]

FTL being impossible is undesirable if you want to go to the stars.

The conclusion that "FTL is impossible" is undesirable if and only iff FTL is possible.

The two conditions are very different.

Comment author: Baruta07 04 February 2013 09:49:05PM *  1 point [-]

Shouldn't it read

"FTL is impossible" is undesirable if and only if FTL is possible."

as it stands it reads "FTL is impossible" is undesirable if and only if and only if (iff) FTL is possible.

Comment author: shminux 05 February 2013 02:34:05AM 3 points [-]

They are indeed. You seem to have added a level of indirection not present in the original statement. One statement is about this world, the other is about possible worlds.

Comment author: Qiaochu_Yuan 01 February 2013 07:55:58PM 4 points [-]

I think it's pretty clear that scientific conclusions can be dangerous in the sense that telling everybody about them is dangerous. For example, the possibility of nuclear weapons. On the other hand, there should probably be an ethical injunction against deciding what kind of science other people get to do. (But in return maybe scientists themselves should think more carefully about whether what they're doing is going to kill the human race or not.)

Comment author: NancyLebovitz 03 February 2013 12:09:22PM 1 point [-]

I think nuclear weapons have a chance of killing a large number of people but are very unlikely to kill the human race.

Comment author: Nornagest 01 February 2013 08:35:08PM 13 points [-]

While I pretty much agree with the quote, it doesn't give anyone who isn't already convinced many good reasons to believe it. Less an unusually rational statement and more an empiricist applause light, in other words.

In any case, a scientific conclusion needn't be inherently offensive for closer examination to be recommended: if most researchers' backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn't merely a good political sop but is actually good science in its own right. Of course, dealing with this properly would involve hard work and numbers and wouldn't involve decrying all but the worst studies as bad science when you've read no more than the abstract.

Comment author: Eugine_Nier 02 February 2013 05:56:24AM 4 points [-]

if most researchers' backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn't merely a good political sop but is actually good science in its own right.

Unfortunately, since the people deciding which papers to take a closer look at tend to have the same biases as most scientists, the papers that actually get examined closely are the ones going against common biases.

Comment author: Nornagest 02 February 2013 07:19:00AM 3 points [-]

I hate to find myself in the position of playing apologist for this mentality, but I believe the party line is that most of the relevant biases are instilled by mass culture and present at some level even in most people trying to combat them, never mind scientists who oppose them in a kind of vague way but mostly have better things to do with their lives.

In light of the Implicit Association Test this doesn't even seem all that far-fetched to me. The question is to what extent it warrants being paranoid about experimental design, and that's where I find myself begging to differ.

Comment author: fubarobfusco 02 February 2013 04:16:55AM 13 points [-]

This seems to imply that science is somehow free from motivated cognition — people looking for evidence to support their biases. Since other fields of human reason are not, it would be astonishing if science were.

(Bear in mind, I use "science" mostly as the name of a social institution — the scientific community, replete with journals, grants and funding sources, tenure, and all — and not as a name for an idealized form of pure knowledge-seeking.)

Comment author: [deleted] 02 February 2013 06:22:10PM *  8 points [-]

I take the quote to be normative rather than descriptive. Science is not free from motivated cognition, but that's a bug, not a feature.

Comment author: fubarobfusco 02 February 2013 09:03:38PM 0 points [-]

Sure, but I often see this sort of argument used against concerns about bias in (claimed) scientific conclusions. I'd rather people didn't treat science as privileged against bias, and the quote above seems to encourage that.

Comment author: ChristianKl 02 February 2013 04:27:52PM -2 points [-]

Is Newton's theory of gravity true or false? It's neither. For some problems the theory provides a good model that allows us to make good predictions about the world around us. For other problems the theory produces bad predictions.

The same is true for nearly every scientific model. There are problems where it's useful to use the model. There are problems where it isn't.

There are also factual statements in science. Claiming that true and false are the only possible adjectives to describe them is also highly problematic. Instead of true and false, likely and unlikely are much better words. In the hard sciences most scientific conclusions come with p-values. The author doesn't try to declare them true or false but declares them to be very likely.

It's also interesting that the person who made this claim isn't working in the hard sciences. He seems to be an evolutionary psychologist at the London School of Economics. In the Wikipedia article that describes him he's quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That's a non-scientific racist position. He published some material that's widely considered racist in Psychology Today. I don't see why "racist" is not a valid word to describe his conclusions.

Comment author: Eugine_Nier 02 February 2013 07:31:16PM *  5 points [-]

In the Wikipedia article that describes him he's quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That's a non-scientific racist position.

Huh, what definition of "racist" are you using here? Would you describe von Neumann's proposal for a pre-emptive nuclear strike on the USSR as "racist"?

He published some material that's widely considered racist in Psychology Today. I don't see why "racist" is not a valid word to describe his conclusions.

I'm not sure what you mean by "racist", however is your claim supposed to be that this somehow implies that the conclusion is false/less likely? You may want to practice repeating the Litany of Tarski.

Comment author: ChristianKl 02 February 2013 08:58:08PM 1 point [-]

Huh, what definition of "racist" are you using here?

It's basically about putting a low value on the lives of non-white civilians. In addition, "I would do to foreigners what Ann Coulter would do to them" is also a pretty straightforward way to signal racism.

I'm not sure what you mean by "racist", however is your claim supposed to be that this somehow implies that the conclusion is false/less likely?

I haven't argued that. I'm advocating for having a broad number of words with multidimensional meanings.

I see no reason to treat someone who makes wrong claims about race, and whose personal beliefs cluster with racist beliefs in his nonscientific statements, the same way as someone who just makes wrong statements about the boiling point of some new synthetic chemical.

Comment author: fubarobfusco 02 February 2013 09:08:23PM 5 points [-]

Rather than using the ambiguous word "racist", one could say specifically that Kanazawa is an advocate of genocide.

Comment author: Eugine_Nier 02 February 2013 10:01:23PM *  5 points [-]

It's basically about putting a low value on the lives of non-white civilians.

So would you call the bombings of civilians during WWII "racist"?

I haven't argued that. I'm advocating for having a broad number of words with multidimensional meanings.

So you would agree that there are some statements that are both "racist" and true.

I see no reason to treat someone who makes wrong claims about race

What do you mean by "wrong"? If you mean "wrong" in the sense of "false", you've yet to present any evidence that any of Satoshi Kanazawa's claims are wrong.

Comment author: NancyLebovitz 03 February 2013 12:33:08PM 6 points [-]

What happens if you apply the same epistemological standards to claims that someone is racist that you apply to claims from science?

Comment author: alex_zag_al 05 February 2013 02:08:41AM *  1 point [-]

A scientist can have an inclination towards--for example--racist ideas. You can't just call this a kind of being wrong, because depending on the truth of what they're studying, this can make them right more often or less often.

So racist scientists are possible, and racist scientific practice is possible. I think 'racist' is an appropriate label for the conclusions drawn with that practice, correct or incorrect.

Though, I think being racist is a property of a whole group of conclusions drawn by scientists with a particular bias. It's not an inherent property of any of the conclusions; another researcher with completely different biases wouldn't be racist for independently rediscovering one of them.

It's a useful descriptor because a body of conclusions drawn by racist scientists, right or wrong, is going to be different in important ways from one drawn by non-racist scientists. It doesn't reduce to "larger fraction correct" or "larger fraction incorrect" because it depends on whether they're working on a problem where racists are more or less likely to be correct.

Comment author: James_Miller 01 February 2013 07:41:37PM 39 points [-]

You want accurate beliefs and useful emotions.

From a participant at the January CFAR workshop. I don't remember who. This struck me as an excellent description of what rationalists seek.

Comment author: Dorikka 01 February 2013 10:49:12PM 22 points [-]

People often seem to get these mixed up, resulting in "You want useful beliefs and accurate emotions."

Comment author: James_Miller 02 February 2013 05:34:27PM *  4 points [-]

Contrasting "accurate beliefs and useful emotions" with "useful beliefs and accurate emotions" would probably make a good exercise for a novice rationalist.

Comment author: FiftyTwo 02 February 2013 06:36:01PM 9 points [-]

Not sure what an "accurate emotion" would mean; it feels like some sort of domain error (e.g. a blue sound).

Comment author: James_Miller 02 February 2013 07:38:10PM *  15 points [-]

An accurate emotion = "I'm angry because I should be angry because she is being really, really mean to me."

A useful emotion = "Showing empathy towards someone being mean to me will minimize the cost to me of others' hostility."

Comment author: AdeleneDawner 02 February 2013 07:40:30PM 4 points [-]

Where's that 'should' coming from? (Or are you just explaining the concept rather than endorsing it?)

Comment author: James_Miller 02 February 2013 08:34:56PM 4 points [-]

I mean in the way most (non-LW) people would interpret it, so explaining not endorsing.

Comment author: sark 02 February 2013 06:47:23PM 9 points [-]

Why not both useful beliefs and useful emotions?

Why privilege beliefs?

Comment author: James_Miller 02 February 2013 07:16:09PM 3 points [-]

If useful doesn't equal accurate then you have biased your map.

The most useful beliefs to have are almost always accurate ones, so in almost all situations useful = accurate. But most people have an innate desire to bias their map in a way that harms them over the long run. Restated, most people have harmful emotional urges that do their damage by causing them to have inaccurate maps that "feel" useful but really are not. Drilling the value of an accurate map into yourself, in part by changing your emotions so that accuracy itself becomes a short-term emotional urge, will ultimately give you more useful beliefs than making usefulness the short-term emotional urge.

A Bayesian super-intelligence could go for both useful beliefs and emotions. But given the limitations of the human brain I'm better off programming the emotional part of mine to look for accuracy in beliefs rather than usefulness.

Comment author: sark 03 February 2013 11:55:17AM 1 point [-]

Good point about beliefs possibly only "feeling" useful. But that applies to accuracy as well. Privileging accuracy can also lead you to overstate its usefulness. In fact, I find it's often better to not even have beliefs at all. Rather than trying to contort my beliefs to be useful, a bunch of non map-based heuristics gets the job done handily. Remember, the map-territory distinction is itself but a useful meta-heuristic.

Comment author: Qiaochu_Yuan 02 February 2013 08:37:47PM *  11 points [-]

This is addressed by several Sequence posts, e.g. Why truth? And..., Dark Side Epistemology, and Focus Your Uncertainty.

Beliefs shoulder the burden of having to reflect the territory, while emotions don't. (Although many people seem to have beliefs that could be secretly encoding heuristics that, if they thought about it, they could just be executing anyway, e.g. believing that people are nice could be secretly encoding a heuristic to be nice to people, which you could just do anyway. This is one kind of not-really-anticipation-controlling belief that doesn't seem to be addressed by the Sequences.)

Comment author: sark 03 February 2013 11:56:19AM 0 points [-]

"Beliefs shoulder the burden of having to reflect the territory, while emotions don't." Superb point that. And thanks for the links.

Comment author: sark 03 February 2013 12:00:40PM 5 points [-]

"Beliefs shoulder the burden of having to reflect the territory, while emotions don't."

This is how I have come to think of beliefs. It's like refactoring code. You should do it when you spot regularities you can eke efficiency out of. But you should do this only if it does not make the code unwieldy or unnatural, and only if it does not make the code fragile. Beliefs should be the same thing. When your rules of thumb seem to respect some regularity in reality, I'm perfectly happy to call that "truth". So long as that does not break my tools.

Comment author: Sniffnoy 03 February 2013 02:35:04AM 5 points [-]

It's perhaps worth noting that EY seems to have taken instead the "accurate beliefs and accurate emotions" tack in e.g. The Twelve Virtues of Rationality. Or at least that seems to be what's implied.

I mean, I suspect "accurate beliefs and useful emotions" really is the way to go; but this is something that -- if it really is a sort of consensus here -- we need to be much more explicit about, IMO. At the moment there seems to be little about that in the sequences / core articles, or at least little about it that's explicit (I'm going from memory in making that statement).

Comment author: Zaine 03 February 2013 04:12:43AM -1 points [-]

Indeed, accurate emotions appear a better description. Consider killing someone might free up many opportunities, and would only have the consequence of bettering many lives; the useful emotion would be happiness at the opportunity to forever end that person's continued generation and spread of negative utility. Regardless of whether the accurate emotion might yield the same result, I'd trust the decisions of they who emote accurately, for though I know not whither hacking for emotional usefulness leads, a change of values for the disutility of others I strongly suspect.

Comment author: Qiaochu_Yuan 03 February 2013 09:33:31PM *  4 points [-]

Agreed. The idea that I should be paying attention to and then hacking my emotions is not something I learned from the Sequences but from the CFAR workshop. In general, though, the Sequences are more concerned with epistemic than instrumental rationality, and emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).

Comment author: non-expert 04 February 2013 04:52:27PM 0 points [-]

Emotion-hacking seems far more important in epistemic rationality: your understanding of the world is the setting in which you use instrumental rationality, and your "lens" (which presumably encompasses your emotions) is the key hurdle (assuming you are otherwise rational) preventing you from achieving the objectivity necessary to form true beliefs about the world.

Comment author: Qiaochu_Yuan 04 February 2013 04:58:15PM 2 points [-]

I suppose I should distinguish between two kinds of emotion-hacking: hacking your emotional responses to thoughts, and hacking your emotional responses to behaviors. The former is an epistemic technique and the latter is an instrumental technique. Both are quite useful.

Comment author: Kindly 01 February 2013 07:44:07PM 12 points [-]

Were all stars to disappear or die,
I should learn to look at an empty sky
And feel its total darkness sublime,
Though this might take me a little time.

W. H. Auden, "The More Loving One"

Comment author: Toddling 02 February 2013 08:45:56PM 1 point [-]

The only interpretation I've been able to read into this is that the speaker wants to become more emotionally accepting of death. Am I missing something?

Comment author: Kindly 02 February 2013 09:13:26PM 5 points [-]

That interpretation didn't even occur to me, possibly because I read the whole poem instead of the bit I quoted (and maybe I quoted the wrong bit). Here is the whole thing (it's short). I always feel a bit awkward arguing about how I interpreted a poem, so maybe this will resolve the issue?

(Incidentally, am I the only one mildly annoyed by how people seem to think of "rationality quotes" as "anti-deathism quotes"? The position may be rational, but it is not remotely related to rationality.)

Comment author: Toddling 02 February 2013 11:10:26PM 3 points [-]

Thank you, that was helpful. I don't see the deathist tones anymore. Now it reads a bit more like 'If I happened to find myself in a world without stars I think I'd adapt,' which reminds me a bit of the Litany of Gendlin and the importance of facing reality. It makes more sense to have it here now.

This is true, and now I have to go back and look at all the anti-deathist quotes I upvoted and examine them more closely for content directly related to rationality. Damn.

Comment author: Qiaochu_Yuan 03 February 2013 12:35:38AM 8 points [-]

(Incidentally, am I the only one mildly annoyed by how people seem to think of "rationality quotes" as "anti-deathism quotes"? The position may be rational, but it is not remotely related to rationality.)

You're not the only one. We should be doing more firewalling the optimal from the rational in general.

Comment author: Kingoftheinternet 01 February 2013 07:47:05PM *  14 points [-]

If you are reading this book and flipping out at every third sentence because you feel I'm insulting your intelligence, then I have three points of advice for you:

  • Stop reading my book. I didn't write it for you. I wrote it for people who don't already know everything.

  • Empty before you fill. You will have a hard time learning from someone with more knowledge if you already know everything.

  • Go learn Lisp. I hear people who know everything really like Lisp.

For everyone else who's here to learn, just read everything as if I'm smiling and I have a mischievous little twinkle in my eye.

Introduction to Learn Python The Hard Way, by Zed A. Shaw

Comment author: pewpewlasergun 02 February 2013 04:15:27AM 9 points [-]

If anyone feels even remotely inspired to click through and actually learn Python, do it. It's been the most productive thing I've done on the internet.

Comment author: Oscar_Cunningham 01 February 2013 09:04:17PM 9 points [-]

Evolutionary psychology, economics, and behavior studies in general often fail to account for what may be an innate, or strongly socialized, motivating variable. "Rational people will seek to maximize their gain." Sure. Now define gain. In many discussions about behavior and economics, we do not account for obedience and social pressure. This is a mistake, as it is evident that it is a highly significant, though invisible, determinant.

The Last Psychiatrist (http://thelastpsychiatrist.com/2009/06/delaying_gratification.html)

Comment author: Vaniver 01 February 2013 09:27:35PM 17 points [-]

If you're not making quantitative predictions, you're probably doing it wrong.

--Gabe Newell during a talk. The whole talk is worthwhile if you're interested in institutional design or Valve.

Comment author: Mass_Driver 02 February 2013 08:20:12AM 13 points [-]

What's the percent chance that I'm doing it wrong?

Comment author: DanArmak 02 February 2013 11:14:55AM 1 point [-]

78.544%.

Comment author: Vaniver 02 February 2013 03:54:43PM 10 points [-]

The whole quote:

If you're not making quantitative predictions, you're probably doing it wrong, or you're probably not doing it as well as you can. That's sort of become kind of critical to how we operate. You have to predict in advance. Anybody can explain anything after the fact, and it has to be quantitative or you're not being serious about how you're approaching the problem.

The problems you face might not require a serious approach; without more information, I can't say.

Comment author: VincentYu 01 February 2013 09:36:33PM *  53 points [-]

In Munich in the days of the great theoretical physicist Arnold Sommerfeld (1868–1954), trolley cars were cooled in summer by two small fans set into their ceilings. When the trolley was in motion, air flowing over its top would spin the fans, pulling warm air out of the cars. One student noticed that although the motion of any given fan was fairly random—fans could turn either clockwise or counterclockwise—the two fans in a single car nearly always rotated in opposite directions. Why was this? Finally he brought the problem to Sommerfeld.

“That is easy to explain,” said Sommerfeld. “Air hits the fan at the front of the car first, giving it a random motion in one direction. But once the trolley begins to move, a vortex created by the first fan travels down the top of the car and sets the second fan moving in precisely the same direction.”

“But, Professor Sommerfeld,” the student protested, “what happens is in fact the opposite! The two fans nearly always rotate in different directions.”

“Ahhhh!” said Sommerfeld. “But of course that is even easier to explain.”

Devine and Cohen, Absolute Zero Gravity, p. 96.

Comment author: taelor 01 February 2013 09:50:23PM 3 points [-]

It has been said that the historian is the avenger, and that standing as a judge between the parties and rivalries and causes of bygone generations he can lift up the fallen and beat down the proud, and by his exposures and his verdicts, his satire and his moral indignation, can punish unrighteousness, avenge the injured or reward the innocent. One may be forgiven for not being too happy about any division of mankind into good and evil, progressive and reactionary, black and white; and it is not clear that moral indignation is not a dispersion of one’s energies to the great confusion of one’s judgement. There can be no complaint against the historian who personally and privately has his preferences and antipathies, and who as a human being merely has a fancy to take part in the game that he is describing; it is pleasant to see him give way to his prejudices and take them emotionally, so that they splash into colour as he writes; provided that when he steps in this way into the arena he recognizes that he is stepping into a world of partial judgements and purely personal appreciations and does not imagine that he is speaking ex cathedra.

But if the historian can rear himself up like a god and judge, or stand as the official avenger of the crimes of the past, then one can require that he shall be still more godlike and regard himself rather as the reconciler than as the avenger; taking it that his aim is to achieve the understanding of the men and parties and causes of the past, and that in this understanding, if it can be complete, all things will ultimately be reconciled. It seems to be assumed that in history we can have something more than the private points of view of particular historians; that there are “verdicts of history” and that history itself, considered impersonally, has something to say to men. It seems to be accepted that each historian does something more than make a confession of his private mind and his whimsicalities, and that all of them are trying to elicit a truth, and perhaps combining through their various imperfections to express a truth, which, if we could perfectly attain it, would be the voice of History itself.

But if history is in this way something like the memory of mankind and represents the spirit of man brooding over man’s past, we must imagine it as working not to accentuate antagonisms or to ratify old party-cries but to find the unities that underlie the differences and to see all lives as part of the one web of life. The historian trying to feel his way towards this may be striving to be like a god but perhaps he is less foolish than the one who poses as god the avenger. Studying the quarrels of an ancient day he can at least seek to understand both parties to the struggle and he must want to understand them better than they understood themselves; watching them entangled in the net of time and circumstance he can take pity on them – these men who perhaps had no pity for one another; and, though he can never be perfect, it is difficult to see why he should aspire to anything less than taking these men and their quarrels into a world where everything is understood and all sins are forgiven.

— Herbert Butterfield, The Whig Interpretation of History

Comment author: Grif 02 February 2013 01:12:40AM *  24 points [-]

If someone doesn’t value evidence, what evidence are you going to provide that proves they should value evidence? If someone doesn’t value logic, what logical argument would you invoke to prove they should value logic?

--Sam Harris

Comment author: Qiaochu_Yuan 02 February 2013 03:39:43AM *  5 points [-]

Take all their stuff. Tell them that they have no evidence that it's theirs and no logical arguments that they should be allowed to keep it.

Comment author: fubarobfusco 02 February 2013 04:03:26AM 23 points [-]

They beat you up. People who haven't specialized in logic and evidence have not therefore been idle.

Comment author: Qiaochu_Yuan 02 February 2013 04:18:25AM 4 points [-]

Shoot them?

Comment author: gryffinp 02 February 2013 10:32:43AM 31 points [-]

I think you just independently invented the holy war.

Comment author: Nisan 02 February 2013 04:15:58AM 1 point [-]

You can find out what persuades them and give them that.

Comment author: James_Miller 02 February 2013 06:25:46AM 1 point [-]

And in some instances that would likely be what we call logic or evidence.

Comment author: ChristianKl 02 February 2013 04:47:25PM 2 points [-]

You usually can't get someone with a spider phobia to drop their phobia by trying to convince them with logic or evidence. On the other hand, there are psychological strategies to help them get rid of the phobia.

Comment author: Emily 02 February 2013 06:47:35PM 1 point [-]

I think cognitive behavioural therapy for phobias, which seems to work pretty well in a large number of cases, actually relies on helping people see that their fear is irrational.

Comment author: jooyous 02 February 2013 06:58:04PM 4 points [-]

As someone with a phobia, I can tell you from experience that realizing your fear is irrational doesn't actually make the fear go away. Sometimes it even makes you feel more guilty for having it in the first place. Realizing it's irrational just helps you develop coping strategies for acting normal when you're freaking out in public.

Comment author: Emily 02 February 2013 08:20:33PM 0 points [-]

Oh sure, I can definitely believe that. Maybe a better choice of wording above would have been "internalise" rather than "see", which would rather negate my point, I guess. Or maybe it works differently for some people. I don't have any experience with phobias or CBT myself.

Comment author: NancyLebovitz 03 February 2013 04:56:20PM 1 point [-]

It's alief vs. belief. It's one thing to see that, in theory, almost all spiders are harmless. It's another to remain calm in the presence of a spider if you've had a history of being terrified of them.

Desensitization is a process of teaching a person how to calm themselves, and then exposing them to things which are just a little like spiders (a picture of a cartoon spider, perhaps, or the word spider). When they can calm themselves around that, they're exposed to something a little more like a spider, and learn to be calm around that.

The alief system can learn, but it's not necessarily a verbal process.

Even when it is verbal, as when someone learns to identify various sorts of irrational thoughts, it's much slower than understanding an argument.

Comment author: Emily 03 February 2013 05:31:24PM 0 points [-]

Right; that's the "behavioural" part of cognitive behavioural therapy, right? But the "cognitive" part is an explicit, verbal process.

Comment author: Andreas_Giger 02 February 2013 04:29:35AM *  3 points [-]

Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they'll figure it out by themselves. No one really believes God will protect them from harm...

Comment author: DanArmak 02 February 2013 11:11:45AM *  5 points [-]

Sadly, that only works on a natural-selection basis, so the ethics boards forbid us from doing this. If they never see anyone actually failing to survive, they won't change their behavior.

Comment author: Andreas_Giger 02 February 2013 03:47:46PM *  3 points [-]

Can't make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.

Comment author: Swimmer963 02 February 2013 01:03:42PM 11 points [-]

No one really believes God will protect them from harm...

I have some friends who do... At least insofar as things like "I don't have to worry about finances because God is watching over me, so I won't bother trying to keep a balanced budget." Then again, being financially irresponsible (a behaviour I find extremely hard to understand and sympathize with) seems to be common-ish, and not just among people who think God will take care of their problems.

Comment author: Andreas_Giger 02 February 2013 03:45:27PM *  2 points [-]

I think that's mostly because money is too abstract, and as long as you get by you don't even realize what you've lost. Survival is much more real.

Comment author: MixedNuts 02 February 2013 04:44:53PM 2 points [-]

Why not? Thinking about money is work. It involves numbers.

Comment author: Kindly 02 February 2013 04:51:06PM 2 points [-]

Moreover, it often involves a great deal of stress. Small wonder that many people try to avoid that stress by just not thinking about how they spend money.

Comment author: [deleted] 02 February 2013 04:55:58PM 1 point [-]

Well... as something completely and obviously deterministic (the amount of money you have at the end of the month is the amount you had at the beginning of the month, plus the amount you've earned, minus the amount you've spent, for a sufficiently broad definition of “earn” and “spend”), that's about the last situation in which I'd expect people to rely on God. With stuff which is largely affected by factors you cannot control directly (e.g. your health) I would be much less surprised.

Comment author: CCC 02 February 2013 06:57:47PM 6 points [-]

Once you have those figures, it is deterministic; however, at the start of the month, those figures are not yet determined. One might win a small prize in a lottery; the price of some staple might unexpectedly increase or decrease; an aunt may or may not send an expensive gift; a minor traffic accident may or may not happen, requiring immediate expensive repairs.

So there are factors that you cannot control that affect your finances.

Comment author: Swimmer963 03 February 2013 01:24:18AM 0 points [-]

With stuff which is largely affected by factors you cannot control directly (e.g. your health) I would be much less surprised.

"Praying for healing" was quite a common occurrence at my friend's church. I didn't pick that as an example because it's a lot less straightforward. Praying for healing probably does appear to help sometimes (placebo effect), and it's hard enough for people who don't believe in God to be rational about health–there aren't just factors you cannot control, there are plenty of factors we don't understand.

Comment author: woodside 03 February 2013 07:59:05AM 2 points [-]

There hasn't been a lot of money spent researching it, but meta-analysis of the studies that have been conducted shows that on average there is no placebo effect.

Comment author: Swimmer963 03 February 2013 01:36:12PM 2 points [-]

That's really interesting...I had not heard that. Thanks for the info!

Comment author: bentarm 03 February 2013 08:30:45PM 3 points [-]

...that's about the last situation in which I'd expect people to rely on God

Does this cause you to doubt the veracity of the claim in the parent, or to update towards your model of what people rely on God for being wrong? I guess it should probably be both, to some extent. It's just not really clear from your post which you're doing.

Comment author: [deleted] 03 February 2013 11:41:23PM 1 point [-]

Mostly the latter, as per Hanlon's razor.

Comment author: ChristianKl 02 February 2013 04:47:34PM 1 point [-]

If you threaten someone's survival, they are likely to get emotional. That's not the best mental state for applying logic.

Suicide bombers don't suddenly start believing in reason just before they are sent out to kill themselves.

Soldiers in trenches who fear for their lives, on the other hand, do often start to pray. Maybe there are a few atheists in foxholes, but that state seems to promote religiousness.

Comment author: AspiringRationalist 04 February 2013 02:17:13AM 1 point [-]

Soldiers in trenches who fear for their lives, on the other hand, do often start to pray. Maybe there are a few atheists in foxholes, but that state seems to promote religiousness.

Does it promote religiousness or attract the religious?

Comment author: ChristianKl 02 February 2013 05:07:14PM 27 points [-]

You put them into a social environment where the high-status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing the amount that they value logic and evidence.

Comment author: aleksiL 03 February 2013 02:16:21PM 1 point [-]

How would this encourage them to actually value logic and evidence instead of just appearing to do so?

Comment author: HalMorris 03 February 2013 04:46:59PM 0 points [-]

Couple of attempts:

The hard sciences

Professions with a professional code of ethics, and consequences for violating it.

Comment author: HalMorris 03 February 2013 04:52:27PM 2 points [-]

Maybe the idea could gain popularity from a survival-island type reality program in which contestants have to measure the height of trees without climbing them, calculate the diameter of the earth, or demonstrate the existence of electrons (in order of increasing difficulty).

Comment author: ChristianKl 03 February 2013 10:00:31PM 2 points [-]

It's not a question of encouragement. Humans tend to want to be like the high-status folk that they look up to.

Comment author: aleksiL 04 February 2013 10:51:44AM 1 point [-]

Want to be like or appear to be like? I'm not convinced people can be relied on to make the distinction, much less choose the "correct" one.

Comment author: RichardKennaway 04 February 2013 01:43:54PM 4 points [-]

Want to be like or appear to be like?

Or do they want to be like those folks appear to be like?

Comment author: Omegaile 04 February 2013 02:14:24PM 6 points [-]

People tend to conform to their peers' values.

Comment author: BerryPick6 02 February 2013 05:28:41PM 2 points [-]

This is from the Sam Harris vs. William Lane Craig debate, starting around the 44 minute mark. IIRC, Luke's old website has a review of this particular debate.

Comment author: jooyous 02 February 2013 09:51:31PM *  11 points [-]

This reminds me of

You can't reason someone out of a position they didn't reason themselves into.

which I believe is a paraphrasing of something Jonathan Swift said, but I'm not sure. Anyone have the original?

Comment author: simplicio 04 February 2013 11:35:34PM 18 points [-]

You can't reason someone out of a position they didn't reason themselves into.

I don't think this is empirically true, though. Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.

Then you show me some statistics, and I change my mind.

In general, I think a supermajority of our starting opinions (priors, essentially) are held for reasons that would not pass muster as 'rational,' even if we were being generous with that word. This is partly because we have to internalize a lot of things in our youth and we can't afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn't mean we're incapable of having our minds changed.

Comment author: Nornagest 04 February 2013 11:59:20PM *  5 points [-]

Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words. Then you show me some statistics, and I change my mind.

The chance of this working depends greatly on how significant the contested fact is to your identity. You may be willing to believe abstractly that crime rates are down and public safety is up after being shown statistics to that effect -- but I predict that (for example) a parent who'd previously been worried about child abductions after hearing several highly publicized news stories, and who'd already adopted and vigorously defended childrearing policies consistent with this fear, would be much less likely to update their policies after seeing an analogous set of statistics.

Comment author: jooyous 05 February 2013 12:23:38AM *  2 points [-]

This is partly because we have to internalize a lot of things in our youth and we can't afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn't mean we're incapable of having our minds changed.

I agree, but I think part of the process of having your mind changed is the understanding that you came to believe those internalized things in a haphazard way. And you might be resisting that understanding because of the reasons @Nornagest mentions -- you've invested into them or incorporated them into your identity, for example. I think I'm more inclined to change the quote to

You can't expect to reason someone out of a position they didn't reason themselves into.

to make it slightly more useful in practice, because often changing the person's mind will require not only knowing the more accurate facts or proper reasoning, but also knowing why the person is attached to his old position -- and people generally don't reveal that until they're ready to change their mind on their own.

Oops, I guess I wasn't sure where to put this comment.

Comment author: Turgurth 03 February 2013 01:12:28AM 8 points [-]

If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.

Comment author: Rubix 02 February 2013 01:17:50AM 25 points [-]

"In any man who dies, there dies with him his first snow and kiss and fight. Not people die, but worlds die in them."

-Yevgeny Yevtushenko

Comment author: [deleted] 02 February 2013 01:18:28AM 15 points [-]

Judge a book by its cover. The author and publisher selected that design to represent the book's content and tone. #MoreSensibleSayings

ShittingtonUK

Comment author: CellBioGuy 02 February 2013 01:25:32AM 10 points [-]

No, they selected them to sell more copies by highjacking the easier-to-press buttons of your nervous system.

Comment author: Nic_Smith 02 February 2013 02:38:51AM *  3 points [-]

There's something to that, but it's not as if Varian's Microeconomic Analysis is going to have the cover of Spice and Wolf 1.

Comment author: Desrtopa 02 February 2013 02:31:22PM 8 points [-]

On the other hand, the method of judging a book's contents by its cover clearly has holes in it considering Spice and Wolf 1 has the cover of Spice and Wolf 1.

Comment author: HalMorris 03 February 2013 04:37:53PM 1 point [-]

Deliberate non sequitur alert: I'm often attracted to a cover that has holes in it. E.g. The Curious Incident of the Dog in the Night-Time.

Comment author: HalMorris 02 February 2013 02:49:20AM 3 points [-]

Probably purely true for some books, but as someone who buys thousands of books a year, my impression is they are very likely to reveal who they think their readers will be (hence a lot of covers say "stay away" to me), and just occasionally they can show a startling streak of originality. E.g. the board designs (there may be no dustjacket) on Dave Eggers' books are uniquely artistic in my opinion, and in this case since he has been seriously into graphics, I don't think it's any accident. You might think "Maybe this book is written by a bold and original person" and IMHO you'd be right. Also, the cover design of The Curious Incident of the Dog in the Night-Time by Mark Haddon kind of sent a message on my wavelength and it was not misleading (for me).

Comment author: sketerpot 02 February 2013 06:13:42AM 23 points [-]
Comment author: Kaj_Sotala 03 February 2013 10:20:44PM 25 points [-]

Authors are deliberately excluded from all this, on the grounds that they're so in love with what's inside the book that they don't understand what the cover stuff is for. Which is advertising.

The purpose of cover art is not to show the reader what's inside the book.

It's to get his attention from across the bookstore and get him to pick the book up in the first place.

Half-naked women and muscular barbarians are very good for getting teenaged readers to at least take a look. Black and red are good, too. And spiffy hardware, like spaceships. Cut-out covers, foil, blood, all that stuff--it gets attention, and the art and marketing people really don't give a damn whether it agrees with what's inside the book.

The cover gets you to pick up the book and read the blurbs; the blurbs are supposed to convince you to actually buy it. The blurb writer doesn't care any more about accuracy than the art director did; his job is to sell the book, period. One way to do that is to skim through the book and pick out all the most lurid details.

So all this is done without the author's interference. The author might put up a fuss about the half-naked women, since everyone in the story is ninety years old and wearing dirty bathrobes the whole time. The author might object to having his sentimental tale of old age cover-blurbed, "Shocking Love Secrets of the Ancients!" Who wants to waste time arguing with him? Better to shut him out and deliver the package as a fait accompli.

-- Lawrence Watt-Evans

Comment author: Andreas_Giger 02 February 2013 03:40:32PM 15 points [-]

You don't "judge" a book by its cover; you use the cover as additional evidence to more accurately predict what's in the book. Knowing what the publisher wants you to assume about the book is preferable to not knowing.

Comment author: [deleted] 02 February 2013 05:01:02PM *  7 points [-]

(Except when it's a novel and the text on the back cover spoils events from the middle of the book or later which I would have preferred to not read until the right time.)

Comment author: aleksiL 03 February 2013 02:23:57PM 5 points [-]

Spoilers matter less than you think.

Comment author: Kaj_Sotala 03 February 2013 10:16:24PM 18 points [-]

According to a single counter-intuitive (and therefore more likely to make headlines), unreplicated study.

Comment author: BerryPick6 03 February 2013 10:17:36PM *  9 points [-]

Gah! Spoiler!

Comment author: [deleted] 03 February 2013 11:47:51PM 3 points [-]

Those error bars look large enough that I could still be right about myself even without being a total freak.

Comment author: satt 04 February 2013 01:11:06AM 2 points [-]

Really? 11 of the 12 stories got rated higher when spoiled, which is decent evidence against the nil hypothesis (spoilers have zero effect on hedonic ratings) regardless of the error bars' size. Under the nil hypothesis, each story has a 50/50 chance of being rated higher when spoiled, giving a probability of (¹²C₁₁ × 0.5¹¹ × 0.5¹) + (¹²C₁₂ × 0.5¹² × 0.5⁰) = 0.0032 that ≥11 stories get a higher rating when spoiled. So the nil hypothesis gets rejected with a p-value of 0.0063 (the probability's doubled to make the test two-tailed), and presumably the results are still stronger evidence against a spoilers-are-bad hypothesis.
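The binomial arithmetic in this comment can be checked with a short script (an editorial sketch, not part of the original thread; `one_tailed` and `two_tailed` are names introduced here for illustration):

```python
from math import comb

# Under the nil hypothesis, each of the 12 stories is rated higher
# when spoiled with probability 0.5, independently of the others.
n = 12

def p_at_least(k, n=n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

one_tailed = p_at_least(11)   # >= 11 of 12 stories rated higher when spoiled
two_tailed = 2 * one_tailed   # doubled to make the sign test two-tailed

print(round(one_tailed, 4))  # 0.0032
print(round(two_tailed, 4))  # 0.0063
```

This reproduces both numbers in the comment: 0.0032 for the one-tailed probability and 0.0063 for the two-tailed p-value.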

This, of course, doesn't account for unseen confounders, inter-individual variation in hedonic spoiler effects, publication bias, or the sample (79% female and taken from "the psychology subject pool at the University of California, San Diego") being unrepresentative of people in general. So you're still not necessarily a total freak!

Comment author: Kindly 04 February 2013 02:56:54PM 0 points [-]

You can't just ignore the error bars like that. In 8 of the 12 cases, the error bars overlap, which means there's a decent chance that those comparisons could have gone either way, even assuming the sample mean is exactly correct. A spoilers-are-good hypothesis still has to bear the weight of this element of chance.

As a rough estimate: I'd say we can be sure that 4 stories are definitely better spoilered (>2 sd's apart); out of the ones 1..2 sd's apart, maybe 3 are actually better spoilered; and out of the remainder, they could've gone either way. So we have maybe 9 out of 12 stories that are better with spoilers, which gives a probability of 14.5% if we do the same two-tailed test on the same null hypothesis.
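The same two-tailed sign test applied to the eyeballed 9-of-12 split can be reproduced numerically (again an editorial sketch under the comment's own assumptions; `p_two_tail` is a name introduced here):

```python
from math import comb

n = 12
# Probability that at least 9 of 12 fair coin flips land the same way,
# i.e. P(X >= 9) for X ~ Binomial(12, 0.5).
p_one_tail = sum(comb(n, k) * 0.5**n for k in range(9, n + 1))
p_two_tail = 2 * p_one_tail  # two-tailed

print(round(p_two_tail, 3))  # 0.146, i.e. roughly the 14.5% quoted above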

I don't necessarily want you to trust the numbers above, because I basically eyeballed everything; however, it gives an idea of why error bars matter.

Comment author: satt 04 February 2013 10:56:40PM 1 point [-]

You can't just ignore the error bars like that.

Ignoring the error bars does throw away potentially useful information, and this does break the rules of Bayes Club. But this makes the test a conservative one (Wikipedia: "it has very general applicability but may lack the statistical power of other tests"), which just makes the rejection of the nil hypothesis all the more convincing.

In 8 of the 12 cases, the error bars overlap, which means there's a decent chance that those comparisons could have gone either way, even assuming the sample mean is exactly correct. A spoilers-are-good hypothesis still has to bear the weight of this element of chance.

If I'm interpreting this correctly, "the error bars overlap" means that the heights of two adjacent bars are within ≈2 standard errors of each other. In that case, overlapping error bars doesn't necessarily indicate a decent chance that the comparisons could go either way; a 2 std. error difference is quite a big one.

As a rough estimate: I'd say we can be sure that 4 stories are definitely better spoilered (>2 sd's apart); out of the ones 1..2 sd's apart, maybe 3 are actually better spoilered; and out of the remainder, they could've gone either way. So we have maybe 9 out of 12 stories that are better with spoilers, which gives a probability of 14.5% if we do the same two-tailed test on the same null hypothesis.

But this is an invalid application of the test. The sign test already allows for the possibility that each pairwise comparison can have the wrong sign. Making your own adjustments to the numbers before feeding them into the test is an overcorrection. (Indeed, if "we can be sure that 4 stories are definitely better spoilered", there's no need to statistically test the nil hypothesis because we already have definite evidence that it is false!)

I don't necessarily want you to trust the numbers above, because I basically eyeballed everything; however, it gives an idea of why error bars matter.

This reminds me of a nice advantage of the sign test. One needn't worry about squinting at error bars; it suffices to be able to see which of each pair of solid bars is longer!

Comment author: Kindly 04 February 2013 11:04:13PM 2 points [-]

Indeed, if "we can be sure that 4 stories are definitely better spoilered", there's no need to statistically test the nil hypothesis because we already have definite evidence that it is false!

Okay, if all you're testing is that "there exist stories for which spoilers make reading more fun" then yes, you're done at that point. As far as I'm concerned, it's obvious that such stories exist for either direction; the conclusion "spoilers are good" or "spoilers are bad" follows if one type of story dominates.

Comment author: [deleted] 04 February 2013 08:10:54PM *  3 points [-]

Yeah, given that study it doesn't seem likely that works are on average liked less when spoiled; but what I meant is that there are probably certain individuals who like works less when spoiled. (Imagine Alice said something to the effect that she prefers chocolate ice cream to vanilla ice cream, and Bob said that it's not actually the case that vanilla tastes worse than chocolate, citing a study in which for 11 out of 12 ice cream brands their vanilla ice cream is on average liked more than their chocolate ice cream -- though in most cases the difference between the averages is not much bigger than each standard deviation; even if the study was conducted among a demographic that does include Alice, that still wouldn't necessarily mean Alice is mistaken, lying, or particularly unusual, would it?)

Comment author: satt 04 February 2013 09:29:19PM *  1 point [-]

Just so. These are the sort of "inter-individual variation in hedonic spoiler effects" I had in mind earlier.

Edit: to elaborate a bit, it was the "error bars look large enough" bit of your earlier comment that triggered my sceptical "Really?" reaction. Apart from that bit I agree(d) with you!

Edit 2: aha, I probably did misunderstand you earlier. I originally interpreted your error bars comment as a comment on the statistical significance of the pairwise differences in bar length, but I guess you were actually ballparking the population standard deviation of spoiler effect from the sample size and the standard errors of the means.

Comment author: [deleted] 05 February 2013 04:56:34AM *  1 point [-]

These are the sort of "inter-individual variation in hedonic spoiler effects" I had in mind earlier.

Huh. For some reason I had read that as "intra-individual". Whatever happened to the "assume people are saying something reasonable" module in my brain?

 I guess you were actually ballparking the population standard deviation of spoiler effect from the sample size and the standard errors of the means.

Yep.

Comment author: [deleted] 02 February 2013 01:19:39AM 19 points [-]

Saw kid tryin' to catch a butterfly, got me wonderin why I didn't see a butterfly trying desperately to fly away from a kid

thefolksong

Comment author: woodside 03 February 2013 07:53:04AM *  9 points [-]

Because you're a human, not a butterfly. It seems like an animal that used a cognitive filter that defaulted to the latter case would take a pretty severe fitness hit.

Comment author: alex_zag_al 05 February 2013 01:54:15AM 6 points [-]

Don't good hunters have good mental models of their prey? I mean I get that you're thinking that it wouldn't help to feel sympathy for animals of other species. But it would help in many cases to have empathy, and to see things from the other animal's perspective.

Comment author: [deleted] 02 February 2013 01:20:29AM 11 points [-]

People's executive functioning is largely invisible to them, and perceived in moral terms to the extent that it is visible.

S. T. Rev

Comment author: MixedNuts 02 February 2013 05:03:00PM 0 points [-]

Do we know anything about executive function failures other than AD(H)D?

Comment author: [deleted] 02 February 2013 05:13:49PM 1 point [-]

In most cases 'executive dysfunction' covers the same territory as 'adult ADHD', but it can also be the outcome of some kinds of brain damage.

Comment author: Emily 02 February 2013 06:45:07PM 1 point [-]
Comment author: jsbennett86 02 February 2013 03:36:42AM *  31 points [-]

It seems that 32 Bostonians have simultaneously dropped dead in a ten-block radius for no apparent reason, and General Purcell wants to know if it was caused by a covert weapon. Of course, the military has been put in charge of the investigation and everything is hush-hush.

Without examining anything, Keyes takes about five seconds to surmise that the victims all died from malfunctioning pacemakers and the malfunction was definitely not due to a secret weapon. We're supposed to be impressed, but our experience with real scientists and engineers indicates that when they're on-the-record, top-notch scientists and engineers won't even speculate about the color of their socks without looking at their ankles. They have top-notch reputations because they're almost always right. They're almost always right because they keep their mouths shut until they've fully analyzed the data.

Insultingly Stupid Movie Physics' review of The Core

Comment author: jsbennett86 02 February 2013 03:37:42AM 11 points [-]

The remark included the following as a footnote:

Even top-notch engineers and scientists will speculate wildly when they're off-the-record. We define on-the-record as those times when their written or oral communications are likely to be taken seriously and directly attributed to the scientist or engineer making them. Surely answering a direct question posed by a general would fall into this category.

Comment author: Desrtopa 02 February 2013 02:26:07PM 4 points [-]

32 people in the same ten block radius simultaneously dying of malfunctioning pacemakers seems so tremendously unlikely, I can't imagine how one could even locate that as an explanation in a matter of seconds.

Comment author: jsbennett86 02 February 2013 10:57:46PM 3 points [-]

Also from the review:

A pacemaker malfunction isn't automatically fatal. In most cases the patient's heart will still beat, although with an abnormal rhythm. The severity of a pacemaker problem depends on the type of malfunction as well as the severity of the patient's condition. EM interference can cause problems, but major problems are rare considering the amount of EM interference pacemaker patients are exposed to. Pacemakers are designed to minimize these problems. It's hard to believe that dozens of pacemaker patients with various heart conditions and different makes and models of pacemakers would simultaneously die from microwave exposure.

Comment author: HalMorris 03 February 2013 04:53:06PM 1 point [-]

Unless the 32 people used the same, or very similar, pacemakers, and somebody forgot to say that.

Comment author: Desrtopa 04 February 2013 04:56:41PM 3 points [-]

Still sounds extremely unlikely. If a model of car has a particular design flaw, you'll expect to hear a lot of reports of that model suffering the same malfunction, but you wouldn't expect to hear that dozens of units within a certain radius suffered the same malfunction simultaneously. You'd need to subject them all to some sort of outside interference at the same time for that sort of occurrence to be plausible, and an event of that scale ought to leave evidence beyond its effect on all the pacemakers in the vicinity.

Comment author: jsbennett86 02 February 2013 03:45:22AM *  33 points [-]

On scientists trying to photograph an atom's shadow:

...the idea sounds stupid. But scientists don't care about sounding stupid, which is what makes them not stupid, and they did it anyway.

Luke McKinney - 6 Microscopic Images That Will Blow Your Mind

Comment author: Eliezer_Yudkowsky 02 February 2013 05:25:27AM 18 points [-]

Good things come to those who steal them.

-- Magnificent Sasquatch

Comment author: andreas 02 February 2013 05:42:44AM *  38 points [-]

"I design a cell to not fail and then assume it will and then ask the next 'what-if' questions," Sinnett said. "And then I design the batteries that if there is a failure of one cell it won't propagate to another. And then I assume that I am wrong and that it will propagate to another and then I design the enclosure and the redundancy of the equipment to assume that all the cells are involved and the airplane needs to be able to play through that."

Mike Sinnett, Boeing's 787 chief project engineer

Comment author: Nic_Smith 05 February 2013 02:47:59AM 4 points [-]

Isn't the point of the article that Boeing may not have actually done at least the first two steps (design cell not to fail, prevent failure of a cell from causing battery problems)?

I am confused.

Comment author: Eugine_Nier 02 February 2013 06:06:48AM 61 points [-]

It’s nice to elect the right people, but that’s not the way you solve things. The way you solve things is by making it politically profitable for the wrong people to do the right things.

-- Milton Friedman

Comment author: Multiheaded 04 February 2013 06:32:54PM *  14 points [-]

No one can be good for long if goodness is not in demand.

-- Bertold Brecht

(I'm always amused when people of opposite political views express similar thoughts on society.)


Also:

The aim of science is not to open the door to infinite wisdom, but to set some limit on infinite error.

Comment author: Eugine_Nier 05 February 2013 12:28:40AM 1 point [-]

I think the Brecht quote is somewhat misleading. The problem is not that not enough people want/demand goodness, the problem is that it is too easy to profit by cheating without getting caught.

Comment author: Eugine_Nier 02 February 2013 06:51:31AM 26 points [-]

[S]econd thoughts tend to be tentative, and people tend not to believe that they are being lied to. Their own fairmindedness makes them gullible. Upon hearing two versions of any story, the natural reaction of any casual listener is to assume both versions are slanted to favor their side, and that the truth is perhaps somewhere in the middle. So if I falsely accuse an innocent group of ten people of wrongdoing, the average bystander, if he later hears my false accusation disputed, will assume that five or six of the people are guilty, rather than assume I lied and admit that he was deceived.

-- John C. Wright

Comment author: [deleted] 02 February 2013 06:28:04PM *  20 points [-]

That reminds me of http://xkcd.com/690/.

Also:

If one group of editors were to say the Earth is flat and another group were to say it is round, it would not benefit Wikipedia for the groups to compromise and say the Earth is shaped like a calzone.

-- Raymond Arritt

(Quoting this before dinner is making me hungry.)

Comment author: HalMorris 03 February 2013 04:23:44PM 4 points [-]

Wikipedia may ultimately have to do one of two things, or both:

1) Provide better structure for alternate versions of contested ideas

2) Construct a practically effective demarcation between strictly factual domains, and anything more interpretive.

Such a demarcation will always be challenged; I don't see any way around that, but I'd also insist that it's necessary for our sanity. Suppose it were possible, perhaps using a browser with links to a database, to "brand" (or give an underwriter's seal of approval to) those pages that provide straightforward factual assertions, unretouched photographs, and scans of original source texts (such as all newspapers of which a copy still exists), and to promote the idea that the respectability of any interpretive or ethical claim consists very largely in its groundedness, shown by links into that "smells like a fact" zone.

Comment author: philh 02 February 2013 11:22:32AM *  47 points [-]

Men in Black on guessing the teacher's password:

Zed: You're all here because you are the best of the best. Marines, air force, navy SEALs, army rangers, NYPD. And we're looking for one of you. Just one.
[...]
Edwards: Maybe you already answered this, but, why exactly are we here?
Zed: [noticing a recruit raising his hand] Son?
Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We're here because you are looking for the best of the best of the best, sir! [throws Edwards a contemptuous glance]
[Edwards laughs]
Zed: What's so funny, Edwards?
Edwards: Boy, Captain America over here! "The best of the best of the best, sir!" "With honors." Yeah, he's just really excited and he has no clue why we're here. That's just, that's very funny to me.

Comment author: Eliezer_Yudkowsky 02 February 2013 01:03:41PM 7 points [-]

Heaven? They tried to recruit me, but I turned them down. My place is here in shadows, with the blood and the fear and the screams of the dying, standing back to back with my loves against the world.

-- Time Braid

Comment author: [deleted] 02 February 2013 04:46:25PM 3 points [-]

Anything that's ever said is really just a signpost leading towards a certain state of being.

Eckhart Tolle, as quoted by Owen Cook in The Blueprint Decoded

Comment author: [deleted] 03 February 2013 01:13:32AM 34 points [-]

Market exchange is a pathetically inadequate substitute for love, but it scales better.

S. T. Rev

Comment author: HalMorris 03 February 2013 04:01:55PM 7 points [-]

Joke: a tourist was driving around lost in the Irish countryside, among the one-lane roads and hill farms divided by ancient stone fences. He asks a sheep farmer how to get to Dublin, and the farmer replies:

"Well ... if I was going to Dublin, I wouldn't start from here."

Moral, as I see it anyway: While the heuristic "to get to Y, start from X instead of where you are" has some value (often cutting a hard problem into two simpler ones), ultimately we all must start from where we are.

Comment author: Grognor 03 February 2013 09:59:37PM *  37 points [-]

It is because a mirror has no commitment to any image that it can clearly and accurately reflect any image before it. The mind of a warrior is like a mirror in that it has no commitment to any outcome and is free to let form and purpose result on the spot, according to the situation.

—Yagyū Munenori, The Life-Giving Sword

Comment author: [deleted] 04 February 2013 01:55:07AM *  17 points [-]

Been making a game of looking for rationality quotes in the Super Bowl:

"It's only weird if it doesn't work" -- Bud Light commercial

Only a rationality quote out of context, though, since the ad is about superstitious rituals among sports fans. My automatic mental reply is "well, that doesn't work."

Comment author: Stabilizer 05 February 2013 01:17:36AM 5 points [-]

Clarity is the counterbalance of profound thoughts.

-Luc de Clapiers

Comment author: Stabilizer 05 February 2013 01:20:51AM 41 points [-]

Shipping is a feature. A really important feature. Your product must have it.

-Joel Spolsky

Comment author: fubarobfusco 05 February 2013 05:27:05AM 3 points [-]

If your service is down, it has no features.

Comment author: Eugine_Nier 05 February 2013 01:22:59AM 15 points [-]

Of a proposed course of action He wants men, so far as I can see, to ask very simple questions: Is it righteous? Is it prudent? Is it possible? Now if we can keep men asking "Is it in accordance with the general movement of our time? Is it progressive or reactionary? Is this the way that History is going?" they will neglect the relevant questions. And the questions they do ask are, of course, unanswerable; for they do not know the future, and what the future will be depends very largely on just those choices which they now invoke the future to help them to make.

-- Screwtape, The Screwtape Letters by C.S. Lewis