Happy New Year! Here's the latest and greatest installment of rationality quotes. Remember:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LessWrong or Overcoming Bias
  • No more than 5 quotes per person per monthly thread, please
Rationality Quotes January 2013
604 comments

"Ten thousand years' worth of sophistry doesn't vanish overnight," Margit observed dryly. "Every human culture had expended vast amounts of intellectual effort on the problem of coming to terms with death. Most religions had constructed elaborate lies about it, making it out to be something other than it was—though a few were dishonest about life, instead. But even most secular philosophies were warped by the need to pretend that death was for the best."

"It was the naturalistic fallacy at its most extreme—and its most transparent, but that didn't stop anyone. Since any child could tell you that death was meaningless, contingent, unjust, and abhorrent beyond words, it was a hallmark of sophistication to believe otherwise. Writers had consoled themselves for centuries with smug puritanical fables about immortals who'd long for death—who'd beg for death. It would have been too much to expect all those who were suddenly faced with the reality of its banishment to confess that they'd been whistling in the dark. And would-be moral philosophers—mostly those who'd experienced no greater inconvenience in their lives than a late train or a surly waiter—began wailing about the destruction of the human spirit by this hideous blight. We needed death and suffering, to put steel into our souls! Not horrible, horrible freedom and safety!"

-- Greg Egan, "Border Guards".

What does puzzle people – at least it used to puzzle me – is the fact that Christians regard faith… as a virtue. I used to ask how on Earth it can be a virtue – what is there moral or immoral about believing or not believing a set of statements? Obviously, I used to say, a sane man accepts or rejects any statement, not because he wants or does not want to, but because the evidence seems to him good or bad. If he were mistaken about the goodness or badness of the evidence, that would not mean he was a bad man, but only that he was not very clever. And if he thought the evidence bad but tried to force himself to believe in spite of it, that would be merely stupid…

What I did not see then – and a good many people do not see still – was this. I was assuming that if the human mind once accepts a thing as true it will automatically go on regarding it as true, until some real reason for reconsidering it turns up. In fact, I was assuming that the human mind is completely ruled by reason. But that is not so. For example, my reason is perfectly convinced by good evidence that anesthetics do not smother me and that properly trained surgeons do not start operating until I am unconscious. But

[...]

Now that I am a Christian, I do have moods in which the whole thing looks very improbable; but when I was an atheist, I had moods in which Christianity looked terribly probable...

Dear LWers: do you have these moods (let us gloss them as "extreme temporary loss of confidence in foundational beliefs"):

[pollid:377]

I have had "extreme temporary loss of foundational beliefs," where I briefly lost confidence in beliefs such as the nonexistence of fundamentally mental entities (I would describe this experience as "innate but long-dormant animist intuitions suddenly start shouting"), but I've never had a mood where Christianity or any other religion looked probable, because even when I had such an experience, I was never enticed to privilege the hypothesis of any particular religion or superstition.

I answered "sometimes" thinking of this as just Christianity, but I would have answered "very often" if I had read your gloss more carefully.

I'm not quite sure how to explicate this, as it's something I've never really thought much about and had generalized from one example to be universal. But my intuitions about what is probably true are extremely mood- and even fancy-dependent, although my evaluation of particular arguments and such seems to be comparatively stable. I can see positive and negative aspects to this.

6lavalamp
Oh whoops, I didn't read the parenthetical either. Not sure if it changes my answer.
9Qiaochu_Yuan
I am fascinated by all of the answers that are not "never," as this has never happened to me. If any of the answerers were atheists, could any of you briefly describe these experiences and what might have caused them? (I am expecting "psychedelic drugs," so I will be most surprised by experiences that are caused by anything else.)

Erm... when I was a lot younger, whenever I considered doing something wrong or told a lie, I had the vague feeling that someone was keeping tabs. Basically, when weighing utilities I greatly upped the probability that someone would somehow come to know of my wrongdoings, even when it was totally implausible. That "someone" was certainly not God or a dead ancestor or anything supernatural... it wasn't even necessarily an authority figure.

Basically, the superstition was that someone who knew me well would eventually come to find out about my wrongdoing, and one day they would confront me about it. And they'd be greatly disappointed or angry.

I'm ashamed to say that in the past I might actually have taken actions which I myself felt were immoral, if it were not for that superstitious feeling that my actions would be discovered by another individual. It's hard to say in retrospect whether the superstitious feeling was the factor that pulled me back from that edge.

Note that I never believed the superstition...it was more of a gut feeling.

I'm older now and am proud to say that I haven't given serious consideration to doing anything which I personally feel is immoral for a very, very long [...]

Occasionally the fundamental fact that all our inferences are provisional creeps me out. The realization that there's no way to actually ground my base belief that, say, I'm not a Boltzmann brain, combined with the fact that it's really quite absurd that anything exists rather than nothing at all (given that any cause we find just moves the problem outwards), is the closest thing I have to "doubting existence".

8[anonymous]
I have been diagnosed with depression in the past, so it's not terribly surprising to me that, if "My life is worth living" is considered a foundational belief, its confidence fades in and out quite a lot. In this case, the drugs would actually restore me back to a more normal level. Although, considering the frequency with which it is still happening, I may want to re-consult with my doctor. Saying "I have been diagnosed with mental health problems, and I'm on pills, but really, I still have some pretty bad mental health problems" pattern-matches rather well to "Perhaps I should ask my doctor about updating those pills."
9DaFranker
Yep. Medical professionals often err on the side of lesser dosage anyway, even for life-threatening stuff. After all, "we gave her medication but she died anyway, the disease was too strong" sounds like abstract, chance-and-force-of-nature-and-fate stuff, and like a statistic on a sheet of paper. "Doctor overdoses patient", on the other hand, is such a tasty scoop I'd immediately expect my grandmother to be gossiping about it and the doctor in question to be banned from medical practice for life, probably with their diplomas revoked. They also often take their guidelines from organizations like the FDA, which are renowned for explicitly delaying, for five years, medications that have a 1-in-10,000 side-effect mortality rate versus an 80% cure-and-survival rate for diseases that kill 10k+ annually (bogus example, but I'm sure someone more conscientious than me can find real numbers). Anyway, sorry for the possibly undesired tangent. It seems usually-optimal to keep returning to your doctor persistently as much as possible until medication really does take noticeable effect.
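To put rough numbers on that admittedly bogus example, here's a back-of-the-envelope sketch (the figures are the made-up ones above, not real data):

```python
# Made-up figures from the example above: a disease kills 10,000 people
# per year, the drug saves 80% of them, side-effect mortality is 1 in
# 10,000, and approval is delayed five years.
annual_deaths = 10_000
cure_rate = 0.80
side_effect_mortality = 1 / 10_000
delay_years = 5

# Deaths the five-year delay fails to prevent:
deaths_from_delay = annual_deaths * cure_rate * delay_years  # 40,000.0

# Side-effect deaths avoided by delaying, assuming ~10,000 treated per year:
deaths_avoided = annual_deaths * side_effect_mortality * delay_years  # 5.0

print(deaths_from_delay, deaths_avoided)
```

On those invented numbers, the delay trades roughly five side-effect deaths for forty thousand preventable ones.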
8GDC3
I put sometimes. I believe all kinds of crazy stuff and question everything when I'm lying in bed trying to fall asleep, most commonly that death will be an active and specific nothing that I will exist to experience and be bored, frightened, and upset by forever. Something deep in my brain believes a very specific horrible cosmology, as wacky and specific as any religion but not nearly as cheerful. When my faculties are weakened it feels as if I directly know it to be true, and any attempt to rehearse my reasons for materialism feels like rationalizing. I'm neither very mentally healthy nor very neurotypical, which may be part of why this happens.
7simplicio
Hasn't happened to me in years. Typically involved desperation about how some aspect of my life (only peripherally related to the beliefs in question, natch) was going very badly. Temptation to pray was involved. These urges really went away when I discovered that they were mainly caused by garden variety frustration + low blood sugar. I think that in my folly-filled youth, my brain discovered that "conversion" experiences (religious/political) are fun and very energizing. When I am really dejected, a small part of me says "Let's convert to something! Clearly your current beliefs are not inspiring you enough!"
6NoisyEmpire
My own response was “rarely”; had I answered when I was a Christian ten years ago, I would probably have said “sometimes”; had I answered as a Christian five years ago I might have said “often” or “very often” (eventually I allowed some of these moments of extreme uncertainty to become actual crises of faith and I changed my mind, though it happened in a very sloppy and roundabout way and had I had LessWrong at the time things could’ve been a lot easier.) And still, I can think of maybe two times in the past year when I suddenly got a terrifying sinking feeling that I have got everything horribly, totally wrong. Both instances were triggered whilst around family and friends who remain religious, and both had to do with being reminded of old arguments I used to use in defense of the Bible which I couldn’t remember, in the moment, having explicitly refuted. Neither of these moods was very important and both were combated in a matter of minutes. In retrospect, I’d guess that my brain was conflating fear of rejection-from-the-tribe-for-what-I-believe with fear of actually-being-wrong. Not psychedelic drugs, but apparently an adequate trigger nonetheless.
5MoreOn
I am firmly atheist right now, lounging in my mom's warm living room in a comfy armchair, tipity-typing on my keyboard. But when I go out to sea, alone, and the weather turns, a storm picks up, and I'm caught out after dark, and thanks to a rusty socket only one bow light works... well, then, I pray to every god I know starting with Poseidon, and sell my soul to the devil while at it. I'm not sure why I do it. Maybe that's what my brain does to occupy the excess processing time? In high school, when I still remembered it, I used to recite the litany against fear. But that's not quite it. When waves toss my little boat around and I ask myself why I'm praying, the answer invariably comes out: "It's never made things worse. So the Professor God isn't punishing me for my weakness. Who knows... maybe it will work? Even if not, prayer beats panic as a system idle process."
5Toddling
I answered Sometimes. For me the 'foundational belief' in question is usually along the lines: "Goal (x) is worth the effort of subgoal/process (y)." These moods usually last less than 6 months, and I have a hunch that they're hormonal in nature. I've yet to systematically gather data on the factors that seem most likely to be causing them, mostly because it doesn't seem worth the effort right now. Hah. Seriously, though, I have in fact been convinced that I need to work out a consistent utility function, but when I think about the work involved, I just... blah.
4Sarokrae
I'm a bit late here, but my response seems different enough from the others posted here to warrant replying! My brain is abysmally bad at storing trains of thought/deduction that lead to conclusions. It's very good at having exceptionally long trains of thought/deduction. It's quite good at storing the conclusions of my trains of thought, but only as cached thoughts and heuristics. It means that my brain is full of conclusions that I know I assign high probabilities to, but don't know why off the top of my head. My beliefs end up stored as a list of theorems in my head, with proofs left as an exercise to the reader. I occasionally double-check them, but it's a time-consuming process. If I'm having a not-very-mentally-agile day, can't off the top of my head re-prove the results I think I know, and a different result seems tempting, I basically get confused for a while until I re-figure out how to prove the result I know I've proven before. Basically, on some days past-me seems like a sufficiently different person that I no longer completely trust her judgement.
3Qiaochu_Yuan
Interesting. I've only had this experience in very restricted contexts, e.g. I noticed recently that I shouldn't trust my opinions on movies if the last time I saw them was more than several years ago because my taste in movies has changed substantially in those years.
4jooyous
Sometimes, I am extremely unconvinced of the utility of "knowing stuff" or "understanding stuff" when confronted with the inability to explain it to suffering people who seem like they want to stop suffering but refuse to consider the stuff that has the potential to help them stop suffering. =/
3Qiaochu_Yuan
Interesting. My confidence in my beliefs has never been tied to my ability to explain them to anyone, but then again I'm a mathematician-(in-training), so...
3jooyous
Well, it's not that I'm not confident that they're useful to me. They are! They help me make choices that make me happy. I'm just not confident in how useful pursuing them is in comparison to various utilitarian considerations of helping other people be not miserable. For example, suppose I could learn some more rationality tricks and start saving an extra $100 each month by some means, while in the meantime someone I know is depressed and miserable and seemingly asking for help. Instead of going to learn those rationality tricks to make an extra $100, I am tempted to sit with them and tell them all the ways I learned to manage my thoughts in order to not make myself miserable and depressed. And when this fails spectacularly, eating my time and energy, I am left inclined to do neither because that person is miserable and depressed and I'm powerless to help them so how useful is $100 really? Blah! So, to answer the question, this is the mood in which I question my belief in the usefulness of knowing and doing useful things. I am also a computer science/math person! high five
3Qiaochu_Yuan
Aren't useful things kind of useful to do kind of by definition? (I know this argument is often used to sneak in connotations, but I can't imagine that "is useful" is a sneaky connotation of "useful thing.") What you describe sounds to me like a failure to model your friend correctly. Most people cannot fix themselves given only instructions on how to do so, and what worked for you may not work for your friend. Even if it might, it is hard to motivate yourself to do things when you are miserable and depressed, and when you are miserable and depressed, hearing someone else say "here are all the ways you currently suck, and you should stop sucking in those ways" is not necessarily encouraging. In other words, "useful" is a two- or even three-place predicate.
-1MugaSofer
I should think most of them were. Of course, "foundational belief" is a subjective term.
7duckduckMOO
I put never, but "not anymore" would be more accurate.
3Endovior
This. Took a while to build that foundation, and a lot of contemplation in deciding what needed to be there... but once built, it's solid, and not given to reorganization on a whim. That's not because I'm closed-minded or anything; it's because stuff like a belief that the evidence provided by your own senses is valid really is kind of fundamental to believing anything else, at all. Not believing in that implies not believing in a whole host of other things, and develops into some really strange philosophies. As a philosophical position, this is called 'empiricism', and it's actually more fundamental than belief in only the physical world (i.e. disbelief in spiritual phenomena, 'materialism'), because you need a thing that says what evidence is considered valid before you have a thing that says 'and based on this evidence, I conclude'.
6Jay_Schweikert
I answered "rarely," but I should probably qualify that. I've been an atheist for about 5 years, and in the last 2 or 3, I don't recall ever seriously thinking that the basic, factual premises of Christianity were any more likely than Greek myths. But I have had several moments -- usually following some major personal failing of mine, or maybe in others close to me -- where the Christian idea of man-as-fallen living in a fallen world made sense to me, and where I found myself unconsciously groping for something like the Christian concept of grace. As I recall, in the first few years after my deconversion, this feeling sometimes led me to think more seriously about Christianity, and I even prayed a few times, just in case. In the past couple of years that hasn't happened; I understand more fully exactly why I'd have those feelings even without anything like the Christian God, and I've thought more seriously about how to address them without falling back on old habits. But certainly that experience has helped me understand what would motivate someone to either seek or hold onto Christianity, especially if they didn't have any training in Bayescraft.
2MaoShan
I thought the most truthful answer for me would be "Rarely", given all possible interpretations of the question. I think that it should have been qualified "within the past year", to eliminate the follies of truth-seeking in one's youth. Someone who answers "Never" cannot be considering when they were a five-year-old. I have believed or wanted to believe a lot of crazy things. Even right now, thinking as an atheist, I rarely have those moods, and only rarely, due to my recognized (and combated) tendency toward magical thinking. However, right now, thinking as a Christian, I would have doubts constantly, because no matter how much I would like to believe, it is plain to see that most of what I am expected to have faith in as a Christian is complete crap. I am capable of adopting either mode of thinking, as is anyone else here. We're just better at one mode than the other.
0FiftyTwo
I said very often, but I do have clinical depression, so it's not unexpected.
5Said Achmiz
Sounds like Lewis's confusion would have been substantially cleared up by distinguishing between belief and alief, and then he would not have had to perpetrate such abuses on commonly used words.

To be fair, the philosopher Tamar Gendler only coined the term in 2008.

3Jay_Schweikert
Upvoted. I actually had a remarkably similar experience reading Lewis. Throughout college I had been undergoing a gradual transformation from "real" Christian to liberal Protestant to deist, and I ended up reading Lewis because he seemed to be the only person I could find who was firmly committed to Christianity and yet seemed willing to discuss the kind of questions I was having. Reading Mere Christianity was basically the event that let me give Christianity/theism one last look over and say "well said, but that is enough for me to know it is time to move on."

Not long ago a couple across the aisle from me in a Quiet Car talked all the way from New York City to Boston, after two people had asked them to stop. After each reproach they would lower their voices for a while, but like a grade-school cafeteria after the lunch monitor has yelled for silence, the volume crept inexorably up again. It was soft but incessant, and against the background silence, as maddening as a dripping faucet at 3 a.m. All the way to Boston I debated whether it was bothering me enough to say something. As we approached our destination a professorial-looking man who’d spoken to them twice got up, walked back and stood over them. He turned out to be quite tall. He told them that they’d been extremely inconsiderate, and he’d had a much harder time getting his work done because of them.

“Sir,” the girl said, “I really don’t think we were bothering anyone else.”

“No,” I said, “you were really annoying.”

“Yes,” said the woman behind them.

“See,” the man explained gently, “this is how it works. I’m the one person who says something. But for everyone like me, there’s a whole car full of people who feel the same way.”

-- Tim Kreider, The Quiet Ones

"This is how it sometimes works", I would have said. Anything more starts to sound uncomfortably close to "the lurkers support me in email."

...but why wait until they'd almost gotten to Boston?

Perhaps because at that point, one is not faced with the prospect of spending several hours in close proximity to people with whom one has had an unpleasant social interaction.

Shmi160

No one wants to appear rude, of course. As this was almost the end of the ride, the person who rebuked them minimized the time he'd have to endure in the company of people who might consider him rude because of his admonishment, whether or not they agreed with him. I wonder if this is partly a cultural thing.

The passage states that he'd already spoken to them twice.

9Jotto999
I don't know the circumstances, but I would have tried to make eye contact and just blatantly stare at them for minutes straight, maybe even hamming it up with a look of slightly unhinged interest. They would have become more uncomfortable and might have started being anxious that a stranger was eavesdropping on them, causing them to want to be more discreet, depending on their disposition. I've actually tried this before, and it seems to sometimes work if they can see you staring at them. Give a subtle, slight grin, like you might be sexually turned on. If you won't see them again then it's worth a try.
6tut
Since this has got 22 upvotes I must ask: What makes this a rationality quote?

Every actual criticism of an idea/behaviour is likely to imply a much larger quantity of silent doubt/disapproval.

Sometimes, but you need to take into account what P(voices criticism | has criticism) is. Otherwise you'll constantly cave to vocal minorities (situations where the above probability is relatively large).
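To make that concrete, a toy sketch; the function and the numbers are illustrative assumptions, not anything from the thread:

```python
# Expected total number of critics implied by the complaints actually heard,
# given an estimate of P(voices criticism | has criticism).
def estimated_total_critics(voiced: int, p_voice_given_critic: float) -> float:
    return voiced / p_voice_given_critic

# Quiet-car case: most annoyed passengers stay silent (p = 0.05),
# so three complaints stand in for roughly sixty critics.
print(estimated_total_critics(3, 0.05))  # 60.0

# Vocal-minority case: nearly every critic speaks up (p = 0.9),
# so three complaints are nearly the whole of the disapproval.
print(estimated_total_critics(3, 0.90))  # ~3.3
```

The same three complaints imply wildly different amounts of silent disapproval depending on that conditional probability.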

6nabeelqu
I'd say it comes under the 'instrumental rationality' heading. The chatter was clearly bothering the writer, but - irrationally - neither he nor the others (bar one) actually got up and said anything.
2Toddling
You could argue that the silence of the author and the woman behind the couple is an example of the bystander effect.

You're better at talking than I am. When you talk, sometimes I get confused. My ideas of what's right and wrong get mixed up. That's why I'm bringing this. As soon as I start thinking it's all right to steal from our employees, I'm going to start hitting you with the stick.

later

If it makes you feel any better, I agree with your logic completely.

No, what would make me feel better is for you to stop hitting me!

--Freefall

In Japan, it is widely believed that you don't have direct knowledge of what other people are really thinking (and it's very presumptuous to assume otherwise), and so it is uncommon to describe other people's thoughts directly, such as "He likes ice cream" or "She's angry". Instead, it's far more common to see things like "I heard that he likes ice cream" or "It seems like/It appears to be the case that she is angry" or "She is showing signs of wanting to go to the park."

-- TVTropes

Edit (1/7): I have no particular reason to believe that this is literally true, but either way I think it holds an interesting rationality lesson. Feel free to substitute 'Zorblaxia' for 'Japan' above.

Interesting; is this true?

Yes, my Japanese teacher was very insistent about it, and IIRC would even take points off for talking about someone's mental state without the proper qualifiers.

4Vaniver
I think you're missing a word here :P
2beoShaffer
Fixed.
4Toddling
This is good to know, and makes me wonder whether there's a way to encourage this kind of thinking in other populations. My only thought so far has been "get yourself involved with the production of the most widely-used primary school language textbooks in your area." Thoughts?

It's not necessarily an advantageous habit. If a person tells you they like ice cream, and you've seen them eating ice cream regularly with every sign of enjoyment, you have as much evidence that they like ice cream as you have about countless other things that nobody bothers hanging qualifiers on even in Japanese. The sciences are full of things we can't experience directly but can still establish with high confidence.

Rather than teaching people to privilege other people's mental states as an unknowable quality, I think it makes more sense to encourage people to be aware of their degrees of certainty.

1Toddling
Increased awareness of degrees of certainty is more or less what I was thinking of encouraging. It hadn't occurred to me to look for a deeper motive and try to address it directly. This was helpful, thank you.
4ChristianKl
You can look at this way of thinking as a social convention. Japanese people often care about signaling respect with language. Someone who speaks directly about the mental state of another can be seen as presumptuous. High-status people in any social circle can influence its social customs. If people get put down for guessing others' mental states wrong without using qualifiers, they are likely to use qualifiers the next time. If you actually want to do this, E-Prime is an interesting option. E-Prime calls for tabooing the verb "to be". I've met a few people in NLP circles who valued communicating in E-Prime.
5roryokane
Specific source: Useful Notes: Japanese Language on TV Tropes
6Nornagest
TV Tropes is unreliable on Japanese culture. While it's fond of Japanese media, connection demographics show that Japanese editors are disproportionately rare (even after taking the language barrier into account); almost all the contributors to a page like that are likely to be language students or English-speaking Japanophiles, few of whom have any substantial experience with the language or culture in the wild. This introduces quite a bit of noise; for example, the site's had problems in the past with people reading meanings into Japanese words that don't exist or that are much more specific than they are in the wild. I don't know myself whether the ancestor is accurate, but it'd be wise to take it with a grain of salt.
2abody97
I have to say that's fairly stupid (I'm talking about the claim which the quote is making and generalizing over a whole population; I am not making an argumentum ad hominem here). I've seen many sorts of fascinated, mythical claims about how the Japanese think/communicate/have sex/you name it differently, and they're all ... well, purely mythical. Even if I, for the purposes of this argument, assume that beoShaffer is right about his/her Japanese teacher (and not just imagining things or bending traits to support his/her pre-defined belief), it's meaningless and does not validate the above claim. Just for the sake of illustration, the simplest explanation for such usages is some linguistic convention (which actually makes sense, since the page from which the quote is sourced is substantially talking about the Japanese language). Unless someone has some solid proof that it's actually related to thinking rather than some other social/linguistic convention, this is meaningless (and stupid).
3A1987dM
Agreed. Pop-Whorfianism is usually silly.
1Toddling
I'm not familiar with this term and your link did not clarify as much as I had hoped. Could you give a clearer definition?
7arborealhominid
Well, the Sapir-Whorf hypothesis is the idea that language shapes thought and/or culture, and Whorfianism is any school of thought based on this hypothesis. I assume pop-Whorfianism is just Whorfian speculation by people who aren't qualified in the field (and who tend to assume that the language/culture relationship is far more deterministic than it actually is).
0Toddling
Thanks.
4A1987dM
Just-so stories about the relationships between language and culture. (The worst thing is that, while just-so stories about evolutionary psychology are generally immediately identified as sexist/classist/*ist drivel, just-so stories about language tend to be taken seriously no matter how ludicrous they are.)
0Qiaochu_Yuan
I don't care whether it's actually true or not; either way it still holds an interesting rationality lesson and that's why I posted it.
5abody97
With all the respect that I'm generically required to give: I don't care whether you care or not. The argument I made addressed what you posted/quoted, not you as a person or your motives for posting.
-4A1987dM
I think the technical term for that is “bullshit”.
8Qiaochu_Yuan
That is a hugely unfair assessment of my motives (unlike abody97's comment which claims not to be about my motives, which I also doubt). People say untrue things all the time, e.g. when storytelling. The goal of storytelling is not to directly relate the truth of some particular experience, and I didn't think the goal of posting rationality quotes was either, considering how many quotes these posts get from various works of fiction. I posted this quote for no reason other than to suggest an interesting rationality lesson, and calling that "bullshit" sneaks in unnecessary connotations.
1A1987dM
Yes, but that quote is written in such a way that most readers¹ would assume it's true (or at least that the writer believes it's true); so it's not like storytelling. And most readers¹ would find it interesting because they'd think it's true; if I pulled some claim about $natural_language having $weird_feature directly out of my ass and concluded with "... Just kidding.", I doubt many people¹ would find it that interesting.

¹ OK, I admit I'm mostly Generalizing From One Example.
7Qiaochu_Yuan
Would you be satisfied if I edited the original post to read something like "note: I have no particular reason to believe that this is literally true, but I think it holds an interesting rationality lesson either way. Feel free to substitute 'Zorblaxians' for 'Japanese'"?
3A1987dM
Yes.
-3MugaSofer
Have you considered replacing it with "[country]" or similar, then noting at the bottom what page it came from?
1Qiaochu_Yuan
I added a link, but I would prefer to suggest a fake name over a generic name.

I think, actually, scientists should kinda look into that whole 'death' thing. Because, they seem to have focused on diseases... and I don't give a #*=& about them. The guys go, "Hey, we fixed your arthritis!" "Am I still gonna die?" "Yeah."

So that, I think, is the biggest problem. That's why I can't get behind politicians! They're always like, "Our biggest problem today is unemployment!" and I'm like "What about getting old and sick and dying?"

-- Norm MacDonald, Me Doing Stand Up

(a few verbal tics were removed by me; the censorship was already present in the version I heard)

Sympathetic, but ultimately, we die OF diseases. And the years we do have are more or less valuable depending on their quality.

Physicians should maximize QALYs, and extending lifespan is only one way to do it.
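To make the QALY arithmetic concrete, a toy comparison; the quality weights and year counts below are invented for illustration:

```python
# QALYs = years lived x quality weight (0 = dead, 1 = full health).
def qalys(years: float, quality: float) -> float:
    return years * quality

# Curing arthritis: 20 remaining years at quality 0.9 instead of 0.7.
arthritis_gain = qalys(20, 0.9) - qalys(20, 0.7)  # 4.0 QALYs

# A modest life extension: 3 extra years at quality 0.9.
lifespan_gain = qalys(3, 0.9)  # 2.7 QALYs

print(arthritis_gain, lifespan_gain)
```

On these made-up numbers the quality-of-life intervention beats the modest life extension, which is the point: added lifespan is only one input to the total.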

3ChristianKl
The question is whether that's a useful paradigm. Aubrey de Grey argues that it isn't.

I'd vote this up, but I can't shake the feeling that the author is setting up a false dichotomy. Living forever would be great, but living forever without arthritis would be even better. There's no reason why we shouldn't solve the easier problem first.

Sure there is. If you have two problems, one of which is substantially easier than the other, then you still might solve the harder problem first if 1) solving the easier problem won't help you solve the harder problem and 2) the harder problem is substantially more pressing. In other words, you need to take into account the opportunity cost of diverting some of your resources to solving the easier problem.

4Bugmaster
In general this is true, but I believe that in this particular case the reasoning doesn't apply. Solving problems like arthritis and cancer is essential for prolonging productive biological life. Granted, such solutions would cease to be useful once mind uploading is implemented. However, IMO mind uploading is so difficult -- and, therefore, so far in the future -- that, if we did choose to focus exclusively on it, we'd lose too many utilons to biological ailments. For the same reason, prolonging productive biological life now is still quite useful, because it would allow researchers to live longer, thus speeding up the pace of research that will eventually lead to uploading.
5[anonymous]
Using punctuation that is normally intended to match ({[]}) confused me. Use the !%#$ing other punctuation for that.
0roystgnr
Edited.

For the Greek philosophers, Greek was the language of reason. Aristotle's list of categories is squarely based on the categories of Greek grammar. This did not explicitly entail a claim that the Greek language was primary: it was simply a case of the identification of thought with its natural vehicle. Logos was thought, and Logos was speech. About the speech of barbarians little was known; hence, little was known about what it would be like to think in the language of barbarians. Although the Greeks were willing to admit that the Egyptians, for example, possessed a rich and venerable store of wisdom, they only knew this because someone had explained it to them in Greek.

— Umberto Eco, The Search for the Perfect Language

The women of this country learned long ago: those without swords can still die upon them.

-- Éowyn, explaining to Aragorn why she was skilled with a blade. The Lord of the Rings: The Two Towers, the 2002 movie.

0fburnaby
It's funny. I've seen that movie five times or so. But I watched it again a few days ago, and that line struck me, too. Never stood out before.

If you are an American perhaps it stood out this time because of all the recent discussion of gun control.

"I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive."

-- Randall Munroe, in http://what-if.xkcd.com/30/ (What-if xkcd, Interplanetary Cessna)

[anonymous]400

.

I'd have thought from observation that quite a lot of human club is just about discussing the rules of human club, excess meta and all. Philosophy in daily practice being best considered a cultural activity, something humans do to impress other humans.

9Emile
I dunno; "philosophy", at least, doesn't seem to be about discussing the rules of the human club, or maybe it's discussing a very specific part of the rules (but then, so is Maths!). Family gossip and stand-up comedy seem much closer to "discussing the rules of the human club".
4David_Gerard
Going meta is the quick win (in the social competition) for cultural discourse, though; cf. postmodernism.
1lavalamp
Most of this is done by people that don't understand the rules of human club...
1dspeyer
Do you know what he means by this? We spend a lot of time here discussing the rules of human club, and so far seem glad we have.
TimS320

Imagine the average high school clique. They would be very uncomfortable explicitly discussing the rules of the group - even as they enforced them ruthlessly. Further, the teachers, parents, and other adults who knew the students would be just as uncomfortable describing the rules of the clique.

In short, we are socially weird for being willing to discuss the social rules - that our discussion is an improvement doesn't mean it is statistically ordinary.

Ah, I see.

Human club has many rules. Some can be bent. Others can be broken.

5someonewrongonthenet
Well... only insofar as we discuss the social rules on LessWrong itself. No one, not even the high school clique, is uncomfortable with discussing social rules as generalities. I've seen school-age kids discuss these things as generalities quite enthusiastically when a teacher instigates the discussion. It's only when specific names and people are mentioned that the discussion can become dangerous for the speakers. But this is no more true for social rules than it is for other conversations of similar type - for instance, when pointing out flaws of people sitting around you. I don't think LW is immune to this (the use of the word "phyg" is a particularly salient, if usually tongue-in-cheek, example), although the anonymity and the fact that very few of us know each other personally do provide some level of protection.
2TimS
I suspect this population was not randomly selected. Otherwise, someone might have explained to me why nerds are unpopular at a time in my life when it might actually have been helpful.

I think the author is needlessly overcomplicating things.

1) People instinctively form tight-knit groups of friends with people they like. People they like usually means people who help them survive and raise offspring. This usually means socially adept, athletic, and attractive.

2) Having friends brings diminishing returns. The more friends a person has, the less they feel the need to make new friends. That's why the first day of school is vital.

3) Ill feelings develop between Sally and Bob. Sally talks to Susanne, and now they both bear ill feelings towards Bob. Thus, Bob has descended a rung in the dominance hierarchy.

4) Bob's vulnerability is a function of how many people Sally can find who will agree with her about him. As an extension of this principle, those with the fewest friends will get the most picked on. The bullies can be from both the popular and the unpopular crowd.

5) Factors leading to few friends - lack of social or athletic ability, conspicuous non-conformity via eccentric behavior, dress, or speech, low attractiveness, or misguided use of physical or verbal aggression.

By the power law, approximately 20% of the kids will be friends with 80% of the network. These are the popular [...]

0A1987dM
Something like this?

Some may think these trifling matters not worth minding or relating; but when they consider that tho' dust blown into the eyes of a single person, or into a single shop on a windy day, is but of small importance, yet the great number of the instances in a populous city, and its frequent repetitions give it weight and consequence, perhaps they will not censure very severely those who bestow some attention to affairs of this seemingly low nature. Human felicity is produc'd not so much by great pieces of good fortune that seldom happen, as by little advantages that occur every day.

--Benjamin Franklin

We cannot dismiss conscious analytic thinking by saying that heuristics will get a “close enough” answer 98 percent of the time, because the 2 percent of the instances where heuristics lead us seriously astray may be critical to our lives.

Keith E. Stanovich, What Intelligence Tests Miss: The Psychology of Rational Thought

2DanArmak
For instance, you need analytical thinking to design your heuristics. Let your heuristics build new heuristics and a 2% failure rate compounded will give you a 50% failure rate in a few tens of generations.
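A quick check of that arithmetic: if each generation of heuristic-building independently succeeds with probability 0.98, reliability halves once

\[
0.98^{\,n} = 0.5 \quad\Longrightarrow\quad n = \frac{\ln 0.5}{\ln 0.98} \approx 34.3,
\]

so "a few tens of generations" is about right.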

Possibly, but Stanovich thinks that most heuristics were basically given to us by evolution; rather than choosing among heuristics, what we do is decide whether to (use them and spend little energy on thinking) or (not use them and spend a lot of energy on thinking).

4crap
What is analytical thinking, but a sequence of steps of heuristics well vetted not to lead to contradictions?
2simplicio
A heuristic is a "rule of thumb," used because it is computationally cheap for a human brain and returns the right answer most of the time. Analytical thinking uses heuristics, but is distinctive in ALSO using propositional logic, probabilistic reasoning, and mathematics - in other words, exceptionless, normatively correct modes of reasoning (insofar as they are done well) that explicitly state their assumptions and "show the work." So there is a real qualitative difference.
2crap
Propositional logic is made of many very simple steps, though.
3simplicio
Sure. The point is that "A->B; A, therefore B" is necessarily valid. Unlike, say, "the risk of something happening is proportional to the number of times I've heard it mentioned." Calling logic a set of heuristics dissolves a useful semantic distinction between normatively correct reasoning and mere rules of thumb, even if you can put the two on a spectrum.
2crap
Ohh, I agree. I just don't think that there is a corresponding neurological distinction. (Original quote was about evolution).

I don't blame them; nor am I saying I wouldn't similarly manipulate the truth if I thought it would save lives, but I don't lie to myself. You keep two books, not no books. [Emphasis mine]

The Last Psychiatrist (http://thelastpsychiatrist.com/2010/10/how_not_to_prevent_military_su.html)

The dream is damned and dreamer too if dreaming's all that dreamers do.

--Rory Miller

5Document
Nice juxtaposition.
-2MugaSofer
Amusingly, I read this at first as referring to literal dreams.
[anonymous]350

Just because someone isn't into finding out The Secrets Of The Universe like me doesn't necessarily mean I can't be friends with them.

-Buttercup Dew (@NationalistPony)

5arborealhominid
Never in my life did I expect to find myself upvoting a comment quoting My Nationalist Pony.

He tells her that the earth is flat -
He knows the facts, and that is that.
In altercations fierce and long
She tries her best to prove him wrong.
But he has learned to argue well.
He calls her arguments unsound
And often asks her not to yell.
She cannot win. He stands his ground.
The planet goes on being round.

--Wendy Cope, "He Tells Her", from the series 'Differences of Opinion'

“To succeed in a domain that violates your intuitions, you need to be able to turn them off the way a pilot does when flying through clouds. Without visual cues (e.g. the horizon) you can't distinguish between gravity and acceleration. Which means if you're flying through clouds you can't tell what the attitude of the aircraft is. You could feel like you're flying straight and level while in fact you're descending in a spiral. The solution is to ignore what your body is telling you and listen only to your instruments. But it turns out to be very hard to ignore what your body is telling you. Every pilot knows about this problem and yet it is still a leading cause of accidents. You need to do what you know intellectually to be right, even though it feels wrong.”

-Paul Graham

What You Are Inside Only Matters Because of What It Makes You Do

David Wong, 6 Harsh Truths That Will Make You a Better Person, published on Cracked.com

This article greatly annoyed me because of how it tells people to do the correct practical things (Develop skills! Be persistent and grind! Help people!) yet gives atrocious and shallow reasons for them - and then Wong says that if people criticize him, they haven't heard the message. No, David, you can give people correct directions and still be a huge jerk promoting an awful worldview!

He basically shows NO understanding of what makes one attractive to people (especially romantically) and what gives you a feeling of self-worth and self-respect. What you "are" does in fact matter - both to yourself and to others! - outside of your actions; they just reveal and signal your qualities. If you don't do anything good, it's a sign of something being broken about you, but just mechanically bartering some product of your labour for friendship, affection and status cannot work - if your life is in a rut, it's because of some deeper issues and you've got to resolve those first and foremost.

This masochistic imperative to "Work harder and quit whining" might sound all serious and mature, but it does not in fact have the power to make you a "better person"; rather, you'll [...]

I've taken a crack at what's wrong with that article.

The problem is, there's so much wrong with it from so many different angles that it's rather a large topic.

3simplicio
Yep. As with most self-help advice, it is like an eyeglass prescription - only good for one specific pathology. It may correct one person's vision, while making another's massively worse. Also, I remember what it was like to be (mildly!) depressed, and my oh my would that article not have helped.
1Multiheaded
Yep :). I was doing a more charitable reading than the article really deserves, to be honest. It carried over from the method of political debate I am attempting these days - accept the opponent's premises (e.g. far-right ideas that they proudly call "thoughtcrime"), then show how either a modus-tollens inference from them is instrumentally/ethically preferable, or how they just have nothing to do with the opponent being an insufferable jerk. 100% true. I often shudder when I think how miserable I could've got if I hadn't watched this at a low point in my life.
5Omegaile
I think the only problem with the article is that it tries to other-optimize. It seems to address a problem that the author had, as some people do. He seems to overestimate the usefulness of his advice, though (he writes for anyone except if "your career is going great, you're thrilled with your life and you're happy with your relationships"). As mentioned by NancyLebovitz, the article is not for the clinically depressed; in fact it is only for a small (?) set of people who sit around all day whining, who think they deserve better for who they are, without actually trying to improve the situation. That said, this overgeneralization is a problem that permeates most self-help, and the article is not more guilty than the average.
8Multiheaded
I think I'll just quote the entirety of an angry comment on Nancy's blog. I basically can't help agreeing with the below. Although I don't think the article is entirely bad and worthless - there are a few commonplace yet forcefully asserted life instructions there, if that's your cup of tea - its downsides do outweigh its utility. What especially pisses me off is how Wong hijacks the ostensibly altruistic intent of it as an excuse to throw a load of aggression and condescending superiority in the intended audience's face, then offers an explanation of how feeling repulsed/hurt by that tone further confirms the reader's lower status. This is, like, a textbook example of self-gratification and cruel status play. Conclusion: a truth that's told with bad intent beats all the lies you can invent. And when you mix in some outright lies...
0NancyLebovitz
One of the comments at dreamwidth is by a therapist who said that being extremely vulnerable to shame is a distinct problem -- not everyone who's depressed has it, and not everyone who's shame-prone is depressed. Also, I didn't say clinically depressed. I'm in the mild-to-moderate category, and that sort of talk is bad for me.
1NancyLebovitz
Actually the article says enough different and somewhat contradictory things that it supports multiple readings, or to put it less charitably, it's contradictory in a way that leads people to pick the bits which are most emotionally salient to them and then get angry at each other for misreading the article. The title is "6 Harsh Truths That Will Improve Your Life"-- by implication, anyone's life. Then Wong says, "this will improve your life unless it's awesome in all respects". Then he pulls back to "this is directed at people with a particular false view of the universe".

My complaint about the article is that it has the same problem as most self-help advice. When you read it, it sounds intelligent, you nod your head, it makes sense. You might even think to yourself "Yeah, I'm going to really change now!"

But as everyone who's tried to improve himself knows, it's difficult to change your behavior (and thoughts) on a basis consistent enough to really make a long-lasting difference.

9ChristianKl
It's a misleading claim. Studies of how parents influence their kids generally conclude that the "being" of the parent is more important than what they specifically do with the kids. From the article: The author of the article doesn't seem to understand that there is such a thing as good listening. If a girl tells you about some problem in her life, it can be more effective to empathize with her than to go and solve the problem. If someone says "It's what's on the inside that matters!", a much better response would be to ask: what makes you think that your inside is so much better than the inside of other people?
3Said Achmiz
Could you explain this? Or link to info about such studies? (Or both?)
6ChristianKl
If a parent has low self-esteem, their child is also likely to have low self-esteem. The low-self-esteem parent might try to do a lot for his child in order to prove to himself that he's worthy. There's a drastic difference between a child observing "Mommy hugs me because she read in a book that good mothers hug their children and she wants to prove to herself that she's a good mother" and "Mommy hugs me because she loves me". On paper, the woman who spends a lot of energy doing the stuff that good mothers are supposed to do is doing more for her child than a mother who isn't investing that much energy because she's more secure in herself. Being secure in herself increases the chance that she will do the right things at the right time and signal her self-confidence to the child. A child who sees that her mother is self-confident then also has a reason to believe that everything is alright. As far as studies go, unfortunately I don't keep good records on what information I read from what sources :( (I would add that hugging is an example I use here to illustrate the point, rather than a reference to a specific study about hugging.)
6Said Achmiz
Yes... and studies show that this is largely due to genetic similarity, much less so to parenting style. Which still means that it boils down to what the mother does. The thing is, no one can see what you "are" except by what you do. Your argument seems to be "doing things for the right reason will lead to doing the actual right thing, instead of implementing some standard recommendation of what the right thing is". Granted. But the thing that matters is still the doing, not the being. "Being" is relevant only to the extent that it makes you do. Oh, and as for this: There's a third possibility: "Mommy doesn't hug me, but I know she loves me anyway". Sometimes that's worse than either of the other two.
2ChristianKl
What exactly do you mean by "matter"? If you want to define whether A matters for B, then it's central to look at whether changes in A that you can classify cause changes in B. When one speaks about doing, one frequently doesn't think about actions like raising one's heart rate by 5 bpm to signal that something created an emotional impact on yourself. If an attractive woman walks through the street and a guy sees her and gets attracted, you can say that the woman is doing something, because she's reflecting light in exactly the correct way to get the guy attracted. If you define "doing" that broadly, it's not a useful word anymore. The Cracked article from which I quoted doesn't seem to define "doing" that broadly. On the other hand, it's no problem to define "being" broadly enough to cover all "doing" as well.
-1MugaSofer
If there is no other method, then advising people to ignore changing what they are in favor of what they do is bad advice.
0Said Achmiz
I am having trouble parsing your comment. Could you elaborate? "no other method" of what? Also, who is advising people to ignore changing what they are...? And why is advising people to change what they do bad advice? Please do clarify, as at this point I am not sure whether, and on what, we are disagreeing.
-2MugaSofer
If "what you are" is the only/most effective way to change "what you do" (eg unconscious signalling) then the advice of the original article to focus on "what you do" is poor advice, even if it is technically correct that only what you do matters.
2Said Achmiz
"We are what we repeatedly do. Excellence, then, is not an act, but a habit." — Aristotle It goes both ways. And it's meaningless to speak of changing "what you are" if you do not, as a result, do anything different. I don't think the Cracked article, or I, ever said that the only way to change your actions is by changing some mysterious essence of your being. That's actually a rather silly notion, when it's stated explicitly, because it's self-defeating unless you ignore the observable evidence. That is, we can see that changing your actions by choosing to change your actions IS possible; people do it all the time. The conclusion, then, is that by choosing to change your actions, you have thereby changed this ineffable essence of "what you are", which then proceeds to affect what you do. If that's how it works, then worrying about whether you're changing what you are or only changing what you do is pointless; the two cannot be decoupled. And that's the point of the article, as I understand it. "What you are" may be a useful notion in your own internal narrative — it's "how the algorithm feels from the inside" (the algorithm in this case being your decisions to do what you do). But outside of your head, it's meaningless. Out in the world, there is no "what you are" beyond what you do.
-1MugaSofer
As was pointed out elsewhere in these comments, there are situations where changing "what you are" - for example, increasing your confidence levels - is more effective than trying to change your actions directly.
1Said Achmiz
Let's say that you don't do something that you want to do, because you're not confident enough. What is the difference between doing that thing, and improving your confidence which causes you to do that thing? What does it even mean to distinguish between those two cases? And if improving your confidence doesn't cause you to do the thing in question, then what's the point? Edit: On a reread, I might interpret you as saying that one might try (but fail) to change one's actions "directly", or one might attack the root cause, and having done so, succeed at changing one's actions thereby. If that's what you mean, then you're right. However the advice to "change what you do" should not, I think, be interpreted as saying "ignore the root causes of your inaction"; that is not a charitable reading. The author of the Cracked article isn't railing against people who want to do a thing, but can't (due to e.g. lack of confidence); rather, his targets are people who just don't think that they need to be doing anything, because "what they are" is somehow sufficient.
-1MugaSofer
Oh, I didn't realize that. You're right, that is a much more charitable reading.
3Paulovsk
Yep. I'm rather curious how parents can "be" something to children without doing, since it's supposed that children don't know their parents before their first contact (after birth, I mean).
1ChristianKl
I didn't say that they aren't doing anything. I said that identifying specific behaviors doesn't make a good predictor. Characteristics like high emotional intelligence are better predictors. Working on increased emotional intelligence and higher self-esteem would be work that changes "who you are". Taking steps to raise their own emotional intelligence might have a much higher effect than taking children to the museum to teach them about
1Omegaile
I think I have heard of such studies, but the conclusion is different. Who the parents are matters more than things like which school the kids go to, or which neighborhood they live in, etc. But in my view, that's only because being something (let's say, a sportsman) will make you do things that influence your kids to pursue a similar path.
4A1987dM
I wish my 17-year-old self had read that article.
4DanArmak
For Instance It Makes You Write With Odd Capitalization.
9dspeyer
It's probably a section title.
-2Paulovsk
It's a copywriting technique. It makes it easier for people to read (note how the subject lines of newsletter campaigns usually come like that). Don't ask me for the research; I have no idea where I've read it. In this case, I guess he just pasted the title here.
1A1987dM
So, is there any research done about this kind of stuff? All the discussions of this kind of thing I've seen on Wikipedia talk:Manual of Style and places like that appear to be based on people Generalizing From One Example.

I keep coming back to the essential problem that in our increasingly complex society, we are actually required to hold very firm opinions about highly complex matters that require analysis from multiple fields of expertise (economics, law, political science, engineering, others) in hugely complex systems where we must use our imperfect data to choose among possible outcomes that involve significant trade offs. This would be OK if we did not regard everyone who disagreed with us as an ignorant pinhead or vile evildoer whose sole motivation for disagreeing is their intrinsic idiocy, greed, or hatred for our essential freedoms/people not like themselves. Except that there actually are LOTS of ignorant pinheads and vile evildoers whose sole motivation etc., or whose self-interest is obvious to everyone but themselves.

osewalrus

I try to get around this by assuming that self-interest and malice, outside of a few exceptional cases, are evenly distributed across tribes, organizations, and political entities, and that when I find a particularly self-interested or malicious person that's evidence about their own personality rather than about tribal characteristics. This is almost certainly false and indeed requires not only bad priors but bad Bayesian inference, but I haven't yet found a way to use all but the narrowest and most obvious negative-valence concepts to predict group behavior without inviting more bias than I'd be preventing.

-3simplicio
A possible partial solution to this problem.

"Just because you no longer believe a lie, does not mean you now know the truth."

Mark Atwood

[-]Kindly260

Then for the first time it dawned on him that classing all drowthers together made no more sense than having a word for all animals that can't stand upright on two legs for more than a minute, or all animals with dry noses. What possible use could there be for such classifications? The word "drowther" didn't say anything about people except that they were not born in a Westil Family. "Drowther" meant "not us," and anything you said about drowthers beyond that was likely to be completely meaningless. They were not a "class" at all. They were just... people.

Orson Scott Card, The Lost Gate

3MixedNuts
As my math teacher always said,
0[anonymous]
Why is it repulsive...? I guess I don't get it. I mean sure, it's not a subspace... is that what they mean?
0MixedNuts
It's extremely inelegant, and finding yourself using one means you're running into a dead end.
2DanArmak
I don't see why a complement would be inelegant. It's just one extra bit of specification. Now, non-definable non-computable numbers (or sets), they are inelegant :-)

It is not an epistemological principle that one might as well hang for a sheep as for a lamb.

-Bas van Fraassen, The Scientific Image

9DanielLC
What does that mean?

Believing large lies is worse than small lies; basically, it's arguing against the What-The-Hell Effect as applied to rationality. Or so I presume, did not read original.

3A1987dM
I had noticed that effect myself, but I didn't know it had a name.
[-]PDH130

I had noticed it and mistakenly attributed it to the sunk cost fallacy but on reflection it's quite different from sunk costs. However, it was discovering and (as it turns out, incorrectly) generalising the sunk cost fallacy that alerted me to the effect and that genuinely helped me improve myself, so it's a happy mistake.

One thing that helped me was learning to fear the words 'might as well,' as in, 'I've already wasted most of the day so I might as well waste the rest of it,' or 'she'll never go out with me so I might as well not bother asking her,' and countless other examples. My way of dealing with it is to mock my own thought processes ('Yeah, things are really bad so let's make them even worse. Nice plan, genius') and switch to a more utilitarian way of thinking ('A small chance of success is better than none,' 'Let's try and squeeze as much utility out of this as possible' etc.).

I hadn't fully grasped the extent to which I was sabotaging my own life with that one, pernicious little error.

Lambs are young sheep; they have less meat & less wool.

The punishment for livestock rustling being identical no matter what animal is stolen, you should prefer to steal a sheep rather than a lamb.

2elspood
However, the parent says this is NOT an epistemological principle, that one should prefer to get the most benefit when choosing between equally-punished crimes. So is it saying that epistemology should not allow for equal punishments for unequal crimes? That seems less like epistemology and more like ethics. Should our epistemology simply not waste time judging which untrue things are more false than others because we shouldn't be believing false things anyway? It would be great if Jason would give us more context about this one, since the meaning doesn't seem clear without it.
0simplicio
I think Eliezer has got the meaning more or less right. When Daniel asked "what it meant," I assumed he was merely referring to the idiom, not the entire quote. As an example of the kind of thing I think the quote is warning against, the theist philosopher Plantinga holds (I'm paraphrasing somewhat uncharitably) that believing in the existence of other minds (i.e., believing that other people are conscious) requires a certain leap of faith which is not justified by empirical evidence. Therefore, theists are not any worse off than everybody else when they make the leap to a god.

The ideas of the Hasids are scientifically and morally wrong; the fashion, food and lifestyle are way stupid; but the community and family make me envious.

-- Penn Jillette

9MixedNuts
Disagree about the fashion.

I once heard a story about the original writer of the Superman Radio Series. He wanted a pay rise, his employers didn't want to give him one. He decided to end the series with Superman trapped at the bottom of a well, tied down with kryptonite and surrounded by a hundred thousand tanks (or something along these lines). It was a cliffhanger. He then made his salary demands. His employers refused and went round every writer in America, but nobody could work out how the original writer was planning to have Superman escape. Eventually the radio guys had to go back to him and meet his wage demands. The first show of the next series began "Having escaped from the well, Superman hurried to..." There's a lesson in there somewhere, but I've no idea what it is.

-http://writebadlywell.blogspot.com/2010/05/write-yourself-into-corner.html

I would argue that the lesson is that when something valuable is at stake, we should focus on the simplest available solutions to the puzzles we face, rather than on ways to demonstrate our intelligence to ourselves or others.

9Fronken
Story ... too awesome ... not to upvote ... not sure why it's rational, though.
7CronoDAS
Speaking of writing yourself into a corner... According to TV Tropes, there was one show, "Sledge Hammer", which ended its first season with the main character setting off a nuclear bomb while trying to defuse it. They didn't expect to be renewed for a second season, so when they were, they had a problem. This is what they did:

"Previously on Sledge Hammer: [scene of nuclear explosion] Tonight's episode takes place five years before that fateful explosion."
2Richard_Kennaway
I think this is an updating of the cliché from serial adventure stories for boys, where an instalment would end with a cliffhanger, the hero facing certain death. The following instalment would resolve the matter by saying "With one bound, Jack was free." Whether those exact words were ever written is unclear from Google, but it's a well-known form of lazy plotting. If it isn't already on TVTropes, now's your chance.
6Desrtopa
Did you just create that redlink? That's not the standard procedure for introducing new tropes, and if someone did do a writeup on it, it would probably end up getting deleted. New tropes are supposed to be introduced as proposals on the YKTTW (You Know That Thing Where) in order to build consensus that they're legitimate tropes that aren't already covered, and gather enough examples for a proper launch. You could add it as a proposal there, but the title is unlikely to fly under the current naming policy. Pages launched from cold starts occasionally stick around (my first page contribution from back when I was a newcomer and hadn't learned the ropes is still around despite my own attempts to get it cutlisted,) but bypassing the YKTTW is frowned upon if not actually forbidden.
3Richard_Kennaway
I didn't make any edits to TVTropes -- the page that it looks like I'm linking to doesn't actually exist. But I wasn't aware of YKTTW. ETA: Neither, apparently, is their 404 handler, which turns URLs for nonexistent pages into invitations to create them. As a troper yourself, maybe you could suggest to TVTropes that they change it?
3Desrtopa
If you're referring to what I think you are, that's more of a feature than a bug, since works pages don't need to go through the YKTTW. We get a lot more new works pages than new trope pages, so as long as the mechanics for creating either are the same, it helps to keep the process streamlined to avoid too much inconvenience.
0Nornagest
To be fair, that kind of flies in the face of standard wiki practice. Not Invented Here isn't defined in the main namespace, but the entire site probably counts as self-demonstration.
4Kindly
I believe that Cliffhanger Copout refers to the same thing. The Harlan Ellison example in particular is worth reading.
2CCC
Wouldn't that fall under "Cliffhanger Copout"?
2Eliezer Yudkowsky
There's so many different ways that story couldn't possibly be true... (EDIT: Ooh, turns out that the Superman Radio program was the one that pulled off the "Clan of the Fiery Cross" punch against the KKK.)

If your ends don’t justify the means, you’re working on the wrong project.

-Jobe Wilkins (Whateley Academy)

1Luke_A_Somers
... or going about it wrong.
[-]ygert240

I was rereading HP Lovecraft's The Call of Cthulhu lately, and the quote from the Necronomicon jumped out at me as a very good explanation of exactly why cryonics is such a good idea.

(Full disclosure: I myself have not signed up for cryonics. But I intend to sign up as soon as I can arrange to move to a place where it is available.)

The quote is simply this:

That is not dead which can eternal lie,

And with strange aeons even death may die.

8[anonymous]
.

Er... logical fallacy of fictional evidence, maybe? I wince every time somebody cites Terminator in a discussion of AI. It doesn't matter if the conclusion is right or wrong, I still wince because it's not a valid argument.

The original quote has nothing to do with life extension/immortality for humans. It just happens to be an argument for cryonics, and it seems to be a valid one: death as failure to preserve rather than cessation of activity, mortality as a problem rather than a fixed rule.

4[anonymous]
.
8A1987dM
RationalWiki is extremely sceptical of cryonics and still it has quoted that.
1Raemon
It featured prominently in last year's Solstice.
3[anonymous]
.
2Document
http://lesswrong.com/lw/1pq/rationality_quotes_february_2010/1js5 (Full disclosure: I myself don't intend to sign up for cryonics.)
1ygert
Huh... Before posting the quote I did try searching to see if it had already been posted before, but that didn't show up. Oh well.

"A stupid person can make only certain, limited types of errors. The mistakes open to a clever fellow are far broader. But to the one who knows how smart he is compared to everyone else, the possibilities for true idiocy are boundless."

-- Steven Brust, spoken by Vlad, in Iorich

3Shmi
Seems to describe well the founder of this forum. I wonder if this quote resonates with a certain personal experience of yours.

[O]ne may also focus on a single problem, which can appear in different guises in various disciplines, and vary the methods. An advantage of viewing the same problem through the lens of different models is that we can often begin to identify which features of the problem are enduring and which are artifacts of our particular methods or background assumptions. Because abstraction is a license for us to ignore information, looking at several approaches to modeling a problem can give you insight into what is important to keep and what is noise to ignore. Moreover, discovering robust features of a problem, when it happens, can reshape your intuitions.

— Gregory Wheeler, "Formal Epistemology"

[-]RobinZ100

Is there a concrete example of a problem approached thus?

6Sengachi
Viewing the interactions of photons as both a wave and a billiard ball. Both are wrong, but by seeing which traits remain constant in all models, we can project what traits the true model is likely to have.
3RobinZ
Does that work? I don't know enough physics to tell if that makes sense.
5Sengachi
It doesn't give you all the information you need, but that's how the problem was originally tackled. Scientists noticed that they had two contradictory models for light, which had a few overlapping characteristics. Those overlapping areas allowed them to start formulating new theories. Of course it took ridiculous amounts of work after that to figure out a reasonable approximation of reality, but one has to start somewhere.
-1CCC
I had a thought recently; considering reproduction in animals (for simplicity, let me assume mammals) via a programming metaphor. The DNA contributed by mother and father is the source code; the mother's womb is the compiler; and the baby is the executable code. The first thing that's noted is that there is a very good chance (around 50%) that the executable code will include its own compiler. And this immediately leads to the possibility that the compiler can slip in any little changes to the executable code that it wants; it can, in fact, elect to entirely ignore the father's input and simply clone itself. (It seems that it doesn't). Or, in other words, the DNA is quite possibly only a partial description of the resulting baby.

In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.

  • Mark Twain - Life on the Mississippi

(If you wonder where "two hundred and forty-two miles" shortening of the river came from, it was the straightening of its original meandering path to improve navigation)
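Twain's figures are internally consistent, for what it's worth; here is a quick sketch of his linear extrapolation in Python (the implied present-day length is my inference from the quote, not a figure Twain gives):

```python
# Twain's deliberately silly linear extrapolation, taken at face value.
years_observed = 176    # "the space of one hundred and seventy-six years"
miles_lost = 242        # "shortened itself two hundred and forty-two miles"

rate = miles_lost / years_observed   # 1.375 miles/year, "a trifle over one mile and a third"

# A million years back: assume the river has always lost length at the same rate.
length_gained_looking_back = 1_000_000 * rate   # 1,375,000 miles,
# i.e. "upwards of one million three hundred thousand miles"

# Forward: if the river will be 1.75 miles long in 742 years, the implied
# present length is (a hypothetical back-calculation, not Twain's number):
implied_present_length = 1.75 + 742 * rate      # 1022.0 miles

print(rate, length_gained_looking_back, implied_present_length)
```

The joke, of course, is that fitting a straight line to 176 years of data and extrapolating a million years in either direction is precisely the "wholesale returns of conjecture" Twain is mocking.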

4A1987dM
http://xkcd.com/605/

Many of our most serious conflicts are conflicts within ourselves. Those who suppose their judgements are always consistent are unreflective or dogmatic.

-- John Rawls, Justice as Fairness: A Restatement.

If you ever decide that your life is not too high a price to pay for saving the universe, let me know. We'll be ready.

-- Kyubey (Puella Magi Madoka Magica)

7Eliezer Yudkowsky
For you, I'll walk this endless maze...
2Sengachi
The only ones to love a martyr's actions are those who did not love them.
3MixedNuts
Yeah, if the English language had any words for feelings that aren't hopelessly vague, we wouldn't have those silly arguments about catchy proverbs.
7Eugine_Nier
I suspect this is because of large psychological differences between humans. Specifically, not all humans experience all feelings; thus when a human hears a word referring to a feeling he hasn't experienced he assumes it refers to the closest feeling that he has.
0MixedNuts
Other theories: people just don't introspect much, people like being vague because "I love you" pleases someone who wants to hear "I want to work to make you happy" when you mean "I have lots of fun on dates with you", people before the advent of self-help books had a taboo against discussing (and generally expressing) feelings qua feelings and used preference-revealing actions instead.
-4MugaSofer
I'm given to understand that English is unusually bad in this regard; that would seem to screen out standard human variance unless English-speakers are unusually varied. Also, I don't think it's generalizing from one example; humans demonstrate pretty standard emotions (and facial expressions, for that matter) AFAICT. (Also, there's The Psychological Unity of Mankind. We don't want to overgeneralize, sure, but evidence regarding one human mind is, in fact, evidence regarding all of them. It's far from overwhelming evidence, but still.)
3wedrifid
That isn't true. If I love someone and they martyr themselves (literally or figuratively) in a way that is the unambiguously and overwhelmingly optimal way to fulfill both their volition and my own then I will love the martyr's actions. If you say I do not love the martyr or do not love their actions due to some generalization then you are just wrong.
3TheOtherDave
Agreed... but also, this gets complicated because of the role of external constraints. I can love someone, "love" what they do in the context of the environment in which they did it (I put love here in scare quotes because I'm not sure I mean the same thing by it when applied to an action, but it's close enough for casual conversation), and hate the fact that they were in such an environment to begin with, and if so my feelings about it can easily get confused.
1Sengachi
Ding, rationalist level up! Unfortunately, most people don't view things this way. I figured that so long as we were discussing a show based on how humans try to rationalize away and fight against the truly rational optimum, I might as well throw out a comment on how such people react to truly rational optimizers (martyrs).
1MarkusRamikin
I'm not sure in what way this is about rationality. Can someone please explain? (And yes, I've seen PM and I do remember that line).
2Qiaochu_Yuan
I had scope insensitivity in mind. The universe is pretty big and one person's life is pretty small.
0MarkusRamikin
I see, thanks. Of course Kyubey never reveals how much saving-of-the-universe Madoka's life would pay for exactly. It's not just her life (and suffering) they want, but all the MGs in history, past and future, for an unspecified extension of the Universe's lifespan...
0earthwormchuck163
Also, Kyubey clearly has pretty drastically different values from people, and thus his notion of saving the universe is probably not quite right for us.

I guess my point here is that part of the reason I stayed in Mormonism so long was that the people arguing against Mormonism were using such ridiculously bad arguments. I tried to find the most rigorous reasoning and the strongest research that opposed LDS theology, but the best they could come up with was stuff like horses in the Book of Mormon. It's so easy for a Latter-Day Saint to simply write the horse references off as either a slight mistranslation or a gap in current scientific knowledge that that kind of "evidence" wasn't worth the time of day to me. And for every horse problem there was something like Hugh Nibley's "Two Shots in the Dark" or Eugene England's work on Lehi's alleged travels across Saudi Arabia, apologetic works that made Mormon historical and theological claims look vaguely plausible. There were bright, thoughtful people on both sides of the Mormon apologetics divide, but the average IQ was definitely a couple of dozen points higher in the Mormon camp.

http://www.exmormon.org/whylft18.htm

5Bakkot
This is part of why it's important to fight against all bad arguments everywhere, not just bad arguments on the other side.
1John_Maxwell
Another interpretation: Try to figure out which side has more intelligent defenders and control for that when evaluating arguments. (On the other hand, the fact that all the smart people seem to believe X should probably be seen as evidence too...) Yes, argument screens off authority, but that assumes that you're in a universe where it's possible to know everything and think of everything, I suspect. If one side is much more creative about coming up with clever arguments in support of itself (much better than you), who should you believe if the clever side also has all the best arguments?
6Wei Dai
Isn't the real problem here that the author of the quote was asking the wrong question, namely "Mormonism or non-Mormon Christianity?" when he should have been asking "Theism or atheism?" I don't see how controlling for which side had the more intelligent defenders in the former debate would have helped him better get to the truth. (I mean that may well be the right thing to do in general, but this doesn't seem to be a very good example for illustrating it.)
0blashimov
That may be too much to ask for. Besides, if the horse evidence had worked, you'd be forced to turn around and apply it to Jesus...it may not have worked for her, but it has worked on some theists.
0peuddO
That's just not very correct. There are no external errors in measuring probability, seeing as the unit and measure comes from internal processes. Errors in perceptions of reality and errors in evaluating the strength of an argument will invariably come from oneself, or alternatively from ambiguity in the argument itself (which would make it a worse argument anyway). Intelligent people do make bad ideas seem more believable and stupid people do make good ideas seem less believable, but you can still expect the intelligent people to be right more often. Otherwise, what you're describing as intelligence... ain't. That doesn't mean you should believe something just because a smart person said it - just that you shouldn't believe it less. It's going back to the entire reverse stupidity thing. Trying to make yourself unbiased by compensating in the opposite direction doesn't remove the bias - you're still adjusting from the baseline it's established. On a similar note, I may just have given you an uncharitable reading and assumed you meant something you didn't. Such a misunderstanding won't adjust the truth of what I'm saying about what I'd be reading into your words, and it won't adjust the truth of what you were actually trying to say. Even if there's a bias on my part, it skews perception rather than reality.

"How is it possible! How is it possible to produce such a thing!" he repeated, increasing the pressure on my skull, until it grew painful, but I didn't dare object. "These knobs, holes...cauliflowers -" with an iron finger he poked my nose and ears - "and this is supposed to be an intelligent creature? For shame! For shame, I say!! What use is a Nature that after four billion years comes up with THIS?!"

Here he gave my head a shove, so that it wobbled and I saw stars.

"Give me one, just one billion years, and you'll see what I create!"

  • Stanislaw Lem, "The Sanatorium of Dr. Vliperdius" (trans. Michael Kandel)
[-][anonymous]190

.

"Two roads diverged in a wood. I took the one less traveled by, and had to eat bugs until the park rangers rescued me."

Two roads diverged in a wood. I took the one less traveled by, and I got to eat bugs until the park rangers kicked me out.

4Risto_Saarelma
Duplicate.

Wasn't that poem sarcastic anyway? Until the last stanza, the poem says how the roads were really identical in all particulars -- and in the last stanza the narrator admits that he will be describing this choice falsely in the future.

1gjm
That's not how I read it. There's no particular difference between the two roads, so far as Frost can tell at the point of divergence, but they're still different roads and lead by different routes to different places, and he expects that years from now he'll look back and see (or guess?) that it did indeed make a big difference which one he took.
6RobinZ
He expects that years from now he'll look back and claim it made a big difference. He says he will also claim it was the road less traveled by, for all that the passing there had worn them the same and so forth. It may well make a big difference, but of what nature no-one knows.
3gjm
Yes, I understand that what the poem says is that he'll say it made a big difference, rather than that it did. But there's a difference between not saying it's a real difference and saying it's not a real difference; it seems to me that the former is what Frost does and the latter is what ArisKatsaris says he does. (My reading of the to-ing and fro-ing about whether the road is "less traveled by" is that the speaker's initial impression is that one path is somewhat untrodden, and he picks it on those grounds; on reflection he's not sure it's actually any less trodden than the other, but his intention was to take the less-travelled road; and later on he expects to adopt the shorthand of saying that he picked the less-travelled one. I don't see that any actual dishonesty is intended, though perhaps a certain lack of precision.)
[-]GLaDOS180

While truths last forever, taboos against them can last for centuries.

--"Sid" a commenter from HalfSigma's blog

[-][anonymous]170

Whenever you can, count.

--Sir Francis Galton

I intend to live forever or die trying

-- Groucho Marx

8DanielLC
I'm not sure that's great advice. It will result in you trying to try to live forever. The only way to live forever or die trying is to intend to live forever.
1sketerpot
How to apply that, though? I could try not to try to try to live forever, but that sounds equivalent to trying to merely try to live forever. And now "try" has stopped sounding like a real word, which makes my misfortune even more trying.
1DanielLC
Live forever. Details are in the linked post.
[-][anonymous]160

Person 1: "I don't understand how my brain works. But my brain is what I rely on to understand how things work." Person 2: "Is that a problem?" Person 1: "I'm not sure how to tell."

-Today's xkcd

I have always had an animal fear of death, a fate I rank second only to having to sit through a rock concert. My wife tries to be consoling about mortality and assures me that death is a natural part of life, and that we all die sooner or later. Oddly this news, whispered into my ear at 3 a.m., causes me to leap screaming from the bed, snap on every light in the house and play my recording of “The Stars and Stripes Forever” at top volume till the sun comes up.

-Woody Allen EDIT: Fixed formatting.

0DaFranker
FWIW, it seems like whatever is parsing the markdown in these comments, whenever it sees a ">" for a quote at the beginning of a paragraph it'll keep reading until the next paragraph break, i.e. double-whitespace at the end of a line or two linebreaks.
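If I understand the behavior being described, it's standard Markdown "lazy continuation": a blockquote keeps absorbing subsequent lines until a blank line ends the paragraph. A plain-text illustration (not tested against LessWrong's particular parser):

```
> This line starts a quote.
This line has no ">" but still renders inside the quote,
because no blank line has ended the paragraph yet.

This line comes after a blank line, so it renders outside the quote.
```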
0MugaSofer
Formatting is broken. Great quote, though.
2ygert
The irony of it... (Although your formatting is less broken, as your only mistake was missing out a single period.)
0MugaSofer
Figures.
[-]GLaDOS140

I notice with some amusement, both in America and English literature, the rise of a new kind of bigotry. Bigotry does not consist in a man being convinced he is right; that is not bigotry, but sanity. Bigotry consists in a man being convinced that another man must be wrong in everything, because he is wrong in a particular belief; that he must be wrong, even in thinking that he honestly believes he is right.

-G. K. Chesterton

Suppose you've been surreptitiously doing me good deeds for months. If I "thank my lucky stars" when it is really you I should be thanking, it would misrepresent the situation to say that I believe in you and am grateful to you. Maybe I am a fool to say in my heart that it is only my lucky stars that I should thank—saying, in other words, that there is nobody to thank—but that is what I believe; there is no intentional object in this case to be identified as you.

Suppose instead that I was convinced that I did have a secret helper but that it wasn't you—it was Cameron Diaz. As I penned my thank-you notes to her, and thought lovingly about her, and marveled at her generosity to me, it would surely be misleading to say that you were the object of my gratitude, even though you were in fact the one who did the deeds that I am so grateful for. And then suppose I gradually began to suspect that I had been ignorant and mistaken, and eventually came to the correct realization that you were indeed the proper recipient of my gratitude. Wouldn't it be strange for me to put it this way: "Now I understand: you are Cameron Diaz!"

--Daniel Dennett, Breaking the Spell (discussing the differences between the "intentional object" of a belief and the thing-in-the-world inspiring that belief)

1MugaSofer
He's talking about God here, right?
1Jay_Schweikert
In large part, yes. This passage is in Dennett's chapter on "Belief in Belief," and he has an aside on the next page describing how to "turn an atheist into a theist by just fooling around with words" -- namely, that "if 'God' were just the name of whatever it is that produced all creatures great and small, then God might turn out to be the process of evolution by natural selection." But I think there's also a more general rationality point about keeping track of the map-territory distinction when it comes to abstract concepts, and about ensuring that we're not confusing ourselves or others by how we use words.
0DaFranker
Besides, none of it passes an ideological Turing test with an overwhelming majority of God-believers. I tried it.
1Jay_Schweikert
Sorry, can you clarify what you mean here? None of what passes an ideological Turing test? Are you saying something like "theists erroneously conclude that the proponents of evolution must believe in God because evolutionists believe that evolution is what produced all creatures great and small"? What exactly is the mistake that theists make on this point that would lead them to fail the ideological Turing test? Or, did I misunderstand you, and are you saying that people like Dennett fail the ideological Turing test with theists?
1DaFranker
Oh, sorry. This, specifically, almost never passes an i-Turing test IME. I've been called a "sick scientologist" (I assume they didn't know what "Scientology" really is) on account of the claim that if there is a "God", it's the process by which evolution or physics happens to work in our world. Likewise, if I understand what Dennett is saying correctly, the things he's saying are not accepted by God-believers, namely that God could be any sort of metaphor or anthropomorphic representation of natural processes, or of the universe and its inner workings, or "fate" in the sense that "fate" and "free will" are generally understood (i.e. the dissolved explanation) by LWers, or some unknown abstract Great Arbiter of Chance and Probability. (I piled in some of my own attempts in there, but all of the above was rejected time and time again in discussion with untrained theists, down to a single exception who converted to a theology-science hybrid later on and then, last I heard, doesn't really care about theological issues anymore because they seem to have realized that it makes no difference and intuitively dissolved their questions. Discussions with people who have thoroughly studied formal theology usually fare slightly better, but they also have a much larger castle of anti-epistemology to break down.)
2Jay_Schweikert
Ah, okay, thanks for clarifying. In case my initial reply to MugaSofer was misleading, Dennett doesn't really seem to be suggesting here that this is really what most theists believe, or that many theists would try to convert atheists with this tactic. It's more just a tongue-in-cheek example of what happens when you lose track of what concept a particular group of syllables is supposed to point at. But I think there are a great many people who purport to believe in "God," whose concept of God really is quite close to something like the "anthropomorphic representation of natural processes, or of the universe and its inner workings." Probably not for those who identify with a particular religion, but most of the "spiritual but not religious" types seem to have something like this in mind. Indeed, I've had quite a few conversations where it became clear that someone couldn't tell me the difference between a universe where "God exists" and where "God doesn't exist."
0DaFranker
Regarding the second paragraph, I agree with your estimate that most "spiritual but not religious" people might think this way. I was, at some point in the past, exactly there in belief-space - I identified as "spiritual but not religious" explicitly, and explicitly held beliefs along those lines (minus the "anthropomorphic" part, keeping only the "mental" or "thinking" part of the anthropomorphism for some reason). When I later realized that there was no tangible difference and no existing experiment that could tell me whether it was true, I kind of stopped caring, and eventually the questions dissipated on their own, though I couldn't tell exactly why at the time. When I found LW and read the sequences, I figured out what had happened, which was fun, but the real crisis of faith (if you can call it that - I never was religious to begin with, only "spiritual") had happened long before then. People I see who call themselves "spiritual but not religious" and also know some science seem to behave in very similar manners to how I did back then, so I think it makes sense to assume a significant amount of them believe something like this.
-2MugaSofer
For the record, I didn't interpret your comment that way.
0ikrase
Before I became a rationalist, I believed that there was no god, but there were souls, and they manifested through making quantum randomness nonrandom.
0[anonymous]
Which part(s) of that set of beliefs did becoming a rationalist cause you to change?
1ikrase
This was long before Less Wrong. I realized that lower-level discussions of free will were kind of pointless. I abandoned the eternal-springing hope that souls and psychic powers (Hey! Look! For some reason, all of the air molecules just happened to be moving upward at the same time! It seems like this guy is a magnet for one-in-10^^^^^^^10 thermodynamic occurrences! And they always help him!) could exist. I fully accepted the physical universe.
1somervta
I get the concept of hyperbole, but this (the 10^^^^^^^10 figure) is ludicrously too far.
0ikrase
It's two tens with six supers between them! That's twice as much as 10^^^10, right! I guess it just intuitively seems like there should be a useful not-impossible-just-rare event that has a probability in that range (long-term vacuum fluctuation appearance of a complex and useful machine on the order of 5kg, maybe?)
2Kindly
Not... quite. Let's say there are 10^^10 particles in the universe, each one of them independently has a 1 in 10^^10 chance of doing what we want over some small unit of time, and we are interested in 10^^10 of those units of time. Then the probability that the event we want to observe happens is much better than 1 in 10^^12, and that was only two up-arrows. (We can rewrite ((10^^10)^(10^^10))^(10^^10) as 10^(10^^9 x 10^^10 x 10^^10) which is less than 10^((10^^10)^3) which is less than 10^((10^^10)^10). This would be the same as 10^^12 if we took exponents in a different order, and the order used to calculate 10^^12 happens to be the one that gives the largest possible number. Actually if I were more careful I could probably get 10^^11 as a bound as well.) And although I'm not entirely sure about the time-resolution business, I think the numbers in the calculation I just did are an upper bound for what we'd want in order to compute the probabability of any universe-history at an atomic scale.
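For anyone losing track of the carets, here is Kindly's chain of inequalities written out (a sketch, using $10 \uparrow\uparrow n$ for a power tower of $n$ tens, so that $10 \uparrow\uparrow 10 = 10^{10 \uparrow\uparrow 9}$):

\[
\Bigl( \bigl( 10\uparrow\uparrow 10 \bigr)^{10\uparrow\uparrow 10} \Bigr)^{10\uparrow\uparrow 10}
= 10^{(10\uparrow\uparrow 9)\,(10\uparrow\uparrow 10)\,(10\uparrow\uparrow 10)}
< 10^{(10\uparrow\uparrow 10)^{3}}
< 10^{(10\uparrow\uparrow 10)^{10}}
< 10^{10^{10\uparrow\uparrow 10}} = 10\uparrow\uparrow 12.
\]

The last inequality holds because $(10\uparrow\uparrow 10)^{10} = 10^{10 \cdot (10\uparrow\uparrow 9)}$, which is vastly smaller than $10^{10\uparrow\uparrow 10}$: associating the exponents into a tower, as in $10\uparrow\uparrow 12$, gives the largest possible value, which is Kindly's point about the order of exponentiation.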
-2MugaSofer
Holy cow, you've tried this? Were you dealing with creationists?
7DaFranker
I assume a significant number of them were. I also tried subtly using God/Fate, God/Freewill, God/Physics, God/Universe, God/ConsciousMultiverse, and God/Chance, as interchangeable "redefinitions" (of course, on different samples each time) and was similarly called on it. Incidentally, I can't confirm if this suggests a pattern (it probably does), but in one church I tried, for fun, combining all of them and just conflating all the meanings of all the above into "God", and then sometimes using the specific terms and/or God interchangeably when discussing a specific subset or idea. The more confused I made it, the more people I convinced and got to engage in self-reinforcing positive-affect dialogue. So if this was the only evidence available, I'd be forced to tentatively conclude: The more confused your usage of "God" is, the more it matches the religious usage, and the more it passes ideological Turing tests! (Spoiler: It does. The rest of my evidence confirms this.)
6TheOtherDave
This ought not be surprising. The more confused a concept is, the more freedom my audience has to understand it to mean whatever suits their purposes. In some audiences, this means it gets criticized more. In others, it gets accepted more uncritically.
0BerryPick6
This reminds me a lot of Spinoza's proof of God in Ethics, although I recognize that is probably partially due to personal biases of mine.
-2MugaSofer
Hmm. I'm pretty sure that if I renamed some other confused idea "God" it wouldn't work so well. Or do you mean confusing? On its own, that sounds like your assumption is based on the fact that they were religious, which is on the face of it absurd, so I'm guessing you have some evidence you declined to mention. Incidentally, where does the term "ideological Turing test" come from? I've never heard it before.
1Watercressed
The term was coined by Bryan Caplan here
-2MugaSofer
Ah, cool. That's actually a really good idea, someone should set that up. An empirical test of how well you understand a position/ideology.
1DaFranker
Yes, sorry. I was using the term "confused" in a slightly different manner from the one LWers are used to, and "confusing" fits better. Basically, "meaninglessly mysterious and deep-sounding" would be the more LW-friendly description, I think. Ah, yes. Mostly the conversations and responses I got themselves gave me very strong impressions of creationism, and also some (rather unreliable, however, but still sufficient Bayesian evidence) small-scale, local, privately-funded survey statistics about religion and beliefs. To top that, most of the religious places and forums/websites I was visiting were found partially through the help of my at-the-time-girlfriend, whose family was very religious (and dogmatic) and creationist, so I suspect there probably was some effect there. I don't count this, though, because that would be double-counting (it's overridden by the "conversations with people" evidence). No clue. I first saw it on LessWrong, and I think someone linked me to a wiki page about it when I asked what it meant, but I can't remember or find that instance.
-2MugaSofer
Thanks for explaining! Shame about the ITT though.
-2MugaSofer
That's what I thought. Thanks for explaining.

Tobias adjusted his wings and appeared to tighten his talons on the branch. "Maybe you’re right. I don’t know. Look, Ax, it’s a whole new world. We’re having to make all this up as we go along. There aren’t any rules falling out of the sky telling us what and what not to do." "What exactly do you mean?" "Too hard to explain right now," Tobias said. "I just mean that we don’t really have any time-tested rules for dealing with these issues... So we have to see what works and what doesn’t. We can’t afford to get so locked into one idea that we defend it to the death, without really knowing if that idea works- in the real world."

  • Animorphs, book 52: The Sacrifice

The Harvard Law states: Under controlled conditions of light, temperature, humidity, and nutrition, the organism will do as it damn well pleases.

-- Larry Wall

3RomeoStevens
See the Mouse Universe.

Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

Vannevar Bush

[-]taelor120

It is startling to realize how much unbelief is necessary to make belief possible. What we know as blind faith is sustained by innumerable unbeliefs: the fanatical Japanese in Brazil refused to believe for years the evidence of Japan's defeat; the fanatical Communist refuses to believe any unfavorable reports or evidence about Russia, nor will he be disillusioned by seeing with his own eyes the cruel misery inside the Soviet promised land.

It is the true believer's ability to "shut his eyes and stop his ears" to facts that do not deserve to be either seen or heard which is the source of his unequaled fortitude and constancy. He cannot be frightened by danger, nor disheartened by obstacles nor baffled by contradictions because he denies their existence. Strength of faith, as Bergson pointed out, manifests itself not in moving mountains, but in not seeing mountains to move.

-- Eric Hoffer, The True Believer

5simplicio
A decent quote, except I am minded to nitpick that there is no such thing as unbelief as a separate category from belief. We just have credences. Many futile conversations have I seen among the muggles, wherein disputants tried to make some Fully General point about unbelief vs belief, or doubt vs certainty.
[-]taelor120

As for the hopeful, it does not seem to make any difference who it is that is seized by a wild hope -- whether it be an enthusiastic intellectual, a land-hungry farmer, a get-rich-quick speculator, a sober merchant or industrialist, a plain workingman or a noble lord -- they all proceed recklessly with the present, wreck it if they must, and create a new world. [...] When hopes and dreams are loose on the streets, it is well for the timid to lock doors, shutter windows and lie low until the wrath has passed. For there is often a monstrous incongruity between the hopes, however noble and tender, and the action which follows them. It is as if ivied maidens and garlanded youths were to herald the four horsemen of the apocalypse.

-- Eric Hoffer, The True Believer

Unfortunately, this is how the brain works:

-- Sir! We are receiving information that conflicts with the core belief system!

-- Get rid of it.

Beatrice the Biologist

[-]Zubon110

Obviously, it was his own view that had been in error. That was quite a realization, that he had been wrong. He wondered if he had ever been wrong about anything important.

-- Sterren with a literal realization that the territory did not match his mental map in The Unwilling Warlord by Lawrence Watt-Evans

If you'd have told a 14th-century peasant that there'd be a huge merchant class in the future who would sit in huge metal cylinders eating meals and drinking wine while the cylinders hurtled through the air faster than a speeding arrow across oceans and continents to bring them to far-flung business opportunities, the peasant would have classified you as insane. And he'd have been wrong to the tune of a few gazillion frequent-flyer miles.

-- someone on Usenet replying to someone deriding Kurzweil

In general, though, that argument is the Galileo gambit and not a very good argument.

There's a more charitable reading of this comment, which is just "the absurdity heuristic is not all that reliable in some domains."

What makes this the Galileo Gambit is that the absurdity factor is being turned into alleged support (by affective association with the positive benefits of air travel and frequent flier miles) rather than just being neutralized. Contrast to http://lesswrong.com/lw/j1/stranger_than_history/ where absurdity is being pointed out as a fallible heuristic but not being associated with positives.

-1A1987dM
That's the way I interpreted it. (How come I tend to read pretty much anything¹ charitably?)

1. Well, not really anything. I don't think I would have been capable of this.
[-]foret100

In reference to Occam's razor:

"Of course giving an inductive bias a name does not justify it."

--from Machine Learning by Tom M. Mitchell

Interesting how a concept seems more believable if it has a name...

0Robert Miles
Or less. Sometimes an assumption is believed implicitly, and it's not until it has a name that you can examine it at all.
[-]SPLH100

"De notre naissance à notre mort, nous sommes un cortège d’autres qui sont reliés par un fil ténu."

Jean Cocteau

("From our birth to our death, we are a procession of others whom a fine thread connects.")

2simplicio
...and I'm not sure about the fine thread.

It's not easy to find rap lyrics that are appropriate to be posted here. Here's an attempt.

Son, remember when you fight to be free

To see things how they are and not how you like em to be

Cause even when the world is falling on top of me

Pessimism is an emotion, not a philosophy

Knowing what's wrong doesn't imply that you right

And its another, when you suffer to apply it in life

But I'm no rookie

And I'm never gonna make the same mistake twice pussy

  • Immortal Technique "Mistakes"
[-]TsviBT100

There are four types among those who study with the Sages: the sponge, the funnel, the strainer, the sifter. The sponge absorbs everything; the funnel - in one end and out the other; the strainer passes the wine and retains the dregs; the sifter removes the chaff and retains the edible wheat.

-Pirkei Avot (5:15)

Deep wisdom indeed. Some people believe the wrong things, and some believe the right things, some people believe both, some people believe neither.

8TsviBT
To me, it expresses the need to pay attention to what you are learning, and decide which things to retain and which to discard. E.g. one student takes a course in Scala and memorizes the code for generics, while the other writes the code but focuses on understanding the notion of polymorphism and what it is good for.
0MugaSofer
I genuinely don't understand this comment.
6TsviBT
Sorry. Attempt #2: If I had infinite storage space and computing power, I would store every single piece of information I encountered. I don't, so instead I have to efficiently process and store things that I learn. This generally requires that I throw information out the window. For example, if I take a walk, I barely even process most of the detail in my visual input, and I remember very little of it. I only want to keep track of a very few things, like where I am in relation to my house, where the sidewalk is, and any nearby hazards. When the walk is over, I discard even that information. On the other hand, I often have to take derivatives. Although understanding what a derivative means is very important, it would be silly of me to rederive e.g. the chain rule each time I wanted to use it. That would waste a lot of time, and it does not take a lot of space to store the procedure for applying the chain rule. So I store that logically superfluous information because it is important. In other words, I have to be picky about what I remember. Some information is particularly useful or deep, some information isn’t. Just because this is incredibly obvious, doesn’t mean we don’t need to remind ourselves to consciously decide what to pay attention to. I thought the quote expressed this idea nicely and compactly. Whoever wrote the quote probably did not mean it in quite the same way I understand it, but I still like it.
-1MugaSofer
While this comment is true - you can't remember everything - I'm not sure how you could get that from the categorization in the quote. Still, if that's what you got out of it, I can see why you posted it here.
7A1987dM
-- Henri Poincaré
-2TobyBartels
http://tragedyseries.tumblr.com/post/66897529504/thanks-for-your-patience-while-i-am-away-working

[Physics] has come to see that thinking is merely a form of human activity…with no assurance whatever that an intellectual process has validity outside the range in which its validity has already been checked by experience.

-- P. W. Bridgman, ‘‘The Struggle for Intellectual Integrity’’

The universe is not indifferent. How do I know this? I know because I am part of the universe, and I am far from indifferent.

--Scott Derrickson

While affirming the fallacy-of-composition concerns, I think we can take this charitably to mean "The universe is not totally saturated with only indifference throughout, for behold, this part of the universe called Scott Derrickson does indeed care about things."

2A1987dM
That's the way I interpreted it, too. There's a speech in HP:MOR where Harry makes pretty much the same point.

“There is light in the world, and it is us!”

Love that moment.

2Alejandro1
That's exactly the sentiment I was aiming for with the quote.
[-]Kindly160

Scott Derrickson is indifferent. How do I know this? I know because Scott Derrickson's skin cells are part of Scott Derrickson, and Scott Derrickson's skin cells are indifferent.

6A1987dM
If you interpret “X is indifferent” as “no part of X cares”, the original quote is valid and yours isn't.

I touched her hand. Her hand touched her boob. By the transitive property, I got some boob. Algebra's awesome!

-- Steve Smith, American Dad!, season 1, episode 7 "Deacon Stan, Jesus Man", on the applicability of this axiom.

We are all faced throughout our lives with agonizing decisions. Moral choices. Some are on a grand scale. Most of these choices are on lesser points. But! We define ourselves by the choices we have made. We are in fact the sum total of our choices. Events unfold so unpredictably, so unfairly, human happiness does not seem to have been included, in the design of creation. It is only we, with our capacity to love, that give meaning to the indifferent universe. And yet, most human beings seem to have the ability to keep trying, and even to find joy from simple things like their family, their work, and from the hope that future generations might understand more.

-- Closing lines of Crimes and Misdemeanors, script by Woody Allen.

2Alejandro1
I agree with the sentiment expressed in this quote, and I don't see it as opposed to the one expressed in mine, but judging from the pattern of upvotes and downvotes, people do not agree. I guess the quote I posted is ambiguous. You could read it as a kind of bad theistic argument ("since there is meaning in my life, there must be Ultimate Meaning in the universe"). Or you could read it as an anti-nihilistic quote ("even if there is no Ultimate Meaning, the fact that there is meaning in my life is enough to make it false that the universe is meaningless"). I was assuming the second reading, but I guess the people who voted either assumed the first one, or perhaps they saw the second one and just judged it a poor way of stating this idea.
8Jayson_Virissimo
1taelor
Scott Derrickson may be a part of the universe, but he is not the universe.

Most things that we and the people around us do constantly... have come to seem so natural and inevitable that merely to pose the question, 'Why are we doing this?' can strike us as perplexing - and also, perhaps, a little unsettling. On general principle, it is a good idea to challenge ourselves in this way about anything we have come to take for granted; the more habitual, the more valuable this line of inquiry.

-Alfie Kohn, "Punished By Rewards"

[-]TimS80

Let’s see if I get this right. Fear makes you angry and anger makes you evil, right?

Now I’ll concede at once that fear has been a major motivator of intolerance in human history. I can picture knightly adepts being taught to control fear and anger, as we saw credibly in “The Empire Strikes Back.” Calmness makes you a better warrior and prevents mistakes. Persistent wrath can cloud judgment. That part is completely believable.

But then, in “Return of the Jedi,” Lucas takes this basic wisdom and perverts it, saying — “If you get angry — even at injustice and

... (read more)
[-]gwern200

Lots of people in Weimar Germany got angry at the emerging fascists - and went out and joined the Communist Party. It was tough to be merely a liberal democrat.

3TimS
I suspect you have your causation backwards. People created / joined the Freikorps and other quasi-fascist institutions to fight the threat of Communism. Viable international Communism (~1917) predates the fall of the Kaiser - and the Freikorps had no reason to exist when the existing authorities were already willing and capable of fighting Communism. More generally, the natural reading of the Jedi moral rules is that the risk of evil from strong emotions was so great that liberal democrats should be prohibited from feeling any (neither anger-at-injustice nor love)
3gwern
I don't know why you would think the causation would be only in one direction.
4TimS
Now I'm confused. What is the topic of discussion? Clarification of Weimar Republic politics is not responsive to the Jedi-moral-philosophy point. Anger causing political action, including extreme political action, is a reasonable point, but I don't actually think anger-at-opponent-unjust-acts was the cause of much Communist or Fascist membership. You might think anger-at-social-situation vs. anger-at-unjust-acts is excessive hair-splitting. But I interpreted your response as essentially saying "Anger-at-injustice really does lead to fairly directly evil." Your example does not support that assertion. If I've misinterpreted you, please clarify. I often seem to make these interpretative mistakes, and I'd like to do better at avoiding these types of misunderstandings in the future.
1gwern
It certainly does. In reaction to one evil, Naziism, Germans could go and support a second evil, Communism, which to judge by its global body counts, was many times worse than Naziism, which is exactly the sort of reaction Brin is ridiculing: "oh, how ridiculous, how could getting angry at evil make you evil too?" Well, it could make you support another evil, perhaps even aware of the evil on the theory of 'the enemy of my enemy is my friend'... I don't know how you could get a better example of 'fighting fire with fire' than that or 'when fighting monsters, beware lest you become one'.
3TimS
Anger can lead to evil vs. Anger must lead to evil. And ignoring anger for the moment, Jedi moral philosophy says love leads to evil (that's the Annakin-Padme plot of Attack of the Clones - the romance was explicitly forbidden by Jedi rules).
-2gwern
Not what we're discussing. Let's stay on topic here. Let me quote Brin: How is my example - chosen from the very time period and milieu that Brin himself chose - not a 'single example of that happening ever'?
2TimS
Exactly what we are discussing. Brin explicitly acknowledges the first point - he's rejecting the second point. That's not a charitable reading of that point. In the real world, there are lots of different ways to be evil. In Jedi-land, evil = Sith. Annakin opposes the Sith. Then he feels strong emotions (love of Padme). Then he becomes Sith. Not extremist-opponent-who-is-just-as-bad. Opposing Nazis does not lead one to becoming a Nazi. Of course, in the real world, Nazi isn't the only way to be evil.
2OrphanWilde
In fairness to Lucas, Anakin's love of Padme isn't what converted him; it was Mace Windu's disregard for the morality the Jedi professed to follow. I regard the Jedi versus Sith as less "Good versus evil" and more "Principle Ethics versus Pragmatist/Utilitarian Ethics" - Anakin reluctantly embraced Principles until he saw that the Principles were ineffectual; even its adherents would ultimately choose pragmatism. It's kind of implied, in-canon within the movies (the books go further in vindicating the Sith still), that Sidious' master might not have been evil, per se; he sought to end death.
-1gwern
If there is only one way to be evil in Star Wars, then to become an extremist opponent of a different flavor maps back onto becoming a Sith...
5TimS
Respectfully, I think we have reached the limit of our ability to have productive conversation.

(1) I don't desire to have the "Who is more evil: Nazis or Communists?" fight - I'm not sure that discussion is anything more than Blue vs. Green tu quoque mindkiller-ness. The important lesson is "beware 'do not debate him or set forth your own evidence; do not perform replicable experiments or examine history; but turn him in at once to the secret police.'"

(2) It is possible to piece together acceptable moral lessons from Jedi philosophy, just like it is possible to have an interesting story of political intrigue in the world of Harry Potter. It just isn't very true to the source material - neither original author would endorse the improvements.

In short, I'm tapping out.

Let’s see if I get this right. Fear makes you angry and anger makes you evil, right?

If the memories of my youth serve me, anger 'leads to the dark side of the force' via the intermediary 'hate'. That is, it leads you to go around frying things with lightning and choking people with a force grip. This is only 'evil' when you do the killing in cases where killing is not an entirely appropriate response. Unfortunately humans (and furry green muppet 'Lannik') are notoriously bad at judging when drastic violation of inhibitions is appropriate. Power---likely including the power to kill people with your brain---will almost always corrupt.

But then, in “Return of the Jedi,” Lucas takes this basic wisdom and perverts it

Not nearly as much as David Brin perverts Lucas's message. I do in fact reject the instructions of Yoda, but I reject what he actually says; I don't need to reject a straw caricature thereof.

“If you get angry — even at injustice and murder — it will automatically and immediately transform you into an unalloyedly evil person!

Automatically. Immediately. Where did this come from? Yoda is 900 years old, wizened and gives clear indications that he think... (read more)

2TimS
Drawing from Attack of the Clones: The proximate emotion that leads to Anakin's fall is love. Even if we ignore the love-of-mother --> Tusken raiders massacre, the romance between Anakin and Padme is expressly forbidden because of the risk of Anakin turning evil. If any strong emotion carries such a strong risk of turning evil that the emotion must be forbidden, we aren't really talking about a moral philosophy that bears any resemblance to one worth trying to implement in real humans. I'm not saying that strong emotions don't have a risk of going overboard - they obviously do. But the risk is maybe in the 10% range. It certainly isn't in the >90% range.

----------------------------------------

That's probably an overstatement by Brin. But evil (Sith-ness) is highly likely from feeling strong emotions (in-universe), and that's not representative of the way things work in the real world. It roughly parallels the false idea that we rationalists want to remove emotions from human experience.
2TheOtherDave
Agreed generally, but I will quibble about your last paragraph. Vader's redemption is presented as a Heroic Feat; it is no more representative of normal moral or psychological processes in this universe than blowing up the Death Star with a single shot is representative of normal tactics.
0ChristianKl
People in Star Wars don't really have political beliefs in any meaningful sense. The Star Wars universe is actually about a struggle between Good and Evil, rather than a struggle between two political factions. Citizens of the US got angry after 2001. The US became a lot more evil in response: torturing people and committing war crimes, such as using drones to attack people who try to rescue the injured.
3TimS
The problem Brin is criticizing is that Good is entirely prohibited from feeling strong emotions. Brin explicitly acknowledges that strong emotions can lead to evil acts - he's challenging the implicit idea that strong emotions must lead to evil. Also, not my downvote.
7Shmi

When they realized they were in a desert, they built a religion to worship thirstiness.

SMBC comics: a metaphor for deathism.

While I am a fan of SMBC, in this case he's not doing existentialism justice (or not understanding existentialism). Existentialism is not the same thing as deathism. Existentialism is about finding meaning and responsibility in an absurd existence. While mortality is certainly absurd, biological immortality will not make existential issues go away. In fact, I suspect it will make them stronger.


edit: on the other hand, "existentialist hokey-pokey" is both funny and right on the mark!

0Shmi
I don't see how this strip can be considered to be about existentialism. EDIT: Actually, I'm no longer sure what the strip is about. It obviously starts with Camus' absurdism, but then switches from his anti-nihilist argument against suicide in an absurd world to a potential critique of... what? nihilism? absurdism? as a means of resolving the cognitive dissonance of having a finite lifespan while wanting to live forever... Or does it? Zach Weiner can be convoluted at times.
5IlyaShpitser
It quotes Camus, the father of existentialism. It quotes from "The Myth of Sisyphus," one of the founding texts of existentialism. The invitation to live and create in the desert (e.g. invitation to find your own meaning, responsibility, and personal integrity without a God or without objective meaning in the world) is the existential answer to the desert of nihilism. Frankly, I am not sure how you can think the strip is about anything else. What do you think existentialism is?

----------------------------------------

A more accurate pithy summary of existentialism is this: "When they realized they were in a desert, they built water condensators out of sand."

----------------------------------------

"Beyond the reach of God" is existential.
2BerryPick6
SMBC has also featured a bunch of other strips about existentialism, leading me to suspect he has studied it in some capacity. Notably, here, here, here, here and here.
3IlyaShpitser
http://www.smbc-comics.com/index.php?db=comics&id=1595#comic That's relativism, not existentialism. I mean he's trying to entertain, not be a reliable source about anything. Like wikipedia :).
2BerryPick6
Yeah, the third one I linked to isn't really existentialism either, now that I think about it...
1DaFranker
[Meta] I don't see why the parent was downvoted. Is it seriously being downvoted just because it called to attention an inference that was not obvious, but seemed obvious to some who had studied a certain topic X?
6TimS
Not my downvote. But if you don't know enough about existentialism to recognize Camus as a central early figure, then you don't know enough about existentialism to comment on whether a particular philosophical point invokes existentialism accurately. If we replaced "Camus" with "J.S. Mill" and "existentialism" with "consequentialism," the error might be clearer. In short, it isn't an error to miss the reference, but it is an error to challenge someone who explains the reference. (And currently, the karma for the two posts by shminux correctly reflects this difference - with the challenge voted much lower.)
1DaFranker
Errh... that does not follow. I care about the central early figures of any topic about as much as I care about the size of the computer monitor used by the person who contributed the most to the reddit codebase.

(edit: To throw in an example, I spent several months in the dark a while back doing bayesian inference while completely missing references to / quotes from Thomas Bayes. Yes, literally, that bad. So forgive me if I wouldn't have caught your reference to consequentialism if you hadn't explicitly stated that as what "J.S. Mill" was linked to.)

The later explanation (in response to said "challenge") was necessary for me to understand why someone was talking about existentialism at all in the first place, so the first comment definitely did not make the reference any more obvious or explained (to me, two-place) than it was beforehand.

The "challenge" is actually not obvious to me either. When I re-read the comment, I see someone mentioning that they're missing the information that says "This strip is about existentialism". If any statement of the form "X is not obvious to me" is considered a challenge to those for whom it is obvious, then I would argue that the agents doing this considering have missed the point of the inferential distance articles.

To go meta, this previous sentence is what I would consider a challenge.

I care about the central early figures of any topic about as much as I care about the size of the computer monitor used by the person who contributed the most to the reddit codebase.

I think this is a mistake, and a missed chance to practice the virtue of scholarship. Lesswrong could use much more scholarship, not less, in my opinion. The history of the field often gives more to think about than the modern state of the field.

Progress does not obey the Markov property.

7Shmi
Maybe more to think about, but less value for mastering the field, at least in the natural sciences (philosophy isn't one). You can safely delay learning about the history of the discovery of electromagnetism, or linear algebra, or the periodic table until after you master the concepts. Apparently in philosophy it's somehow the other way around: you have to learn the whole history first. What a drag.
4DaFranker
[Obligatory disclaimer: This is not a challenge.] I honestly don't see how or why. I already have a rather huge list of things I want to do scholarship with, and I don't see any use I could have for knowledge about the persons behind these things I want to study. Knowing a name for the purposes of searching for more articles written under this name is useful, knowing a name to know the rate of accuracy of predictions made by this name is useful, and often the "central early figures" in a field will coincide with at least one of these or some other criteria for scholarly interest.

I hear Galileo is also a central early figure for something related to stars or stellar motion or heliocentrism or something. Something about stellar bodies, probably. This seems entirely screened off (so as to make knowledge about Galileo useless to me) by other knowledge I have from other sources about other things, like newtonian physics and relativity and other cool things.

Studying history is interesting, studying the history of some things is also interesting, but the central early figures of some field are only nodes in a history, and relevant to me proportionally to their relevance to the parts of said history that carry information useful for me to remember after having already propagated the effects of this through my belief network. Once I've done updates on my model based on what happened historically, I usually prefer forgetting the specifics of the history, as I tend to remember that I already learned about this history anyway (which means I won't learn it again, count it again, and break my mind even more later on).

So... I don't see where knowledge about the people comes in, or why it's a good opportunity to learn more. Am I cheating by already having a list of things to study and a large collection of papers to read?

To rephrase, if the information gained by knowing the history of something can be screened off by a more compact or abstract model, I prefer the latter.
2IlyaShpitser
That's fine if you are trying to do economics with your time. But it sounded to me from the comment that you didn't care as well. Actually the economics is nontrivial here, because different bits of the brain engage with the formal material vs the historic context.

I think an argument for learning a field (even a formal/mathematical field) as a living process evolving through time, rather than the current snapshot, really deserves a separate top level post, not a thread reply. My personal experience trying to learn math the historic way and the snapshot way is that I vastly prefer the former. Perhaps I don't have a young, mathematically inclined brain. History provides context for notational and conceptual choices, good examples, standard motivating problems that propelled the field forward, lessons about dead ends and stubborn old men, and suggests a theory of concepts as organically evolving and dying, rather than static. Knowledge rooted in historic context is much less brittle.

For example, I wrote a paper with someone about what a "confounder" is.* People have been using that word probably for 70 years without a clear idea of what it means, and the concept behind it for maybe 250 more (http://jech.bmj.com/content/65/4/297.full.pdf+html). In the course of writing the paper we went through maybe half a dozen historic definitions people actually put forth (in textbooks and such), all but one of them "wrong." Probably our paper is not the last word on this. Actually "confounder" as a concept is mostly dying, to be replaced by "confounding" (much clearer, oddly).

Even if we agree that our paper happens to be the latest on the subject, how much would you gain by reading it, and ignoring the rest? What if you read one of the earlier "wrong" definitions and nothing else? You can't screen off, because history does not obey the Markov property.

* This is "analytic philosophy," I suppose, and in danger of running afoul of Luke's wrath!
0BerryPick6
Not really, but only because the example you gave was Astronomy. If we're talking specifically about Existentialism (although I guess the conversation has progressed a bit past that), I'm not entirely sure how one would come up with a list of readings and concepts without turning to the writings of the Central Figures (I'm not even sure it's legitimate to call Camus an 'early' thinker, since the Golden Age of Existentialism was definitely when he and Sartre were publishing). I would very much agree with your assessment for many if not most scientific fields, but in this particular instance, I happen to disagree that disregarding the Central Figures won't hurt your knowledge and understanding of the topic.
0OrphanWilde
Existentialism is just one branch of nihilistic philosophy, one which specifically attempts to address the issues inherent in nihilism.
3IlyaShpitser
I think it is more accurate to describe existentialism as a reaction to nihilism, not a branch of nihilism. Camus opposed nihilism. It is true that he (and other existentialists) took nihilism very seriously indeed.
1OrphanWilde
Retracted, sorry - I figured out where the disconnect was coming from after reading your other comments, which led to confusion, which led me to try to identify the source. I was interpreting nihilism itself to be the theology of the desert, so your comment didn't make any sense; rereading the comic I realized I had missed the connection between the "Take that!" and the "And yet". It felt to me like an Objectivist complaining that a critic of free market philosophy didn't understand Ayn Rand; taking a generalized point and interpreting it very specifically.

I don't think Camus opposed nihilism, though; I think he opposed the commonly-held philosophic ramifications of nihilism. Existentialism isn't a rejection of nihilism, it's a development of it, or at least that's what it looks like to me, as somebody who finds nihilism to be similar to an argument about what angels look like (given that I'm also an atheist). "What's our purpose?" "What's purpose?" - which is to say, I find the philosophy to be an answer, "Nothing!", in search of a question. Existentialism replaces the answer with "What you make of it" (broadly speaking, as it's hard to actually pin down any concretes in existentialism, which is an umbrella term for a bunch of loosely-related concepts), but never really identifies the question.

Trivially, you could say the question is "What's the meaning of life?", or something deep-sounding like that, but what is the question really asking? The only meaningful question to my mind is "What should I do with my life?", which doesn't really require deep philosophy. I'm a lifelong atheist. To me the "Purpose of life" question, as it pertains to atheists, is a concept imported from religion - that we can have a purpose - which lacks the referent which made that concept meaningful - a god or gods, being an entity or entities which can assign such purpose. Nihilism just seems confused, to me, and existentialism is an attempt to address a confused question. Which may

"My baby is dead. Six months old and she's dead."
"Take solace in the knowledge that this is all part of the Corn God's plan."
"Your god's plan involves dead babies?"
"If you're gonna make an omelette, you're gonna have to break a few children."
"I'm not entirely sure I want to eat that omelette."

-- Scenes From A Multiverse

2Eugine_Nier
This works equally well as an argument against utilitarianism, which I'm guessing may be your intent.
4Qiaochu_Yuan
I have no idea what people mean when they say they are against utilitarianism. My current interpretation is that they don't think people should be VNM-rational, and I haven't seen a cogent argument supporting this. Why isn't this quote just establishing that the utility of babies is high?
16[anonymous]

I aspire to be VNM rational, but not a utilitarian.

It's all very confusing because they both use the word "utility", but they seem to be different concepts. "Utilitarianism" is a particular moral theory that (depending on the speaker) assumes consequentialism, linearish aggregation of "utility" between people, independence and linearity of utility function components, utility proportional to "happiness" or "well-being" or preference fulfillment, etc. I'm sure any given utilitarian will disagree with something in that list, but I've seen all of them claimed.

VNM utility only assumes that you assign utilities to possibilities consistently, and that your utilities aggregate by expectation. It also assumes consequentialism in some sense, but it's not hard to make utility assignments that aren't really usefully described as consequentialist.

I reject "utilitarianism" because it is very vague, and because I disagree with many of its interpretations.

0Qiaochu_Yuan
Thanks for the explanation. Reading through the Wikipedia article on utilitarianism, it seems like this is one of those words that has been muddled by the presence of too many authors using it. In the future I guess I should refer to the concept I had in mind as VNM-utilitarianism.
4Sniffnoy
Probably best not to refer to it with the word "utilitarianism", since it isn't a form of that. Calling it "consequentialism" is arguably enough, since (making appropriate assumptions about what a rational agent must do) a rational consequentialist must use a VNM utility function. But I guess not everyone does in fact agree with those assumptions, so perhaps "utility-function based consequentialism". Or perhaps "VNM-consequentialism".
2CarlShulman
A bounded utility function that places a lot of value on signaling / being "a good person" and a desirable associate, getting some "warm glow", and "mostly doing the (deontologically) right thing" seems like a pretty good approximation.
1[anonymous]
I find these criticisms by Vladimir_M to be really superb.
0Qiaochu_Yuan
Okay. So none of that is an argument against VNM-rationality, it's an argument against a bunch of other ideas that have historically been attached to the label "utilitarian," right? The main thing I got out of that post is that utilitarianism is hard, not that it's wrong.
0[anonymous]
I don't know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it? If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you. What do you think of the "interpersonal utility comparison" problem? Vladimir_M regards it as something close to a defeater of utilitarianism.
1Qiaochu_Yuan
"People should aim to be VNM-rational." I think of this as a weak claim, which is why I didn't understand why people appeared to be arguing against it. I concluded that they probably weren't, and instead meant something else by utilitarianism, which is why I switched to a different term. Yes, that's why I think of "people should aim to be VNM-rational" as a weak claim and didn't understand why people appeared to be against it. It seems like a very hard problem, but nobody claimed that ethics was easy. What does Vladimir_M think we should be doing instead?
0Eugine_Nier
What definition of "should" are you using here? Do you mean that people deontologically should aim to be VNM-rational? Or do you mean that people should be VNM-rational in order to maximize some (which?) utility function?
0[anonymous]
Can you spell this out a little more? I don't know. I think this comment reveals a lot of respect for what you might call "folk ethics," i.e. the way normal people do it.
1Qiaochu_Yuan
"People should aim for their behavior to satisfy the VNM axioms." I'm not sure how to get more precise than this.
0[anonymous]
OK. But this seems funny to me as a moral prescription. In fact a standard premise of economics is that people's behavior does satisfy the VNM axioms, or at least that deviations from them are random and cancel each other out at large scales. That's sort of the point of the VNM theorem: you can model people's behavior as though they were maximizing something, even if that's not the way an individual understands his own behavior. Even if you don't buy that premise, it's hard for me to see why famous utilitarians like Bentham or Singer would be pleased if people hewed more closely to the VNM axioms. Couldn't they do so, and still make the world worse by valuing bad things? Is "people should aim for their behavior to satisfy the VNM axioms" all that you meant originally by utilitarianism? From what you've written elsewhere in this thread it sounds like you might mean something more, but I could be misunderstanding.
0Qiaochu_Yuan
Yes, but if I think that optimal moral behavior means using a specific utility function, somebody who isn't being VNM-rational is incapable of optimal moral behavior. It's all I originally meant. I gathered from all of the responses that this is not how other people use the term, so I stopped using it that way.
0Eugine_Nier
Well, Alicorn is a deontologist. In any case, as an ultafinitist you should know the problems with the VNM theorem.
8Qiaochu_Yuan
I also have no idea what people mean when they say they are deontologists. I've read Alicorn's Deontology for Consequentialists and I still really have no idea. My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me. I think it's reasonable to argue that deontology and virtue ethics describe heuristics for carrying out moral decisions in practice, but heuristics are heuristics because they break down, and I don't see a reasonable way to judge which heuristics to use that isn't consequentialist / utilitarian. Then again, it's quite likely that my understanding of these terms doesn't agree with their colloquial use, in which case I need to find a better word for what I mean by consequentialist / utilitarian. Maybe I should stick to "VNM-rational." I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).
-1Eugine_Nier
Taboo "make everything worse". At the very least I find it interesting how rarely an analogous objection against VNM-utiliterians with different utility functions is raised. It's almost as if many of the "VNM-utiliterians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it following the mathematically correct decision theory. Well the continuity axiom in the statement certainly seems dubious from an ultafinitist point of view.
3Qiaochu_Yuan
Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle. Rarely? Isn't this exactly what we're talking about when we talk about paperclip maximizers?
0Eugine_Nier
When I asked you to taboo "makes everything worse", I meant taboo "worse" not taboo "everything".
2Qiaochu_Yuan
You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property." I didn't claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I'm just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I don't think it's accurate. Are you trying to correct my understanding of deontology or are you agreeing with it but disagreeing that it's a bad idea?
0Eugine_Nier
No, I'm going to respond by asking you "with respect to which utility function?" and "why should I care about that utility function?"
0[anonymous]
You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc", where the list refers to all the good things, as argued in the metaethics sequence.
-6Eugine_Nier
-2Kindly
A sufficiently crazy consequentialist might want to kill all such agents because he's scared of what the voices in his head might otherwise do. Your argument is not an argument at all. And if the sacred moral principle leads to the deontologist killing everyone, that is a pretty terrible moral principle. Usually they're not like that. Usually the "don't kill people if you can help it" moral principle tends to be ranked pretty high up there to prevent things like this from happening.
1Qiaochu_Yuan
Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they're doing if they don't think they're just using heuristics that approximate consequentialist reasoning.
2TsviBT
Huh? How so?
0Eugine_Nier
Replace the "corn god" in the quote with a sufficiently rational utiliterian agent.
0Wei Dai
To make sure I understand, do you mean that a sufficiently rational utilitarian agent may decide to kill a 6 month old baby if it decides that would serve its goal of maximizing aggregate utility, and if I'm pretty sure that no 6 month old baby should ever be intentionally killed, I would conclude that utilitarianism is probably wrong?
0MugaSofer
Nah, it's just a cheap shot at the theists. EDIT: not sure about the source, but the way it's edited ...
0Alicorn
I hadn't actually thought of that, but that could be part of why I liked the quote.

the decision to base your life on beliefs which not only can you not prove, but which, on the balance of the evidence, seem unlikely to be true, seems incredibly irresponsible. If religious believing had implications only for the individual believer, then it could be easily dismissed as a harmless idiosyncrasy, but since almost all religious beliefs have incredibly serious implications for many people, religious belief cannot be regarded as harmless. Indeed, a glance at the behavior of religious believers worldwide day by day makes it very clear that reli

... (read more)
0ChristianKl
If you actually care about the influence on how you treat others, why not use that as your test of whether to hold a belief? Instead of focusing on whether the belief is likely to be true, you could focus on whether it's likely to harm other people. A lot of Christians don't believe in beheading people for mixed-sex dancing or victimizing homosexuals. For them, the fact that other Christians do those things is no good reason to drop their beliefs.
1Toddling
It can be difficult to know what will be harmful without knowing whether certain things are true. Hypothetical example: A person kills their child in order to prevent them from committing some kind of sin and going to hell. If this person's beliefs about the existence of hell and how people get in and stay out of it are true, they have saved their child from a great deal of suffering. If their beliefs are not true, they have killed their child for nothing.

He that believes without having any Reason for believing, may be in love with his own Fancies; but neither seeks Truth as he ought, nor pays the Obedience due to his Maker, who would have him use those discerning Faculties he has given him, to keep him out of Mistake and Errour.

John Locke, Essay Concerning Human Understanding

"We are living on borrowed time and abiding by the law of probability, which is the only law we carefully observe. Had we done otherwise, we would now be dead heroes instead of surviving experts." –Devil's Guard

6[anonymous]

The generation of random numbers is too important to be left to chance.

-- Robert R. Coveyou, Oak Ridge National Laboratory

It is more incumbent on me to declare my opinion on this question, because they have, on further reflection, undergone a considerable change; and although I am not aware that I have ever published any thing respecting machinery which it is necessary for me to retract, yet I have in other ways given my support to doctrines which I now think erroneous; it, therefore, becomes a duty in me to submit my present views to examination, with my reasons for entertaining them.

-- Ricardo, publicly saying "oops" in his restrained Victorian fashion, in his essay "On Machinery".

2gwern
I was actually just reading that yesterday because of Cowen linking it in http://marginalrevolution.com/marginalrevolution/2013/01/the-ricardo-effect-in-europe-germany-fact-of-the-day.html I'm not entirely sure I understand Ricardo's chapter (Victorian economists being hard to read both because of the style and the distance), or why, if it's as clear as Ricardo seems to think, no-one ever seems to mention the point in discussions of technological unemployment (instead constantly harping on comparative advantage, etc.). What did you make of it?
2RolfAndreassen
That's how I found it, too. But I need the LessWrong karma and you don't. :D If I followed the discussion of circulating versus fixed capital, and gross versus net increase, Ricardo is showing that (in modern jargon as opposed to Victorian jargon) if you set the elasticities correctly, you can make a new machine decrease total wages in spite of substitution effects. He seems to think about this in terms of the "carrying capacity" of the economy, ie the total population size, presumably because Victorian economists worked much closer to true Malthusian conditions than ours do. In other words it's a bit of a model, not necessarily related to any particular economic change that has ever actually happened. Possibly you could get the same result re-published today if you put it in modern jargon with some nice equations, but it would be one of those papers that basically say "If we set variable X to extreme value Y, what happens?" So it's probably not that important when discussing actual machinery, as Ricardo acknowledges; he's exploring the edges of the parameter space.

But I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

Randall Munroe

3Oscar_Cunningham
This is a duplicate.
2Jay_Schweikert
And to think, I was just getting on to post this quote myself!
5Shmi

even though you can’t see or hear them at all, a person’s a person, no matter how small.

Dr. Seuss

Our tragedy is that in these hyper-partisan times, the mere fact that one side says, ‘Look, there's [a problem],’ means that the other side's going to say, ‘Huh? What? No, I'm not even going to look up.’

-- Jonathan Haidt

But if either side admits that they care about disaster befalling the US economy, then if the other does not so admit, this second side can blackmail the first side for whatever they want. Therefore, the only reasonable negotiating strategy is to pretend not to care at all about the US economy.

-- Yvain, on why brinkmanship is not stupid

5A1987dM
Belief propagation fail on my part. I had already read that Yvain post when watching that Haidt talk, but I still interpreted the behaviour he described in terms of ape social dynamics, forgetting that the average politician is probably more cold-blooded (i.e. more closely resembling the idealized agents of game theory) than the average ape is. OTOH, the problems Haidt describes (global warming, rising public debt, rising income inequality, rising prevalence of out-of-wedlock births) don't have a hard deadline; they just gradually get worse and worse over the decades, so the dynamics of brinkmanship are probably not quite the same.

The lapse of time during which a given event has not happened, is, in [the] logic of habit, constantly alleged as a reason why the event should never happen, even when the lapse of time is precisely the added condition which makes the event imminent. A man will tell you that he has worked in a mine for forty years unhurt by an accident as a reason why he should apprehend no danger, though the roof is beginning to sink; and it is often observable, that the older a man gets, the more difficult it is to him to retain a believing conception of his own death.

... (read more)
2simplicio
Not to get too nitpicky, but the mine example doesn't really work here. Working for 40 years in a mine without accident doesn't actually make disaster imminent; I would imagine that a mine disaster is a Poisson process, in which the expected duration to the next accident is independent of any previous occurrences. It seems like there might be some gambler's fallacy stuff happening here. An actually good example of this would be a bridge whose foundations are slowly eroding and which is now in danger of collapse.
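In symbols (my formalization of the point, assuming accidents arrive at a constant rate lambda, so the waiting time T to the next one is exponential):

```latex
% Memorylessness: having gone s years without an accident tells you
% nothing about the time remaining until the next one.
\[
P(T > s + t \mid T > s)
= \frac{e^{-\lambda (s + t)}}{e^{-\lambda s}}
= e^{-\lambda t}
= P(T > t).
\]
```

The sinking roof in the quote is the opposite case: a hazard rate that rises over time, so the lapse of accident-free time really does coincide with the event becoming more likely.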

Where there's smoke, there's fire... unless someone has a smoke machine.

-- thedaveoflife

0hairyfigment
Where there's smoke, there's a chemical reaction of some kind. Unless it's really someone blowing off steam.
0Manfred
Or someone with an aerosolizer.

The dissident temperament has been present in all times and places, though only ever among a small minority of citizens. Its characteristic, speaking broadly, is a cast of mind that, presented with a proposition about the world, has little interest in where that proposition originated, or how popular it is, or how many powerful and credentialed persons have assented to it, or what might be lost in the way of property, status, or even life, in denying it. To the dissident, the only thing worth pondering about the proposition is, is it true? If it is, then

... (read more)
13ygert

While this is all very inspiring, is it true? Yes, truth in and of itself is something that many people value, but what this quote is claiming is that there is a class of people (that he calls "dissidents") who specifically value this above and beyond anything else. It seems a lot more likely to me that truth is something that all or most people value to one extent or another, and as such, sometimes if the conditions are right people will sacrifice stuff to achieve it, just like for any other thing they value.

Cartman: I can try to catch it, but I'm going to need all the resources you've got. If this thing isn't contained, your Easter Egg hunt is going to be a bloodbath.

Mr. Billings: What do you think, Peters? What are the chances that this 'Jewpacabra' is real?

Peters: I'm estimating somewhere around .000000001%.

Mr. Billings: We can't afford to take that chance. Get this kid whatever he needs.

South Park, season 16, episode 4, "Jewpacabra"

note: edited for concision. script

2arundelo
This is a duplicate. You probably checked and didn't find it because for some reason Google doesn't know about it.

Hatred is the most accessible and comprehensive of all unifying agents. It pulls and whirls the individual away from his own self, makes him oblivious to his weal and future, frees him of jealousies and self-seeking. He becomes an anonymous particle quivering with a craving to fuse and coalesce with his like into one flaming mass. [...] Mass movements can rise and spread without a belief in God, but never without belief in a devil. Usually the strength of a mass movement is proportionate to the vividness and tangibility of its devil. When Hitler was asked

... (read more)
4taelor
-- Eric Hoffer, The True Believer

lacanthropy, n. The transformation, under the influence of the full moon, of a dubious psychological theory into a dubious social theory via a dubious linguistic theory.

(Source: Dennettations)

3MixedNuts
Is there a reason you're quoting this, or are you just being humeorous?
1simplicio
I thought it was quite Witty.

If you're committed to rationality, then you're putting your belief system at risk every day. Any day you might acquire more information and be forced to change your belief system, and it could be very unpleasant and very disturbing.

--Michael Huemer

Perhaps the day will come when philosophy can be discussed in terms of investigation rather than controversies, and philosophers, like scientists, be known by the topics they study rather than by the views they hold.

Nelson Goodman

I don't think change can be planned. It can only be recognized.

Jad Abumrad, in a video about the development of Radio Lab and the amount of fear involved in doing original work

-

[This comment is no longer endorsed by its author]
4Vaniver
Partial dupe.
0Qiaochu_Yuan
Oy. I did not go down far enough to check whether this had been posted already. Thanks.
1[anonymous]

If I could offer one piece of advice to young people thinking about their future, it would be this: Don't preconceive. Find out what the opportunities are.

--Thomas Sowell

2[anonymous]
That's hopelessly vague. Advice is hard enough to absorb even if you understand it.

A straight line may be the shortest distance between two points, but it is by no means the most interesting.

The Third Doctor

0[anonymous]

Always do the right thing.

-- The Mayor, in "Do the Right Thing"

0DanArmak
I think the bigger problem is that people mostly disagree on what the right thing to do is.
4[anonymous]
I still find it useful to play it back in my head ("nyan, always do the right thing") to remind myself to actually think about whether what I'm doing is right. I think that we agree on enough that if people "did the right thing" it would be better than the current situation, if not perfect.
1Eugine_Nier
That's not at all clear.
0Qiaochu_Yuan
Unclear. Some people have very bad ideas about what constitutes the right thing and their impact might not be canceled out.
-1MugaSofer
In fairness, people aren't great at deciding what the right thing is, but I still agree with you; most people are not wrong about most things. For example, boycotts would work. So well. OTOH, every abortion clinic would be bombed before the week was out; terrorist attacks would probably go up generally, as would revenge killings. You could argue those would have positive net impacts (since terrorists would presumably stop once their demands are met? I think?) but it's certainly not one-sided.
0HalMorris
Funny, I was thinking for the last few days or weeks of "Do the right thing!" as a sort of summary of deontology. It's all very well if you know what the right thing is.

Another classic expression is "Let justice be done though the heavens may fall" (see http://en.wikipedia.org/wiki/Fiat_justitia_ruat_caelum), apparently most famously said by the English jurist Lord Mansfield when reversing the conviction of John Wilkes for libel while, it seems, riots and demonstrations were going on in the streets (my very brief research indicates he did not say it in the case that outlawed slavery in the British homeland long before the British abolished it elsewhere -- though a book on that case is titled "Though the Heavens May Fall" -- the fact that he made that remark and that decision just made it too tempting to conflate them).

Some examples in the Bible pointedly illustrate "do the right thing" (in the sense of whatever God says is right -- though in this case, "right" clearly isn't in any conflict with "the Heavens"). I.e. Abraham: "Sacrifice your son to me (ha ha, just kidding/testing you)", or Joshua: "Run around the walls of Jericho blowing horns and the walls will fall down". These are extreme cases of "Right is right, never mind how you'd imagine it would turn out -- with your tiny human mind".

Personally, since I am not an Objectivist, or a fundamentalist, or one who talks with God, I don't fully trust any set of rules I may currently have as to what "is right", though I trust them enough to get through most days. Nor am I a perfect consequentialist, since I don't perfectly trust my ability to predict outcomes. An awful lot of examples given to justify consequentialism are extremely contrived, like "ticking bomb" scenarios to justify torture. Unfortunately many of us have seen these scenarios all too often in fiction (e.g. "24"), where they are quite common because they furnish such exciting plot points. Then they are on a battlefield in the real world which does no
0[anonymous]

Well, we have a pretty good test for who was stronger. Who won? In the real story, overdogs win.

--Mencius Moldbug, here

I can't overemphasise how much I agree with this quote as a heuristic.

7Shmi
As I noted in my other comment, he redefined the terms underdog/overdog to be based on posteriors, not priors, effectively rendering them redundant (and useless as a heuristic).
3GLaDOS
I consider this an uncharitable reading; I've read the article twice and I still understood him much as Konkvistador and Athrelon have.
2Kindly
Most of the time, priors and posteriors match. If you expect the posterior to differ from your prior in a specific direction, then change your prior. And thus, you should expect 99% of underdogs to lose and 99% of overdogs to win. If all you know is that a dog won, you should be 99% confident the dog was an overdog. If the standard narrative reports the underdog winning, that doesn't make the narrative impossible, but puts a burden of implausibility on it.

And thus, you should expect 99% of underdogs to lose and 99% of overdogs to win. If all you know is that a dog won, you should be 99% confident the dog was an overdog.

The second statement assumes that the base rate of underdogs and overdogs is the same. In practice I would expect there to be far more underdogs than overdogs.
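A quick Bayes calculation makes the base-rate point concrete (my notation, treating each contest as one overdog against one underdog, with the overdog winning with probability w):

```latex
% Posterior probability that the winner of a contest was the overdog,
% given base rate p = P(overdog):
\[
P(\text{overdog} \mid \text{won})
= \frac{w\,p}{w\,p + (1 - w)(1 - p)}.
\]
% With w = 0.99 and p = 1/2 this is 0.99, matching the figure above;
% but if overdogs are rare, say p = 0.01, it falls all the way to 0.5.
```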

0Kindly
Good point. I was thinking of underdog and overdog as relative, binary terms -- in any contest, one of two dogs is the underdog, and the other is the overdog. If that's not the case, we can expect to see underdogs beating other underdogs, for instance, or an overdog being up against ten underdogs and losing to one of them.
2gwern
How should I change my prior if I expect it to change in specific directions - either up or down - but not stay the same?
6khafra
Fat tailed distributions make the rockin' world go round.
4gwern
They don't even have to be fat-tailed; in very simple examples you can know that on the next observation, your posterior will either be greater or lesser but not the same. Here's an example: flipping a biased coin, with a uniform beta prior on the heads-frequency, and trying to infer the bias/frequency. Obviously, when I flip the coin, I will either get a heads or a tails, so I know after my first flip, my posterior will either favor heads or tails, but not remain unchanged! There is no landing-on-its-edge intermediate 0.5 coin. Indeed, I know in advance I will be able to rule out 1 of 2 hypotheses: 100% heads and 100% tails. But this isn't just true of the first observation. Suppose I flip twice, and get heads then tails; so the single most likely frequency is 1/2, since that's what I have to date. But now we're back to the same situation as in the beginning: we've managed to accumulate evidence against the most extreme biases like 99% heads, so we have learned something from the 2 flips, but we're back in the same situation where we expect the posterior to differ from the prior in 2 specific directions but cannot update the prior: on the next flip I will either get 2/3 or 1/3 heads. Hence, I can tell you - even before flipping - that 1/2 must be dethroned in favor of 1/3 or 2/3!
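A short runnable sketch of this example (Python with scipy is my choice here, not gwern's):

```python
# Beta-Bernoulli coin example: a uniform Beta(1, 1) prior on the
# heads-frequency; after h heads and t tails the posterior is
# Beta(1 + h, 1 + t).
from scipy.stats import beta

def posterior(h, t):
    return beta(1 + h, 1 + t)

print(posterior(0, 0).mean())  # 0.500 -- prior mean, before any flips
print(posterior(1, 0).mean())  # 0.667 -- one heads: the mean must move up...
print(posterior(0, 1).mean())  # 0.333 -- ...or down; it cannot stay at 1/2

# After heads-then-tails the mean is back to 1/2, but the flips were
# not wasted: extreme biases are now much less credible.
print(posterior(1, 1).mean())     # 0.500
print(posterior(0, 0).pdf(0.99))  # 1.000 -- prior density at a 99%-heads bias
print(posterior(1, 1).pdf(0.99))  # 0.059 -- sharply reduced after H, T
```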
3[anonymous]
And yet if you add those two posterior distributions, weighted by your current probability of ending up with each, you get your prior back. Magic! (Witch burners don't get their prior back when they do this because they expect to update in the direction of "she's a witch" in either case, so when they sum over probable posteriors, they get back their real prior which says "I already know that she's a witch", the implication being "the trial has low value of information, let's just burn her now".)
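Worked out for the uniform-prior coin above (my arithmetic, not the commenter's):

```latex
% Conservation of expected evidence: the prior-weighted mixture of the
% two possible posterior densities is exactly the prior density.
\[
\underbrace{\tfrac{1}{2}}_{P(\text{heads})} \cdot
\underbrace{2x}_{\text{Beta}(2,1)}
+
\underbrace{\tfrac{1}{2}}_{P(\text{tails})} \cdot
\underbrace{2(1-x)}_{\text{Beta}(1,2)}
= x + (1 - x) = 1 = \text{Beta}(1,1).
\]
```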
1gwern
Yup, sure does. Which is a step toward the right idea Kindly was gesturing at.
-1Shmi
For coin bias estimate, as for most other things, the self-consistent updating procedure follows maximum likelihood.
3[anonymous]
Max likelihood tells you which hypothesis is most likely, which is mostly meaningless without further assumptions. For example, if you wanted to bet on what the next flip would be, a max likelihood method won't give you the right probability.
1A1987dM
Yes. OTOH, the mode of the beta distribution with parameters a and b happens to equal the expected value of the beta distribution with parameters a - 1 and b - 1, so maximum likelihood does give the right answer (i.e. the expected value of the posterior) if you start from the improper prior B(0, 0). (IIRC, the same thing happens with other types of distributions, if you pick the 'right' improper prior (i.e. the one Jaynes argues for in conditions of total ignorance for totally unrelated reasons) for each. I wonder if this has some particular relevance.)
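The identity is quick to verify (my derivation, for a, b > 1):

```latex
\[
\operatorname{mode}\bigl[\text{Beta}(a, b)\bigr]
= \frac{a - 1}{a + b - 2}
= \frac{a - 1}{(a - 1) + (b - 1)}
= \operatorname{E}\bigl[\text{Beta}(a - 1,\, b - 1)\bigr].
\]
% So after h heads and t tails, the ML estimate h/(h+t) -- the mode of
% the Beta(h+1, t+1) posterior under a uniform prior -- coincides with
% the posterior mean under the improper prior B(0, 0).
```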
5Oligopsony
I suppose this is a hilariously obvious thing to say, but I wonder how much leftism Marcion Mugwump has actually read. We're completely honest about the whole power-seizing thing. It's not some secret truth. (Okay, some non-Marxist traditions like anarchism have that whole "people vs. power" thing. But they're confused.)
0[anonymous]
Ehm... what? Yes but as a friend reminded me recently, saying obvious things can be necessary.
5[anonymous]
The heuristic is great, but that article is horrible, even for Moldbug.
13TimS

I agree. For example:

"Civil disobedience" is no more than a way for the overdog to say to the underdog: I am so strong that you cannot enforce your "laws" upon me.

This statement is obviously true. But it sure would be useful to have a theory that predicted (or even explained) when a putative civil disobedience would and wouldn't work that way.

Obviously, willingness to use overwhelming violence usually defeats civil disobedience. But not every protest wins, and it is worth trying to figure out why - if for no other reason than figuring out whether we could win if we protested something.

0hairyfigment
I see no way to interpret it that would make it true. Civil disobedience serves to provoke a response that will - alone among crises that we know about - decrease people's attitudes of obedience or submission to "traditional" authority. In the obvious Just-So Story, leaders who will use violence against people who pose no threat might also kill you. We would expect this Gandhi trick to fail if the authorities get little-to-none of their power from the attitude in question. The nature of their response must matter as well. (Meanwhile, as you imply, I don't know how Moldbug wants us to detect strength. My first guess would be that he wants his chosen 'enemies' to appear strong so that he can play underdog.)
0TimS
I don't think we are disagreeing on substance. "Underdog" and similar labels are narrative labels, not predictive labels. I interpreted Moldbug as saying that treating narrative labels as predictive labels is likely to lead one to make mistaken predictions and / or engage in hindsight bias. This is a true statement, but not a particularly useful one - it's a good first step, but not a complete analysis. Thus, the extent to which Moldbug treats the statement as a complete analysis is an error.
1Shmi
How is it great? How would you use this "heuristic"?
1[anonymous]
I hadn't read your comment before I posted this. I assumed it meant what the terms usually mean, and lacked moldbuggerian semantics. In that sense, it would be a warning against rooting for the (usual) underdog, which is certainly a bias I've found myself wandering into in the past. In retrospect I was somewhat silly for assuming Moldbug would use a word to mean what it actually means.
0[anonymous]
I have read his comment and the article. Knowing Moldbug's style I agree with GLaDOS on the interpretation. I may be wrong in which case interpret the quote in line with my interpretation rather than original meaning.

“Our vision is inevitably contracted, and the whole horizon may contain much which will compose a very different picture.”

Cheney Bros. v. Doris Silk Corporation, United States Court of Appeals for the Second Circuit

Whenever I'm about to do something, I think, "Would an idiot do that?" And if they would, then I do not do that thing.

-Dwight K. Schrute

4A1987dM
Reversed stupidity is not intelligence. Would an idiot drink when they're thirsty? Yes they would.
2arborealhominid
Extremely good point! I liked this quote because I thought it was a funny way to describe taking the outside view, but you're completely right that it advocates reversed stupidity (at least when taken completely literally).
2Desrtopa
Also a duplicate, about which I made roughly the same comment the first time it was posted.

"Study everything, join nothing"

1Oscar_Cunningham
Attribution?
0Peterdjones
Paul Brunton, quoted by The Maverick Philosopher.
[-][anonymous]00

If someone tells you they solved the mystery of Amelia Earhart's fate, you might be skeptical at first, but if they have a well documented, thoroughly pondered explanation, you would probably hear them out and, who knows, you might even be convinced. But what if, in the next breath, they tell you that they actually have a second explanation as well? You listen patiently and are surprised to discover that the alternate explanation is as well documented and thought through as the first. And after finishing the second explanation you are presented wit

... (read more)
[This comment is no longer endorsed by its author]

[After analyzing the hypothetical of an extra, random person dying every second.] All in all, the losses would be dramatic, but not devastating to our species as a whole. And really, in the end, the global death rate is 100%—everyone dies.

. . . or do they? Strictly speaking, the observed death rate for the human condition is something like 93%—that is, around 93% of all humans have died. This means the death rate among humans who were not members of The Beatles is significantly higher than the 50% death rate among humans who were.

--Randall Munroe, "Death Rates"
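For readers wondering where the 93% comes from: it is consistent with standard demographic estimates (my gloss, not Munroe's own working) of roughly 108 billion humans ever born and about 7 billion alive at the time:

```latex
\[
\frac{108 - 7}{108} \approx 0.935 \approx 93\%.
\]
```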

BART: It's weird, Lis: I miss him as a friend, but I miss him even more as an enemy.
LISA: I think you need Skinner, Bart. Everybody needs a nemesis. Sherlock Holmes had his Dr. Moriarty, Mountain Dew has its Mellow Yellow, even Maggie has that baby with the one eyebrow.

Everyone may need a nemesis, but while Holmes had a distinct character all his own and thus used Dr. Moriarty simply to test formidable skills, Bart actually seems to create or define himself precisely in opposition to authority, as the other to authority, and not as some identifiable cha

... (read more)
1gwern
That's not a bad essay (BTW, essays should be in quote marks, and the book itself, The Simpsons and Philosophy, in italics), but I don't think the quote is very interesting in isolation without any of the examples or comparisons.
1elspood
Edited, thanks for the style correction. I suspect you're probably right that more examples would make this more interesting, given the lack of upvotes. In fact, I probably found the quote relevant mostly because it more or less summed up the experience of my OWN life at the time I read it years ago. I spent much of my youth being contrarian for contradiction's sake, and thinking myself to be revolutionary or somehow different from those who just joined the cliques and conformed, or blindly followed their parents, or any other authority. When I realized that defining myself against social norms, or my parents, or society was really fundamentally no different from blind conformity, only then was I free to figure out who I really was and wanted to be. Probably related: this quote.
-2[anonymous]

Intellectuals may like to think of themselves as people who "speak truth to power" but too often they are people who speak lies to gain power.

--Thomas Sowell

[This comment is no longer endorsed by its author]
4hairyfigment
Dupe.
1[anonymous]
Thank you!
0simplicio
Seems like an implausible view of the motivations of said intellectuals. Otherwise, agreed.
0[anonymous]
The first clause was a statement about what they think, not really about motivations, and quite plausible anyway. The second statement was about what they do. Related to "adaptation executors not fitness maximizers".
-2taelor

It is easy to see how the faultfinding man of words, by persistent ridicule and denunciation, shakes prevailing beliefs and loyalties, and familiarizes the masses with the idea of change. What is not so obvious is the process by which the discrediting of existing beliefs and institutions makes possible the rise of a fanatical new faith. For it is a remarkable fact that the militant man of words who "sounds the established order to its source to mark its want of authority and justice" often prepares the way not for a society of freethinking indi

... (read more)

He shook his head. "No, for the purposes of this discussion, Asuka... only I have the power to decide humanity's fate. And I refuse that power, to give it back to them. Humanity is made of neither heaven nor hell; that with freedom of choice and honor, as though the maker and molder of itself... that they may fashion themselves in whatever form they shall prefer. People, individuals, are not single things but always tip from order to chaos and back again. Those with order are needed for stability. Those who espouse chaos bring change. Only humanity may ... (read more)

-2MugaSofer
How is this a rationality quote? Also, I haven't read the text in question, but I for one would be very wary about letting Warhammer!humanity "fashion themselves in whatever form they shall prefer."

...since the 1930s, self-driving cars have been just 20 years away.

-Bryant Walker Smith

But we've had self-driving cars for multiple years now...

2IlyaShpitser
By principle of charity (everyone knows about Google cars by now), I took the grandparent to mean "past performance is not a guarantee of future returns."
1Jayson_Virissimo
Obviously, Smith knows this, since he has published papers on the legality of self-driving cars as late as 2012. The purpose of the quote (for me) is to draw an analogy between Strong AI and self-driving cars, both of which have had people saying "it is just 20 years away" for many decades now (and one of which is now here, making the people that made such a prediction 20 years ago correct).
-1MugaSofer
Considering there are working prototypes of such cars driving around right now ... EDIT: Damn, ninja'd by Luke.