Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Rationality Quotes August 2014

7 Post author: RolfAndreassen 04 August 2014 03:12AM
  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

Comments (233)

Comment author: Pablo_Stafforini 04 August 2014 05:59:45PM 27 points [-]

A good rule of thumb to ask yourself in all situations is, “If not now, then when?” Many people delay important habits, work and goals for some hypothetical future. But the future quickly becomes the present and nothing will have changed.

Scott Young

Comment author: alanwil2 06 August 2014 07:28:02PM 6 points [-]

I just bought a book on procrastination. I am going to start reading it tomorrow.

Comment author: CronoDAS 04 August 2014 08:24:52AM 22 points [-]

The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.

-- Alberto Brandolini (via David Brin)

Comment author: Viliam_Bur 04 August 2014 09:11:07AM *  8 points [-]

Refuting frequently appearing bullshit could be made more efficient by having a web page with standard explanations which could be linked from the debate. Posting a link (perhaps with a short summary, which could also be provided on the top of that web page) does not require too much energy.

Which would create another problem, of protecting that web page from bullshit created by reversing stupidity, undiscriminating skepticism, or simply affective death spirals about that web page. (Yes, I'm thinking about RationalWiki.) Maybe we could have multiple anti-bullshit websites, which would sometimes explain using their own words, and sometimes merely by linking to another website's explanation they agree with.

Comment author: CronoDAS 04 August 2014 09:18:56AM 6 points [-]

http://www.talkorigins.org/indexcc/ is considered a good one on the single issue of creationism vs. evolution.

Comment author: fubarobfusco 04 August 2014 09:07:20PM *  3 points [-]

Yes, it is, and The Counter-Creationism Handbook sits next to Darwin, Dawkins, and Diamond on my shelf. It would be a Good Thing if folks in other bullshit-fighting arenas had the level of scholarship exhibited by Mark Isaak and his collaborators.

(Hell, every time I see a "bingo card" ridiculing an Other Side's arguments, I wish its creators had the time and scholarly dedication of the talk.origins folk.)

Comment author: Gunnar_Zarncke 07 August 2014 10:13:13PM 2 points [-]

But you still have to find the proper entry. This just shifts the burden around, and in total, refutation is probably still much more expensive than creation (especially as the same BS can be copied, but the refutation can't).

Comment author: ChristianKl 11 August 2014 10:47:13AM 4 points [-]

undiscriminating skepticism

I think that's a bad description. The kind of people on RationalWiki are very discriminating. When something is said by an Authority they trust, they aren't skeptical, and when something is said by someone they don't trust, they are very "skeptical".

Maybe we could have multiple anti-bullshit websites

I don't think that framing yourself as "anti-bullshit" is helpful. It makes more sense to frame yourself as being pro-evidence. We already have multiple websites that explain issues.

I personally like Skeptics Stackexchange. If I come across a new claim, I often simply go and open a question over there.

When it comes to an issue such as vaccination I think Vox has a decent primer: http://www.vox.com/cards/vaccines/what-is-vaccine

Comment author: Jiro 08 August 2014 07:01:22PM 3 points [-]

How does this differ from religious groups refusing to answer questions that dispute things said by their religion, and instead referring you to scripture passages or Christian apologetics?

Of course it's different in that you will link to refutations that are good arguments, and the religious person will link to apologetics that are bad arguments, but aside from that, how is it different? After all, you can't very well say that certain tactics are acceptable or unacceptable based on whether the associated arguments are good or bad.

Comment author: Viliam_Bur 09 August 2014 06:20:49PM *  6 points [-]

Depends on the audience and topic. Also, sometimes the goal is not to convince your opponent, but to convince the bystanders.

Imagine that you are on a web forum where someone comes and writes a long comment about "Isn't it horrible that vaccination causes autism, and yet the government wants us to vaccinate our children? I would do anything to protect my child from autism!" and some information probably copied from some other webpage. It's not just you and them; there are also other readers who don't have a clue and may be frightened by the message. (And they will not use google, because... well, humans are stupid.)

If nobody opposes the message, it seems like there is a clear consensus among the people who care about the topic. If you oppose them, you are wasting your time. -- But if you post a link to a good explanation, then the people frightened by the message can read the explanation and hear a dissenting voice, while you wouldn't have to spend a lot of time... assuming there is a good anti-bullshit page where you just enter "vaccination, autism" in the search box, and it shows you a well-written page about the topic. Where well-written means a short layman-accessible summary at the top, and then detailed arguments and references below.

Comment author: Jiro 10 August 2014 04:01:33PM *  2 points [-]

But by that same reasoning, a fundamentalist Christian could come here, see that someone has written a long comment about, say, evolution, and reply with a link to a prewritten web page listing 100 arguments against evolution. He reasons that if he posts a good explanation, people who are frightened by the idea of fundamentalists being a menace can read the explanation and hear a "dissenting voice".

As far as he is concerned, he has followed your recommendations exactly. Is there something you could say which explains why his behavior is unacceptable, but the behavior you describe is acceptable, that does not involve "our anti-anti-vaccination page is well-written and your anti-evolution page is not"?

(Alternatively, would you find his behavior acceptable? This seems odd.)

Comment author: Viliam_Bur 11 August 2014 07:50:42AM *  3 points [-]

A fundamentalist Christian who posted a link here to a page listing arguments against evolution would be more effective than one who tried to debate, because they would achieve the same (in this situation: zero) effect while spending much less of their resources. Each person who tried to debate them would waste more of their own time reading the linked page and composing a reply. So, I believe this is a good strategy.

Specifically on LW we have an (unwritten?) norm that if you post a link, you should also provide a summary using your own words. Which probably was designed to counter this strategy. But there are websites which don't have this norm, e.g. Facebook.

Comment author: RichardKennaway 11 August 2014 09:00:46AM 2 points [-]

Specifically on LW we have an (unwritten?) norm that if you post a link, you should also provide a summary using your own words. Which probably was designed to counter this strategy.

It is not specific to LW, but a custom of good practice that I personally have followed ever since there has been such a thing as a link (and before then, when the equivalent was posting to an email list a cut-and-paste of someone else's words without any words from the person posting). I also practice the custom of ignoring links that come to me without context.

I recommend both parts of this practice to everyone.

Comment author: RichardKennaway 11 August 2014 10:19:15AM 1 point [-]

How does this differ from religious groups refusing to answer questions that dispute things said by their religion, and instead referring you to scripture passages or Christian apologetics?

How does answering questions at length in one's own words differ from religious groups answering questions at length in their own words?

Comment author: Jiro 11 August 2014 03:52:10PM 2 points [-]

It doesn't differ. But it doesn't have to, since we consider it acceptable behavior for religious people to come here and answer questions in their own words.

We generally don't consider it acceptable behavior for religious people to come here and respond to posts by giving links to apologetic sites. It should not, then, be acceptable behavior for us except maybe in a few specialized cases (such as where the dispute is purely over facts, like for vaccines).

Comment author: AndHisHorse 08 August 2014 05:52:15PM 1 point [-]

Refuting frequently appearing bullshit is more than a matter of making the facts available. After all, anti-vaccination folks appear with enough frequency to be a curious news item (which I admit is a horrendous metric, but let's pretend it means something), and I'm sure that a quick Google search would yield enough facts to disabuse them of their notions. The trick is building up enough credibility and charisma - if such a property could be applied to an argument - to make such a site not just correct, but convincing. That's where the order of magnitude comes in.

Comment author: Viliam_Bur 09 August 2014 06:26:02PM *  3 points [-]

a quick Google search would yield enough facts to disabuse them of their notions

Most people are not strategic enough to use Google. Or if they get many contradicting results from a Google search, they are not smart enough to decide.

Also, there is this bias that if information is brought to you by a person you know, it has a much stronger impact. (For most people, good relationships matter more than truth. If someone brings you information, disbelieving it is a potential conflict with that person.) You only get an equal opposing force if another person you know opposes the original information. For example, by saying "it's bullshit" and posting a link to a refutation.

Comment author: Benito 05 August 2014 12:50:36PM *  17 points [-]

But if that were the case, then moral philosophers - who reason about ethical principles all day long - should be more virtuous than other people. Are they? The philosopher Eric Schwitzgebel tried to find out. He used surveys and more surreptitious methods to measure how often moral philosophers give to charity, vote, call their mothers, donate blood, donate organs, clean up after themselves at philosophy conferences, and respond to emails purportedly from students. And in none of these ways are moral philosophers better than other philosophers or professors in other fields.

Schwitzgebel even scrounged up the missing-book lists from dozens of libraries and found that academic books on ethics, which are presumably mostly borrowed by ethicists, are more likely to be stolen or just never returned than books in other areas of philosophy. In other words, expertise in moral reasoning does not seem to improve moral behavior, and it might even make it worse (perhaps by making the rider more skilled at post hoc justification). Schwitzgebel still has yet to find a single measure on which moral philosophers behave better than other philosophers.

  • Jonathan Haidt, discussing the idea that ethical reasoning causes good behaviour, in his book 'The Righteous Mind'.

I found the book-stealing thing quite funny, although I imagine that some of the results described could be explained by popularity; if more people get into / like ethics, then there are more people who might steal library books, more antisocial people who don't respond to emails, etc. This hasn't been demonstrated to my knowledge though, and I'm otherwise inclined to believe that people who spend their days thinking about ethics in the abstract, are simply better at coming up with rationales for their instinctive feelings. Joshua Greene says rights are an example of this, where we need a dictum against whatever our emotions are telling us is despicable, even though we can't find any utilitarian justification for it.

Comment author: dspeyer 06 August 2014 10:56:19PM 6 points [-]

There's probably a selection effect at work. Would a highly moral person with a capable and flexible mind become a full-time moral philosopher? Take their sustenance from society's philanthropy budget?

Or would they take the talmudists' advice and learn a trade so they can support themselves, and study moral philosophy in their free time? Or perhaps Givewell's advice and learn the most lucrative art they can and give most of it to charity? Or study whichever field allows them to make the biggest difference in peoples' lives? (Probably medicine, engineering or diplomacy.)

Granted, such a person might think they could make such a large contribution to the field of moral philosophy that it would be comparable in impact to other research fields. This seems unlikely.

The same reasoning would keep highly moral people out of other sorts of philosophy, but people who don't have an interest in moral philosophy per se might not notice the point. It's hard to avoid if you specifically study it.

Comment author: CCC 07 August 2014 07:40:53AM 3 points [-]

Or would they take the talmudists' advice and learn a trade so they can support themselves, and study moral philosophy in their free time?

If someone's studying moral philosophy in their free time, then wouldn't they be taking academic books on ethics out of the library?

Comment author: Viliam_Bur 09 August 2014 09:59:57PM *  3 points [-]

Granted, such a person might think they could make such a large contribution to the field of moral philosophy that it would be comparable in impact to other research fields. This seems unlikely.

Unlikely that they would make such contribution? Yes. Unlikely that they think they would make such contribution? Maybe no.

But I guess they probably don't even think this way, i.e. don't try to maximize their impact. More likely it is something like: "My contribution to society exceeds my salary, so I am a net benefit to society." Which is actually possible. Yeah, some people, especially effective altruists, would consider such thinking evidence against their competence as a moral philosopher.

Comment author: eli_sennesh 31 August 2014 02:31:19PM 2 points [-]

This could happen, but I think it's mostly dwarfed by the far larger selection effect that people who are not financially privileged mostly don't attempt to become humanities academics these days -- and for good reason.

Comment author: dspeyer 01 September 2014 04:05:13PM 1 point [-]

Are you saying that financially privileged people tend to be less moral?

Comment author: eli_sennesh 03 September 2014 10:09:41AM 2 points [-]

While that case has been made in a few isolated studies, I was more generally referring to the fact that people who don't come from money will usually choose careers that make them money, and humanities academia doesn't.

Comment author: Nornagest 03 September 2014 05:49:08PM *  4 points [-]

Wasn't sure about that, so I tracked down some research (Goyette & Mullen 2006). Turns out you're right: conditioned on getting into college in the first place, higher socioeconomic status (as proxied by parents' educational achievement) is correlated with going into arts and sciences over vocational fields (engineering, education, business). The paper also finds a nonsignificant trend toward choosing arts and humanities over math and science, within the arts and science category.

(Within the vocational majors, though, engineering is the highest-SES category. Business and education are both significantly lower. I don't know which of those would be most lucrative on average but I suspect it'd be engineering.)

Comment author: eli_sennesh 03 September 2014 10:17:26PM 1 point [-]

(Within the vocational majors, though, engineering is the highest-SES category. Business and education are both significantly lower. I don't know which of those would be most lucrative on average but I suspect it'd be engineering.)

I think there are several trade-offs there: engineering looks like the highest expected value to us, because we (on LessWrong, mostly) had pre-university educations focused on math, science, and technology. People from lower SES... did not, so fewer of them will survive the weed-out courses taught in "we damn well hope you learned this in AP class" style. And then there's the acclimation to discipline and acclimation to obsessive work-habits (necessary for engineering school) that come from professional parentage... and so on. And then of course, many low-SES people probably want to go into teaching as a helping profession, but that's not a very quantitative explanation and I'm probably just making it up.

On the other hand, engineering colleges tend to have abnormally large quantities of international students and immigrants blatantly focused on careerism. So yeah.

Comment author: dspeyer 03 September 2014 05:19:55PM 1 point [-]

How does that fact impact the morality of moral philosophers as measured?

Comment author: RichardKennaway 13 August 2014 08:51:09AM 4 points [-]

This hasn't been demonstrated to my knowledge though, and I'm otherwise inclined to believe that people who spend their days thinking about ethics in the abstract, are simply better at coming up with rationales for their instinctive feelings.

I think it more likely they're better at coming up with rationales to ignore their instinctive feelings.

Comment author: Jiro 13 August 2014 08:30:43PM 2 points [-]

I think that someone can believe that their instinctive feelings are an approximation to what is ethical, then try to formalize it, then conclude that they have identified areas where the approximation is in error. So their ethics code could be highly based on their instinctive feelings without following them 100% of the time.

Comment author: VAuroch 13 August 2014 09:25:02AM 1 point [-]

That seems unlikely. People's instinctive feelings are generally pretty selfish. (Small sample size, obviously. I think 2 other people where I've spoken with enough about this kind of thing to judge.)

Comment author: RichardKennaway 13 August 2014 10:03:49AM *  3 points [-]

None of your sample were people with children, then?

And there's also the question of what is "instinctive" versus whatever the opposite is. What is this distinction and how do you tell?

Comment author: VAuroch 13 August 2014 09:08:51PM 0 points [-]

No, but I don't see why children should have an effect; favoring your children over strangers is no less selfish than favoring yourself over strangers, and both are strong instincts.

By instinctive I just mean system 1; the judgments made before you take time to think through what you should do.

Comment author: RichardKennaway 14 August 2014 04:01:27AM 4 points [-]

No, but I don't see why children should have an effect; favoring your children over strangers is no less selfish than favoring yourself over strangers, and both are strong instincts.

I had intended to draw attention to the phenomenon of favouring one's children over oneself. It appears I was right about the test demographic.

And "no less selfish"? At what point would you consider the widening circle to be "less selfish"? To favour your village over others, your country over others, humanity over animals; are these all no less selfish? Is nothing unselfish but a life of exinanition and unceasing service to everyone and everything but oneself?

By instinctive I just mean system 1; the judgments made before you take time to think through what you should do.

System 1 is susceptible to training -- that is what training is. We may be born with the neurological mechanism, but not its entire content. "Instinct" more usually means (quoting Wikipedia) "performed without being based upon prior experience". A human without prior experience is a baby.

Comment author: VAuroch 14 August 2014 06:55:48AM 0 points [-]

Standard definitions of system 1 describe it as 'instinctive', but if you need a separate definition of instinctive responses, 'untrained system 1 responses' works.

At what point would you consider the widening circle to be "less selfish"? To favour your village over others, your country over others, humanity over animals; are these all no less selfish? Is nothing unselfish but a life of exinanition and unceasing service to everyone and everything but oneself?

That depends. Any of those things can be unselfish, if you're doing it because you think it's a good thing to do independent of whether it's an outcome/action you like, and the wider the circle the more likely that's the motivation. If it's based on 'I like these people and want them to be happy, therefore I will take this action' that's still selfish.

Lest this sound like I'm saying anything that isn't done for abstract reasons is selfish, I'd contrast it with things done for reasons of compassion. The lines there can get blurry when the people you're feeling compassion for are in your ingroup, but things like the place-quarters-here-for-adorable-sad-children variety of charity are clearly trying to induce compassionate motivation (and it works).

From conversations I have had with my own parents (not as comprehensive or in-depth, but heartfelt), it seemed pretty clear that the parenting instinct is much more 'these kids are mine and I will take care of them come hell or high water' than a compassionate reflex.

Comment author: NancyLebovitz 12 August 2014 03:11:47PM 3 points [-]

Hypothesis: At least some of the people who are interested in ethics are concerned because they have a problem behaving ethically.

Comment author: Torello 05 August 2014 01:43:56PM 5 points [-]

"In 1971, John Rawls coined the term "reflective equilibrium" to denote "a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments". In practical terms, reflective equilibrium is about how we identify and resolve logical inconsistencies in our prevailing moral compass. Examples such as the rejection of slavery and of innumerable "isms" (sexism, ageism, etc.) are quite clear: the arguments that worked best were those highlighting the hypocrisy of maintaining acceptance of existing attitudes in the face of already-established contrasting attitudes in matters that were indisputably analogous."

-Aubrey de Grey, The Overdue Demise Of Monogamy

This passage argues that reasoning does impact ethical behavior. Steven Pinker and Peter Singer make similar arguments, which I find convincing.

Comment author: Benito 05 August 2014 06:48:21PM 3 points [-]

I actually put up another quote arguing for it, by Joshua Greene, making an analogy between successful moral argument and the invention of new technology; even though a person rarely invents a whole new piece of technology, our world is defined by technological advance. Similarly, even though it is rare for a moral norm to change as a result of abstract argument, our social norms have changed dramatically since times gone by.

Nonetheless, the quote works with empirical evidence, the ultimate arbiter of reality. It looks like, whilst moral argument can change our thoughts (and behaviour) on ethical issues, a lot of the time it doesn't. Like technology, the big changes transform our world, but for the most part we're just playing Angry Birds.

Comment author: eli_sennesh 31 August 2014 02:32:33PM *  1 point [-]

I find it quite arguable whether or not "reflective equilibrium" is a real thing that actually happens in our cognition, or a little game played by philosophy academics. Actual cognitive dissonance caused by holding mutually contradicting ideas in simultaneous salience is well-evidenced, but that's not exactly an equilibrium across all ideas we hold, merely across the ones we're holding in short-term verbal memory at the time.

Comment author: grendelkhan 18 August 2014 06:45:56PM 13 points [-]

Sometimes the biggest disasters aren't noticed at all -- no one's around to write horror stories.

Vernor Vinge, A Fire Upon the Deep

Comment author: Iydak 13 August 2014 11:57:54AM *  13 points [-]

We try things. Occasionally they even work.

Parson Gotti

Comment author: Bugmaster 14 August 2014 04:47:42AM 2 points [-]
Comment author: lalaithion 15 August 2014 06:13:56AM 3 points [-]

" 'striving for the impossible' doesn't mean 'toiling in vain'. It means growth, it means improvement in the directions of your ideas, not futility."

Comment author: lmm 25 August 2014 09:31:24PM 1 point [-]

Link is broken

Comment author: Iydak 26 August 2014 10:50:48PM 1 point [-]

Looks like they decided to swap over to a new site not two weeks after I posted it. Should be fixed now.

Comment author: lmm 27 August 2014 07:01:25AM 1 point [-]

Nope, still broken

Comment author: Iydak 28 August 2014 02:21:12PM *  1 point [-]

My link, or Bugmaster's?

Comment author: lmm 28 August 2014 11:28:56PM 1 point [-]

Bugmaster's

Comment author: Stabilizer 04 August 2014 11:24:33PM *  11 points [-]

Most of the time what we do is what we do most of the time.

-Daniel Willingham, Why Don't Students Like School? The point is that, quite often, the reason we're doing something is that that's what we're used to doing in that situation.

Note: He attributes the quote to some other psychologists.

Comment author: Ixiel 19 August 2014 12:55:58AM 10 points [-]

Most of the time he asked questions. His questions were very good, and if you tried to answer them intelligently, you found yourself saying excellent things that you did not know you knew, and that you had not, in fact, known before. He had "educed" them from you by his question. His classes were literally "education" - they brought things out of you, they made your mind produce its own explicit ideas.

Thomas Merton, about professor Mark Van Doren

Comment author: rule_and_line 23 August 2014 01:47:44AM 9 points [-]

After describing

blind certainty, a close-mindedness that amounts to an imprisonment so total that the prisoner doesn't even know he's locked up.

David Foster Wallace continues

The point here is that I think this is one part of what teaching me how to think is really supposed to mean. To be just a little less arrogant. To have just a little critical awareness about myself and my certainties. Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded. I have learned this the hard way, as I predict you will, too.

Comment author: soreff 23 August 2014 05:47:42PM 4 points [-]

Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded.

There is a very large amount of stuff that one is automatically certain of that is correct, though trivial: data like "liquid water is wet". I'm not sure how one would even practically quantify an analysis of what fraction of the statements one is certain of are or are not true. Even if one could efficiently test them, how would one list them? In the current state of science, tracing a full human neural network (and then converting its beliefs into a list of testable statements) is beyond our capabilities.

Comment author: rule_and_line 23 August 2014 07:14:17PM 1 point [-]

I'm curious about this "liquid water is wet" statement. Obviously I agree, but for the sake of argument, could you taboo "is" and tell me the statement again? I'm trying to understand how your algorithm feels from the inside.

If you're curious how to quantify fractions of statements, you might enjoy this puzzle I heard once. Suppose you're an ecological researcher and you need to know the number of fish in a large lake. How would you get a handle on that number?

Comment author: soreff 23 August 2014 07:34:19PM 3 points [-]

One of the parts of "liquid water is wet" is that a droplet of it will spread out on many common surfaces: salt, paper, cotton, etc. Yes, it is a bit tricky to unpack what is meant by "wet" - perhaps some other properties, like not withstanding shear, are also folded in - but I don't think that it is just a tautology, with "wet" being defined as the set of properties that liquid water has.

Re the catch/count/mark/release/recapture/count puzzle - the degree to which that is feasible depends on how well one can do (reasonably) unbiased sampling. I'm skeptical that that will work well with the set of testable statements that one is automatically certain of.
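For what it's worth, the recapture estimate itself is simple arithmetic. A minimal sketch (in Python, with made-up counts; real surveys hinge on exactly the unbiased-sampling caveat raised here):

```python
# Lincoln-Petersen mark-recapture estimate: catch and mark M fish,
# release them, later catch a second sample of C fish, and count how
# many (R) carry marks. If sampling is unbiased, marked fish appear in
# the second sample at the same rate they occur in the whole lake:
#   R / C ~= M / N   =>   N ~= M * C / R

def lincoln_petersen(marked, second_sample, recaptured):
    """Estimate total population size N from a mark-recapture survey."""
    if recaptured == 0:
        raise ValueError("no recaptures: cannot estimate population")
    return marked * second_sample / recaptured

# Made-up example: mark 100 fish; a later catch of 60 contains 5 marked.
print(lincoln_petersen(100, 60, 5))  # -> 1200.0
```

The same logic only transfers to "statements one is certain of" if you can sample those statements without bias, which is the hard part.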

Comment author: Qwake 17 August 2014 03:32:17AM 9 points [-]

Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.

Albert Einstein

Comment author: somervta 17 August 2014 10:25:28AM 12 points [-]

I don't suppose you have a source for the quote? (at this point, my default is to disbelieve any attribution of a quote unknown to me to Einstein)

Comment author: jazmt 17 August 2014 07:47:16PM 5 points [-]

according to this website (http://ravallirepublic.com/news/opinion/viewpoint/article_876e97ba-1aff-11e2-9a10-0019bb2963f4.html) it is part of 'aphorisms for leo baeck' (which I think is printed in 'ideas and opinions' but I don't have access to the book right now to check)

Comment author: Qwake 18 August 2014 05:31:57AM 1 point [-]

Thank you for finding the source (I read it in a book and was too lazy to fact-check it).

Comment author: somervta 17 August 2014 10:40:07PM 1 point [-]

Thanks! I didn't find it with my minute of googling; good to know it's legit.

Comment author: arundelo 17 August 2014 08:34:59PM 1 point [-]
Comment author: arundelo 05 August 2014 05:19:21AM 24 points [-]

That's why I'm skeptical of people who look at some catastrophic failure of a complex system and say, "Wow, the odds of this happening are astronomical. Five different safety systems had to fail simultaneously!" What they don't realize is that one or two of those systems are failing all the time, and it's up to the other three systems to prevent the failure from turning into a disaster.

-- Raymond Chen

Comment author: satt 07 August 2014 01:16:20AM 8 points [-]

In other words, some of the slices in one's Swiss cheese model are actually missing entirely.

Comment author: dspeyer 05 August 2014 09:24:34PM 8 points [-]

Corollary: if you're running a system for which five simultaneous failures is a disaster, monitor each safety system separately and treat any three simultaneous failures as if it were a disaster.
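A toy simulation makes the corollary concrete (a sketch with invented, independent per-layer failure rates; Chen's point is that real layers fail persistently and not independently, so treat this only as an intuition pump):

```python
import random

def layer_failures(layers=5, p_fail=0.1, trials=100_000, seed=0):
    """Estimate how often at least 3, and all 5, of the safety layers
    are down at the same instant, assuming independent failures."""
    rng = random.Random(seed)
    near_misses = disasters = 0
    for _ in range(trials):
        down = sum(rng.random() < p_fail for _ in range(layers))
        near_misses += down >= 3
        disasters += down == layers
    return near_misses / trials, disasters / trials

near, full = layer_failures()
# With p_fail = 0.1 the exact probabilities are ~0.00856 for ">= 3 down"
# and 1e-5 for "all 5 down": the near-miss alarm fires roughly 850 times
# as often as the disaster it is meant to pre-empt.
print(f"3+ layers down: {near:.4f}  all 5 down: {full:.5f}")
```

In other words, the "three simultaneous failures" alarm gives you hundreds of warnings for every disaster, which is precisely what makes it usable as a monitoring threshold.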

Comment author: Gunnar_Zarncke 07 August 2014 10:02:13PM 7 points [-]

Also known as Fundamental Failure Mode. From Systemantics:

System failure

The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in failure mode.

A complex system can fail in an infinite number of ways. (If anything can go wrong, it will.) (See Murphy's law.)

The mode of failure of a complex system cannot ordinarily be predicted from its structure.

The crucial variables are discovered by accident.

The larger the system, the greater the probability of unexpected failure.

"Success" or "Function" in any system may be failure in the larger or smaller systems to which the system is connected.

The Fail-Safe Theorem: When a Fail-Safe system fails, it fails by failing to fail safe.

Comment author: Stabilizer 04 August 2014 04:01:20AM 24 points [-]

Surgeons finally did upgrade their antiseptic standards at the end of the nineteenth century. But, as is often the case with new ideas, the effort required deeper changes than anyone had anticipated. In their blood-slick, viscera-encrusted black coats, surgeons had seen themselves as warriors doing hemorrhagic battle with little more than their bare hands. A few pioneering Germans, however, seized on the idea of the surgeon as scientist. They traded in their black coats for pristine laboratory whites, refashioned their operating rooms to achieve the exacting sterility of a bacteriological lab, and embraced anatomic precision over speed.

The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist. Young physicians from America and elsewhere who went to Germany to study with its surgical luminaries became fervent converts to their thinking and their standards. They returned as apostles not only for the use of antiseptic practice (to kill germs) but also for the much more exacting demands of aseptic practice (to prevent germs), such as wearing sterile gloves, gowns, hats, and masks. Proselytizing through their own students and colleagues, they finally spread the ideas worldwide.

-Atul Gawande

Comment author: EGarrett 04 August 2014 04:23:40PM 8 points [-]

"Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation." -Richard Feynman

Comment author: dspeyer 05 August 2014 09:34:00PM 22 points [-]

It was a gamble: would people really take time out of their busy lives to answer other people’s questions, for nothing more than fake internet points and bragging rights?

It turns out that people will do anything for fake internet points.

Just kidding. At best, the points, and the gamification, and the focused structure of the site did little more than encourage people to keep doing what they were already doing. People came because they wanted to help other people, because they needed to learn something new, or because they wanted to show off the clever way they’d solved a problem.

...

An incredible number of people jumped at the chance to help a stranger

-- Jay Hanlon, Five year retrospective on StackOverflow

Comment author: satt 07 August 2014 01:38:38AM 51 points [-]

On the other hand, a Slashdot comment that's stuck in my mind (and on my hard disks) since I read it years ago:

In one respect the computer industry is exactly like the construction industry: nobody has two minutes to tell you how to do something...but they all have forty-five minutes to tell you why you did it wrong.

When I started working at a tech company, as a lowly new-guy know-nothing, I found that any question starting with "How do I..." or "What's the best way to..." would be ignored; so I had to adopt another strategy. Say I wanted to do X. Research showed me there were (say) about six or seven ways to do X. Which is the best in my situation? I don't know. So I pick an approach at random, though I don't actually use it. Then I wander down to the coffee machine and casually remark, "So, I needed to do X, and I used approach Y." I would then, inevitably, get a half-hour discussion of why that was stupid, and what I should have done was use approach Z, because of this, this, and this. Then I would go off and use approach Z.

In ten years in the tech industry, that strategy has never failed once. I think the key difference is the subtext. In the first strategy, the subtext is, "Hey, can you spend your valuable time helping me do something trivial?" while in the second strategy, the subtext is, "Hey, here's a chance to show off how smart you are." People being what they are, the first subtext will usually fail -- but the second will always succeed.

— fumblebruschi

Comment author: NancyLebovitz 12 August 2014 03:15:08PM *  16 points [-]

In addition to the specific advice, this is an excellent example of rationality because it's about getting the best from people as they are rather than being resentful because they aren't behaving as they would if they were ideally rational.

Comment author: satt 16 August 2014 04:50:29PM 5 points [-]

I can't be sure, because I first read that comment so long ago, but I think I took it as an inspiration to be better than the co-workers at the coffee machine. It's repellent to imagine myself as a person who'd spend 45 minutes on a Yer Doin It Rong lecture but wouldn't spend 2 minutes to explain how to do something properly in the first place.

Comment author: Sarunas 20 August 2014 10:36:37PM 4 points [-]

This is known as Cunningham's Law. Another example. The explanation (non-competitive vs. competitive mindsets, the latter of which is more motivating to act) seems quite convincing. In addition, could there also be an analogy to loss aversion (a tendency to prefer avoiding losses to acquiring gains)? Would people feel more urgency to correct what they see as wrong (and thus to challenge what is presented as correct) rather than to explain what is right ("less wrong" vs. "more right", if we are not trying to avoid puns)?

Comment author: TheMajor 07 August 2014 06:30:05AM 4 points [-]

A reply because an upvote doesn't begin to cover it. I might start using this!

Comment author: ike 06 August 2014 02:05:47AM 6 points [-]

Well, did they test popularity of sites without fake internet points vs popularity of sites with, controlling for relevant factors? I skimmed through the post, and there wasn't much actual data on what people do and why, just assertions.

Comment author: Azathoth123 06 August 2014 02:57:57AM 6 points [-]

I thought the point of the points was to weed out the people whose "help" you don't want.

Comment author: ike 06 August 2014 08:07:31PM 2 points [-]

That would account for reputation, not badges. (No one says "Hey, I got two answers from people with the same rep, but one has twice as many badges, so I'll go with that one.")

On the actual question, I've seen meta-posts on Stack Exchange complaining that they qualified for a badge and didn't get it, so the stuff does matter somewhat.

Comment author: cody-bryce 07 September 2014 04:36:43AM *  3 points [-]

Convincing people to offer others programming help on the internet isn't a special accomplishment of SO. From usenet to modern mailing lists to forums to IRC, there are tons and tons of thriving venues for it. The gamification might have helped SO's popularity some, but taking time out of their busy lives to answer others' questions was alive and well.

SO is a dangerous trash heap. It doesn't encourage helping people make good programs; it answers extremely literal questions. Speed of posting is important. Style of posting is important. Blatantly wrong answers posted early get upvoted by people who don't know what they're looking at, which means vote count isn't a reliable signal. Doing anything but answering a question completely literally is treated with extreme hostility. These sorts of things have gotten worse with time.

The community relations are bizarre. Active members of the community buy into cheap salesman lines by the owners that are meant to favor the owners. The idea that the community can direct itself is thrown around as if it wasn't blatantly untrue.

Yes, an incredible number of people jump at the chance to help strangers. SO didn't invent that; it's just one of the more popular current hosts for these people. It's distasteful for the retrospective to act as though it all began with SO wondering whether such people exist.

Comment author: Lumifer 08 September 2014 01:49:06AM 2 points [-]

It doesn't encourage helping people make good programs; it answers extremely literal questions.

So? That's fine. "Helping people make good programs" is awfully fuzzy and is likely to start by major holy wars breaking out. SO is useful, at least for me, because it offers fast concise answers to very specific and literal questions I have on a regular basis.

I can't say anything about the internal politics of SO since I don't play there.

Comment author: arundelo 04 August 2014 10:33:02PM 7 points [-]

The power is not in the choice of metaphor, it is in the ability to shift among metaphors. Teaching people this other metaphor [...] but not leaving them with the flexibility to move freely in and out is not having enabled them at all.

-- Kent Pitman

Comment author: arundelo 05 August 2014 12:43:40PM 8 points [-]

Elsewhere in the thread he says the following. I have corrected some typos and added emphasis.

  • I expect a firestorm of complaining over the use of the word `stack'. Maybe I'll be pleasantly surprised. I prefer to use such metaphors because I think such abstractions give people a useful handhold when they are coming from other backgrounds. I get jumped on a lot for using a stack metaphor when talking about Scheme because people apparently think I've forgotten that it's not a strict stack; personally, I think the people who are so quick to jump on me have forgotten that even a metaphor that has a flaw can be a powerful way to reason and express even when not speaking rigorously. The remark here is intended to allow someone who is just barely reading along to confirm that something he may have strong knowledge of in another domain is in fact what is being discussed here. To not offer that handhold seems to me to be impolite.
Comment author: Benito 08 August 2014 09:35:08PM 29 points [-]

Hollywood is filled with feel-good messages about how robotic logic is no match for fuzzy, warm, human irrationality, and how the power of love will overcome pesky obstacles such as a malevolent superintelligent computer. Unfortunately there isn’t a great deal of cause to think this is the case, any more than there is that noble gorillas can defeat evil human poachers with the power of chest-beating and the ability to use rudimentary tools.

From the British Newspaper 'The Telegraph', and their article on Nick Bostrom's awesome new book 'Superintelligence'.

I just thought it was a great analogy. Nice to see AI as an X-Risk in the mainstream media too.

Comment author: elharo 10 August 2014 10:28:05AM *  7 points [-]

Probably true. It's not like Hollywood is an accurate source of information about anything. (Climate change, asteroid impacts, the legal system, the military, romance, sex, business, anything.) But I fail to see how this is a rationality quote. I'm sure there are many more quotes of the form "Group X is wrong about Topic Y."

I would prefer to limit quotes to those that that teach us how to tell whether or how much Group X is right or wrong about Topic Y, and skip quotes that merely turn on the applause lights on a particular topic.

Comment author: NancyLebovitz 12 August 2014 03:20:21PM 5 points [-]

The quote isn't just about Hollywood being wrong, it's about a specific way that it's wrong.

Comment author: RolfAndreassen 04 August 2014 05:40:54AM 26 points [-]

A man is walking on the moon with his eyes turned up toward space
And the bright blue world that watches him reflected on his face.
The whole world sees the hero there and the module crew also.
But few can see the guiding team that guards him from below.

Here's a health to the man who walked the moon, and the module crew above,
And the team that watches from the sky with worry, joy, and love.
To all who blazed the sky-trail come raise your glasses 'round;
And a health to the unknown heroes, too, who never left the ground.

Here's a health to the ship's designers, and the welders of her seams,
And all who man the radar-scan to watch our dawning dreams.
For all the unknown heroes, sing out to every shore:
"What makes one step a giant leap is all the steps before".

Leslie Fish, musically praising the Hufflepuff virtues.

Comment author: rule_and_line 23 August 2014 01:27:12AM *  6 points [-]

There is a real joy in doing mathematics, in learning ways of thinking that explain and organize and simplify. One can feel this joy discovering new mathematics, rediscovering old mathematics, learning a way of thinking from a person or text, or finding a new way to explain or to view an old mathematical structure.

This inner motivation might lead us to think that we do mathematics solely for its own sake. That’s not true: the social setting is extremely important. We are inspired by other people, we seek appreciation by other people, and we like to help other people solve their mathematical problems.

-- William Thurston

The entire essay is a beautiful discussion of success and failure in practicing the art of mathematics. Changing the things that need to be changed, much of it applies to practicing the art of rationality.

Comment author: NancyLebovitz 15 August 2014 05:05:01PM 6 points [-]

Challenge my assumption, not my conclusion, and do it with new evidence, instead of trying to twist the old stuff.

"The Originist", by Orson Scott Card

I believe the first part is frequently good advice. The second half is good, but not quite as good-- there still may be good new angles on old evidence.

Comment author: shminux 26 August 2014 09:09:12PM *  13 points [-]

is consciousness more like the weather, or is it more like multiplication?

Scott Aaronson

More context:

a perfect simulation of the weather doesn’t make it rain—at least, not in our world. On the other hand, a perfect simulation of multiplying two numbers does multiply the numbers: there’s no difference at all between multiplication and a “simulation of multiplication.” Likewise, a perfect simulation of a good argument is a good argument, a perfect simulation of a sidesplitting joke is a sidesplitting joke, etc.

Maybe the hardware substrate is relevant after all. But [...] I think the burden is firmly on those of us who suspect so, to explain what about the hardware matters and why. Post-Turing, no one gets to treat consciousness’s dependence on particular hardware as “obvious”—especially if they never even explain what it is about that hardware that makes a difference.

Comment author: jaime2000 05 August 2014 04:24:47AM *  25 points [-]

"I want information. I want to understand you. To understand what exactly I'm fighting. You can help me."
"I obviously won't."
"I will kill you if you don't help me. I'm not bluffing, Broadwings. I will kill you and you will die alone and unseen, and frankly you are far too intelligent to simply believe that the stories of ancestral halls are true. You will die and that will probably be it, and nobody will ever know if you talked or not—not that conversing with an enemy in a war you don't support is dishonorable in the first place."
"You'll let me leave if I stonewall, because you don't want to set a precedent of murdering surrendered officers."
"We'll see. Would you like another cup?"
"No."
Derpy smiled deviously. "You know, in that last battle? We didn't fly our cannon up there to the cliffs. Nope. We had Earth ponies drag them. Earth ponies are capable of astounding physical feats, you know. We're probably going to be using more mobility in our artillery deployment going forward, now that they've demonstrated how effective the concept is."
"...why did you tell me that? What would drive you to tell me that?"
"I'll ask again before I continue. Would you like to assist me, Broadwings?"
"I am a gryphon. Telling me your plans will do nothing to change that. I will not barter secrets."
She leaned back, gesturing with a hoof as she talked. "My biggest strengths are that I understand the way crowds think and that I am good at thinking up unexpected ways to solve simple problems. My army's biggest weakness is that my soldiers are inexperienced, and that unexpected developments have an inordinate effect on their morale. Also, my infantry will never be able to stand against a sustained lion charge, so I have to keep finding ways to nullify that disadvantage, and frankly I won't be able to forever."
"I don't understand. What are you doing, Mare? Why are you--"
"--my personal biggest weaknesses," she continued, her smile now malicious, "are my struggles with morality, identity, and my desire to be loved. There's also my relationship with the stallion Macintosh Apple, who is usually called Big Macintosh, with whom I spend upwards of ten hours a day, and on whom I am completely emotionally dependent. If he were to be killed, I'd probably fall apart emotionally. I also have a daughter named Dinky—not by him, mind you—who is in the Southmarch, and who I am very, very guilty about abandoning. If anything were to happen to her I might kill myself. Do you understand yet, Broadwings?"
"Mare, this is insanity. I cannot--"
"--All right then, we'll continue. I also have in this camp Sweetie Belle, Apple Bloom, and Scootaloo, three little fillies, though they're growing quite quickly now. Sweetie Belle is the writer of many propaganda songs, Apple Bloom is Big Mac's sister, who he protects like a daughter, and I believe Scootaloo has no special importance but the other two would defend her to the death. They would be quite easy to kill as well. Do you understand yet?"
"Mare! Are you mad?! Do you have any idea how dangerous it is to tell me these things? Aren't you afraid I would tell--"
"--Good," she nodded. "You're beginning to understand. Let's see. My logistics framework right now is nonexistent. I'm entirely reliant on local villages bringing me food and materiel, and on capturing food and materiel meant for your armies. My army is nowhere near as mobile as it appears, since it can only operate in areas where I have established relationships with each particular village. A bit of simple recon work would let you figure out where I can and cannot go. Do you understand yet?"
Broadwings' eyes opened and his pupils shrank with dawning recognition. "...If I came back to my army, I would use this to defeat you. If I told any other gryphon, they would use it to defeat you. You...you have..."
"Yes. I have sealed your fate; you will not see your home. I can't let you leave now. I absolutely can't. I can now either kill you or keep you prisoner until this war is over—and I don't keep useless prisoners. It's now out of my hooves. One or the other. You pick."

~emkajii, Equestria: Total War

Comment author: devas 05 August 2014 10:37:41AM 12 points [-]

This sounds like something from Schelling's strategy of conflict, although I haven't read it

Comment author: jaime2000 05 August 2014 04:51:38PM 10 points [-]

Yes, that's exactly what I was thinking. General Broadwings thinks General Derpy is bluffing, so Derpy credibly precommits herself to not releasing him by telling him information that would surely doom her army if she did. She gives up the choice of freeing Broadwings, and comes out ahead for it.
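The commitment logic here can be made concrete with backward induction on a toy two-move game. All payoffs below are made-up numbers chosen only to illustrate the mechanism, not anything from the story or from Schelling.

```python
# Toy model of Schelling-style precommitment, solved by backward induction.
# All payoffs are hypothetical values chosen for illustration.

def captor_best_response(release_payoff, keep_payoff):
    """If the prisoner stonewalls, the captor takes her higher-payoff option."""
    return "release" if release_payoff > keep_payoff else "keep_or_kill"

def prisoner_choice(captor_action_if_stonewall):
    """The prisoner stonewalls only if he predicts he'd be released anyway."""
    return "stonewall" if captor_action_if_stonewall == "release" else "talk"

# Before revealing secrets: releasing costs the captor little (-1, a mild
# precedent concern) vs. keeping a prisoner (-2), so the threat isn't credible.
before = prisoner_choice(captor_best_response(release_payoff=-1, keep_payoff=-2))

# After revealing secrets: release now dooms her army (-10), so the threat
# becomes credible and the prisoner's best response flips.
after = prisoner_choice(captor_best_response(release_payoff=-10, keep_payoff=-2))

print(before, after)  # the prisoner's choice before vs. after the commitment
```

By deliberately worsening her own payoff for releasing him, Derpy changes the prisoner's prediction of her behavior, which is the whole point of a commitment device.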

Comment author: satt 07 August 2014 02:53:38AM 7 points [-]

It's kind of reminiscent of this, from pages 43-44 of the 1980 edition:

It is not always easy to make a convincing, self-binding, promise. Both the kidnapper who would like to release his prisoner, and the prisoner, may search desperately for a way to commit the latter against informing on his captor, without finding one. If the victim has committed an act whose disclosure could lead to blackmail, he may confess it; if not, he might commit one in the presence of his captor, to create the bond that will ensure his silence. But these extreme possibilities illustrate how difficult, as well as important, it may be to assume a promise.

Compare also Daniel Ellsberg's Kidnap game.

Comment author: EGarrett 05 August 2014 11:53:20PM 4 points [-]

"Just as eating against one’s will is injurious to health, so studying without a liking for it spoils the memory, and it retains nothing it takes in." -Da Vinci

Comment author: Stabilizer 06 August 2014 12:30:15AM *  7 points [-]

Well...

Just as eating only what one likes is injurious to health, so studying only what one likes spoils the memory, and what is retained isn't very useful.

-Not Da Vinci

Comment author: EGarrett 06 August 2014 09:15:24AM *  5 points [-]

Compare Da Vinci's quote to Kubrick's...

"Interest can produce learning on a scale compared to fear as a nuclear explosion to a firecracker."

They both seem quite clearly to be saying that the knowledge they gained studying what they were forced to study was essentially nothing in comparison to what they gained studying what they themselves found interesting.

From personal experience, I agree totally with both statements.

Comment author: Torello 04 August 2014 05:37:45PM 9 points [-]

This seems like an elegant and funny take on Ben Franklin's wisdom.

Walter Sobchak: "Am I wrong?"

The Dude: "No you're not wrong."

Walter Sobchak: "Am I wrong?"

The Dude: "You're not wrong Walter. You're just an asshole."

-The Big Lebowski, Directed by Joel Coen and Ethan Coen, 1998

Comment author: Benito 04 August 2014 09:58:03AM *  10 points [-]

A good argument is like a piece of technology. Few of us will ever invent a new piece of technology, and on any given day it’s unlikely that we’ll adopt one. Nevertheless, the world we inhabit is defined by technological change. Likewise, I believe that the world we inhabit is a product of good moral arguments. It’s hard to catch someone in the midst of reasoned moral persuasion, and harder still to observe the genesis of a good argument. But I believe that without our capacity for moral reasoning, the world would be a very different place.

-Joshua Greene, “Moral Tribes”, Endnotes

Comment author: James_Miller 04 August 2014 03:35:52AM 15 points [-]

Come back with your shield - or on it.

Our kind might not be able to cooperate, but the Spartans certainly could. The Spartans were masters of hoplite phalanx warfare, where often every individual would have been better off running away but collectively everyone was better off if none ran away than if all did. The above quote is what Plutarch says Spartan mothers would tell their sons before battle. (Because shields were heavy, if you were going to run away you would drop yours; coming back on your shield meant you were dead.) Spreading memes that overcome collective action problems is rational at the civilization level.

Comment author: RolfAndreassen 04 August 2014 03:45:21AM 4 points [-]

Well... most of what we "know" about the Spartans was written down by their enemies, and may be inaccurate. It is not at all clear that any actual Spartan ever said the words you attribute to them; it may be Plutarch making things up to illustrate how he thought a city ought to work. Which doesn't necessarily make it bad rationality, but does mean it is fictional evidence, not historical.

Comment author: VAuroch 04 August 2014 09:06:39PM 3 points [-]

We have significant amounts written by the Ancient Greek equivalent of a Sparta otaku, Thucydides. He lived there for a significant period (IIRC, he was in exile from Athens at the time) and had firsthand familiarity.

Comment author: Gunnar_Zarncke 07 August 2014 10:37:08PM 2 points [-]

An example of the orderly battle of the Hellenes, from Xenophon's Anabasis, where the enemy has ten-fold numeric superiority:

Clearchus, though he could see the compact body at the centre, and had been told by Cyrus that the king lay outside the Hellenic left (for, owing to numerical superiority, the king, while holding his own centre, could well overlap Cyrus's extreme left), still hesitated to draw off his right wing from the river, for fear of being turned on both flanks; and he simply replied, assuring Cyrus that he would take care all went well.

...

At this time the barbarian army was evenly advancing, and the Hellenic division was still riveted to the spot, completing its formation as the various contingents came up.

...

And now the two battle lines were no more than three or four furlongs apart, when the Hellenes began chanting the paean, and at the same time advanced against the enemy. But with the forward movement a certain portion of the line curved onwards in advance, with wave-like sinuosity, and the portion left behind quickened to a run; and simultaneously a thrilling cry burst from all lips, like that in honour of the war-god—eleleu! eleleu! and the running became general. Some say they clashed their shields and spears, thereby causing terror to the horses (4); and before they had got within arrowshot the barbarians swerved and took to flight. And now the Hellenes gave chase with might and main, checked only by shouts to one another not to race, but to keep their ranks. The enemy's chariots, reft of their charioteers, swept onwards, some through the enemy themselves, others past the Hellenes. They, as they saw them coming, opened a gap and let them pass. One fellow, like some dumbfoundered mortal on a racecourse, was caught by the heels, but even he, they said, received no hurt, nor indeed, with the single exception of some one on the left wing who was said to have been wounded by an arrow, did any Hellene in this battle suffer a single hurt.

Comment author: RolfAndreassen 08 August 2014 10:00:33AM 1 point [-]

...according to Xenophon, at any rate. I don't see what that has to do with the alleged Spartan quote.

Comment author: Gunnar_Zarncke 08 August 2014 12:09:13PM 1 point [-]

The Hellenes mentioned in the quote were likely Spartans.

Comment author: RolfAndreassen 09 August 2014 01:49:03AM 2 points [-]

Some of the commanders were Spartans, yes; and it does seem likely that the mercenaries segregated themselves at least somewhat by city of origin, so the Spartan commanders probably had Spartan troops. But the tactics described are standard Hellenic ones; there is nothing about them that is special to Sparta, as far as I can see.

Comment author: Gunnar_Zarncke 09 August 2014 08:27:12AM 2 points [-]

I'm neither a historian nor an expert on ancient warfare. My quote was intended to substantiate the claim

Our kind might not be able to cooperate, but the Spartans certainly could. The Spartans were masters of hoplite phalanx warfare where often every individual would have been better off running away but collectively everyone was better off if none ran away than if all did.

My quote indeed doesn't distinguish between Spartans and Athenians... but that isn't needed, as it appears that all Hellenes were able to cooperate much better than their enemies. From my reading of Anabasis this is substantiated. And my quote is no bad one at that.

Comment author: lmm 09 August 2014 10:09:34PM 4 points [-]

The reason it's relevant is that some of us consider the Athenians to be "our kind", or at least the closest thing at the time.

Comment author: KnaveOfAllTrades 04 August 2014 05:49:07AM 3 points [-]

Plaudits for actually explaining and justifying your rationality quote. May others follow your example!

Comment author: Torello 04 August 2014 05:26:28PM *  -2 points [-]

I find it ironic that you use a military example to illustrate how we can achieve collective action at the civilization level.

Isn't the fact the Spartans were willing to "come back with their shields - or on it" the epitome of our kind not being able to cooperate?

I always interpreted "our kind" as the whole of humanity, so for me one sub-set of humanity banding together to destroy another subset (or die trying) isn't a good example of civilization-level cooperation, or the kind of meme that would be useful to spread.

Comment author: Azathoth123 05 August 2014 03:13:44AM *  9 points [-]

I always interpreted "our kind" as the whole of humanity,

Did you read the linked article? In it Eliezer is contrasting rationalist and religious institutions. You may also want to read this to get an idea for the problem James Miller is trying to address. Here is a relevant quote:

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.

Now there's a certain viewpoint on "rationality" or "rationalism" which would say something like this:

"Obviously, the rationalists will lose. The Barbarians believe in an afterlife where they'll be rewarded for courage; so they'll throw themselves into battle without hesitation or remorse. Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition. They'll believe in each other's goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group. Meanwhile, the rationalists will realize that there's no conceivable reward to be had from dying in battle; they'll wish that others would fight, but not want to fight themselves. Even if they can find soldiers, their civilians won't be as cooperative: So long as any one sausage almost certainly doesn't lead to the collapse of the war effort, they'll want to keep that sausage for themselves, and so not contribute as much as they could. No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won't be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun. In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would."

And that's assuming the rationalists don't simply surrender without a fight on the grounds that "war is a zero sum game".

Comment author: Torello 05 August 2014 02:04:11PM *  3 points [-]

I didn't read the linked article--it certainly seems to frame the issue as rationalists vs. barbarians, not humanity vs. the environment (and the flaws of humanity), so thanks for pointing that out.

I do think fundamentalists/extremists/terrorists have an asymmetrical advantage in the short term in that it's always easier to cause damage/disorder than improvement/order. This quote above seems to be a particular example of this phenomenon.

However, I have to agree with Jiro's comment. Extremists may be able to destroy things and kill people, but I wouldn't say they've been able to conquer anything. To me, "conquer" implies taking control of a country, making its economy work for you, dominating the native population, building a palace, etc. Modern extremists commit suicide and then their mastermind hides silently for a decade until helicopters fly in and soldiers kill him.

Comment author: Jiro 05 August 2014 04:38:40AM *  3 points [-]

When referring to actual barbarians, the description of the barbarians seems to lie by omission. Even if all the things described above are mostly true, the barbarians have wrecked their economy because central planning doesn't work no matter how many orders they give, burning people at the stake is bad for investment, their belief in an afterlife is associated with other beliefs that prevent them from making or even efficiently using scientific advances, and their inability to have sane discussion means they can't make tactical decisions or really plan anything well at all. (Etc.) That sort of thing is pretty much the reason that the West hasn't been conquered by Muslim fundamentalists yet.

Also, barbarism doesn't arise at random. Some social structures are more conducive to barbarism than others and they may have inherent flaws which reduce the efficiency of conquest even as their encouragement of barbarism increases it.

Comment author: Azathoth123 06 August 2014 02:43:07AM *  9 points [-]

the barbarians have wrecked their economy because central planning doesn't work no matter how many orders they give

Not all barbarians do that. The communists did that, but they also considered themselves rationalists and were considered such by many people at the time. Muslim fundamentalists generally don't.

burning people at the stake is bad for investment

Depends on who's being burned and why. Having the highest per capita rate of capital punishment doesn't seem to have hurt Singapore's ability to get investment.

their belief in an afterlife is associated with other beliefs that prevent them from making or even efficiently using scientific advances

I don't think so. It might hurt their ability to make scientific advancements, but they're perfectly capable of using them once someone else makes them.

Also, barbarism doesn't arise at random. Some social structures are more conducive to barbarism than others and they may have inherent flaws which reduce the efficiency of conquest even as their encouragement of barbarism increases it.

'Rationalist' societies can also have inherent flaws, like, say, trouble solving the collective action problems associated with wars.

Comment author: Lumifer 05 August 2014 04:31:31PM *  3 points [-]

That sort of thing is pretty much the reason that the West hasn't been conquered by Muslim fundamentalists yet

For a counterpoint, look at the speed and magnitude of the original spread of Islam in the VII-VIII centuries.

Also there is Iran.

Comment author: Jiro 05 August 2014 04:54:26PM 2 points [-]

I don't think the spread of Islam many centuries ago counts. Fanaticism isn't as much of a disadvantage when fighting medieval societies as it is when fighting modern ones.

Comment author: Lumifer 05 August 2014 05:06:52PM 2 points [-]

Fanaticism isn't as much of a disadvantage when fighting medieval societies as it is when fighting modern ones.

Why is that so?

Comment author: Jiro 05 August 2014 07:00:43PM 2 points [-]

To summarize: Fanaticism keeps the culture from escaping the dark ages. If everyone is in the dark ages anyway, not being able to escape the dark ages isn't much of a disadvantage.

Comment author: Lumifer 05 August 2014 07:43:45PM *  4 points [-]

That looks to me like one of those sentences which sound pretty but don't actually mean much.

In your comment upthread you listed things which make a barbarian society "uncompetitive". They apply to medieval societies as well. Essentially, you would expect the non-fanatic society to be richer, have better technology, and be governed more effectively. That holds in any epoch (as long as we don't get too far into stone age :-/).

When Islam erupted out of the Arabian Peninsula, the "fanatics" easily took over huge -- amazingly huge -- territories. And it wasn't just pillage-and-burn, they conquered the lands and established their own rule.

Comment author: Jiro 05 August 2014 10:07:17PM 1 point [-]

Essentially, you would expect the non-fanatic society to be richer, have better technology, and be governed more effectively.

Why would I expect this when the society exists hundreds of years ago? The point is that back then, everyone lacked many of the things that fanaticism would cause a society to lack. The fanatics are not at such a disadvantage under such circumstances. The loss in efficiency from it taking weeks to communicate between distant parts of your empire is going to make the loss in efficiency from having a theocracy look like noise. The disadvantage of not getting investors in your country won't matter when there's no international investment anyway. The disadvantage of having little in the way of science and engineering won't matter if there's hardly any science yet and engineering is at the state of building bridges instead of launching satellites.

Comment author: Viliam_Bur 09 August 2014 06:50:13PM *  5 points [-]

This feels to me like a just world fallacy, or perhaps choosing the most convenient world. Yes, if the barbarians are completely stupid, they are probably not so much of a danger these days. If they are completely anti-science, we probably have better guns.

Now imagine somewhat smarter barbarians, who by themselves are unable to do sophisticated science, but have no problem kidnapping a few scientists and telling them to produce a lot of weapons for them, otherwise their families will be burned at the stake. (Even if their religion prevents them from doing science, they may compartmentalize and believe it is okay to use the devil's tools against the devil himself.) Suddenly, the barbarians have good guns, too.

Maybe the reason why the West hasn't been conquered by Muslim fundamentalists yet is that Muslims don't have an equivalent of Genghis Khan. Someone who would have the courage to conquer the nearest territory, horribly kill everyone who opposed them, let live those who didn't (and make this fact publicly known), take some men and weapons from the conquered territory and use them to attack the next territory immediately, et cetera, spreading like wildfire. First attacking some smaller but civilized countries to get better weapons for attacking the next ones. With multiple leaders, so that dropping a bomb won't stop the war. (Maybe one Osama hiding in secret, giving commands to a dozen wannabe Genghis Khans who don't mind getting to paradise too soon.)

Comment author: SilentCal 11 August 2014 06:08:45PM 3 points [-]

Jiro's fallacy is not in saying that the world is or has been just in this respect, but rather in implicitly saying it must be. I don't think it's a coincidence that liberal/secular/enlightenment nations are the most powerful today, but that fact doesn't negate the point of the barbarian hypothetical.

I seriously doubt the viability of your Genghis Khan plan for modern fundamentalist Islam, seeing as that same M.O. was tried recently except starting with one of the world's top industrial and scientific powers. But that's a fact about our world, and the point of the barbarian example is more universal than that.

Comment author: Azathoth123 12 August 2014 05:42:39AM 2 points [-]

I seriously doubt the viability of your Genghis Khan plan for modern fundamentalist Islam, seeing as that same M.O. was tried recently except starting with one of the world's top industrial and scientific powers.

I'm not sure. The liberal world seems to have gotten "softer" since then. Compare the general reaction in the US to the death toll in Iraq (maybe one or two US soldiers a day) with the death toll in WWII.

Comment author: Jiro 11 August 2014 07:27:53PM 1 point [-]

If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.

But there's no way to get from here to there except by Omega coming down and constructing exactly the race of barbarians necessary for the hypothetical to work. And if you're going to say that, there's no point in referring to them as barbarians and describing their actions in terms like "believes in an afterlife" and "obeys orders" that bring to mind real-life human cultures; you may as well say that Omega is just manipulating each individual barbarian like a player micromanaging a video game and causing him to act in exactly the way necessary for the conquest to work best.

Except of course that if you say "rationalists could be conquered by a set of drones micromanaged by Omega", without pretending that you're discussing a real-world situation, most people (assuming they know what you're talking about) would reply "so what?"

Comment author: Wes_W 11 August 2014 07:57:09PM 4 points [-]

If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.

This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are. And, for that matter, the rationalist nation probably isn't theoretically optimal either...

On balance, believing true things is an advantage, but there are other kinds of advantages which don't automatically favor the rationalist side. Sheer numbers, for example.

Comment author: Jiro 11 August 2014 08:06:24PM 1 point [-]

This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are.

How imperfectly optimized, though? Imperfectly optimized like Omega controlling each barbarian but occasionally rolling the barbarian's morale check, which fails on a 1 on a D100? Or imperfectly optimized like real life barbarians?

Comment author: V_V 11 August 2014 08:44:40PM 2 points [-]

What about the Bolsheviks? Or the WW2-era Japanese?

Comment author: SilentCal 11 August 2014 08:39:58PM 2 points [-]

Try the following obviously-unrealistic yet not-obviously-uninteresting hypothetical: There are two approximately equal-strength warring tribes of barbarians, Tribe A and Tribe B. One day Omega sprinkles magic rationality dust on Tribe A, turning all of its members into rationalists. Tribe B is on the move towards their camp and will arrive in a few days. This is not enough time for Tribe A to achieve any useful scientific or economic advances, nor to accumulate a significant population advantage from non-stake-burning.

Can you see, in that hypothetical, how Eliezer's points in the linked posts are important?

Or another approach: the quote about the rationalists losing says "Barbarians have advantages A, B, and C over rationalists." Your response is "But rationalists have larger advantages X, Y, and Z over barbarians, so who cares?" Eliezer's response is "screw that, if barbarians have any advantages over rationalists, the "rationalists" aren't rational enough". My hypothetical's purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.

Comment author: Jiro 11 August 2014 10:17:05PM 1 point [-]

My hypothetical's purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.

Advantages are usually advantages under a specific set of circumstances. If you "control" for X, Y, and Z by postulating a set of circumstances where they have no effect, then of course A, B, and C are better. The rationalists have a set of ideals that works better in a large range of circumstances and in more realistic circumstances. They will not, of course, work better in a situation which is contrived so that they lose, and that's fairly uninteresting--it's impossible to have ideals that work under absolutely all circumstances.

Think of being rationalist like wearing a seatbelt. Asking what if the rationalists' advances over the barbarians just happen not to apply is like asking what if not having a seatbelt would let you be thrown out of the car onto something soft but wearing a seatbelt gets you killed. I would not conclude that there is something wrong with seatbelts just because there are specific unlikely situations where wearing one might get you killed and not wearing one lets you survive.

Comment author: SilentCal 11 August 2014 10:37:57PM *  3 points [-]

"It's impossible to have ideals that work under absolutely all circumstances."

This is the essentially the proposition Eliezer wrote those posts to refute.

A seat belt is a dumb tool which is extremely beneficial in real-world situations, but we can easily contrive unrealistic cases where it causes harm. The point of those posts is that rationality is 'smart'; it's like a seat belt that can analyze your trajectory and disengage itself iff you would come to less harm that way, so that even in the contrived case it doesn't hurt to wear one.

Comment author: Jiro 10 August 2014 03:03:04AM -1 points [-]

By the same reasoning which says that fundamentalists could do better with more efficient methods of conquest, they could do better with more efficient methods of making peace, too. They won't do as well as with conquest, but they'll do better than they are doing now. Yet they don't.

Barbarism is not optimized for conquest. It's optimized for supporting a set of social structures. Those social structures make them more dangerous as conquerors than the average society, but they're still not optimized for conquest; there are things which would make conquest more efficient which they would not do.

(To use just one example, for a country to embark on conquest and use the men from the conquered country to continue conquering more countries, they'd have to grant equal rights to conquered people who agreed to work with them. Rome did that except in a few rare but famous cases. So did the Mongols. But Muslim fundamentalists can't give non-Muslims or rival Muslims equal rights without no longer being Muslim fundamentalists.)

Comment author: Azathoth123 10 August 2014 07:07:32PM *  5 points [-]

To use just one example, for a country to embark on conquest and use the men from the conquered country to continue conquering more countries, they'd have to grant equal rights to conquered people who agreed to work with them.

Not necessarily. Muslims, in particular, have a history of using slave soldiers to good effect.

But Muslim fundamentalists can't give non-Muslims or rival Muslims equal rights without no longer being Muslim fundamentalists.

You do realize it's possible to convert to fundamentalist Islam?

Comment author: Nornagest 11 August 2014 04:59:34PM *  2 points [-]

Muslims, in particular, have a history of using slave soldiers to good effect.

I seem to recall, and a glance over the Wikipedia articles suggests, that the Mamluk and Janissary systems involved raising (enslaved) boys into a military environment from a fairly young age. These boys might come from subjugated territories, but they'd in effect have been part of the dominant culture for much of their lives: it's not a system that could be used to quickly convert conquered territories into additional manpower.

That said, it hasn't been unusual for empires, modern and otherwise, to make substantial use of auxiliary forces drawn from client states. The Roman military probably relied on them as much as they did on the legions, or more in the late empire.

Comment author: Azathoth123 12 August 2014 05:39:06AM 3 points [-]

The Roman military probably relied on them as much as they did on the legions, or more in the late empire.

The late Roman Empire wasn't exactly successful at conquering anything, or even at keeping the Empire from falling apart.

Comment author: Jiro 10 August 2014 07:53:34PM *  1 point [-]

You do realize it's possible to convert to fundamentalist Islam?

Yes, but requiring that soldiers do so makes the process of conquest less optimized, since it's easier for obvious reasons to get soldiers without this requirement than with it. (The same goes for using slaves.)

Comment author: Vaniver 10 August 2014 10:22:58PM 2 points [-]

Yes, but requiring that soldiers do so makes the process of conquest less optimized, since it's easier for obvious reasons to get soldiers without this requirement than with it.

You seem to be focusing solely on cost; the difference between benefit and cost is what matters, and the benefits of a fighting force with shared values (particularly shared religious ones) are many and obvious.

Comment author: Jiro 11 August 2014 04:08:45PM 1 point [-]

By that reasoning, it's the Romans and the Mongols who are un-optimized for conquest.

Comment author: eli_sennesh 31 August 2014 03:03:32PM 0 points [-]

Maybe the reason why the West hasn't been conquered by Muslim fundamentalists yet is that Muslims don't have an equivalent of Genghis Khan. Someone who would have the courage to conquer the nearest territory, horribly kill everyone who opposed them, let live those who didn't (and make this fact publicly known), take some men and weapons from the conquered territory and use them to attack the next territory immediately, et cetera, spreading like wildfire.

"Caliph" Abu Bakr al-Baghdadi and his group ISIS have been behaving exactly like this. They are quite young, but don't appear quite able to take on a Western military yet.

This feels to me like a just world fallacy, or perhaps choosing the most convenient world.

And yet, by definition, a group who are better at rationality win more often. We ought to expect that rational civilizations can beat irrational ones, because rationality is systematized cross-domain winning.

Comment author: Viliam_Bur 31 August 2014 04:07:30PM 2 points [-]

by definition, a group who are better at rationality win more often

Well, there is this "valley of bad rationality" where being more rational about part of the problem but not yet more rational about other part can make people less winning.

Sometimes I feel we are there at a society level. We have smart individuals, we have science, we fly to the moon, etc. However, superstition and blind hate can be an efficient tool for coordinating a group to fight against another group. We don't use this tool much (because it doesn't fit well with rationality and science), but we don't have an equally strong replacement. Also, only a few people in our civilization do the rationality and science. So even if there is a rationality-based defense, most of our society is too stupid to use it efficiently. On the scale from "barbarians" to "bayesians", most of our society is somewhere in the middle: not barbaric enough, but still far from rational.

Comment author: Jiro 31 August 2014 03:30:57PM 1 point [-]

A group that is better at rationality will win more often, but winning more often is not the same thing as "winning in a superset of the situations in which the irrational win".

Comment author: James_Miller 05 August 2014 04:55:49PM *  4 points [-]

That sort of thing is pretty much the reason that the West hasn't been conquered by Muslim fundamentalists yet.

Another reason: many members of our military do have the courage of the Spartans. U.S. soldiers don't put on suicide vests to kill children, but they do fall on grenades and hold hopeless positions under fire so their friends can escape death.

Comment author: James_Miller 04 August 2014 06:12:00PM 3 points [-]

I see competition among different groups of people, with those able to overcome their collective action problems gaining power and resources.

Comment author: eli_sennesh 31 August 2014 02:46:53PM 0 points [-]

"Our kind cannot cooperate" is a common meme for which I've seen comparatively little evidence. Mailing lists are not the real world, and while most people might start flame wars over the tiniest bullshit on mailing lists, their real-world behavior is largely cooperative and prosocial.

Comment author: RichardKennaway 31 August 2014 06:17:51PM *  1 point [-]

Would those be the same people you characterised by these words?

(ChristianKl) Normal civilized humans don't really want to kill other humans.

(eli_sennesh) Well, certainly not nearby humans who have similar skin coloration and evince membership in the same tribe. Those people, on the other hand, are disgusting, and the lot of them simply have to go.

Comment author: Salemicus 21 August 2014 04:49:48PM 5 points [-]

In the fields of observation, chance favours only the prepared mind.

Louis Pasteur.

Comment author: StephenR 04 August 2014 04:20:31AM *  5 points [-]

"We must not criticize an idiom [...] because it is not yet well known and is, therefore, less strongly connected with our sensory reactions and less plausible than is another, more 'common' idiom. Superficial criticisms of this kind, which have been elevated into an entire 'philosophy', abound in discussions of the mind-body problem. Philosophers who want to introduce and to test new views thus find themselves faced not with arguments, which they could most likely answer, but with an impenetrable stone wall of well-entrenched reactions. This is not at all different from the attitude of people ignorant of foreign languages, who feel that a certain colour is much better described by 'red' than by 'rosso'."

Paul Feyerabend, Against Method, 4th Edition, p. 59.

Comment author: Vaniver 31 August 2014 03:17:08PM *  4 points [-]

Two mares, each convinced she was standing firmly on The Shores Of Rationality, stared helplessly into The Sea Of Confusion and despaired over their inability to ever rescue the friend helplessly floundering within.

A vivid description of inferential distance from Twilight's Escort Service.

Edit: It's from a comedy that relies on misunderstandings; Twilight chooses the word "escort" to advertise her teleportation abilities. If you don't enjoy awkwardness-based comedies, I recommend you stay away. The actual quote is about explaining a value difference.

Comment author: eli_sennesh 31 August 2014 03:19:26PM 4 points [-]

Explain, as I am not clicking on anything associating "Twilight Sparkle" and "Prostitution".

Comment author: Leonhart 31 August 2014 07:55:13PM 6 points [-]

Haven't had time to read it; but from the story description, it seems to be a comic affair where Twilight decides to monetise her teleportation skillz, and picks the wrong word to advertise with. Hilarity presumably prevails?

Comment author: RichardKennaway 01 September 2014 11:17:52AM 3 points [-]

Pretty much. I stopped reading at the point where her first "client" showed up, with supposed "hilarity" about to begin, as I can't stand comedy based on misunderstanding and embarrassment.

Comment author: Vaniver 01 September 2014 03:51:06PM 1 point [-]

Yep.

Comment author: Qwake 22 August 2014 05:05:02AM 4 points [-]

Language exists only on the surface of our consciousness. The great human struggles are played out in silence and in the inability to express oneself.

Franz Xaver Kroetz

Comment author: rule_and_line 22 August 2014 04:45:29PM *  2 points [-]

Could you give this some more context? My reaction was to downvote.

The word "only" gives me vibes like "language exerts a trivial or insignificant influence on our consciousness". I don't know any of Kroetz's plays, but given that he is a playwright I feel like I'm getting the wrong vibe.

Comment author: Qwake 24 August 2014 04:15:07AM 2 points [-]

My interpretation of the quote was not that language exerts a trivial influence on our consciousness but that language is an imperfect form of communication.

Comment author: Benito 04 August 2014 10:01:06AM *  3 points [-]

'Deep pragmatism' is Joshua Greene's name for 'utilitarianism'.

Today we, some of us, defend the rights of gays and women with great conviction. But before we could do it with feeling, before our feelings felt like “rights,” someone had to do it with thinking. I’m a deep pragmatist, and a liberal, because I believe in this kind of progress and that our work is not yet done.

Joshua Greene, “Moral Tribes"

Comment author: Azathoth123 05 August 2014 03:27:30AM 5 points [-]

'Deep pragmatism' is Joshua Greene's name for 'utilitarianism'.

And yet he's talking about 'rights', which are a deontological not a utilitarian concept.

Comment author: blacktrance 05 August 2014 08:56:49AM 1 point [-]

Consequentialists can believe in something that can reasonably be called rights.

Comment author: fubarobfusco 04 August 2014 09:02:35PM 1 point [-]

I'm starting a new 30 day challenge: the month of no "should." Instead of tediously working down a list of all the little chores and errands that I "should" be doing, I'll work to listen to what that little voice inside me wants to do. I think it will be interesting.

Matt Cutts

Comment author: Azathoth123 05 August 2014 03:33:05AM *  7 points [-]

I don't really want to pay the electric bill, or the rent.

Oh dear, now I'm sitting in the dark and the landlord is evicting me onto the street.

Comment author: fubarobfusco 05 August 2014 04:53:55PM 4 points [-]

I'm pretty sure you've construed the quote entirely backwards — and that Matt's point is that any "I should do X" statement can be rephrased as "part of me wants to do X."

Comment author: Azathoth123 06 August 2014 02:51:29AM 5 points [-]

I really don't like that guy and want him dead, and hey we're in the middle of nowhere and nobody knows he's here.

Comment author: ChristianKl 25 August 2014 10:14:13PM 1 point [-]

I really don't like that guy and want him dead

If you are a psychopath then simply doing what you want to do is bad. Normal civilized humans don't really want to kill other humans.

Comment author: Lumifer 26 August 2014 12:26:47AM 5 points [-]

Normal civilized humans don't really want to kill other humans.

That REALLY depends on the circumstances.

Comment author: army1987 26 August 2014 12:55:07PM 1 point [-]

Isn't that covered by the first two words (especially the second) of the sentence you quoted?

Comment author: Lumifer 26 August 2014 03:10:48PM 3 points [-]

No. Normal civilized humans find themselves in different circumstances. In some of these circumstances they DO want to kill other humans.

Comment author: eli_sennesh 31 August 2014 03:14:40PM *  0 points [-]

Normal civilized humans don't really want to kill other humans.

Well, certainly not nearby humans who have similar skin coloration and evince membership in the same tribe. Those people, on the other hand, are disgusting, and the lot of them simply have to go.

Comment author: lmm 25 August 2014 09:34:48PM 0 points [-]

Maybe you should kill him then? I mean, do you actually want to?

Comment author: ChristianKl 25 August 2014 10:14:18PM 1 point [-]

Given that you can predict the results of your choices is there really no part in you that wants to choose the road that includes paying the electric bill?

It's about where you put your attention. If you focus on the fact that you want to have electricity in your house and therefore pay the electric bill, you feel agenty and good. If you focus on the fact that you have an obligation to pay a bill, you will feel bad.

Comment author: Dorikka 26 August 2014 01:25:45AM 2 points [-]

Textbook case of YMMV due to inferential distance/loss of resolution in verbal/textual communication.

Comment author: Qwake 06 August 2014 07:02:47PM 1 point [-]

Never let your sense of morals get in the way of doing what's right.

-Isaac Asimov

Comment author: hairyfigment 06 August 2014 05:36:58PM *  -1 points [-]

There is, to the [Slytherin adept], only one reality governing everything from quarks to galaxies. Humans have no special place within it. Any idea predicated on the special status of the human — such as justice, fairness, equality, talent — is raw material for a theater of mediated realities that can be created via subtraction of conflicting evidence, polishing and masking.

Comment author: Stabilizer 06 August 2014 06:55:12PM *  5 points [-]

While I find Venkatesh Rao to be insightful, his writing can be quite frustrating. He seems to be allergic towards speaking plainly. Here is a possible re-write of the above quote:

Slytherin-adepts use human ideals -- like justice, fairness, equality, talent -- to deceive people. They employ these ideals in rhetoric, often to turn attention away from conflicting evidence.

Comment author: Qwake 06 August 2014 08:00:15PM 1 point [-]

The impression I got is more that Slytherin adepts believe that human ideals such as justice, fairness, equality, and talent distort reality because they rely on the assumption that humans hold a special place in the universe, which Slytherin adepts believe not to be true.

Comment author: hairyfigment 06 August 2014 08:41:08PM 0 points [-]

Yes to both this and the grandparent - though in principle, a Slytherin might try to produce an environment where those ideals make sense, out of personal preference.

Comment author: hairyfigment 06 August 2014 08:51:34PM 0 points [-]

Actually, in addition to the sibling comment, I should point out that "rhetoric" implies people claiming all the time that they're serving justice or what have you. Mostly (as I understand the quote) they just need to hide contrary evidence from view. Provide a distraction, and people will continue to believe their existing ideals determine reality.

Comment author: hairyfigment 06 August 2014 06:13:41AM 0 points [-]

But the more central point is that trying to explain or predict [institutional] behavior idealistically, in terms of things called "values" or moral fortitude, is foolish. It's magical thinking. "One party believes in..." Institutions don't have beliefs. They have incentives.

  • Internet commenter

Comment author: Lumifer 06 August 2014 02:56:39PM 3 points [-]

I don't know about moral fortitude, but institutions certainly have values. It is precisely their values, combined with the environment around them, that create the incentives.

Don't forget that e.g. money and power are values, too.

Comment author: hairyfigment 06 August 2014 04:15:59PM -1 points [-]

As a general statement that seems flatly untrue, unless you mean that people in them have (often conflicting) values. Even thinking that for-profit corporations seek to make money for the corporation, rather than for decision-makers, seems like a dangerous mistake.

Comment author: Lumifer 06 August 2014 04:25:50PM *  1 point [-]

As a general statement that seems flatly untrue

I am sorry, I'm not going to read an extra-long rumination on a TV series I neither watch nor have any interest in.

Can you provide the argument in a... condensed form? Preferably without relying on fictional evidence.

Comment author: eli_sennesh 31 August 2014 03:18:30PM 0 points [-]

Institutions certainly have optimization targets, which are what we normally call values. Just because you don't share them doesn't mean they're not there.

Comment author: Qwake 06 August 2014 06:59:22PM -1 points [-]

Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and common sense.

-Buddha

Comment author: TheMajor 06 August 2014 08:06:44PM *  2 points [-]

You mean never really change your mind? Sounds kinda dumb...

If the last half had said "own reason or common sense" all would be fine, I think.

Comment author: Qwake 10 August 2014 06:46:52AM 3 points [-]

I interpreted it to mean not to believe information simply because you hold the source of the information in high regard. It is very possible to change your mind and keep within your own reason and common sense.

Comment author: wedrifid 10 August 2014 06:26:07AM 3 points [-]

Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and common sense.

-Buddha

This is the first time I've been prompted to advocate the merit of this related quote.

Comment author: Lumifer 06 August 2014 08:17:35PM 3 points [-]

Isn't that, pretty much, a classic description of confirmation bias?

Comment author: Stabilizer 06 August 2014 08:12:27PM *  2 points [-]

That one's a misquote. The original is:

Now, Kalamas, don’t go by reports, by legends, by traditions, by scripture, by logical conjecture, by inference, by analogies, by agreement through pondering views, by probability, or by the thought, ‘This contemplative is our teacher.’ When you know for yourselves that, ‘These qualities are skillful; these qualities are blameless; these qualities are praised by the wise; these qualities, when adopted & carried out, lead to welfare & to happiness’ — then you should enter & remain in them.

Not exactly a rationality quote, is it? Here is another famous misquote of the same passage.

Comment author: RichardKennaway 06 August 2014 08:22:39PM 3 points [-]

Not exactly a rationality quote, is it?

I think it is, and it has been so regarded on LessWrong several times already, first here.

Comment author: shminux 12 August 2014 06:13:40PM -2 points [-]

society should not be looking for ways to maintain privacy. It should be looking for ways to make privacy unnecessary. We will never be free until we lose our unnecessary secrets and discover we are better off without them.

Scott Adams

(Please read the link for context before commenting on the quote alone)

Comment author: Lumifer 12 August 2014 06:24:29PM 8 points [-]

I disagree with the premise that there are only two reasons to want privacy.

Comment author: soreff 17 August 2014 12:25:06AM 6 points [-]

Agreed. If nothing else, in a bargaining process, keeping private the maximum/minimum price that one would accept during the negotiation doesn't fit into either category.

Comment author: army1987 21 August 2014 10:08:51AM 1 point [-]

But if both parties were forbidden from keeping their reservation price secret the problem would be less bad, so it does kind-of fit into the spirit of the second category, though not its letter.

Comment author: RichardKennaway 13 August 2014 07:50:52AM *  4 points [-]

I agree with your disagreement. For context, here are those two reasons, with which Adams begins his essay. It's only a click away, but I think it deserves to be dragged into the light:

There are only two reasons to have privacy and both of them involve dysfunction. You might want privacy because...

1. you plan to do something illegal or unethical.

or

2. to protect you from a dysfunctional world.

That pretty much condemns the rest of the article. If he can't think of protecting oneself from other people's criminal activities, protecting oneself from other people's judgements, protecting one's creative activities from dissipation, protecting one's investigations from being scooped, protecting business secrets, and the basic feeling of GODDAMMIT THIS IS NONE OF YOUR BUSINESS, then what planet is he oh forget it. He's writing this tosh just to get responses like that.

Scott Adams is a humorist, not a philosopher. Dilbert was worth reading. Since mining out that seam it's been a downhill journey into clickbait. He even admits to the game at the end:

I know this sort of topic gets massive down votes because you don't want to risk losing privacy. But please do me a favor and rate this post on the entertainment value alone. I'm trying to gauge how interesting this topic is to you. Thank you!

Comment author: CCC 13 August 2014 10:42:21AM 2 points [-]

If he can't think of protecting oneself from other people's criminal activities, protecting oneself from other people's judgements, protecting one's creative activities from dissipation, protecting one's investigations from being scooped, protecting business secrets, and the basic feeling of GODDAMMIT THIS IS NONE OF YOUR BUSINESS, then what planet is he oh forget it.

I think most of these (all with the exception of "protecting one's investigations from being scooped" and possibly "protecting business secrets" or "THIS IS NONE OF YOUR BUSINESS") could fall under "protect you from a dysfunctional world", depending on the definition of "dysfunctional". That is a very broad reason, after all; almost as broad as "to protect you from negative consequences".

Of course, that implies that a non-"dysfunctional" world would be some variant of utopia - presumably one where everyone more-or-less accepts Adams' basic viewpoints.

Comment author: RichardKennaway 13 August 2014 11:02:37AM 6 points [-]

Yes, if you label every reason to keep the world and his dog out of your business "dysfunctional" then the whole thing reduces to tautology.

Of course, that implies that a non-"dysfunctional" world would be some variant of utopia - presumably one where everyone more-or-less accepts Adams' basic viewpoints.

As I say, Adams is not a deep thinker, he just plays one on the net.

Comment author: Lumifer 13 August 2014 02:35:25PM *  3 points [-]

Adams is not a deep thinker, he just plays one on the net

Well, first it's much better to play a deep thinker on the 'net than do the usual thing and play an idiot on the 'net...

Second, it doesn't look like he necessarily commits to everything he throws out in his blog. He plays with ideas, tries them on for size, puts them on a stick and waves them at people, etc. I think that's fine and useful as long as you don't take everything he writes very very seriously.

Comment author: Azathoth123 14 August 2014 02:23:21AM 4 points [-]

Well, first it's much better to play a deep thinker on the 'net than do the usual thing and play an idiot on the 'net...

I'm not sure about that given what happens when someone who's not a deep thinker tries to play one.

Comment author: Lumifer 14 August 2014 02:24:12AM 2 points [-]

So, what happens?

Comment author: CCC 14 August 2014 04:17:05AM 0 points [-]

Yes, if you label every reason to keep the world and his dog out of your business "dysfunctional" then the whole thing reduces to tautology.

Well, yes. I read his argument as less of an argument in favour of openness and more of a whinge about how people make too much of a big deal about certain things (like homosexuality), which then leads to people keeping those things secret.

I'm not sure if that's what he intended with his argument, but that's what I got from it.