Another month, another rationality quotes thread. The rules are:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
  • Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.


In response to the Quora question "What are some important, but uncomfortable truths that many people learn when transitioning into adulthood?"

  1. Every person is responsible for their own happiness -- not their parents, not their boss, not their spouse, not their friends, not their government, not their deity.

  2. One day we will all die, and 999 out of 1,000 people will be remembered by nobody on earth within a hundred years of that date.

  3. Practically all of the best opportunities (in business, in romance, etc) are only offered to people who already have more than they need.

  4. The idea that you will be happy after you make X amount of dollars is almost certainly an illusion.

  5. The idea that you will be happy after you meet [some amazing person] is almost certainly an illusion.

  6. For most people, death is pretty messy and uncomfortable.

  7. When you don't possess leverage (go look up "BATNA"), people will take advantage of you, whether they mean to or not.

  8. Almost everybody is making it up as they go along. Also, many (most?) people are incompetent at their jobs.

  9. When talking about their background and accomplishments, almost everybody is continually overstating their abilities, impact, relevance, and contributions.

  10. Physical beauty decays.

  11. Compared to others, certain ethnicities and races (and genders, and sexual orientations, and so on) are just plain royally f*cked from the day they're born.

  12. Bad things constantly happen to good people. Good things constantly happen to bad people.

  13. Very few people will ever give you 100% candid, honest feedback.

  14. People are constantly making enormous life decisions (marriage, children, etc) for all of the wrong reasons.

  15. Certain people -- some of whom are in positions of enormous power -- just do not give a damn about other human beings. A certain head of state in Syria comes to mind.

  16. Often, the most important and consequential moments of our lives (chance encounter, fatal car accident, etc) happen completely at random and seemingly for no good reason.

  17. Your sense of inhabiting a fully integrated reality is an illusion, and a privilege. Take the wrong drug, suffer a head injury, or somehow trigger a latent psychotic condition like schizophrenia -- and your grip on reality can be severed in an instant. Forever.

From Patrick Mathieson

"What are some important, but uncomfortable truths that many people learn when transitioning into adulthood?"

This long list needs a post scriptum: Very few people manage to accomplish this transition :-/

Certain people -- some of whom are in positions of enormous power -- just do not give a damn about other human beings. A certain head of state in Syria comes to mind.

I'd also say that your ability to care about other people, along with your overall sanity, will diminish under constant stress. That's why "preserve own sanity" is #1 on my list of rules to be followed in case of sudden world domination, and something I need to stay aware of even in my current (and normally not that stressful or important) job.

[anonymous]

I'll try to reframe those that hit me the hardest

  • One day we will all die, and 999 out of 1,000 people will be remembered by nobody on earth within a hundred years of that date.

The duration of our relevance to others is just one of many dimensions of relevance. And, does the duration of the meaningful experience have diminishing returns?

  • Practically all of the best opportunities (in business, in romance, etc) are only offered to people who already have more than they need.

In absolute terms, yes, but in relative terms that's irrelevant, except that increasing inequality increases the opportunity for exploitation. And awareness of inequality as a health problem appears to be bigger than I expected before engaging with the peripheral academic literature.

  • The idea that you will be happy after you make X amount of dollars is almost certainly an illusion.

This is empirically untrue. See the 80,000 Hours article on money and happiness. You will be happy(ier) after $50k ;) among other factors.

  • The idea that you will be happy after you meet [some amazing person] is almost certainly an illusion.

The PERMA model of positive psychology suggests relationships are critical for happiness. And personality factors, which are distributed across populations, are important to successful relationships. Plus, prespecified wants in a partner best correlate with relationship satisfaction. Additionally, positive illusions about partners are one of the best predictors of relationship success anyway.

  • For most people, death is pretty messy and uncomfortable.

Discomfort doesn't have to be distressing; it can be eustressful. Same with mess.

  • When you don't possess leverage (go look up "BATNA"), people will take advantage of you, whether they mean to or not.

But not everyone will. You can select your social circle (or at least, most people reading this can) to choose those who are normatively non-exploitative. Make friends with deontologists!

  • Almost everybody is making it up as they go along. Also, many (most?) people are incompetent at their jobs.

This inefficiency provides the opportunity for improvement :)

  • When talking about their background and accomplishments, almost everybody is continually overstating their abilities, impact, relevance, and contributions.

I hope that's indicative of high self-esteem.

  • Physical beauty decays.

We should help people realise the importance of other kinds of beauty, for the sake of their self-worth.

  • Compared to others, certain ethnicities and races (and genders, and sexual orientations, and so on) are just plain royally f*cked from the day they're born.

As a non-white, I think the author is royally fucked in thinking that things can't change and aren't changing. Fuck your white supremacy.

  • Very few people will ever give you 100% candid, honest feedback.

Not if you take things so negatively, as you clearly do, Mr. Author.

  • People are constantly making enormous life decisions (marriage, children, etc) for all of the wrong reasons.

Who are you to judge the wrongness of their decision?

  • Certain people -- some of whom are in positions of enormous power -- just do not give a damn about other human beings. A certain head of state in Syria comes to mind.

You can't read their minds and intentions.

  • Often, the most important and consequential moments of our lives (chance encounter, fatal car accident, etc) happen completely at random and seemingly for no good reason.

I feel few rationalists would think this way

  • Your sense of inhabiting a fully integrated reality is an illusion, and a privilege. Take the wrong drug, suffer a head injury, or somehow trigger a latent psychotic condition like schizophrenia -- and your grip on reality can be severed in an instant. Forever.

Reading this really stung me. This has been a really important concept in my life. I've taken the wrong drug, and I suffer some sort of cognitive disorder, unspecified, but not due to any head trauma according to brain scans, and at various times I have been given psychotic diagnoses (later retracted). Living in a fully integrated reality really is a privilege. I'm grateful for what grip I have on it right now as I write, which is better than it has been when I've been a little psychotic. But it's not a clear linear descent into hell. As psychosis worsens, it goes hand in hand with a flattening of affect and worse insight. And as insight increases, depression is seen to increase, from both personal and academic experience. Really, the worst part is the descent out of reality, and the moments of insight, rather than a lifetime of misery. I am happy usually, not right now, but that's why I decided to write a long-form response - to improve my mood. And sure, taking the wrong drug has on at least one occasion distorted my sense of time enough for it to feel like forever in hell, but you know what, that's not right now. So I mean... at least it's not eternity? I really hope someone can help me flesh out a more positive reframe than this.

Edit: To this I can also refer you to the weakness of strength. My pain sacrificially broadens the minds of others to be grateful for their sanity.

Thank you for sharing this James_Miller and author Patrick Mathieson.

False hopes are more dangerous than fears.

J. R. R. TOLKIEN, The Children of Hurin

Well, an evolutionary purpose of fear and our reactions to it is to protect us from dangers. It would be doing a bad job if it wasn't at least a little bit helpful.

[anonymous]

...Newton's monochromator turned out to be not perfect enough, despite his using a collimator with a narrow slit; Newton did not notice fluorescence in homogeneous lighting, and the principle was saved. We see here a not-so-rare case of the imperfection of experiment facilitating the development of science. It is hard to imagine the confusion in optical ideas if Stokes's shift had been discovered in the XVIIth century.

  • S. I. Vavilov, The principles and hypotheses of Newton's optics (a kind of introduction to two of Newton's optical memoirs, which he translated into Russian), 1927.

Fuck-you money is not "you will be happy", it's "you will be able to remove one particular cause of unhappiness".

Well, quite a few particular causes of unhappiness. (But not all of them, I agree.)

I don't think it makes sense to see all Western achievements as coming out of the Western analytical mindset. Western society has always been diverse and has contained people with different mindsets.

The Greeks had steam engines before the invention of the modern scientific method, so I don't see how the modern scientific method was a requirement for the invention of steam engines.

I do grant that the scientific method has an important effect on the way our modern technology works, but I think you get into problems when you start to claim that all of our modern technology is due to the scientific method and analytic thinking.

The Greeks had steam engines before the invention of the modern scientific method

Actually, you brought the invention of the steam engine into the conversation.

And, while the Greeks invented a rudimentary steam engine, the ancient Greek engine was not really of any practical use. The development of a commercially viable steam engine did not occur until much later. Developing a steam engine that could be used reliably, safely and efficiently for transportation, etc., required scientific knowledge of thermodynamics, the behavior of gases, metallurgy, etc.

I do grant that the scientific method has an important effect on the way our modern technology works

Which is what led to my question about why the author thinks that the Eastern mode of thought is superior to the Western mode of thought for "understanding the contribution knowledge makes to the technical accomplishment of our civilization". When I phrased the question, I did not mean it in an argumentative sense, I actually meant I am interested to hear his thoughts on the subject - which is one of the reasons I intend to read the book.

Actually, you brought the invention of the steam engine into the conversation.

I spoke about the invention of the steam engine as a means for pumping water out of mines. The Greeks never tried to use it for that purpose.

required scientific knowledge of thermodynamics, behavior of gases, metallurgy, etc.

I don't think that Thomas Newcomen had much scientific knowledge of thermodynamics. Most of thermodynamics developed after there were already commercial steam engines.

I think knowledge about metallurgy at the time wasn't mainly scientific but based on trades. You had smiths who learned it by being smiths and who then passed it down to an apprentice.

I spoke about the invention of the steam engine as a means for pumping water out of mines. The Greeks never tried to use it for that purpose.

True, but you and two other people pointed out that the Greeks had invented the steam engine as if that somehow invalidated something that I had said.

Most of thermodynamics developed after there were already commercial steam engines.

This is not really true. Scientific work in the area of thermodynamics had been done in the 17th century by Denis Papin, Otto von Guericke, Robert Boyle, Thomas Savery and others. Some of this work was directly applicable to steam engines.

I don't think that Thomas Newcomen had much scientific knowledge of thermodynamics.

I think it is likely that Newcomen was familiar with Savery's earlier work on steam engines, at a minimum. And, whereas you are focused on the invention (or reinvention, in Newcomen's case) of the steam engine, I think that the ongoing development of the steam engine is at least as relevant. The development of the steam engine continued well past the end of Newcomen's life - the late 18th century engines and the 19th century steam engines used on trains, ships and in industry were much improved over the versions produced by Newcomen - and many of these improvements came about from scientific knowledge in the areas of gas laws, thermodynamics, etc.

I think knowledge about metallurgy at the time wasn't mainly scientific but based on trades.

This is largely true, particularly in the 18th century. But as noted above, the steam engine continued to be developed and improved throughout the 19th century. Some of these improvements were made possible by improved materials (metals), and by the latter half of the 19th century, metallurgy was becoming more scientific, particularly with regard to improvements in steel production.

19th century steel manufacturing also gave a big boost to the steam engine industry in an indirect manner - quality steel greatly improved the strength and longevity of railroad tracks and trestles, leading to increased use of rail and increased demands for more powerful and more efficient steam engines. Since the quote that started this conversation was about the "technical accomplishment of our civilization" and "the ingenuity of the inventions, the range and density of technical mediation, the multiplicity of artifactual interfaces in a global technoscientific economy", I think that it is useful to look at how various industries (such as the steel and railroad industry) affected the later development of the steam engine rather than focusing exclusively on its early commercialization by Newcomen.

I agree with your general point. A lot of science was needed to create the transistor. But in this particular case, the design of Newcomen's engine is very simple, needs no science beyond the notion of 'hot water expands', and could certainly be comprehended and built by the ancient Greeks and Romans. Of course, it may be that the ancients didn't use (or use up) enough coal to be in need of a mine water pumping solution, and a steam locomotive is rather more complex.

Western theories of knowledge tend to fix on statements and beliefs—symbolic, linguistic, propositional entities—and have developed highly technical concepts of evidence, warrant, and justification, all to explain a preposterously small fragment of knowledge—the part that is true, “the truth.” This contemplative, logocentric approach, much favored in antiquity and never really shaken from later tradition, is counterproductive for understanding the contribution knowledge makes to the technical accomplishment of our civilization. The ingenuity of the inventions, the range and density of technical mediation, the multiplicity of artifactual interfaces in a global technoscientific economy attest to the reach and depth of contemporary knowledge. But this knowledge resists logical analysis into simpler concepts, seldom climaxes in demonstrable truth, and does not stand to pure theory as mere application or derivative “how-to” knowledge. Thus does the best knowledge of our civilization become unaccountable in the epistemologies of the epistemologists.

Barry Allen in Vanishing into Things

What reason is there to think that Allen is correct when he says that the "contemplative, logocentric approach" is a poor match for understanding the relationship between knowledge and technology? In the passage you quote, he makes a number of claims that seem (at best) extremely doubtful. Does he justify them elsewhere?

(Perhaps he -- or you -- might consider this a fruitlessly contemplative and logocentric question, too much concerned with evidence, warrant and justification. Too bad.)

Let's take the best computer programmer. Imagine he tries to write down all his important knowledge in a book: every statement that he believes he can justify as true.

Then he gives the book to a person of equal IQ who has never programmed.

How much of the expert's knowledge gets passed down through this process? I grant that some knowledge gets passed down, but I don't think all of it does. The expert programmer has what's commonly called "unconscious competence".

Allen might call that kind of knowledge part of the best knowledge of our civilization. It's crucial knowledge for our technological progress.

But to get back to the main point: accepting that the contemplative, logocentric approach has flaws is not simply a matter of focusing on the approach itself, but of demonstrating alternatives.

This seems to be a complicated, abstruse way of saying "reading statements of knowledge doesn't thereby convey practical skills".

This seems to be a complicated, abstruse way of saying "reading statements of knowledge doesn't thereby convey practical skills".

If I explain one paradigm in the concepts of another paradigm, that by its nature leads to complicated and abstruse ways of making a statement.

But in this case the claim is more general. There are cases where the programmer can describe a heuristic that he uses to make decisions without pointing to a statement that has justified veracity.

Google, for example, wants to give its managers good management skills. To do that, they give them checklists of what they are supposed to do when faced with a new recruit. Laszlo Bock from Google's People Operations credits the email that delivers that checklist with a resulting productivity improvement of 25%, due to new recruits coming up to speed faster.

You don't need to understand the justification for a checklist item to be able to profit from following a ritual that goes through all the items on the checklist. Following a ritualistic checklist would be knowledge in the Chinese sense, where there's a huge emphasis on following proper protocols, but it wouldn't be seen as knowledge in the Western philosophical tradition.

But why does it matter? What harm can come from thinking that knowledge is about demonstrable truths? If generating knowledge is about generating demonstrable truths you can use the patent system to effectively reward knowledge creation.

You don't need to understand the justification for a checklist item to be able to profit from following a ritual that goes through all the items on the checklist. Following a ritualistic checklist would be knowledge in the Chinese sense, where there's a huge emphasis on following proper protocols, but it wouldn't be seen as knowledge in the Western philosophical tradition.

I don't understand your point. The western tradition is perfectly capable of talking about the knowledge that following this checklist results in a measurable 25% improvement. So you must mean something else but I don't know what.

Nobody knows everything at the same time. The knowledge is split between the person following the checklist and the one who designed it. That doesn't make it a different kind of knowledge. And if the person who designed it just tested lots of random variations and has no idea why this one works, or if the designer is dead and didn't pass on his ideas, then there is less knowledge, but it's still the same kind of knowledge.

The programmer is a paradigm case. He works with very well defined logical or mathematical models of code execution. But he constantly relies on the correct functioning of a myriad other pieces of software and hardware. He doesn't know in full detail why he has to talk to these other things the way he does; he just memorizes a great deal of API details which are neither arbitrary nor clearly self-evident, and trusts that the hardware designers knew what they were doing.

So when you say:

There are cases where the programmer can describe a heuristic that he uses to make decisions without pointing to a statement that has justified veracity.

It seems to me that almost everything the programmer ever does can be framed this way. Suppose I know that under high contention I should switch to a different lock implementation; but I don't know how the two implementations actually work, so I don't know why each one is better in a different case. I also don't know where exactly the cutoff is, because it's hard to measure, so there's an indeterminate zone in between where I'm not sure what to use; so I have a heuristic that uses an arbitrary cutoff.

Is this a heuristic that has no "justified veracity", or is this a kind of knowledge where I can prove (with benchmarks) that the heuristic leads to good results, with an underlying model (map) of 'lock A has less contention overhead, but lock B takes less time to acquire'?
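The lock-choice heuristic above can be written down explicitly, with its arbitrary cutoff and its empirical justification side by side. A minimal sketch, in which "lock A", "lock B", the cutoff, and the benchmark are all hypothetical placeholders rather than real measurements:

```python
import threading
import time

# Illustrative sketch of the heuristic described above. The names and the
# cutoff are made up; the point is that the rule can be stated and then
# justified by benchmarks, without a model of the lock internals.

CONTENTION_CUTOFF = 4  # arbitrary cutoff, in expected contending threads

def choose_lock(expected_contenders: int) -> str:
    """Lock A has less contention overhead; lock B is cheaper to acquire."""
    return "lock A" if expected_contenders >= CONTENTION_CUTOFF else "lock B"

def benchmark(lock, n_threads: int, iterations: int = 1000) -> float:
    """Empirical justification: time n_threads incrementing a shared counter."""
    counter = 0

    def worker():
        nonlocal counter
        for _ in range(iterations):
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start
```

Running `benchmark` at several thread counts for each candidate lock is what would pin down the cutoff; the heuristic itself stays a one-line rule backed by measurements rather than by a theory of contention.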

He doesn't know in full detail why he has to talk to these other things the way he does; he just memorizes a great deal of API details which are neither arbitrary nor clearly self-evident, and trusts that the hardware designers knew what they were doing.

I don't think knowing the API is sufficient for being a good programmer. The productivity difference Google sees between a 10x programmer and a normal programmer is not about the 10x programmer having memorized more API calls.

Simply teaching someone about API calls and about the specifics about the cutoff between different lock implementations isn't going to make someone a 10x programmer.

Part of being a good programmer is knowing when to check in your code and when it makes sense to write additional tests to check your code. One way to check, which some people at the local LW dojo use, is to ask themselves: "Would I be surprised if my changes crashed the program?" If their System 1 says they wouldn't be surprised, they go and spend additional time writing tests or cleaning up the code.

That heuristic is knowledge that can be verbalized, but it moves farther away from justified veracity. You can go a step further and talk about how to pass down the System 1 sense of surprise from one programmer to another.

If you go and study computer science, you won't find classes on developing an appropriate sense of surprise. It's not the kind of knowledge that professors of computer science work to create.
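The two-part distinction in the comment above might be sketched like this: the verbalizable (System 2) half of the check-in heuristic encodes easily as explicit rules, while the System 1 sense of surprise enters only as an opaque input. The function name and cutoffs are illustrative, not anyone's real policy:

```python
# Illustrative only: the verbalizable (System 2) half of the "should I
# write more tests before checking in?" heuristic described above. The
# cutoffs are made up; the System 1 "sense of surprise" enters only as
# an opaque boolean that this code cannot itself compute.

def should_write_more_tests(lines_changed: int,
                            touches_core_module: bool,
                            crash_would_surprise_me: bool) -> bool:
    """Decide whether a change warrants extra tests before check-in."""
    if not crash_would_surprise_me:  # a crash seems plausible: test more
        return True
    if touches_core_module and lines_changed > 50:
        return True
    return False
```

Everything here except `crash_would_surprise_me` is the kind of knowledge a book or a class can transmit; the value of that one input is exactly the trained intuition that resists being written down.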

I don't understand your point. The western tradition is perfectly capable of talking about the knowledge that following this checklist results in a measurable 25% improvement. So you must mean something else but I don't know what.

How many checklists have you used in the last week? How many things you do follow strict checklists? How many serious philosophers deal with the issues surrounding checklists?

I think there is societal resistance towards adopting more checklists.

In Google's case they have hard data to justify their checklist, but a lot of checklists that are in use don't have hard data backing them up and are still useful.


To move meta, of course the ideas that I try to express on LW can be expressed in the English language. Trying to express ideas that can't be expressed in English doesn't make any sense.

I don't think knowing the API is sufficient for being a good programmer.

Of course it isn't (but it is necessary). I didn't mean to imply that it was. But I do think this example generalizes to almost all the other things that a very good programmer needs to do.

That heuristic is knowledge that can be verbalized, but it moves farther away from justified veracity.

Why do you think so? To me (as a programmer), heuristics about when to check what feel perfectly knowable and verbalizable. To be sure, they would take a lot of words. Maybe more importantly, they're highly entangled with many other things a programmer needs to know and do. But I don't see what would make them less justified or less explicit, just more complex.

You can go a step further and talk about how to pass down the System 1 sense of surprise from one programmer to another.

It's a truism that you can't gain habits of thought, or mental heuristics, just by abstractly understanding and memorizing a bunch of facts; that's just not how humans learn things.

That doesn't necessarily mean there's a lot of information in the heuristics that isn't contained in the dry facts. You can't get the heuristics by practicing without being aware of the facts. If you can't explain why you act out the heuristics you do in terms of the facts you learned, or if you can't verbalize what heuristics you're acting on, that is more likely to be a failure of introspection than evidence that your mind developed extra incommunicable knowledge the facts didn't imply.

If you go and study computer science, you won't find classes on developing an appropriate sense of surprise. It's not the kind of knowledge that professors of computer science work to create.

Because they're studying computer science, not programming.

Yes, if you look at software engineering, its state of formal education is quite bad compared to some other engineering professions. I even have a good idea of the historical causes of this. But that doesn't mean programming can't be taught or even that nobody learns it well formally, just that most programmers don't, as a social fact. They're encouraged to experiment and self-teach; they start working as soon as someone will pay them, which is much earlier than 'when they've mastered programming'; they influence one another; and the industry on average doesn't have a lot of quality control, quality standards or external verification, just 'ship it once it's ready'.

How many checklists have you used in the last week? How many things you do follow strict checklists? How many serious philosophers deal with the issues surrounding checklists?

No checklists that I can think of. I have no idea what philosophers en masse spend their time on, serious or otherwise.

Checklists are a specific solution which need to be justified wrt. specific problems, most of which have alternative solutions. I don't think 'not using checklists' is a good proxy for 'not doing a job as well as possible' without considering alternatives and the details of the job involved. At least, as long as you're talking about explicit checklists consulted by humans, and not generalized automated processes that reify dependencies in a way that doesn't let you proceed without completing the "checklist" items.

Going back to your general argument, are you saying that Eastern philosophical traditions are better at getting people to use checklists (or other tools) without understanding them, while Western ones encourage people not to use things they don't understand explicitly?

Going back to your general argument, are you saying that Eastern philosophical traditions are better at getting people to use checklists (or other tools) without understanding them, while Western ones encourage people not to use things they don't understand explicitly?

In Confucianism, a wise person is a person who follows the proper rituals for every occasion (as the book argues). I think checklists do define rituals. A person who values following rituals is thus more likely to accept a checklist and follow it.

Culturally, there's a sense that asking a Western doctor to use a checklist means assuming that he's not smart enough to do the right thing. I don't think that exists to the same extent in China.

Before germ theory, Western doctors refused to wash their hands because they didn't see the point of cleanliness as a value. I need to do a bit of research to get data about Chinese medicine, but from what I have seen of Ayurvedic medicine, they do tons of saucha rituals that are about producing cleanliness, like tongue-scraping.

Why do you think so? To me (as a programmer), heuristics about when to check what feel perfectly knowable and verbalizable. To be sure, they would take a lot of words.

I think you can easily describe to me a System 2 heuristic that you use to decide when to check more. I don't think you can easily describe how you feel the emotion of surprise, which exists on a System 1 level. Transferring triggers of the emotion of surprise from one person to another is hard.

Yes, if you look at software engineering, its state of formal education is quite bad compared to some other engineering professions. I even have a good idea of the historical causes of this.

I would say it's because the relevant professors see issues of algorithm design as higher status than asking themselves when programmers should recheck their code. It seems no computer science professor took the time to set up a study to test whether teaching programmers to type faster increases their programming output. That's because mathematical knowledge gets seen as more pure and more worthy. It has to do with the kind of knowledge that's valued.

Mathematical proofs can provide strong justification and are thus higher status than messy experiments about teaching programming that can be confounded by all sorts of factors.

This leads to a misallocation of intellectual resources.

Culturally, there's a sense that asking a Western doctor to use a checklist means assuming that he's not smart enough to do the right thing. I don't think that exists to the same extent in China.

Before germ theory, Western doctors refused to wash their hands because they didn't see the point of cleanliness as a value.

Checklists are known to be very helpful with certain things, even if the relevant profession (e.g. doctors) don't always widely recognize this. On the other hand, why should I wash my hands if you can't give me a reason for cleanliness, neither theoretical (germ theory) nor empirical (it reduces disease incidence)?

Ideally, we should value checklists and rituals as a tool, but also require there to be good reasons for rituals, and trust that those who institute or choose the rituals know what they're doing. We should also be open to changing rituals, sometimes quickly, as new evidence comes in.

Maybe Eastern traditions achieve a better social balance than Western ones on this matter; I wouldn't know.

I think you can easily describe to me a System 2 heuristic that you use to decide when to check more. I don't think you can easily describe how you feel the emotion of surprise, which exists on a System 1 level. Transferring triggers of the emotion of surprise from one person to another is hard.

I think everyone agrees on this. Humans can't fully learn new behaviors just through abstract knowledge without practice.

> I would say it's because the relevant professors see issues of algorithm design as higher status than asking when programmers should recheck their code. It seems no computer science professor has taken the time to set up a study testing whether teaching programmers to type faster increases their programming output. That's because mathematical knowledge gets seen as purer and more worthy. It has to do with the kind of knowledge that's valued.

I would say it's because most CS professors don't really care about programming, and certainly not about typing speed. Programming isn't computer science! CS is a branch of applied math. The professors don't care about misallocation of intellectual resources across different fields, because they've already chosen their own field. You'd see the same problems if electrical engineers all studied physics instead, and picked up all the missing knowledge outside of formal education.

There are dedicated software engineering majors, and some of them are even good (or at least better at teaching programming than CS majors), but numerically they produce far fewer graduates.

> On the other hand, why should I wash my hands if you can't give me a reason for cleanliness, neither theoretical (germ theory) nor empirical (it reduces disease incidence)?

At the time of the hand-washing conflict, there wasn't much evidence-based medicine.

Today there is some evidence for checklists improving medical outcomes, but they don't get adopted easily.

I think there's decent evidence that combining hypnosis and anesthetic drugs is an improvement over just using anesthetic drugs.

> I think everyone agrees on this. Humans can't fully learn new behaviors just through abstract knowledge without practice.

I think the ability to be surprised by the right things is reasonably called knowledge and not only behavior.

> There are dedicated software engineering majors, and some of them are even good (or at least better at teaching programming than CS majors), but numerically they produce far fewer graduates.

According to Google, some of their programmers are 10x as productive as the average. Can a dedicated software engineering major teach the knowledge required to reach that level reliably? I don't think so. I don't think it even gets 2x.

Is there any software engineering major that tested whether they produce better programmers if they also teach typing? I don't think so.

> At the time of the hand-washing conflict, there wasn't much evidence-based medicine. Today there is some evidence for checklists improving medical outcomes, but they don't get adopted easily. I think there's decent evidence that combining hypnosis and anesthetic drugs is an improvement over just using anesthetic drugs.

This is all true, but it's a rather far jump from here to 'and a culture permeated by Eastern philosophy handles this better, controlling for the myriad unrelated differences, and accounting for whatever advantages Western philosophy may or may not have.'

> I think the ability to be surprised by the right things is reasonably called knowledge and not only behavior.

I agree.

> According to Google, some of their programmers are 10x as productive as the average.

Google hires programmers who are already 10x as productive as the average. It doesn't hire average programmers and train them to be 10x as productive using checklists or anything else. Maybe it hires programmers 9x as productive as the average and then helps them improve, but that's a lot harder to measure than a whole order of magnitude improvement.

> Can a dedicated software engineering major teach the knowledge required to reach that level reliably? I don't think so. I don't think it even gets 2x.

If you're asking whether there exist two different institutions with software engineering majors, where the graduates of one are 2x as good as those of the other, or 2x better than the industry average, then the answer is clearly yes.

If you're asking the same, but want to control for incoming freshman quality (i.e. measure the actual improvement due to teaching), then you hit the problem that there are no RCTs and there's no control group (other than those who don't go to college at all). There's also no way to make two test groups of college students not learn anything 'on the side' from the Internet or from their friends, or to do so in the same way. So it's really hard to measure anything on the scale of a whole major.
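For intuition on why this is hard, here's a back-of-the-envelope power calculation (a rough sketch of my own using the standard two-sample approximation, not anything from an actual study; `approx_n_per_group` is a made-up helper name). Even an idealized RCT would need a lot of students per group to detect a modest teaching effect on noisy productivity metrics:

```python
import math

def approx_n_per_group(effect_size_d, z_alpha=1.96, z_beta=0.8416):
    """Rough per-group sample size for a two-sample comparison.

    Defaults correspond to alpha = 0.05 (two-sided) and power = 0.80;
    effect_size_d is Cohen's d (standardized mean difference).
    """
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size_d) ** 2)

# A clearly "better" cohort might plausibly show up as a medium
# effect (d ~ 0.5) on a noisy productivity metric:
print(approx_n_per_group(0.5))  # dozens of students per group
print(approx_n_per_group(0.2))  # a small effect needs hundreds per group
```

And that's before any of the confounding problems above, which an observational comparison between majors can't escape at all.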

Lots of people have measured interventions on the scale of a single course. Some of them may help (like typing); in fact I hope some of them do help, otherwise the whole major would only give you credentials. I'm not disputing this, but I also don't see the relation between the existence of some useful skills that aren't explicit knowledge (in this case, motor skills everyone has explicit knowledge about) and a grand difference between societies or philosophies.

I'm a programmer, and the only part of college that was useful in my field was the freshman "intro to coding" courses. Six months in I was able to do the job I was hired for out of college.

College is a racket.

I read the source before reading the quote and was expecting a quote from The Flash.

I just now looked up Vanishing into Things on Amazon and it looks quite interesting. Have you read the book in its entirety? What are your thoughts about it?

I haven't yet finished it.

I bring it up because many people here still equate knowledge with justified true belief, when it's only one form of knowledge.

Being clear about the fact that there are different ways of knowing is very important for the quest of rationality. The example of Chinese philosophy is relatively benign and doesn't trigger mind-killing reflexes the way postcolonial thought does.

The Chinese also actually act based on their idea of knowledge, which makes it more believable. As China becomes culturally more influential, it's also useful to understand their thought better.

The book sounds interesting. When I read your quote from the book, I initially misinterpreted it as an anti-philosophy comment of the sort one occasionally encounters, but after reading the blurb for the book on Amazon, I realized the quote was contrasting Eastern vs. Western thought.

One thing I am curious about - if the Eastern mode of thought is really superior to the Western mode of thought for "understanding the contribution knowledge makes to the technical accomplishment of our civilization", how does the author explain the fact that the scientific method, the industrial revolution, and (to use his words), "the multiplicity of artifactual interfaces in a global technoscientific economy" grew out of the Western intellectual tradition?

However, I do think that there are interesting differences between the traditional Eastern way of thinking and the traditional Western way of thinking, and that each has its unique strengths. An interesting book on this topic is The Geography of Thought by Richard Nisbett; it points out the differences between Eastern and Western thought without really painting one as "better" than the other. Note that Nisbett's book is aimed at a general audience whereas I suspect that Allen's may be aimed at an academic audience.

I'd be interested in hearing your thoughts about Allen's book once you've finished reading it. I'm putting it on my "to read" list, but I'm not sure when I'll get to it.

> how does the author explain the fact that the scientific method, the industrial revolution, and (to use his words), "the multiplicity of artifactual interfaces in a global technoscientific economy" grew out of the Western intellectual tradition?

Whether the industrial revolution came out of the intellectual tradition is up for debate. If you take Henry Ford as one of the core people of the industrial revolution: Ford didn't go to university. I think most of the knowledge that made Ford successful wasn't about believing justified true statements but was of a more implicit nature.

The people who invented the steam engine also didn't have university degrees. They were tradesmen who relied on mechanical skill for their inventions. Western intellectuals didn't concern themselves with optimal systems for pumping water out of mines the way Thomas Newcomen did.

The Industrial Revolution was pretty much complete decades before Henry Ford was born. Newcomen is much more to the point.


> Remember the only thing you lose is time

If you simply the university to dimensions of space and time, I guess that could be true. This quote got me to really stretch to see its truth.

"simply the university" => "simplify the universe"?


Yes, thanks for catching my mistake :) Upvote for you!