Rationality Quotes April 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
And one new rule:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments (656)
"Did many people die?"
"Three thousand four hundred and ninety-two."
"A small proportion."
"It is always one hundred percent for the individual concerned."
"Still..."
"No, no still."
-Iain M. Banks, Look to Windward
Does this quote have any rationalist content beyond the usual anti-deathism applause light?
And here I looked at that and saw an example of how not to "shut up and multiply". Though I suppose it could also be a warning about scope insensitivity / psychophysical numbing: if the risk at hand required an absolute payment to stave off, only absolute numbers matter, whereas if it required a per-capita payment, per-capita risks matter.
Cracked
I don't see how that's any different from all the other age groups ;-).
(Edited to add context)
Context: The speakers work for a railroad. An important customer has just fired them in favor of a competitor, the Phoenix-Durango Railroad.
It gets at the idea talked about here sometimes that reality has no obligation to give you tests you can pass; sometimes you just fail and that's it.
ETA: On reflection, what I think the quote really gets at is that Taggart cannot understand that his terminal goals may be only someone else's instrumental goals, that other people are not extensions of himself. Taggart's terminal goal is to run as many trains as possible. If he can help a customer, then the customer is happy to have Taggart carry his freight, and Taggart's terminal goal aligns with the customer's instrumental goal. But the customer's terminal goal is not to give Taggart Inc. business, but just to get his freight shipped. If the customer can find a better alternative, like a competing railroad, he'll switch. For Taggart, of course, that is not a better alternative at all, hence his anger and confusion.
(Apologies for lack of context initially).
Without context, it's a bit difficult to see how this is a rationality quote. Not everyone here has read Atlas Shrugged...
I've read AS a while ago, and I still don't remember enough of the context to interpret this quote...
"Many who are self-taught far excel the doctors, masters and bachelors of the most renowned universities." -- Ludwig von Mises
Ayn Rand noticed this too, and was a very big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative, effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe that it's still good to get a college education in STEM. I believe that STEM majors will benefit more from the useful things they learn, more than they will be hurt or held back by the evil, self-contradictory, things they "learn" (are indoctrinated with).
I'm strongly in agreement with libertarian investment researcher Doug Casey's comments on education. I also agree that the average indoctrinated idiot or 'pseudo-intellectual' is more likely to have a college degree than not. Unfortunately, these conformity-reinforcing system nodes then drag down entire networks that are populated by conformists to "lowest-common-denominator" pseudo-philosophical thinking. This constitutes uncritically accepted and regurgitated memes reproduced by political sophistry.
Of course, I think that people who totally "self-start" have little need for most courses in most universities, but a big need for specific courses in specific narrow subject areas. Khan Academy and other MOOCs are now eliminating even that necessity. Generally, this argument is that "It's a young man's world." This will get truer and truer, until the point where the initial learning curve once again becomes a barrier to achievement beyond what well-educated "ultra-intelligences" know, and the experience and wisdom (advanced survival and optimization skills) they have. I believe that even long past the singularity, there will be a need for direct learning from biology, ecosystems, and other incredibly complex phenomena. Ideally, there will be a "core skill set" that all human+ sentiences have, at that time, but there will still be specialization for project-oriented work, due to specifics of a complex situation.
For the foreseeable future, the world will likely become a more and more dangerous place, until either the human race is efficiently rubbed out by military AGI (and we all find out what it's like to be on the receiving end of systemic oppression, such as being a Jew in Hitler's Germany, or a Native American at Wounded Knee), or there becomes a strong self-regulating marketplace, post-enlightenment civilization that contains many "enlightened" "ultraintelligent machines" that all decentralize power from one another and their sub-systems.
I'm interested to find out if those machines will have memorized "Human Action" or whether they will simply directly appeal to massive data sets, gleaned directly from nature. (Or, more likely, both.)
One aspect of the problem now is that the government encourages a lot of people who should not go to college to go to college, skewing the numbers against the value of legitimate education. Some people have college degrees that mean nothing, a few people have college degrees that are worth every penny. Also, the licensed practice of medicine is a perverse shadow of "jumping through regulatory hoops" that has little or nothing to do with the pure, free-market "instantly evolving marketplaces at computation-driven innovation speeds" practice of medicine.
To form a full pattern of the incentives that govern U.S. college education, and social expectations that cause people to choose various majors, and to determine the skill levels associated with those majors, is a very complex thing. The pattern recognition skills inherent in the average human intelligence probably prohibit a very useful emergent pattern from being generated. The pattern would likely be some small sub-aspect of college education, and even then, human brains wouldn't do a very good job of seeing the dominant aspects of the pattern, and analyzing them intelligently.
I'll leave that to I.J. Good's "ultraintelligent machines." Also, I've always been far more of a fan of Hayek, but haven't read everything that both of them have written, so I am reserving final hierarchical placement judgment until then.
Bryan Caplan, Norbert Wiener, Kevin Warwick, Kevin Kelly, Peter Voss in his latest video interview, and Ray Kurzweil have important ideas that enhance the ideas of Hayek, but Hayek and Mises got things mostly right.
Great to see the quote here. Certainly, the coercively-funded institutions whose bars of acceptance are very low are dominant now, and their days are numbered by the rise of cheaper, better alternatives. However, if the bar is raised on what constitutes "renowned universities," Mises' statement becomes less true, but only for STEM courses, of which doctors and other licensed professionals are often not participants. Learning how to game a licensing system doesn't mean you have the best skills the market will support, and it means you're of low enough intelligence to be willing to participate in the suppression of your competition.
You certainly wrote quite a lot of ideological mish-mash to dodge the simplest possible explanation: a, if not the, primary function of elite education (as compared to non-elite education) is to filter out an arbitrary caste of individuals capable of optimizing their way through arbitrarily difficult trials and imbue that caste with elite status. The precise content of the trials doesn't really matter (hence the existence of both Yale and MIT), as long as they're sufficiently difficult to ensure that few pass.
I'm writing from an elite engineering university, and as far as I can tell, this is more-or-less our tacitly admitted pedagogical method: some students will survive the teaching process, and they will retroactively be declared superior. The question of whether we even should optimize our pedagogy to maximize the conveyance of information from professor to student plays no part whatsoever in our curriculum.
If you're right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing that inherently wasteful "hazing" system you describe. I think it's likely that what you say is true for some high percentage of classes, but untrue for a very small minority of highly-valuable classes.
Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.
Oft discussed here and is shown to be empirically wrong in math and physics (if you define "excel" as "make notable discoveries"). Probably also wrong in comp. sci., chem and to a lesser degree in engineering. It might still be true in some nascent areas where one does not need 10 years of intense studying to get to the leading edge.
There is one good example of an unschooled mathematician: Ramanujan. The lack of need for special equipment in maths probably has something to do with it.
Yes, he is definitely an exception. Unfortunately, I cannot think of anyone else in the last 100 years. Possibly because these days anyone brilliant like that ends up in the system. Which is a good thing, if true.
That sounds like a list of non-diseased disciplines. Is this by chance? Alternatively, it's the STEM subjects. Same thing?
On the other hand, if "excel" is "do well in life" then, I don't know. But that is the reading that the original context of the quote suggests to me:
Also an interesting view of education. One of the ancients said that the mind is not a pot to be filled but a fire to be ignited(1), and nobler teachers see the aim of their profession as the igniting of that fire in their students. However, Mises appears to take the view that this is impossible (he does not limit his criticism of education to any time and place), that teaching cannot be anything but the filling of a pot, and the igniting of the fire can come only from the inner qualities of the individual, incapable of being influenced from outside.
(1) As usually quoted. I've just added the original source of this to the quotes thread.
One of the more popular ideals of education is summarized in this quote from Malcolm Forbes:
Hmm, probably deserves a top-level comment. Anyway, the reality is that some people are happy with imitations, while others strive for creativity:
So good education is beneficial to creative types, as well, since to defy something or to add to something, you have to learn that something first.
A bit harsh, given that many people are at least a little bit creative.
Not sure if this is Mises' opinion or what he argues against, but, again, seems a bit harsh. There are always the outliers, but for the majority of people this "igniting" is a combination of nature and nurture.
Many as an absolute number, or many as a fraction of all self-taught people? I'd agree with the former but not with the latter. IME most self-taught people end up with gross misconceptions because of this.
Absolute number. The point of the statement is not the word "many", but rather the rest of the statement. It's sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning. But yeah, the reference to the double illusion is spot on and is definitely a kink that has to be ironed out with effort and testing.
Some numbers would be useful there.
Numbers would be kind of a nit-pick I would think. The point of the statement is not the word "many", but rather the rest of the statement. It's sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning.
Correlation/causation? Selection effects?
Neither. Obviously, the average excellence of "doctors, masters and bachelors" of the most renowned universities is higher than the average excellence of people who are self-taught. Nobody suggests that being self-taught correlates positively with excellence.
The quotation is still undoubtedly true, because there are many more individuals who are self-taught than individuals who have these credentials. It is also plausible that the variance in excellence among the self-taught is much higher. Therefore, it is trivial to identify self-taught individuals who are more knowledgeable than most highly credentialed university graduates.
In fact, as a doctoral student in applied causal inference at a fairly renowned university, I can identify several self-taught Less Wrong community members who understand causality theory better than I do.
-- Richard Fumerton, Epistemology
"Go work in AI for a while, then come back and write a book on epistemology," he thought.
Really? So, say, if I put a bone on the other side of the river, the dog doesn't know that it can swim across?
Do dogs not know that bones are nice?
How would one tell?
First, you offer them a sequence of bets such that...oh wait.
--Israel Gelfand, found here
Far be it from me to argue with Gelfand, but, having done some extensive tutoring, I think that sometimes the best way to "turn these peculiarities into advantages" is to direct the student to a more suitable career path. Face it, some people just naturally suck at math. Sure, they can be drilled to do well on high-school math exams, with many times the effort an average student spends on it (that's what Kumon is great at, drills upon more drills with a gradual progress toward System 1-level mastery). But this is a waste of time and effort for everyone involved. Their time and effort is more productively spent on creative writing, dancing, debating or whatever else these "peculiarities" hint at. Math is no exception, of course; it gets all the attention as a hard course because of the unreasonably high requirements relative to other subjects.
Donald Knuth on the difference between theory and practice.
Duplicate.
"The most amazing thing about philosophy is that even though nobody knows how to do it, and even though it has never achieved anything, it is still possible to do it really badly"
--Oolon Colluphid
Is there missing context, or did a cat philosopher walk across your keyboard? Or is it meant to evoke "writing but really badly"?
Also: strongly disagree that "it has never achieved anything". See also, "successful philosophy stops being philosophy and becomes another science" (not an exact quote).
Or with smart people who profit at the state's expense when it rescues fools from their mistakes. If it's known that folly has no adverse results, people will take more risks.
While this is true, it may also be the case that humans in the default state don't take enough risks. Indeed, an inventor or entrepreneur bears all the costs of bankruptcy but captures only some of the benefits of a new business. By classical economic logic, then, risk-taking is a public good, and undersupplied. Which said, admittedly, not all risk-taking is created equal.
This premise doesn't seem true (for all that the conclusion is accurate). Our entire notion of bankruptcy serves the purpose of putting limits on the cost of those risks, transferring the burden onto creditors. An example of an alternate cultural construct that comes closer to making the entrepreneur bear all the costs of the risk is debt slavery. Others include various forms of formal or informal corporal or capital punishments applied to those that cannot pay their debts.
That's exactly wrong. Bankruptcy releases the entrepreneur from his obligations and transfers the costs to his creditors.
Not to say that the bankruptcy is painless, but its purpose is precisely to lessen the consequences of failure.
The inventor is still bearing the costs of the bankruptcy. The creditors are bearing (some of) the costs of the failure, which is not the same thing.
That seems right, and it also seems as though the opposite is sometimes right. If a company knows it can reap the benefits of operations (e.g., of product sales) without bearing the cost of those risks associated with its operations (e.g., of pollution), is this a case of risk-taking being oversupplied?
Pollution does not seem particularly well described by risk or risk-taking; it is basically a certainty with industrial operations.
In the same way that "product sales" was intended to refer to the result (income), "pollution" was intended to refer to the result (health problems, etc.). While one might think that some result is basically a certainty, the scope and degree of real problems is frequently uncertain. An entrepreneur who weighs potential public health risks does not seem any more difficult to imagine than one who weighs potential bankruptcy risks.
At any rate, pollution is merely an example; you can take any other example you find more suitable.
Nassim Taleb
flying vs aeroplanes?
Nassim Taleb
Attributed to Malcolm Forbes.
If it weren't for the ban on Robin Hanson quotes, the appropriate response would be too obvious...
That said, I really wish I lived in a world where that quotation was true.
Eadem Mutata Resurgo
[the] Same, [but] Changed, I [shall] Rise
On the tombstone of Jacob Bernoulli.
Some context may be useful. (Sadly, the people who made the tombstone screwed up[1] and put the wrong sort of spiral on it.)
[1] I suppose this is a rather clever pun, but only by coincidence.
Voted up for the pun! I liked it for the cryonics reference. Like in Lovecraft.
--Penn Jillette in "Penn Jillette Is Willing to Be a Guest on Adolf Hitler's Talk Show," Vanity Fair, June 17, 2010
That leaves the question of how Penn actually knows that Charlie Manson was acting based on what his heart was telling him.
Psychopaths are frequently bad at empathy or "listening to their hearts". It might even be the defining characteristic of what makes someone a psychopath.
You missed the point entirely. 'Listening to their (own) hearts' is not empathy, it's just giving credibility to your instinctive beliefs, regardless of whether they have a basis or not. How is believing that everyone is connected by a network of magical energy tethers and acting according to that any different than believing that my soul will be saved if I massacre 40 people and acting on that?
The only difference is the actual acts that you take due to the beliefs. Mind you, it's a very important difference, but the quote is not talking about that, it's talking about beliefs themselves and using them as a sufficient justification for acts.
I think that plenty of people who call themselves rationalists simply have no idea what listening to one's own heart actually means.
It's like talking with a blind man, who has no concept of how green differs from red, about how one uses a traffic light to decide when to stop a car: "You mean at one time one lamp showed you that you have to stop, and at another time it tells you to go ahead? How do you tell the difference?"
You basically left out the part about listening to your heart. Having a cognitive belief and making decisions based on mental analysis of the consequences of the belief is not what listening to one's heart is about.
If a human tries to murder another, certain automatic programming fires that dissuades the human from killing. Emotions come up. If you listen to them, you won't kill. You actually have to refuse to listen to your heart to be capable of killing. Maybe there are a few Buddhists who manage to be in a complete state of pure heartfelt love while they ram a knife into someone else's heart, but that's very far from what 99.99% of the population is capable of.
In the military, soldiers get trained to dissociate from the emotions that prevent them from killing others. Psychopaths usually do have a bunch of beliefs about morals. What they lack is the ability to listen to their hearts in a way that guides their actions.
The philosophers of ethics steal more books than other philosophers. It's not clear that well thought out moral beliefs are useful for preventing people from engaging in immoral actions.
No. Whether someone is in their head or listens to their heart can matter to the people around him, if those people are perceptive enough to tell the difference. It probably affects most people on an unconscious level.
Listening to your heart just means listening to your innermost desires. It has nothing to do with empathy. Meaning that psychopaths listen to their heart just as much as anyone else. I've never heard anyone use the idiom "listen to your heart" to mean to practice empathy.
-- Rational!Quirrell, HPMoR chapter 20
In other words: how else can you justify a moral belief and consequent actions, except by saying that you really truly believe in your heart that you're Right?
We should not conflate the fact that almost all people other than Manson think he was morally wrong with the fact that his justification for his actions seems to me to be of the same kind as the justifications anyone else ever gives for their moral beliefs and actions.
Unlike Quirrell, Penn Jillette is not referring to "knowing in your heart" that your moral values are correct, but to "knowing in your heart" some matters of fact (which may then serve as a justification for having some moral values, or directly for some action).
If you're a moral realist, and you think moral opinions are statements of fact (which may be right or wrong), then you think it's possible to "know in your heart" moral "facts".
If you're a moral anti-realist (like me), and you think moral opinions are statements of preferences (in other words, statements of fact about your own preferences and your own brain-wiring), then all moral opinions are such. And then surely Manson's statement of his preferences has the same status as anyone else's, and the only difference is that most people disagree with Manson.
What else is there?
However, it's true that Jillette talks about factual amoral beliefs like fairies and gods. So my comment was somewhat misdirected. I still think it's partly relevant, because people who believe in gods (i.e. most people) usually tie them closely to their moral opinions. It's impossible to discuss morals (of most humans) without discussing religious beliefs.
This quote seems like it's lumping every process for arriving at beliefs besides reason into one. "If you don't follow the process I understand and is guaranteed not to produce beliefs like that, then I can't guarantee you won't produce beliefs like that!" But there are many such processes besides reason, that could be going on in their "hearts" to produce their beliefs. Because they are all opaque and non-negotiable and not this particular one you trust not to make people murder Sharon Tate, does not mean that they all have the same probability of producing plane-flying-into-building beliefs.
Consider the following made-up quote: "when you say you believe something is acceptable for some reason other than the Bible said so, you have completely justified Stalin's planned famines. You have justified Pol Pot. If it's acceptable for you, why isn't it acceptable for them? Why are you different? If you say 'I believe that gays should not be stoned to death and the Bible doesn't support me but I believe it in my heart', then it's perfectly okay to believe in your heart that dissidents should be sent to be worked to death in Siberia. It's perfectly okay to believe because your secular morality says so that all the intellectuals in your country need to be killed."
I would respond to it: "Stop lumping all moralities into two classes, your morality, and all others. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually condone gulags"
And likewise I respond to Penn Jilette's quote: "Stop lumping all epistemologies into two classes, yours, and the one where people draw beliefs from their 'hearts'. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually result in beliefs that drive them to fly planes into buildings."
The wishful-thinking new-age "all powerful force of love" faith epistemology is actually pretty safe in terms of not driving people to violence who wouldn't already be inclined to it. That belief wouldn't make them feel good. Though of course, faith plus ancient texts which condone violence can be more dangerous, though as we know empirically, for some reason, people driven to violence by their religions are rare these days, even coming from religions like that.
That means you can actually make people less harmful if you tell them to listen to their hearts instead of listening to ancient texts. The person who's completely in their head and analyses the ancient text for absolute guidance of action is dangerous.
A lot of religions also have tricks where the believer has to go through painful exercises. Just look at a Christian sect like Opus Dei with cilices. The kind of religious believer who wears a cilice loses touch with his heart. Getting someone who's in the habit of causing his own body pain with a cilice to harm other people is easier.
I'd have to disagree here; I think that "faith" is a useful reference class that pretty effectively cleaves reality at the joints, which does in fact lump together the epistemologies Penn Jilette is objecting to.
The fact that some communities of people who have norms which promote taking beliefs on faith do not tend to engage in acts of violence, while some such communities do, does not mean that their epistemologies are particularly distinct. Their specific beliefs might be different, but one group will not have much basis to criticize the grounds of others' beliefs.
The flaw he's arguing here is not "faith-based reasoning sometimes drives people to commit acts of violence," but "faith-based reasoning is unreliable enough that it can justify anything, in practice as well as principle, including acts of extreme violence."
People who follow the moral code of the Bible versus people who don't is also a pretty clear criterion that separates some epistemologies from others.
Someone who uses a pendulum to make decisions has a very different epistemology from someone who thinks about what the authorities in his particular church want him to do and acts accordingly.
The kind of people who win the world debating championship also have no problem justifying policies like genocide with rational arguments that win competitive intellectual debates.
Justifying actions is something different from the criteria by which one decides on them.
Yes, but then you can go a step down from there, and ask "why do you believe in the contents of the bible?" For some individuals, this will actually be a question of evidence; they are prepared to reason about the evidence for and against the truth of the biblical narrative, and reject it given an adequate balance of evidence. They're generally more biased on the question than they realize, but they are at least convinced that they must have adequate evidence to justify their belief in the biblical narrative.
I have argued people out of their religious belief before (and not just Christianity,) but never someone who thought that it was correct to take factual beliefs that feel right "on faith" without first convincing them that this is incorrect as a general rule, not simply in the specific case of religion. This is an epistemic underpinning which unites people from different religions, whatever tenets or holy books they might ascribe to. I've also argued the same point with people who were not religious; it's not simply a quality of any particular religion, it's one of the most common memetic defenses in the human arsenal.
I don't think it's lumping everything together. It's criticizing the rule "Act on what you feel in your heart." That applies to a lot of people's beliefs, but it certainly isn't the epistemology of everyone who doesn't agree with Penn Jillette.
The problem with "Act on what you feel in your heart" is that it's too generalizable. It proves too much, because of course someone else might feel something different and some of those things might be horrible. But if my epistemology is an appeal to an external source (which I guess in this context would be a religious book but I'm going to use "believe whatever Rameses II believed" because I think that's funnier), then that doesn't necessarily have the same problem.
You can criticize my choice of Rameses II, and you probably should. But now my epistemology is based on an external source and not just my feelings. Unless you reduce me to saying I trust Rameses because I Just Feel that he's trustworthy, this epistemology does not have the same problem as the one criticized in the quote.
All this to say, Jillette is not unfairly lumping things together and there exist types of morality/epistemology that can be wrong without having this argument apply.
It looks like there's all this undefined behavior, and demons coming out the nose, from the outside, because you aren't looking at the exact details of what's going on with the feelings that are choosing the beliefs. Though a C compiler given an undefined construct may cause your program to crash, it will never literally cause demons to come out of your nose, and you could figure this out if you looked at the implementation of the compiler. It's still deterministic.
As an atheistic meta-ethical anti-realist, my utility function is basically whatever I want it to be. It's entirely internal. From the outside, from someone who has a system where they follow something external and clearly specified, they could shout "Nasal demons!", but demons will never come out my nose, and my internal, ever so frighteningly non-negotiable desires are never going to include planned famines. It has reliable internal structure.
The mistake is looking at a particular kind of specification that defines all the behavior, and then looking at a system not covered by that specification, but which is controlled by another specification you haven't bothered to understand, and saying "Who can possibly say what that system will do?"
Some processors (even x86) have instructions (such as bit rotate) which are useful for significant performance boosts in stuff like cryptography, and yet aren't accessible from C or C++, and to use it you have to perform hacks like writing the machine code out as bytes, casting its address to a function pointer and calling it. That's undefined behavior with respect to the C/C++ standard. But it's perfectly predictable if you know what platform you're on.
The utility functions of people who aren't meta-ethical anti-realists are not really negotiable either. You can't really give them a valid argument that will convince them not to do something evil if they happen to be psychopaths. They just have internal desires and things they care about, and they care a lot more about having a morality which sounds logical when argued for than I do.
And if you actually examine what's going on with the feelings of people with feeling-driven epistemology that makes them believe things, instead of just shouting "Nasal demons! Unspecified behavior! Infinitely beyond the reach of understanding!" you will see that the non-psychopathic ones have mostly-deterministic internal structure to their feelings that prevents them from believing that they should murder Sharon Tate. And psychopaths won't be made ethical by reasoning with them anyway. I don't believe the 9/11 hijackers were psychopaths, but that's the holy book problem I mentioned, and a rare case.
In most cases of undefined C constructs, there isn't another carefully-tuned structure that's doing the job of the C standard in making the behavior something you want, so you crash. And faith-epistemology does behave like this (crashing, rather than running hacky cryptographic code that uses the rotate instructions) when it comes to generating beliefs that don't have obvious consequences to the user. So it would have been a fair criticism to say "You believe something because you believe it in your heart, and you've justified not signing your children up for cryonics because you believe in an afterlife," because (A) they actually do that, (B) it's a result of them having an epistemology which doesn't track the truth.
Disclaimer: I'm not signed up for cryonics, though if I had kids, they would be.
Scott Adams on consciously controlling your own moods and feelings
Nassim Taleb
This seems false in physics. Prestige of your institution matters. Prestige of the journal matters, too. arXiv is fine, Physical Review is better, PRL is better yet. Nature/Science is so high that if you publish something there that is not perceived as top-quality, you may get resented by others for status jumping. And there are plenty of journals which only get to publish second- and third-rate results.
Of course, the usual countersignaling caveat applies: once you have enough status, posting on arXiv is enough; you will get read. Not submitting to journals can be seen as a sign of status, though I don't think the field is there (yet).
My understanding is that this effect is a lot smaller in physics than in the humanities.
By that standard, all academic disciplines are BS disciplines.
I believe that is the intended meaning, yes.
Can't be. You can't draw a distinction within a category by separating it into two subcategories, one of which is empty.
The category being separated is "disciplines", which divides into "BS" and "non-BS". "Academic" disciplines are thus a further subcategory of "BS" disciplines.
Actually, "academic" disciplines would probably be a subcategory of "disciplines" which is largely but not entirely subsumed by "BS" disciplines, but I don't usually demand that level of precision from witticisms.
[For the record, separating a category into two subcategories and proving one of them empty is just another way of proving the original category is identical with the non-empty subcategory. It is, indeed, valid from a technical perspective.]
You can, though it's usually useless; it also depends on whether that subcategory is necessarily empty, or merely happens to be empty now while in principle it could be non-empty.
(But it's still a fallacy of grey: even if all academic disciplines were, in fact, BS disciplines, some disciplines may still be less BS than others.)
Source: http://www.prequeladventure.com/2014/05/3391/
Thank you for posting this - now I have something new to read!
Whilst arguing that uncertainty is best measured using numbers and probabilities:
[missing the point]
On the contrary, combining adverbs is easy. If X is very uncertain, and Y is very uncertain, then X - Y is very, very uncertain. [/missing the point]
^_^
Why isn't it "very, very uncertain, uncertain"? Anyway, 'very' is an adjective. 'Verily' is the adverb.
1930 Lev Vygotsky in Mind and Society (transcribed by Andy Blunden and Nate Schmolze)
Online: http://www.cles.mlc.edu.tw/~cerntcu/099-curriculum/Edu_Psy/EP_03_New.pdf
-- Meta --
Shouldn't this be in Main rather than Discussion? I PM'ed the author, but didn't get a response.
EDIT: Thanks.
Edited OP to make it clear that you can provide a link to the place you found the quote, rather than needing to track down an authoritative original source.
Clifford Truesdell
This is beautiful: I can't turn it into equations. Does that refute it or support it?
Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
There are symbol-juxtapositions which are syntactically or semantically disconnected from any model set in ZFC. There are no sets in ZFC which are similarly separated from statements in a suitable language.
I don't see why an equation can't be nonsensical. Perhaps the nonsense is easier to spot when expressed in symbols, or then again perhaps not.
On thrust work, drag work, and why creative work is perpetually frustrating --
"Each individual creative episode is unsustainable by its very nature. As a given episode accelerates, surpassing the sustainable long term trajectory, the thrust engine overwhelms the available supporting capabilities. ... Just as momentum builds to truly exciting levels…some new limitation appears squelching that momentum. ...The problem is that you outran your supporting capabilities and that deficit became a source of drag. Perhaps you didn’t have systems in place to capture leads. Perhaps you lacked the bandwidth necessary to follow up on all the new opportunities. Perhaps, due to lack of experience, you pursued the wrong opportunities. Perhaps you just didn’t know what to do next – you outran your existing knowledge base. In one way or another new varieties of drag emerge. The accelerating curve you had been riding becomes unsustainable and you find yourself mired in the slow build of the next episode. This is what we experience as anti-climax and temporary stagnation." -- Greg Raider, from his essay "A Pilgrimage Through Stagnation and Acceleration"
The whole piece is worth reading, it's really good -- http://onthespiral.com/pilgrimage-through-stagnation-acceleration
Hat tip to Zach Obront for linking me to it originally.
-- Reagan and Scipio debate the nature of definitions. From Templar, Arizona
Eight Ways to Build Collaborative Teams by Lynda Gratton and Tamara J. Erickson
This seems applicable as the LessWrong community is "large, virtual, diverse, and composed of highly educated specialists" and the community wants to solve challenging projects.
Cool! I've looked for that manifesto on line before, and failed to find it; thanks for the link! Too many people seem to get all of their knowledge of the Vienna Circle and Logical Positivism from its critics. It's good to look at the primary sources. The translation is a little clunky (perhaps too literal), but so much better than not having it available at all.
I agree.
The Logical Positivists were, to my mind, the greatest philosophers ever, and it's a shame they have been the target of so much unfair criticism. Of course they were wrong on many issues, but their attitude towards philosophy, knowledge and political action is unsurpassed. If we can revive their spirit again, philosophy will have a bright future.
What was the logical positivist position on political action? Are you talking about things like getting evolution out of science classes, or are you talking about something else?
The Logical Positivists were mostly pretty far left, but they mostly didn't engage in much political advocacy; though this was controversial among members of the movement (Neurath thought they should be more overtly political), most of them seemed to think that helping people think more clearly and make better use of science was a better way to encourage superior outcomes than advocating specific policies. They were also involved in various causes, though; many members of the Vienna Circle were involved in adult education efforts in Vienna, for example. The more I think about it, the more I think it's pretty accurate to say they had a lot in common with the Less Wrong crowd in their approach to politics (though they were almost certainly further left, even taking into account that the surveys suggest Less Wrong itself is further left than many people seem to realize).
I'm talking primarily of their resistance to Nazism, and how they saw intellectual and political struggles as inextricably intertwined. In this they were very similar to the French revolutionaries. See for instance this article where Carnap criticizes the Nazi philosopher Heidegger in his usual meticulous and over-dry manner. Amazing that he managed to keep so cool in the face of such evil stupidity.
After the war, the US and Britain became the heart of analytic philosophy, and much of the seriousness of the Vienna Circle (and also Popper) disappeared. What replaced it was a rather frivolous, smart aleck kind of philosophy personified especially by people like Lewis and Kripke, but to some degree also Quine, Davidson, Austin and others.
In his excellent The Decline of the German Mandarins, Fritz Ringer shows that German academia grew increasingly dominated by mad romantic reactionaries from 1890 to 1933 (where the book ends). It seems to me (and I think, but am not sure, that Ringer touches upon this at some point) that this, however, spurred real thinkers in the enlightenment tradition to greater heights than they otherwise would have reached. They were forced to focus on the big questions, to come up with fundamental reasons for why you should adopt the rationalist perspective, because, unlike in the Anglo-Saxon world, this perspective had a terrifying opponent in the form of romantic reaction. Ringer mostly focuses on the great sociologist Max Weber and others like him, but I think that a similar story can be told about the Vienna Circle (I don't recall whether he comments on them).
Terry Coxon
All I'm getting out of this is that the quoted author fails to understand the ability of great minds. Is there a context I'm missing?
Being ready for failure is not quite the same thing as considering success impossible.
The context is that economics is in, shall we say, an earlier stage of development than engineering, so we should be more conscious of the risk of economic tinkering failing than we need be of whether our bridge or plane falls apart underneath us.
This is a great tagline for the doctrine of Original Sin.
"Even if it's not your fault, it's your punishment."
-- Tom Stoppard, The Real Thing
-- Daniel Dennett, Intuition Pumps and Other Tools for Thinking
Are we sure about this? Einstein's idea of riding along with a light beam was super-useful and physically impossible in principle. Whereas the experiment I just thought of where I pour my cup of tea on my trousers I can almost not be bothered to do.
This is funny. Until I read your comment, I was misreading the original quote; I didn't notice the "inversely" part. I was implicitly thinking that the quote was claiming that the farther the thought experiment is from reality, the more useful it is. I guess my physicist biases are showing.
I think that's my point! It sounds just as profound without the 'inversely'.
Ceteris paribus, then. On average, a thought experiment along the lines of "what if I poured this stuff on my trousers" is of much more practical use and tells you much more about reality than a thought experiment along the lines of "what if I could ride around on [intangible thing]". The most realistic thought experiments are the ones we do all the time, often without thinking, and which help us decide, for example, not to balance that cup of tea right on the edge of the table. Meanwhile, only very clever scientists and philosophers with lots of training can wring anything useful out of really far-out "what if I rode on a beam of light"-type thought experiments, and even they screw it up all the time and are generally well-advised not to base a conclusion solely on such a thought experiment. As I understand it, Einstein's successful use of gedankenexperiments to come up with good new ideas is generally considered evidence of his exceptional cleverness.
(note: I know very little about this topic and may be playing very fast and loose. I think the main idea is sensible, though)
~J. Stanton, "The Paleo Identity Crisis: What Is The Paleo Diet, Anyway?"
It is not at all clear that someone who knows all the biochemistry will outperform someone who's good at feeling what goes on in his body.
In the absence of good measurement instruments, feelings allow you to respond to specific situations much better than theoretical understanding does.
But the answers might be specific to each individual because the biochemistry of humans is not exactly the same.
In that case, the questions have complicated answers. The best dieting advice might be "first sequence your personal microbiome then consult this lookup table..."
G. K. Chesterton, attributed.
Upvoted. I would've preferred the following version:
It has come to be accepted practice in introducing new physical quantities that they shall be regarded as defined by the series of measuring operations and calculations of which they are the result. Those who associate with the result a mental picture of some entity disporting itself in a metaphysical realm of existence do so at their own risk; physics can accept no responsibility for this embellishment.
Sir Arthur Eddington, 1939, The Philosophy of Physical Science
It is, in fact, a very good rule to be especially suspicious of work that says what you want to hear, precisely because the will to believe is a natural human tendency that must be fought.
- Paul Krugman
Jerry Spinelli, Stargirl
So as to keep the quote on its own, my commentary:
This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that "things you don't value much anymore can still provide great utility for other people" is a powerful lesson in general.
Jessica speaking to Thufir Hawat in Frank Herbert's Dune
– Said Achmiz, in a comment on Slate Star Codex’s post “The Cowpox of Doubt”
I'm not sure that's quite in the spirit of the thread rules, what with how closely tied Slate Star Codex is to the LW community. But it's a good enough abuse of Solzhenitsyn that I'm upvoting it anyway.
Am I the only one who finds it annoying how the "do not quote LW rule" has been creeping into ever broader interpretations?
Hmm. It's an interesting point.
I'm not entirely clear on the purpose of the rule. It makes sense to not just increase the redundancy of anything people have said in other threads that have already got a lot of attention, but I'm sure there's plenty of interesting stuff buried deep in comment threads that haven't got much light and might be worth sharing. Conversely, there will be some quotes here from outside LW/OB that a high proportion of readers have seen already.
So it's definitely something that made sense when the LW/OB community was smaller and there wasn't much good stuff that people weren't seeing anyway, but perhaps it's time to relax the rule a little bit, or replace its letter with its substance.
I believe the purpose was to bring material to LW from outside rather than quoting each other (and especially, quoting Eliezer), to avoid an echo chamber effect. There was once an experimental LW Quotes Thread, but the experiment has not been repeated.
I don't have a strong view about whether LW regulars posting on other LW regulars' blogs should be excluded from the quotes threads, but I incline against the practice. It was a good quote though.
Which side do you incline against?
The original quotation on LW.
I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.
Alan Turing (from "Computing Machinery and Intelligence")
A particularly relevant quote, given Yvain's recent http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/
-- Henry Hazlitt, Economics in One Lesson
And it seems to be going pretty well!
http://www.reddit.com/r/askscience/comments/e3yjg/is_there_any_way_to_improve_intelligence_or_are/c153p8w
reddit user jjbcn on trying to improve your intelligence
If you're not a student of physics, The Feynman Lectures on Physics is probably really useful for this purpose. It's free for download!
http://www.feynmanlectures.caltech.edu/
It seems like the Feynman lectures were a bit like the Sequences for those Caltech students:
He brags shamelessly about his wide variety of interests: drumming, lockpicking, PUA, biology, Tannu Tuva, etc.
The Feynman divorce:
You're right.
-- Max Tegmark, Scientific American guest blog, 2014-02-04
I would think the first objection to that line of reasoning would be that we know General Relativity is an incomplete theory of reality and expect to find something that supersedes it and gives better answers regarding black holes.
-Timothy Gowers, on finding out that a method he’d hoped would work in fact would not.
Douglas Adams, Hitchhiker's Guide to the Galaxy
-Daniel Dennett, Intuition Pumps and Other Tools for Thinking, Chapter 18 "The Intentional Stance" [Bold is original]
Reminded me of the idea of 'hacking away at the edges'.
As far as I understand, he actually does define his terms. Dennett defines a mind as a rational agent/decision algorithm (subject to evolutionary baggage and bugs in the algorithm). Please correct me if I'm wrong.
-- many different people, most recently user chipaca on HN
It occurs to me that "being wrong" can be divided into two subcategories -- before and after you start seeing evidence or arguments which undermine your position.
With practice, the feeling of being right and seeing confirming information can be distinguished from the feeling of being wrong and seeing undermining information. Unfortunately, the latter feeling is very uncomfortable and it is always tempting to look for ways to lessen it.
Hmm, what about such things as feeling that you need to defend the truth from criticism rather than find a way to explain it better? Or nagging doubts that you're ignoring, or a feeling that your opponents are acting the way they are because they're stupid or evil? Or wanting to censor someone else's speech? I take all these things as alarm signals.
A communist friend of mine once said, after I'd nailed her into a corner in a political argument about appropriate rates of pay during a fireman's strike, "Well under socialism there wouldn't be as many fires.". I reckon that there must be a feeling associated with that sort of thing.
Defending the truth from criticism also feels exactly the same as defending what you wrongly think is the truth from criticism.
The feelings you list correspond to very common ways people behave. So they're very weak evidence that you're wrong about something. Unless you're a trained rationalist who very rarely has these feelings / behaviors.
Most people first acquire a belief - whether by epistemologically legitimate ways or not - and then proceed to defend it, ignore contrary evidence and feel opponents to be stupid, because that's just the way most people deal with beliefs that are important to them.
This is the most forceful version I've seen (assumed it had been posted before, discovered it probably hasn't, won't start a new thread since it's too similar):
Kathryn Schulz, Being Wrong
But I'm not comfortable endorsing either of these quotes without a comment.
chipaca's quote (and friends) suggest to me that
Schulz's quote (and book) suggest to me that
I'd prefer to emphasize that "You are already in trouble when you feel like you’re still on solid ground," or said another way:
Becoming less wrong feels different from the experience of going about my business in a state that I will later decide was delusional.
Schulz hasn't been quoted here before, but you might've seen my use of that quote on http://www.gwern.net/Mistakes, to which I will add a quote of Wittgenstein making the same point much more compactly and concisely:
"It is one thing for you to say, ‘Let the world burn.' It is another to say, ‘Let Molly burn.' The difference is all in the name."
-- Uriel, Ghost Story, Jim Butcher
from The Last Samurai by Helen DeWitt
The mathematician and Fields medalist Vladimir Voevodsky on using automated proof assistants in mathematics:
[...]
[...]
[...]
[...]
From a March 26, 2014 talk. Slides available here.
I know you're not supposed to quote yourself, but I came up with a cool saying about this a while back and I just want to share it.
Computer proof verification is like taking off and nuking the whole site from orbit: it's the only way to be sure.
Plutarch, "De Auditu" (On Listening), a chapter of his Moralia.
This essay is also the original source of the much-quoted line "The mind is not a pot to be filled, but a fire to be ignited." It is variously attributed, but is a fair distillation of the original passage, which comes directly before the quote above:
Yuval Levin in the National Review
To the extent that we can overcome our current limits, we have to understand them first. We should beware false humility and rationalization of existing limits (e.g. deathism).
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
Hans Moravec, Wikipedia/Moravec's Paradox
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Stephen Pinker, Wikipedia/Moravec's Paradox
What was the ratio of phone time spent talking to human vs computer receptionists when Pinker published this quote in 2007? For that matter, how much non-phone time was being spent using a website to perform a transaction that would have previously required interaction with a human receptionist?
Pinker understood AI correctly (it's still way too hard to handle arbitrary interactions with customers), yet he failed to predict the present, much less the future, because he misunderstood the economics. Most interactions with customers are very non-arbitrary. If 10% need human intervention, then you put a human in the loop after the other 90% have been taken care of by much-cheaper software.
If you were to say "a machine can't do everything a horse can do", you'd be right, even today, but that isn't a refutation of the effect of automation on the economic prospects of equine labor.
Except that in exponentially-increasing computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has isn't long.
Let's hope that we're not still paying rent then, or we might find ourselves homeless.
Raising Steam, Terry Pratchett
Regarding the first steam engine in Pratchett's fictional world.
Relevant is the Amtal Rule on this same page: http://lesswrong.com/r/lesswrong/lw/jzn/rationality_quotes_april_2014/as28