Rationality Quotes April 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
And one new rule:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Source: http://www.prequeladventure.com/2014/05/3391/
thank you for posting this - now I have something new to read!
-- Tom Stoppard, The Real Thing
Plutarch, "De Auditu" (On Listening), a chapter of his Moralia.
This essay is also the original source of the much-quoted line "The mind is not a pot to be filled, but a fire to be ignited." It is variously attributed, but is a fair distillation of the original passage, which comes directly before the quote above:
Scott Adams on consciously controlling your own moods and feelings
It has come to be accepted practice in introducing new physical quantities that they shall be regarded as defined by the series of measuring operations and calculations of which they are the result. Those who associate with the result a mental picture of some entity disporting itself in a metaphysical realm of existence do so at their own risk; physics can accept no responsibility for this embellishment.
Sir Arthur Eddington, 1939, The Philosophy of Physical Science
Or with smart people who profit at the state's expense when it rescues fools from their mistakes. If it's known that folly has no adverse results, people will take more risks.
While this is true, it may also be the case that humans in the default state don't take enough risks. Indeed, an inventor or entrepreneur bears all the costs of bankruptcy but captures only some of the benefits of a new business. By classical economic logic, then, risk-taking is a public good, and undersupplied. Which said, admittedly, not all risk-taking is created equal.
That's exactly wrong. Bankruptcy releases the entrepreneur from his obligations and transfers the costs to his creditors.
Not to say that the bankruptcy is painless, but its purpose is precisely to lessen the consequences of failure.
This premise doesn't seem true (for all that the conclusion is accurate). Our entire notion of bankruptcy serves the purpose of putting limits on the cost of those risks, transferring the burden onto creditors. An example of an alternate cultural construct that comes closer to making the entrepreneur bear all the costs of the risk is debt slavery. Others include various forms of formal or informal corporal or capital punishment applied to those who cannot pay their debts.
Whilst arguing that uncertainty is best measured using numbers and probabilities:
[missing the point]
On the contrary, combining adverbs is easy. If X is very uncertain, and Y is very uncertain, then X - Y is very, very uncertain. [/missing the point]
^_^
– Said Achmiz, in a comment on Slate Star Codex’s post “The Cowpox of Doubt”
The original quotation on LW.
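The joke aside, numeric uncertainties really do combine lawfully, which is the point of measuring uncertainty with numbers rather than stacking adverbs. A minimal simulation sketch (the standard deviations here are made-up illustration values): for independent errors, the uncertainty of X - Y adds in quadrature, not adverbially.

```python
import math
import random

random.seed(0)

# Two independent measurements, each "very uncertain" (std dev 1.0 -- assumed).
sigma_x = sigma_y = 1.0
samples = [random.gauss(10, sigma_x) - random.gauss(7, sigma_y)
           for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# Independent uncertainties add in quadrature:
# sigma(X - Y) = sqrt(sigma_x^2 + sigma_y^2), not "very, very" anything.
predicted = math.sqrt(sigma_x ** 2 + sigma_y ** 2)
print(round(math.sqrt(var), 2), round(predicted, 2))
```

The simulated spread of the difference comes out near sqrt(2), as the quadrature rule predicts.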
Yuval Levin in the National Review
To the extent that we can overcome our current limits, we have to understand them first. We should beware false humility and rationalization of existing limits (e.g. deathism).
Eight Ways to Build Collaborative Teams by Lynda Gratton and Tamara J. Erickson
This seems applicable as the LessWrong community is "large, virtual, diverse, and composed of highly educated specialists" and the community wants to solve challenging projects.
-- many different people, most recently user chipaca on HN
It occurs to me that "being wrong" can be divided into two subcategories -- before and after you start seeing evidence or arguments which undermine your position.
With practice, the feeling of being right and seeing confirming information can be distinguished from the feeling of being wrong and seeing undermining information. Unfortunately, the latter feeling is very uncomfortable and it is always tempting to look for ways to lessen it.
Hmm, what about such things as feeling that you need to defend the truth from criticism rather than find a way to explain it better? Or nagging doubts that you're ignoring, or a feeling that your opponents are acting the way they are because they're stupid or evil? Or wanting to censor someone else's speech? I take all these things as alarm signals.
A communist friend of mine once said, after I'd nailed her into a corner in a political argument about appropriate rates of pay during a firemen's strike, "Well, under socialism there wouldn't be as many fires." I reckon that there must be a feeling associated with that sort of thing.
Defending the truth from criticism also feels exactly the same as defending what you wrongly think is the truth from criticism.
The feelings you list correspond to very common ways people behave. So they're very weak evidence that you're wrong about something. Unless you're a trained rationalist who very rarely has these feelings / behaviors.
Most people first acquire a belief - whether by epistemologically legitimate ways or not - and then proceed to defend it, ignore contrary evidence and feel opponents to be stupid, because that's just the way most people deal with beliefs that are important to them.
This is the most forceful version I've seen (assumed it had been posted before, discovered it probably hasn't, won't start a new thread since it's too similar):
Kathryn Schulz, Being Wrong
But I'm not comfortable endorsing either of these quotes without a comment.
chipaca's quote (and friends) suggest to me that
Schulz's quote (and book) suggest to me that
I'd prefer to emphasize that "You are already in trouble when you feel like you’re still on solid ground," or said another way:
Becoming less wrong feels different from the experience of going about my business in a state that I will later decide was delusional.
Schulz hasn't been quoted here before, but you might've seen my use of that quote on http://www.gwern.net/Mistakes, to which I will add a quote of Wittgenstein making the same point, but much more compressed and concise:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
Hans Moravec, Wikipedia/Moravec's Paradox
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Steven Pinker, Wikipedia/Moravec's Paradox
What was the ratio of phone time spent talking to human vs computer receptionists when Pinker published this quote in 2007? For that matter, how much non-phone time was being spent using a website to perform a transaction that would have previously required interaction with a human receptionist?
Pinker understood AI correctly (it's still way too hard to handle arbitrary interactions with customers), yet he failed to predict the present, much less the future, because he misunderstood the economics. Most interactions with customers are very non-arbitrary. If 10% need human intervention, then you put a human in the loop after the other 90% have been taken care of by much-cheaper software.
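The human-in-the-loop economics can be made concrete with a back-of-the-envelope sketch (all of these cost figures are invented for illustration, not from the comment): even if a human is still required for the hard residual cases, automating the routine bulk slashes total cost.

```python
# All numbers are assumptions for illustration.
interactions = 1_000_000   # customer interactions per year
human_cost = 2.00          # cost per interaction handled by a person
software_cost = 0.02       # cost per interaction handled by software
automatable = 0.90         # fraction the software can handle routinely

cost_before = interactions * human_cost
cost_after = interactions * (automatable * software_cost
                             + (1 - automatable) * human_cost)

savings = 1 - cost_after / cost_before
print(cost_before, cost_after, round(savings, 3))
```

Under these toy numbers the humans kept "in the loop" for the hard 10% still leave total cost cut by roughly 89% -- which is why the job losses arrive long before the AI can handle arbitrary interactions.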
If you were to say "a machine can't do everything a horse can do", you'd be right, even today, but that isn't a refutation of the effect of automation on the economic prospects of equine labor.
Except that in exponentially-increasing computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has isn't long.
Let's hope that we're not still paying rent then, or we might find ourselves homeless.
Raising Steam, Terry Pratchett
Regarding the first steam engine in Pratchett's fictional world.
Relevant is the Amtal Rule on this same page: http://lesswrong.com/r/lesswrong/lw/jzn/rationality_quotes_april_2014/as28
Terry Coxon
"Did many people die?"
"Three thousand four hundred and ninety-two."
"A small proportion."
"It is always one hundred percent for the individual concerned."
"Still..."
"No, no still."
-Iain M. Banks, Look to Windward
Does this quote have any rationalist content beyond the usual anti-deathism applause light?
And here I looked at that and saw an example of how not to do "shut up and multiply", though I suppose it could also be a warning about scope insensitivity / psychophysical numbing if the risk at hand required an absolute payment to stave off rather than a per-capita payment, since in the former case only absolute numbers matter, and in the latter case per-capita risks matter.
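A toy calculation of that absolute-vs-per-capita distinction (all numbers except the death toll are invented for illustration): with a single absolute payment, only the total toll against the total cost matters; with a per-capita payment, each person weighs their own risk, which depends heavily on population size.

```python
# Toy numbers, purely illustrative -- not from the book.
population_a = 10_000        # a small habitat
population_b = 10_000_000    # a large one
expected_deaths = 3_492      # the toll from the Banks quote
value_per_life = 1_000.0     # arbitrary units
cost_to_avert = 1_000_000.0  # total price of staving off the risk

# Absolute payment: one payer weighs total toll against total cost,
# so the population size is irrelevant to the decision.
worth_paying = expected_deaths * value_per_life > cost_to_avert

# Per-capita payment: each person weighs their own risk against their
# own share, so the per-capita risk is what matters.
per_capita_risk_a = expected_deaths / population_a
per_capita_risk_b = expected_deaths / population_b
print(worth_paying, per_capita_risk_a, per_capita_risk_b)
```

The same absolute toll corresponds to per-capita risks that differ by a factor of a thousand here, which is exactly where scope insensitivity bites.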
Maybe I need to include more context. This conversation occurs after the multiplication was done. This was discussing the aftermath, which had been minimized as much as the minds in question could manage. I took it to mean that, once you have made the best decision you can, there is no guarantee that you will be happy with the outcome, just that it would likely have been worse had you made any other decision.
I think that needing that extra context to make your interpretation clear means it's a bad rationality quote, because on its own it's far too easily taken as a "consequentialism boo!" quote.
This is a great tagline for the doctrine of Original Sin.
"Even if it's not your fault, it's your punishment."
Clifford Truesdell
I don't see why an equation can't be nonsensical. Perhaps the nonsense is easier to spot when expressed in symbols, or then again perhaps not.
This is beautiful: I can't turn it into equations. Does that refute it or support it?
Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
On thrust work, drag work, and why creative work is perpetually frustrating --
"Each individual creative episode is unsustainable by its very nature. As a given episode accelerates, surpassing the sustainable long term trajectory, the thrust engine overwhelms the available supporting capabilities. ... Just as momentum builds to truly exciting levels… some new limitation appears squelching that momentum. ... The problem is that you outran your supporting capabilities and that deficit became a source of drag. Perhaps you didn’t have systems in place to capture leads. Perhaps you lacked the bandwidth necessary to follow up on all the new opportunities. Perhaps, due to lack of experience, you pursued the wrong opportunities. Perhaps you just didn’t know what to do next – you outran your existing knowledge base. In one way or another new varieties of drag emerge. The accelerating curve you had been riding becomes unsustainable and you find yourself mired in the slow build of the next episode. This is what we experience as anti-climax and temporary stagnation." -- Greg Raider, from his essay "A Pilgrimage Through Stagnation and Acceleration"
The whole piece is worth reading, it's really good -- http://onthespiral.com/pilgrimage-through-stagnation-acceleration
Hat tip to Zach Obront for linking me to it originally.
Donald Knuth on the difference between theory and practice.
Duplicate.
-- Daniel Dennett, Intuition Pumps and Other Tools for Thinking
Are we sure about this? Einstein's idea of riding along with a light beam was super-useful and physically impossible in principle. Whereas the experiment I just thought of where I pour my cup of tea on my trousers I can almost not be bothered to do.
This is funny. Until I read your comment, I was misreading the original quote; I didn't notice the "inversely" part. I was implicitly thinking that the quote was claiming that the farther the thought experiment is from reality, the more useful it is. I guess my physicist biases are showing.
I think that's my point! It sounds just as profound without the 'inversely'.
Ceteris paribus, then. On average, a thought experiment along the lines of "what if I poured this stuff on my trousers" is of much more practical use and tells you much more about reality than a thought experiment along the lines of "what if I could ride around on [intangible thing]". The most realistic thought experiments are the ones we do all the time, often without thinking, and which help us decide, for example, not to balance that cup of tea right on the edge of the table. Meanwhile, only very clever scientists and philosophers with lots of training can wring anything useful out of really far-out "what if I rode on a beam of light"-type thought experiments, and even they screw it up all the time and are generally well-advised not to base a conclusion solely on such a thought experiment. As I understand it, Einstein's successful use of gedankenexperiments to come up with good new ideas is generally considered evidence of his exceptional cleverness.
(note: I know very little about this topic and may be playing very fast and loose. I think the main idea is sensible, though)
I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.
Alan Turing (from "Computing Machinery and Intelligence")
Particularly relevant a quote given Yvain's recent http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/
Can you provide some context? I don't understand: the claim that the evidence for telepathy is very strong is surely wrong, so is this sarcasm? A wordplay?
Turing's 1950 paper asks, "Can machines think?"
After introducing the Turing Test as a possible way to answer the question (in, he expects, the positive), he presents nine possible objections, and explains why he thinks each either doesn't apply or can be worked around. These objections deal with such topics as souls, Gödel's theorem, consciousness, and so on. Psychic powers are the last of these possible objections: if an interrogator can read the mind of a human, they can identify a human; if they can psychokinetically control the output of a computer, they can manipulate it.
From the context, it does seem that Turing gives some credence to the existence of psychic powers. This doesn't seem all that surprising for a British government mathematician in 1950. This was the era after the Rhines' apparently positive telepathy research — and well before major organized debunking of parapsychology as a pseudoscience (which started in the '70s with Randi and CSICOP). Governments including the US, UK, and USSR were putting actual money into ESP research.
Yes, but also remember that Turing's English, shy, and from King's College, home of a certain archness and dry wit. I think he's taking the piss, but the very ambiguity of it was why it appealed as a rationality quote. He's facing the evidence squarely, declaring his biases, taking the objection seriously, and yet there's still a profound feeling that he's defying the data. Or maybe not. Maybe I just read it that way because I don't buy telepathy.
I think Turing's willingness to take all comers seriously is something to emulate.
Hodges claims that Turing at least had some interest in telepathy and prophecies:
Alan Turing: The Enigma (Chapter 7)
Jerry Spinelli, Stargirl
So as to keep the quote on its own, my commentary:
This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that "things you don't value much anymore can still provide great utility for other people" is a powerful lesson in general.
-- Max Tegmark, Scientific American guest blog, 2014-02-04
I would think the first objection to that line of reasoning would be that we know General Relativity is an incomplete theory of reality and expect to find something that supersedes it and gives better answers regarding black holes.
Douglas Adams, Hitchhiker's Guide to the Galaxy
Thanks for this one. It's been some time since I re-read Douglas Adams, and I'd forgotten how good he can be. It makes so much sense reading this right after reading "Bind yourself to Reality". Had a good long guffaw out of this one. :-)
"It is one thing for you to say, ‘Let the world burn.' It is another to say, ‘Let Molly burn.' The difference is all in the name."
-- Uriel, Ghost Story, Jim Butcher
-- Henry Hazlitt, Economics in One Lesson
Cracked
Finding out that you're stupid (or ignorant) is an important start. I don't recommend insulting people because they've started rather than continued the job, especially if they're young.
I don't see how that's any different from all the other age groups ;-).
It is, in fact, a very good rule to be especially suspicious of work that says what you want to hear, precisely because the will to believe is a natural human tendency that must be fought.
- Paul Krugman
The mathematician and Fields medalist Vladimir Voevodsky on using automated proof assistants in mathematics:
[...]
[...]
[...]
[...]
From a March 26, 2014 talk. Slides available here.
Computer scientists seem much more ready to adopt the language of homotopy type theory than homotopy theorists at the moment. It should be noted that there are many competing new languages for expressing the insights garnered by infinity groupoids. Though Voevodsky's language is the only one that has any connection to computers, the competing language of quasi-categories is more popular.
I know you're not supposed to quote yourself, but I came up with a cool saying about this a while back and I just want to share it.
Computer proof verification is like taking off and nuking the whole site from orbit: it's the only way to be sure.
A video of the whole talk is available here.
And his textbook on the new univalent foundations of mathematics in homotopy type theory is here.
It is misleading to attribute that book solely to Voevodsky.
-Daniel Dennett, Intuition Pumps and Other Tools for Thinking, Chapter 18 "The Intentional Stance" [Bold is original]
Reminded me of the idea of 'hacking away at the edges'.
-- Alfred Adler
ADDED: Source: http://en.wikiquote.org/wiki/Alfred_Adler
Quoted in: Phyllis Bottome, Alfred Adler: Apostle of Freedom (1939), ch. 5
Problems of Neurosis: A Book of Case Histories (1929)
Comedian Simon Munnery:
(Edited to add context)
Context: The speakers work for a railroad. An important customer has just fired them in favor of a competitor, the Phoenix-Durango Railroad.
It gets at the idea talked about here sometimes that reality has no obligation to give you tests you can pass; sometimes you just fail and that's it.
ETA: On reflection, what I think the quote really gets at is that Taggart cannot understand that his terminal goals may be only someone else's instrumental goals, that other people are not extensions of himself. Taggart's terminal goal is to run as many trains as possible. If he can help a customer, then the customer is happy to have Taggart carry his freight, and Taggart's terminal goal aligns with the customer's instrumental goal. But the customer's terminal goal is not to give Taggart Inc. business, but just to get his freight shipped. If the customer can find a better alternative, like a competing railroad, he'll switch. For Taggart, of course, that is not a better alternative at all, hence his anger and confusion.
(Apologies for lack of context initially).
Without context, it's a bit difficult to see how this is a rationality quote. Not everyone here has read Atlas Shrugged...
I've read AS a while ago, and I still don't remember enough of the context to interpret this quote...
-- Meta --
Shouldn't this be in Main rather than Discussion? I PM'ed the author, but didn't get a response.
EDIT: Thanks.
-- Richard Fumerton, Epistemology
"Go work in AI for a while, then come back and write a book on epistemology," he thought.
Upon reading this, he wanted to map out the argumentative space in his head and decided to try to draw a line at one end, saying "Let's not get nuts. Mercury thermometers can react differentially to temperature, but they don't know how hot it is."
[citation needed]
Really? So, say, if I put a bone on the other side of the river, the dog doesn't know that it can swim across?
How would one tell?
First, you offer them a sequence of bets such that...oh wait.
from The Last Samurai by Helen DeWitt
G. K. Chesterton, attributed.
Might someone offer an explanation of this to me?
On its own, I can think of several things these words might be uttered in order to express. A little searching turns up a more extended form, with a claimed source:
Said to be by G.K. Chesterton in the New York Times Magazine of February 11, 1923, which appears to be a real thing, but one which is not online. According to this version, he is jibing at progressivism, the adulation of the latest thing because it is newer than yesterday's latest thing.
ETA: Chesterton uses the same analogy, in rather more words, here.
Note that this accentuates the relevance of a detail that might be skipped over in the original quote- that Thursday comes after Wednesday. That is, this may be intended as a dismissal of the 'all change is progress' position or the 'traditions are bad because they are traditions' position.
Not to mention the people who think accusing their opponents of being "on the wrong side of history" constitutes an argument.
I think you may not be interpreting the phrase "the wrong side of history" as people who say it mean it.
There's a classic saying: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." -- Max Planck
Effectively, there's a position that's obviously correct, but there are also people who are just too hidebound and change-averse to recognize it, and progress can't be made until they die off. But progress will be made, because the position is correct. When you tell someone they are on the wrong side of history, you are reminding them that they are behaving like one of the old men Planck mentions. Put another way, what it's saying is: "if you look at people who don't come from the past and don't have a large status quo bias, you will notice a trend".
One person's status quo bias is another person's Chesterton's fence. The quote from which this comment tree branches is from Chesterton.
Is this falsifiable?
Sure, just step back in time.
A bit less than two millennia ago, one could have said "Effectively there's a position -- that Jesus gifted eternal life to humanity -- that's obviously correct, but there are also people who are just too hidebound and change-averse to recognize it, and progress can't be made until they die off. But progress will be made because the position is correct."
I was actually thinking of eugenics, which was once a progressivist "obviously correct thing where we just need to wait until these luddites die off and everything will be great", until it wasn't. Incidentally a counterexample to "Cthulhu always swims left", too.
It's a case where "correct", "right side of history" and "progress" dissociate from each other.
Interestingly, if you press the people making that claim for what they mean by "left", their answer boils down to "whatever is in Cthulhu's forward cone".
I think you could make a case for totalitarianism, too. During the interwar years, not only old-school aristocracy but also market democracy were in some sense seen as being doomed by history; fascism got a lot of its punch from being thought of as a viable alternative to state communism when the dominant ideologies of the pre-WWI scene were temporarily discredited. Now, of course, we tend to see fascism as right-wing, but I get the sense that that mostly has to do with the mainstream left's adoption of civil rights causes in the postwar era; at the time, it would have been seen (at least by its adherents) as a more syncretic position.
I don't think you can call WWII an unambiguous win for market democracy, but I do think that it ended up looking a lot more viable in 1946 than it did in, say, 1933.
For a more modern example, wouldn't that have been said for marijuana a few decades ago?
Everyone expected that once the older people who opposed marijuana died off and the hippies grew into positions of power, everyone would want it to be legal. That didn't work out. (The support for legalization has gone up recently, but not because of this.)
I suspect it is falsifiable. I might unpack it as the following sub-claims:
1. Degree of status quo bias is positively correlated with time spent in a particular status quo (my gut tells me there should be a causal link, but I bet correlation is all you could find in studies).
2. On issue X, belief that X[past] is the correct way to do X is correlated with time spent living in an X[past] regime.
2.5. Possibly a corollary to the above, but maybe a separate claim: among people who you would expect to have the least status quo bias, position X[other] is favored at much higher rates than in the general population.
For most issues, 2 and 2.5 can probably be checked with good polling data. Point 1 is the kind of thing it's possible to do studies on, so I think it's in principle falsifiable, though I don't know if such studies have actually been done.
2) is also what you would expect to see if X[past] was indeed better than X[other].
2.5) Not having status quo bias isn't equivalent to being unbiased. A large number of the people that are least likely to have status quo bias are going to be at the other end of the spectrum - chronic contrarians.
Note that which X is better may depend on circumstances (e.g. technological level).
In physics, yes. In history / political science, no.
In politics, no position is obviously correct. Claiming that one's own position is obviously correct or that history is on our side is just a way of browbeating others instead of actually making a case.
Claiming that the opponents of some newly viral idea are "on the wrong side of history" is like claiming that Klingon is the language of the future based on the growth rate when the number of speakers has actually gone from zero to a few hundred.
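The Klingon analogy can be sketched numerically (all the speaker counts below are invented for illustration): a spectacular growth *rate* measured from a near-zero base says almost nothing about eventual adoption, because naive extrapolation of that rate produces absurdities.

```python
# Invented speaker counts over three consecutive years -- illustration only.
klingon = [10, 100, 500]                           # explosive growth rates
english = [360_000_000, 365_000_000, 370_000_000]  # ~1.4% yearly growth

def project(series, years):
    """Naively extrapolate the most recent year-on-year growth rate."""
    rate = series[-1] / series[-2]
    return series[-1] * rate ** years

print(project(klingon, 10))   # absurd: billions of "speakers" in a decade
print(project(english, 10))   # modest, plausible growth
```

Ten years of the tiny language's latest growth rate projects it past the planet's population, which is the same mistake as reading a newly viral idea's momentum as "the right side of history."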
No -- you are telling them. To remind someone of a thing is to tell them what they already know. To talk of "reminding" in this context is to presume that they already know that they are wrong but won't admit it, and is just another way of speaking in bad faith to avoid actually making a case.
I strongly agree. It's possible that history has a side, but we can hardly know what it is in advance.
I don't think you agree. I think Eugine has a problem with the idea that just because an idea wins in history doesn't mean that it's a good idea.
Marx replaced what Hegel called God with history. Marx's idea was that you don't need a God to tell you what's morally right; history will tell you. Neoreactionaries don't like the sentiment that history decides what's morally right.
I'm very far from being a reactionary or neoreactionary, but I also don't put much moral weight on history - that is, on what most other people come to believe.
For one thing, believing that would mean every moral reformer who predicts for themselves only a small chance of reforming society should conclude that they are wrong about morals.
Speaking of which, let's see what history has to say about Marx. It would appear that the Marxist nations lost to a semi-religious nation. Thus, apparently, history has judged the idea that history will tell you what is right to be wrong.
I am not a neoreactionary and I think the sentiment that history decides what's morally right is a remarkably silly idea.
You have to compare it to the alternatives. Do you think it's more or less silly than the idea that there's a God in the sky judging what's right or wrong?
Marx basically had the idea that you don't need God for an absolute moral system when you can pin it all on history, which supposedly moves in a certain direction. You observe how history moves. Then you extrapolate. You look at the limit of that function, and that limit is the perfect morality. It's what someone who has a rough idea of calculus does, but who doesn't fully understand the assumptions that go into the process.
In the US, where Marx didn't have as much influence as in Europe, there are still a bunch of people who believe in young-earth creationism. On a scale of silliness, that's much worse.
Today the postmodernists rule liberal thought, but there are still relics of Marxist ideas. Part of what being modern was about was having an absolute moral system. Whether or not those people are silly is also open for debate.
I think they're both quite silly. Also, the fact that many people believe in God as a source of morality is itself a reason why history (i.e. the actions of those people) is a bad moral guide.
Surely most pre-modern philosophers also had absolute moral systems?
Sure. Let's compare it to the alternative: that morality is partially biologically hardwired and partially culturally determined. By comparison, the idea that "history decides what's morally right" is silly.
Yep, he had this idea. That doesn't make it a right idea. Marx had lots of ideas which didn't turn out well.
Oh, so -- keeping in mind we're on LW -- the universe tiled with paperclips might turn out to be the perfect morality? X-D
And remind me, how well does extrapolation of history work?
Do you, by any chance, believe there is a causal connection between these two observations that you jammed into a single sentence?
There are (at least) two things wrong with "the right side of history". One is that we can't know that history has a side, or what side that might be, because a tremendous amount of history hasn't happened yet; the other error is that history might prefer worse outcomes in some sense.
I find the first sort of error so annoying that I normally don't even see the second.
My impression is that Eugene is annoyed by both sorts of error, but I hope he'll say where he stands on this.
There's a third thing wrong with it: generally, people use the phrase in order to praise one side of some historical dispute (and implicitly condemn the other) by attributing to them (in part or in whole) some historical change that is deemed beneficial by the person doing the praising. The problem is that when you go back and look at the actual goals of the groups being praised, they usually end up bearing very little relation to the changes the praiser is trying to associate them with, if not being completely antithetical. Herbert Butterfield (who I posted about above) initially noticed this in the tendency of people to attribute modern notions of religious toleration to the Protestant Reformation, when in fact Martin Luther wrote songs about murdering Jews, and lobbied the local princes to violently suppress rival Protestant sects.
I hadn't even thought of the first objection, possibly because I stopped considering "what side history is on" a useful concept after noticing the second one.
So you are not going to argue that history has shown that socialism has failed?
That's using history as evidence. What I was complaining about is closer to the people who declare that all opponents of a change they plan to implement (or at best implemented only a few decades ago) are "on the wrong side of history".
Arguing about preferences (=opinions, =values) is pretty pointless.
Upvoted. I would've preferred the following version:
Nassim Taleb
By that standard, all academic disciplines are BS disciplines.
I believe that is the intended meaning, yes.
This seems false in physics. The prestige of your institution matters. The prestige of the journal matters, too. arXiv is fine, Physical Review is better, PRL is better yet. Nature/Science is so high that if you publish something not perceived as top-quality, you may be resented by others for status-jumping. And there are plenty of journals which only get to publish second- and third-rate results.
Of course, the usual countersignaling caveat applies: once you have enough status, posting on arXiv is enough; you will get read. Not submitting to journals can be seen as a sign of status, though I don't think the field is there (yet).
My understanding is that this effect is a lot smaller in physics than in the humanities.
I think, by this standard, law is a BS discipline. But I'm not sure what to make of that.
I think Spooner got it right:
-Lysander Spooner from "An Essay on the Trial by Jury"
There is legitimate law, but not once law is licensed, and the system has been recursively destroyed by sociopaths, as our current system of law has been. At such a point in time, perverse incentives and the punishment of virtue attracts sociopaths to the study and practice of law, and drives out all moral and decent empaths from its practice. If not driven out, it renders them ineffective defenders of the good, while enabling the prosecutors who hold the power of "voir dire" jury-stacking to be effective promoters of the bad.
The empathy-favoring nature of unanimous, proper (randomly-selected) juries trends toward punishment only in cases where 99.9% of society nearly-unanimously agree on the punishment, making punishment rare. ...As it should be in enlightened civilizations.
What exactly is meant by the phrase "BS discipline"? Is the claim that most scholarship in law is meaningless nonsense? Or is the claim that there is no societal value at all in law? Or is it something else?
I suppose a discipline is BS if, in the case of a science, it fails to systematically track the realities of an object of study. In the case of a trade, like business management or welding, it's a BS discipline if it fails to make its practitioners more successful than those outside the discipline. I'm not sure what kind of a discipline law is.
Taleb's thought, I suppose, is that a discipline is likely to be BS if, instead of directly measuring the capabilities of its practitioners, we tend to measure only indirectly. This only implies that direct measurement is costly enough to outweigh its benefits, however. One reason for that may be that there's nothing to measure directly (i.e. the discipline is BS), but another might be that the discipline is so specialized that very few people are competent to judge any given applicant. Yet a third might be that its subject matter attracts a lot of mind-killing, so that one cannot confidently judge an applicant without bias.
I agree that it's difficult to tell how good a lawyer is, which leads to a lot of nonsense like firms spending a lot of money on impressive offices and spending hours and hours of time chasing down every last grammatical error before filing court papers.
This is true for a lot of professions. Most of them don't have the problem you're describing.
Would you mind giving me three examples? This would help me think about what you are saying. TIA.
Interesting. There are famous cases of self-taught lawyers from previous centuries.
I wonder if this says something bad about the modern legal system. Maybe the modern legal system is less about making arguments based on how the law works (or should work) than about the lawyer signaling high status to the judge so that he rules in your favor.
There are famous cases of self-taught specialists in scientific fields, too. There aren't so many of them nowadays. That's because both the law and science are in a state where a practitioner must know a lot of details that didn't exist as part of the field in earlier days.
I don't think I have good reason to think this is the case. At any rate, it's clear enough that the prestige bit seems to come in heavily in hiring decisions, so let's just talk about that. How, in the ideal case, do you think lawyers would be evaluated for jobs? Off hand, I can't think of anything a lawyer could produce to show that she's a good hire.
I'm not a lawyer, and English law is different from American, but I reckon that I can tell the difference between good and bad lawyers by talking to them for a while about various cases in their speciality and listening to them explain the various arguments and counter-arguments.
I've heard people who make a good living from the law make incoherent wishful-thinking type arguments about which way a case should have gone, when I can see perfectly well how the judge was compelled to the conclusion that he came to. I wouldn't want such a person defending me.
Presumably if you are yourself a good lawyer, it shouldn't be too difficult to do this. The law is fairly logical and rigorous.
Well, if his "reality distortion field" was powerful enough to also affect judges.
Well - law is, in a strict sense, entirely about convincing other humans that your interpretation is correct.
Whether or not it actually is correct in a formal sense is entirely screened off by that prime requirement, and so you probably shouldn't be surprised that all methods used by humans to convince other humans, in the absence of absolute truth, are applied. :)
Would that include drafting a fire code for buildings? Would it include negotiating a purchase and sale agreement for a business? Would it include filing a lawsuit for unpaid wages? Would it include advising a client about the possible consequences of taking a particular tax deduction?
It's hard to see how it would, and yet all of these things are regularly done by lawyers in the course of their work.
Those are, indeed, all examples of persuading human beings.
The other two are excellent points.
"persuading human beings" is not exactly the same thing as "convincing other humans that your interpretation is correct."
Besides, in negotiating an agreement much of the attorney's job consists of (1) advising his client of issues which are likely to arise; (2) helping the client to understand which issues are more important and which are less important; and (3) drafting language to address those issues. Yes, persuasion comes into it sometimes, but it's usually not primary.
Filing a lawsuit for unpaid wages can be seen as persuasion in a general sense. If Baughn wants to claim that in a strict sense, litigation is about getting other people to do stuff, then I would agree.
Thank you.
Jessica speaking to Thufir Hawat in Frank Herbert's Dune
-Timothy Gowers, on finding out that a method he'd hoped would work in fact would not.
Nassim Taleb
-- Reagan and Scipio debate the nature of definitions. From Templar, Arizona
Nassim Taleb
flying vs aeroplanes?
http://www.reddit.com/r/askscience/comments/e3yjg/is_there_any_way_to_improve_intelligence_or_are/c153p8w
reddit user jjbcn on trying to improve your intelligence
If you're not a student of physics, The Feynman Lectures on Physics is probably really useful for this purpose. It's free for download!
http://www.feynmanlectures.caltech.edu/
It seems like the Feynman lectures were a bit like the Sequences for those Caltech students:
He brags shamelessly about his wide variety of interests: Drumming, lockpicking, PUA, biology, Tana Tuva, etc.
The Feynman divorce:
You're right.
Indeed, terse "explanations" that handwave more than explain are a pet peeve of mine. They can be outright confusing and cause more harm than good IMO. See this question on phrasing explanations in physics for some examples.
Trying to actually understand what equations describe is something I'm always trying to do in school, but I find my teachers positively trained in the art of superficiality and dark-side teaching. Allow me to share two actual conversations with my Maths and Physics teachers from school:
(Physics class)
And yet to most people, I can't even vent the ridiculousness of a teacher actually saying this; they just think it's the norm!
I haven't seen them mentioned in this thread, so thought I'd add them, since they're probably valid and worth thinking about:
The utility of a mathematical understanding, combined with the skills required for things such as mathematical proofs (or a deep understanding of physics), is low for most humans, much lower than the utility of rote memorization of some simple mathematical and algebraic rules. Consider, especially, the level of education that most people will attain, and how little exposure to abstract math and physics that time allows. Teaching such things in average classrooms may on average be both inefficient and unfair to the majority of students. You're looking for knowledge and understanding in all the wrong places.
The vast majority of public education systems are, pragmatically speaking, tools purpose-built to produce model citizens, with intelligence and knowledge gains seen as beneficial but not necessary side effects. That is, as long as the kids are off the streets, and if they also get good jobs as a side effect, that's a bonus. You're using the wrong tool for the job: either use better tools, or misuse the tools you have to get the job you want done.
Ahem:
For every EY quote, there exists an equal and opposite ~~EY~~ PC Hodgell quote:
(That was P.C. Hodgell, not EY.)
Amusing, although I'll point out that there are some subtle difference between a physics classroom and the MOR!universe. Or at least, I think there are...
I will only say that when I was a physics major, there were negative course numbers in some copies of the course catalog. And the students who, it was rumored, attended those classes were... somewhat off, ever after.
And concerning how I got my math PhD, and the price I paid for it, and the reason I left the world of pure math research afterwards, I will say not one word.
Were there also course numbers with a non-zero complex part?
Were there tentacles involved? Strange ethereal piping? Anything rugose or cyclopean in character?
I think we can safely say there were non-Euclidean geometries involved.
So you wanted to know not how to derive the solution but how to derive the derivation?
I wouldn't blame the teacher for not going there. There's not enough time in class to do something like that: bringing the students to understand the presented math is hard enough, and describing the process by which this math was found would take far longer, because for the harder problems, dozens of mathematicians may have studied the question for centuries to find the derivations your teacher presents to you.
What's wrong with saying something to the effect of "There's a theorem -- it's not really within the scope of this course, but if you're really interested it's called the fixed-point theorem, you can look it up on Wikipedia or somewhere"?
Derive the derivation? Huh? And you say that's different from 'understanding' it. No, I just didn't have the most basic of intuitive ideas as to why he suddenly made an iterated equation, and I didn't understand why it worked, at any level. It was all just abstract symbol manipulation with no content for me, and that's not learning.
Furthermore, he does have the time. We have nine hours a week. With a class size of four pupils.
He may actually not know. People who teach maths are often not terribly good at it. Why don't you post the equation and the thing he turned it into? One of us will probably be able to see what is going on.
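For context on what the teacher may have been doing: turning an equation into an iterated one is the standard fixed-point technique, rewriting the equation as x = f(x) and applying f repeatedly; the fixed-point theorem mentioned above guarantees convergence when f is a contraction. Since the actual equation wasn't posted, here is a minimal sketch using x = cos(x) as a stand-in example:

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Solve x = f(x) by repeatedly applying f, starting from the guess x0.

    Converges when f is a contraction near the fixed point (|f'(x)| < 1),
    which is the condition in the Banach fixed-point theorem.
    """
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# x = cos(x) has a unique fixed point near 0.739 (the "Dottie number").
root = fixed_point(math.cos, 1.0)
print(root)  # ≈ 0.7390851332
```

The same idea underlies many iterative schemes; Newton's method, for instance, is fixed-point iteration applied to a cleverly chosen f.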
In all fairness, at university, being lectured by people whose job was maths research and who were truly world class at it, I remember similar happenings. Although they have subtler ways of telling you to shut up. Figuring out what's going on between the steps of a proof is half the fun and it tends to make your head explode with joy when you finally get it.
I just gave a couple of terms of first year maths lectures, stuff that I thought I knew well, and the effort of going through and actually understanding everything I was talking about turned what was supposed to be two hours a week into two days a week, so I can quite see why busy people don't bother. And in the process I found a couple of mistakes in the course notes (that of course get passed down from year to year, not rewritten from scratch with every new lecturer).
In my school math education we had the standard that everything we learn gets proved. If students are not in the habit of proving math, they are not well prepared for doing real math at university, which is about mathematical proofs.
In general, the math that's not understood but merely memorized is soon forgotten and is not worth teaching in the first place.
That's a great rule, but it still has to have limits. Otherwise you couldn't teach calculus without teaching number theory and set theory and probably some algebraic structures and mathematical logic too.
What is wrong with learning logic, set theory, and number theory before (or in the context of high school, instead of) calculus?
EDIT: Personally, I think going into computer science would have been easier if in high school I learned logic and set theory my last two years rather than trigonometry and calculus.
The thing that's wrong is exactly that it would indeed have to be instead of calculus. And then students would not pass the nationally mandated matriculation exams or university entry exams, which test knowledge of calculus. One part of the system can't change independently from the others. I agree that if you're going to teach just one field of math, then calculus is not the optimal choice.
I do believe that for every field that's taught in high school, the most important theories and results should be taught: evolution, genetics, cell structure and anatomy in biology; Newtonian mechanics, electromagnetism and relativity in physics (QM probably requires too much math for any high-school program); etc.
There won't be time to prove and fully explain everything that's being shown, because time is limited, and it's better that all the people in our society know about classical mechanics and EM and relativity, than that they know about just one of them but have studied and reproduced enough experiments to demonstrate that that one theory is true compared to all alternatives of similar complexity.
And similarly, I think it would be better if everyone knew about the fundamental results of all the important fields of math than for them to be able to prove a lot of theorems in a couple of fields on high-school exams while never hearing about the other fields.
Really? I think it's very beautiful and it's what hooked me. And it's the bit the scientists use. What would you teach everyone instead?
As far as possible, we should allow students to learn more and help guide them to the sciences. But scientists are in the end a small minority of the population and some things are important to teach to everyone. I don't think calculus passes that test, and neither does classic geometry and analytic geometry, which received a lot of time in my school.
Instead I would teach statistics, basic probability theory, programming (if you can sell it as applied math), basic set and number theory (e.g. countable and uncountable infinities, rational and real numbers), basic computer science with some important cryptography results given without proof (e.g. public-key encryption). At least one of these should demonstrate the concept of mathematical proofs and logic (set theory is a good candidate).
Interesting question. I'm a programmer who works in EDA software, including using transistor-level simulations, and I use surprisingly little math. Knowing the idea of a derivative (and how noisy numerical approximations to them can be!) is important, but it is really rare for me to actually compute one. It is reasonably common to run into a piece of code that reverses the transformation done by another piece of code, but that is about it. The core algorithms of the simulators involve sophisticated math, but that is stable and encapsulated, so it is mostly a black box. As a citizen, statistics are potentially useful, but mostly just at the level of: this article quotes an X% change in something with N patients; does it look like N was large enough that this could possibly be statistically significant? But usually the problem with such studies is the systematic errors, which are essentially impossible for a casual examination to find.
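The parenthetical about noisy numerical derivatives is worth unpacking: a forward difference divides measurement noise by the step size, so shrinking the step eventually makes the estimate worse, not better. A toy sketch (the function and noise level here are made up purely for illustration):

```python
import math
import random

random.seed(1)

def noisy_sin(x, noise=1e-4):
    # A "measurement" of sin(x) with small additive Gaussian noise.
    return math.sin(x) + random.gauss(0, noise)

def forward_diff(f, x, h):
    # Forward-difference estimate of f'(x). Total error is roughly
    # h/2 * |f''(x)| from truncation plus ~noise/h from measurement noise,
    # so there is an optimal step size in between.
    return (f(x + h) - f(x)) / h

x = 1.0
true_deriv = math.cos(x)  # exact derivative of sin at x
errors = {}
for h in (1e-1, 1e-3, 1e-6):
    errors[h] = abs(forward_diff(noisy_sin, x, h) - true_deriv)
    print(f"h={h:g}: error={errors[h]:.3g}")
```

Below some step size the noise term dominates and the estimate blows up, which is why naively "taking the limit as h goes to 0" fails on measured data.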
I've noticed that one of the biggest things holding me back in math/physics is an aversion to thinking too hard or too long about math and physics problems. It seems to me that if I were able to overcome this aversion and math were as fun as playing video games, I'd be a lot better at it.
Thinking for a long time is one of the classic descriptions of Newton; from John Maynard Keynes's "Newton, the Man":
You have to want to be a wizard.
Plenty of us took the Wizard's Oath as kids and still have a hard time in math classes sometimes.
I think everyone has trouble in math class, eventually.
Not in my experience, unless you're talking about trouble teaching them. It's very possible to run out of classes before you hit anything truly difficult (in my country there are no more classes after Masters level, a PhD student is expected to be doing research - the american notion of "all but dissertation" provokes endless amusement, here you're "all but dissertation" from day 1).
Good video games are designed to be fun, that is their purpose. Math, um, not so much.
Only a small fraction of math has practical applications, the majority of math exists for no reason other than thinking about it is fun. Even things with applications had sometimes been invented before those applications were known. So in a sense most math is designed to be fun. Of course it's not fun for everyone, just for a special class of people who are into this kind of thing. That makes it different from Angry Birds. But there are many games which are also only enjoyed by a specific audience, so maybe the difference is not that fundamental. A large part of the reason the average person doesn't enjoy math is that unlike Angry Birds math requires some effort, which is the same reason the average person doesn't enjoy League Of Evil III.
Sokal's hoax was heroic
A bigger danger is publication bias. Collect 10 well-run trials without knowing that 20 similar well-run ones exist but weren't published because their findings weren't convenient, and your meta-analysis ends up distorted from the outset.
Does anyone know how often this happens in statistical meta-analysis?
We can't know for certain. That's the idea of systematic biases: there is no way to tell whether all your trials are slanted in a specific fashion if the biases also appear in your high-quality studies.
On the one hand, we have fields such as homeopathy or telepathy (the Ganzfeld experiments) where meta-analyses that treat all studies mostly equally find that homeopathy works and telepathy exists. On the other hand, meta-analyses that try to filter out low-quality studies come to the conclusion that homeopathy doesn't work and telepathy doesn't exist.
Fairly often. One strategy I've seen is to compare meta-analyses to a later very-large study (rare for obvious reasons when dealing with RCTs) and see how often the confidence interval is blown; usually much more often than it should be. (The idea is that the larger study gives a higher-precision result which serves as a 'ground truth' or oracle for the meta-analysis's estimate; if it's later, it will not have been included in the meta-analysis and also cannot have led the meta-analysts into Millikan-style distorting their results to get the 'right' answer.)
For example: LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. "Discrepancies between meta-analyses and subsequent large randomized, controlled trials". N Engl J Med 1997; 337:536-42.
(You can probably dig up more results looking through reverse citations of that paper, since it seems to be the originator of this criticism. And also, although I disagree with a lot of it, "Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses", Al khalaf et al 2010.)
I'm not sure how much to trust these meta-meta analyses. If only someone would aggregate them and test their accuracy against a control.
As a percentage? No. But qualitatively speaking, "often."
The most recent book I read discusses this particularly with respect to medicine, where the problem is especially pronounced because a majority of studies are conducted or funded by an industry with a financial stake in the results, with considerable leeway to influence them even without committing formal violations of procedure. But even in fields where this is not the case, issues like non-publication of data (a large proportion of all studies conducted are not published, and those which are not published are much more likely to contain negative results) will tend to make the available literature statistically unrepresentative.
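That distortion is easy to demonstrate with a toy simulation (the effect size, sample size, and significance filter below are my own assumptions, not from any cited study): simulate many small trials of a weak effect, "publish" only the statistically significant ones, and pool each set.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.1          # small true effect, in units of the outcome's SD
N = 30                     # observations per trial
SE = 1 / N ** 0.5          # standard error of each trial's mean estimate

# Each "trial" estimates the effect from N noisy observations.
estimates = [
    statistics.mean(random.gauss(TRUE_EFFECT, 1.0) for _ in range(N))
    for _ in range(500)
]

# File-drawer filter: only significant positive results (z > 1.96) get published.
published = [e for e in estimates if e / SE > 1.96]

pooled_all = statistics.mean(estimates)
pooled_published = statistics.mean(published)
print(f"pooled over all trials:       {pooled_all:.3f}")
print(f"pooled over published trials: {pooled_published:.3f}")
```

A meta-analysis that only sees the published trials pools an estimate several times larger than the true effect, even though every individual trial was perfectly well run.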
~J. Stanton, "The Paleo Identity Crisis: What Is The Paleo Diet, Anyway?"
It is not at all clear that someone who knows all the biochemistry will outperform someone who's good at feeling what goes on in his body.
In the absence of good measurement instruments feelings allow you to respond to specific situations much better than theoretical understanding.
I am told that the natural feeling for gravity and balance is worse than useless to a pilot.
I am told this as well.
Edited OP to make it clear that you can provide a link to the place you found the quote, rather than needing to track down an authoritative original source.