Rationality Quotes April 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
And one new rule:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments (656)
The mathematician and Fields medalist Vladimir Voevodsky on using automated proof assistants in mathematics:
[...]
[...]
[...]
[...]
From a March 26, 2014 talk. Slides available here.
A video of the whole talk is available here.
And his textbook on the new univalent foundations of mathematics in homotopy type theory is here.
It is misleading to attribute that book solely to Voevodsky.
Computer scientists seem much more ready to adopt the language of homotopy type theory than homotopy theorists at the moment. It should be noted that there are many competing new languages for expressing the insights garnered by infinity groupoids. Though Voevodsky's language is the only one that has any connection to computers, the competing language of quasi-categories is more popular.
I know you're not supposed to quote yourself, but I came up with a cool saying about this a while back and I just want to share it.
Computer proof verification is like taking off and nuking the whole site from orbit: it's the only way to be sure.
"It is one thing for you to say, ‘Let the world burn.' It is another to say, ‘Let Molly burn.' The difference is all in the name."
-- Uriel, Ghost Story, Jim Butcher
-- Alfred Adler
ADDED: Source: http://en.wikiquote.org/wiki/Alfred_Adler
Quoted in: Phyllis Bottome, Alfred Adler: Apostle of Freedom (1939), ch. 5
Problems of Neurosis: A Book of Case Histories (1929)
Comedian Simon Munnery:
Douglas Adams, Hitchhiker's Guide to the Galaxy
Thanks for this one. It's been some time since I re-read Douglas Adams, and I'd forgotten how good he can be. It makes so much sense reading this right after reading "Bind yourself to Reality". Had a good long guffaw out of this one. :-)
A bigger danger is publication bias. Collect 10 well-run trials without knowing that 20 similar well-run ones exist but weren't published because their findings weren't convenient, and your meta-analysis ends up distorted from the outset.
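The distortion is easy to see in a toy simulation (hypothetical numbers, not from any real literature): even if every individual trial is unbiased, pooling only the "convenient" results shifts the meta-analytic estimate away from the truth.

```python
import random
import statistics

random.seed(0)

# Toy model: 30 well-run trials of a treatment whose true effect is zero.
# Each trial reports a point estimate with standard error ~0.1.
n_trials, se = 30, 0.1
effects = [random.gauss(0.0, se) for _ in range(n_trials)]

# Selective publication: only trials with a positive point estimate get
# published; the inconvenient results stay in the file drawer.
published = [e for e in effects if e > 0]

# With equal standard errors, a fixed-effect meta-analysis reduces to a
# simple mean of the included estimates.
pooled_all = statistics.mean(effects)
pooled_published = statistics.mean(published)

print(f"pooled estimate over all {n_trials} trials: {pooled_all:+.3f}")
print(f"pooled estimate over {len(published)} published trials: {pooled_published:+.3f}")
```

The published-only pool comes out clearly positive even though the true effect is zero; no individual trial did anything wrong.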
Does anyone know how often this happens in statistical meta-analysis?
Fairly often. One strategy I've seen is to compare meta-analyses to a later very large study (rare for obvious reasons when dealing with RCTs) and see how often the confidence interval is blown; the rate is usually much higher than it should be. (The idea is that the larger study gives a higher-precision result which serves as a 'ground truth' or oracle for the meta-analysis's estimate, and if it comes later, it cannot have been included in the meta-analysis and also cannot have led the meta-analysts into Millikan-style distorting their results to get the 'right' answer.)
For example: LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. "Discrepancies between meta-analyses and subsequent large randomized, controlled trials". N Engl J Med 1997;337:536–42
(You can probably dig up more results looking through reverse citations of that paper, since it seems to be the originator of this criticism. And also, although I disagree with a lot of it, "Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses", Al khalaf et al 2010.)
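The coverage check described above can be sketched in a few lines (a toy simulation, not the actual analyses cited; it assumes the only distortion is a simple file-drawer rule): run many biased meta-analyses of a null effect, then count how often the 95% CI misses the answer an enormous follow-up trial would give.

```python
import math
import random
import statistics

random.seed(1)

# Repeatedly meta-analyze 10 small trials of a true-zero effect, where
# only positive point estimates get published (file-drawer effect), and
# check how often the pooled 95% CI excludes the "oracle" value of 0.
n_sims, n_trials, se = 500, 10, 0.1
missed = 0
done = 0
for _ in range(n_sims):
    estimates = [random.gauss(0.0, se) for _ in range(n_trials)]
    published = [e for e in estimates if e > 0]
    if not published:
        continue
    done += 1
    pooled = statistics.mean(published)
    pooled_se = se / math.sqrt(len(published))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    if not (lo <= 0.0 <= hi):
        missed += 1

# A well-calibrated 95% CI should miss only ~5% of the time.
print(f"CI missed the oracle in {missed / done:.0%} of {done} meta-analyses")
```

The miss rate comes out many times the nominal 5%, which is the qualitative pattern the LeLorier et al. comparison found in real meta-analyses.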
I'm not sure how much to trust these meta-meta analyses. If only someone would aggregate them and test their accuracy against a control.
As a percentage? No. But qualitatively speaking, "often."
The most recent book I read discusses this particularly with respect to medicine, where the problem is especially pronounced because a majority of studies are conducted or funded by an industry with a financial stake in the results, with considerable leeway to influence them even without committing formal violations of procedure. But even in fields where this is not the case, issues like non-publication of data (a large proportion of all studies conducted are not published, and those which are not published are much more likely to contain negative results) will tend to make the available literature statistically unrepresentative.
We can't know for certain; that's the idea of systematic biases. There's no way to tell whether all your trials are slanted in a specific fashion, if the bias also appears in your high-quality studies.
On the one hand, we have fields such as homeopathy or telepathy (the Ganzfeld experiments) where meta-analyses that treat all studies mostly equally find that homeopathy works and telepathy exists. On the other hand, meta-analyses that try to filter out low-quality studies come to the conclusion that homeopathy doesn't work and telepathy doesn't exist.
Sokal's hoax was heroic
Jerry Spinelli, Stargirl
So as to keep the quote on its own, my commentary:
This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that "things you don't value much anymore can still provide great utility for other people" is a powerful lesson in general.
It is, in fact, a very good rule to be especially suspicious of work that says what you want to hear, precisely because the will to believe is a natural human tendency that must be fought.
- Paul Krugman
-- Max Tegmark, Scientific American guest blog, 2014-02-04
I would think the first objection to that line of reasoning would be that we know General Relativity is an incomplete theory of reality and expect to find something that supersedes it and gives better answers regarding black holes.
-Timothy Gowers, on finding out that a method he'd hoped would work in fact would not.
from The Last Samurai by Helen DeWitt
http://www.reddit.com/r/askscience/comments/e3yjg/is_there_any_way_to_improve_intelligence_or_are/c153p8w
reddit user jjbcn on trying to improve your intelligence
If you're not a student of physics, The Feynman Lectures on Physics is probably really useful for this purpose. It's free for download!
http://www.feynmanlectures.caltech.edu/
It seems like the Feynman lectures were a bit like the Sequences for those Caltech students:
Trying to actually understand what equations describe is something I'm always trying to do in school, but I find my teachers positively trained in the art of superficiality and dark-side teaching. Allow me to share two actual conversations with my Maths and Physics teachers from school:
(Physics class)
And yet to most people, I can't even vent the ridiculousness of a teacher actually saying this; they just think it's the norm!
Ahem:
For every EY quote, there exists an equal and opposite ~~EY~~ PC Hodgell quote:
Amusing, although I'll point out that there are some subtle difference between a physics classroom and the MOR!universe. Or at least, I think there are...
I will only say that when I was a physics major, there were negative course numbers in some copies of the course catalog. And the students who, it was rumored, attended those classes were... somewhat off, ever after.
And concerning how I got my math PhD, and the price I paid for it, and the reason I left the world of pure math research afterwards, I will say not one word.
Were there tentacles involved? Strange ethereal piping? Anything rugose or cyclopean in character?
I think we can safely say there were non-Euclidean geometries involved.
Were there also course numbers with a non-zero complex part?
What level of school?
I haven't seen them mentioned in this thread, so thought I'd add them, since they're probably valid and worth thinking about:
The utility of a mathematical understanding, combined with the skills required for doing things such as mathematical proofs (or having a deep understanding of physics), is low for most humans; much lower than rote memorization of some simple mathematical and algebraic rules. Consider, especially, the level of education that most will attain, and that the amount of abstract math and physics exposure in that time is very small. Teaching such things in average classrooms may on average be both inefficient and unfair to the majority of students. You're looking for knowledge and understanding in all the wrong places.
The vast majority of public education systems are, pragmatically speaking, tools purpose-built and designed to produce model citizens, with intelligence and knowledge gains seen as beneficial but not necessary side effects. I.e., as long as the kids are off the streets; if they also go on to get good jobs as a side effect, that's a bonus. You're using the wrong tools for the job (either use better tools, or misuse the tools you have to get the job you want done, right).
I've noticed that one of the biggest things holding me back in math/physics is an aversion to thinking too hard/long about math and physics problems. It seems to me that if I were able to overcome this aversion and math were as fun as playing video games, I'd be a lot better at it.
Good video games are designed to be fun, that is their purpose. Math, um, not so much.
Only a small fraction of math has practical applications, the majority of math exists for no reason other than thinking about it is fun. Even things with applications had sometimes been invented before those applications were known. So in a sense most math is designed to be fun. Of course it's not fun for everyone, just for a special class of people who are into this kind of thing. That makes it different from Angry Birds. But there are many games which are also only enjoyed by a specific audience, so maybe the difference is not that fundamental. A large part of the reason the average person doesn't enjoy math is that unlike Angry Birds math requires some effort, which is the same reason the average person doesn't enjoy League Of Evil III.
And at least some math instructors effectively teach that if you aren't already finding (their presentations of) math fascinating, that you must just not be a Math Person.
Math is a bit like lifting weights. Sitting in front of a heavy mathematical problem is challenging. The job of a good teacher isn't to remove the challenge. Math is about abstract thinking, and a teacher who tries to spare his students from doing abstract thinking isn't doing it right.
Deliberate practice is mentally taxing.
The difficult thing as a teacher is to motivate the student to face the challenge whether the challenge is lifting weights or doing complicated math.
The job of a good teacher is to find a slightly less challenging problem, and to give you that problem first. Ideally, to find a sequence of problems very smoothly increasing in difficulty.
Just like a computer game doesn't start with the boss fight, although some determined players would win that, too.
No. Being good at math is about being able to keep your attention on a complicated proof even if it's very challenging and your head seems like it's going to burst.
If you want to build muscles you don't slowly increase the amount of weight and keep it at a level where it's effortless. You train to exhaustion of given muscles.
Building mental stamina to tackle very complicated abstract problems that aren't solvable in five minutes is part of a good math education.
Deliberate practice is supposed to feel hard. A computer game is supposed to feel fun. You can play a computer game for 12 hours; a few hours of deliberate practice, on the other hand, are usually enough to bring someone to the brink of exhaustion.
If you only face problems in your education that are smooth like a computer game, you aren't well prepared for facing hard problems in reality. A good math education teaches you the mindset that's required to stick with a tough abstract problem and tackle it head on even if you can't fully grasp it after looking 30 minutes at it.
You might not use calculus at your job, but if your math education teaches you the ability to stay focused on hard abstract problems, then it has fulfilled its purpose.
You can teach calculus by giving the student concrete real-world examples, but that defeats the point of the exercise. If we are honest, most students won't need calculus at their jobs; teaching its content isn't the point of math education. At least, that's the mindset in which I was taught math at school in Germany.
You don't put on so much weight that you couldn't possibly lift it, either (nor so much weight that you could only lift it with atrocious form and risk of injury, the analogue of which would be memorising a proof as though it were a prayer in a dead language while having only a faulty understanding of what the words mean).
Yes, memorizing proofs isn't the point; you want to derive proofs. I think it's perfectly fine to sit for an hour in front of a complicated proof and not be able to work it out.
A ten-year-old might not have that mental stamina, but a good math education should teach it, so that it's there by the end of school.
This kind of philosophy sounds like it's going to make a few people very good at tackling hard problems, while causing everyone else to become demotivated and hate math.
You have to want to be a wizard.
Plenty of us took the Wizard's Oath as kids and still have a hard time in math classes sometimes.
I think everyone has trouble in math class, eventually.
Not in my experience, unless you're talking about trouble teaching them. It's very possible to run out of classes before you hit anything truly difficult (in my country there are no more classes after the Masters level; a PhD student is expected to be doing research. The American notion of "all but dissertation" provokes endless amusement, since here you're "all but dissertation" from day 1).
Thinking for a long time is one of the classic descriptions of Newton; from John Maynard Keynes's "Newton, the Man":
He brags shamelessly about his wide variety of interests: drumming, lockpicking, PUA, biology, Tannu Tuva, etc.
The Feynman divorce:
You're right.
Indeed, terse "explanations" that handwave more than explain are a pet peeve of mine. They can be outright confusing and cause more harm than good IMO. See this question on phrasing explanations in physics for some examples.
-- many different people, most recently user chipaca on HN
Hmm, what about such things as feeling that you need to defend the truth from criticism rather than find a way to explain it better? Or nagging doubts that you're ignoring, or a feeling that your opponents are acting the way they are because they're stupid or evil? Or wanting to censor someone else's speech? I take all these things as alarm signals.
A communist friend of mine once said, after I'd nailed her into a corner in a political argument about appropriate rates of pay during a firemen's strike, "Well, under socialism there wouldn't be as many fires." I reckon that there must be a feeling associated with that sort of thing.
Defending the truth from criticism also feels exactly the same as defending what you wrongly think is the truth from criticism.
The feelings you list correspond to very common ways people behave. So they're very weak evidence that you're wrong about something. Unless you're a trained rationalist who very rarely has these feelings / behaviors.
Most people first acquire a belief - whether by epistemologically legitimate means or not - and then proceed to defend it, ignore contrary evidence, and feel opponents to be stupid, because that's just the way most people deal with beliefs that are important to them.
This is the most forceful version I've seen (assumed it had been posted before, discovered it probably hasn't, won't start a new thread since it's too similar):
Kathryn Schulz, Being Wrong
But I'm not comfortable endorsing either of these quotes without a comment.
chipaca's quote (and friends) suggest to me that
Schulz's quote (and book) suggest to me that
I'd prefer to emphasize that "You are already in trouble when you feel like you’re still on solid ground," or said another way:
Becoming less wrong feels different from the experience of going about my business in a state that I will later decide was delusional.
Schulz hasn't been quoted here before, but you might've seen my use of that quote on http://www.gwern.net/Mistakes, to which I will add a quote of Wittgenstein making the same point far more compactly and concisely:
It occurs to me that "being wrong" can be divided into two subcategories -- before and after you start seeing evidence or arguments which undermine your position.
With practice, the feeling of being right and seeing confirming information can be distinguished from the feeling of being wrong and seeing undermining information. Unfortunately, the latter feeling is very uncomfortable, and it is always tempting to look for ways to lessen it.
– Said Achmiz, in a comment on Slate Star Codex’s post “The Cowpox of Doubt”
The original quotation on LW.
This is a great tagline for the doctrine of Original Sin.
"Even if it's not your fault, it's your punishment."
-- Henry Hazlitt, Economics in One Lesson
-Daniel Dennett, Intuition Pumps and Other Tools for Thinking, Chapter 18 "The Intentional Stance" [Bold is original]
Reminded me of the idea of 'hacking away at the edges'.
Jessica speaking to Thufir Hawat in Frank Herbert's Dune
G. K. Chesterton, attributed.
Upvoted. I would've preferred the following version:
Might someone offer an explanation of this to me?
On its own I can think of several things that these words might be uttered in order to express. A little search turns up a more extended form, with a claimed source:
Said to be by G.K. Chesterton in the New York Times Magazine of February 11, 1923, which appears to be a real thing, but one which is not online. According to this version, he is jibing at progressivism, the adulation of the latest thing because it is newer than yesterday's latest thing.
ETA: Chesterton uses the same analogy, in rather more words, here.
Note that this accentuates the relevance of a detail that might be skipped over in the original quote: that Thursday comes after Wednesday. That is, this may be intended as a dismissal of the 'all change is progress' position or the 'traditions are bad because they are traditions' position.
Not to mention the people who think accusing their opponents of being "on the wrong side of history" constitutes an argument.
So you are not going to argue that history has shown that socialism has failed?
That's using history as evidence. What I was complaining about is closer to the people who declare that all opponents of a change that they plan to implement (or at best have only implemented at most several decades ago) are "on the wrong side of history".
I think you may not be interpreting the phrase "the wrong side of history" as people who say it mean it.
There's a classic saying: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." -- Max Planck
Effectively, there's a position that's obviously correct, but there are also people who are just too hidebound and change-averse to recognize it, and progress can't be made until they die off. But progress will be made, because the position is correct. When you tell someone they are on the wrong side of history, you are reminding them that they are behaving like one of the old men that Planck mentions. Put another way, what it's saying is: "if you look at people who don't come from the past and don't have a large status quo bias, you will notice a trend".
In physics, yes. In history / political science, no.
In politics, no position is obviously correct. Claiming that one's own position is obviously correct or that history is on our side is just a way of browbeating others instead of actually making a case.
Claiming that the opponents of some newly viral idea are "on the wrong side of history" is like claiming that Klingon is the language of the future based on the growth rate when the number of speakers has actually gone from zero to a few hundred.
No -- you are telling them. To remind someone of a thing is to tell them what they already know. To talk of "reminding" in this context is to presume that they already know that they are wrong but won't admit it, and is just another way of speaking in bad faith to avoid actually making a case.
Is this falsifiable?
Sure, just step back in time.
A bit less than two millennia ago one could have said: "Effectively there's a position -- that Jesus gifted eternal life to humanity -- that's obviously correct, but there are also people who are just too hidebound and change-averse to recognize it, and progress can't be made until they die off. But progress will be made because the position is correct."
I was actually thinking of eugenics, which was once a progressivist "obviously correct thing where we just need to wait until these luddites die off and everything will be great", until it wasn't. Incidentally, it's a counterexample to "Cthulhu always swims left" too.
It's a case where "correct", "right side of history" and "progress" dissociate from each other.
I think you could make a case for totalitarianism, too. During the interwar years, not only old-school aristocracy but also market democracy were in some sense seen as being doomed by history; fascism got a lot of its punch from being thought of as a viable alternative to state communism when the dominant ideologies of the pre-WWI scene were temporarily discredited. Now, of course, we tend to see fascism as right-wing, but I get the sense that that mostly has to do with the mainstream left's adoption of civil rights causes in the postwar era; at the time, it would have been seen (at least by its adherents) as a more syncretic position.
I don't think you can call WWII an unambiguous win for market democracy, but I do think that it ended up looking a lot more viable in 1946 than it did in, say, 1933.
Interestingly, if you press the people making that claim for what they mean by "left", their answer boils down to "whatever is in Cthulhu's forward cone".
For a more modern example, wouldn't that have been said for marijuana a few decades ago?
Everyone expected that once the older people who opposed marijuana died off and the hippies grew into positions of power, everyone would want it to be legal. That didn't work out. (The support for legalization has gone up recently, but not because of this.)
I suspect it is falsifiable. I might unpack it as the following sub claims
1 Degree of status quo bias is positively correlated to time spent in a particular status quo (my gut tells me there should be a causal link, but I bet correlation is all you could find in studies)
2 On issue X, belief that X[past] is the correct way to do X is correlated with time spent living in an X[past] regime.
2.5 Possibly a corollary to the above, but maybe a separate claim: among people who you would expect to have the least status quo bias position X[other] is favored at much higher rates than among the general population
For most issues, 2 and 2.5 can probably be checked with good polling data. Point 1 is the kind of thing it's possible to do studies on, so I think it's in principle falsifiable, though I don't know if such studies have actually been done.
2) is also what you would expect to see if X[past] was indeed better than X[other].
2.5) Not having status quo bias isn't equivalent to being unbiased. A large number of the people that are least likely to have status quo bias are going to be at the other end of the spectrum - chronic contrarians.
Note that which X is better may depend on circumstances (e.g. technological level).
One person's status quo bias is another person's Chesterton fence. The quote from which this comment tree branches is from Chesterton.
I strongly agree. It's possible that history has a side, but we can hardly know what it is in advance.
I don't think you agree. I think Eugine's problem is with the idea that an idea winning in history makes it a good idea.
Marx replaced what Hegel called God with history. Marx's idea was that you don't need a God to tell you what's morally right; history will tell you. Neoreactionaries don't like the sentiment that history decides what's morally right.
I am not a neoreactionary and I think the sentiment that history decides what's morally right is a remarkably silly idea.
You have to compare it to the alternatives. Do you think it's more or less silly than the idea that there's a God in the sky judging what's right or wrong?
Marx basically had the idea that you don't need God for an absolute moral system when you can pin it all on history, which supposedly moves in a certain direction. You observe how history moves, then you extrapolate. You look at the limit of that function, and that limit is the perfect morality. It's what someone does who has a rough idea of calculus but doesn't fully understand the assumptions that go into the process.
In the US, where Marx didn't have as much influence as in Europe, there are still a bunch of people who believe in young-earth creationism. On a scale of silliness, that's much worse.
Today the postmodernists rule liberal thought, but there are still relics of Marxist ideas. Part of what being modern was about is having an absolute moral system. Whether or not those people are silly is also open for debate.
Sure. Let's compare it to the alternative that morality is partially biologically hardwired and partially culturally determined. By comparison, the idea that "history decides what's morally right" is silly.
Yep, he had this idea. That doesn't make it a right idea. Marx had lots of ideas which didn't turn out well.
Oh, so -- keeping in mind we're on LW -- the universe tiled with paperclips might turn out to be the perfect morality? X-D
And remind me, how well does extrapolation of history work?
Do you, by any chance, believe there is a causal connection between these two observations that you jammed into a single sentence?
I think they're both quite silly. Also, the fact that many people believe in God as a source of morality, is itself a reason why history (i.e. the actions of those people) is a bad moral guide.
Surely most pre-modern philosophers also had absolute moral systems?
There are (at least) two things wrong with "the right side of history". One is that we can't know that history has a side, or what side it might be because a tremendous amount of history hasn't happened yet, and the other error is that history might prefer worse outcomes in some sense.
I find the first sort of error so annoying that I normally don't even see the second.
My impression is that Eugene is annoyed by both sorts of error, but I hope he'll say where he stands on this.
There's a third thing wrong with it: generally, people use the phrase in order to praise one side of some historical dispute (and implicitly condemn the other) by attributing to them (in part or in whole) some historical change that is deemed beneficial by the person doing the praising. The problem is that when you go back and look at the actual goals of the groups being praised, they usually end up bearing very little relation to the changes that the praiser is trying to associate them with, if not being completely antithetical to them. Herbert Butterfield (who I posted about above) initially noticed this in the tendency of people to try to attribute modern notions of religious toleration to the Protestant Reformation, when in fact Martin Luther wrote songs about murdering Jews, and lobbied the local princes to violently suppress rival Protestant sects.
I hadn't even thought of the first objection, possibly because I stopped considering "what side history is on" a useful concept after noticing the second one.
Speaking of which, let's see what history has to say about Marx. It would appear that the Marxist nations lost to a semi-religious nation. Thus apparently history has judged that the idea that history will tell you what is right to be wrong.
I'm very far from being a reactionary or neoreactionary, but I also don't put much moral weight on history - that is, on what most other people come to believe.
For one thing, believing that would mean every moral reformer who predicts for themselves only a small chance of reforming society, should conclude that they are wrong about morals.
-- Daniel Dennett, Intuition Pumps and Other Tools for Thinking
Are we sure about this? Einstein's idea of riding along with a light beam was super-useful and physically impossible in principle. Whereas the experiment I just thought of where I pour my cup of tea on my trousers I can almost not be bothered to do.
Ceteris paribus, then. On average, a thought experiment along the lines of "what if I poured this stuff on my trousers" is of much more practical use and tells you much more about reality than a thought experiment along the lines of "what if I could ride around on [intangible thing]". The most realistic thought experiments are the ones we do all the time, often without thinking, and which help us decide, for example, not to balance that cup of tea right on the edge of the table. Meanwhile, only very clever scientists and philosophers with lots of training can wring anything useful out of really far-out "what if I rode on a beam of light"-type thought experiments, and even they screw it up all the time and are generally well-advised not to base a conclusion solely on such a thought experiment. As I understand it, Einstein's successful use of gedankenexperiments to come up with good new ideas is generally considered evidence of his exceptional cleverness.
(note: I know very little about this topic and may be playing very fast and loose. I think the main idea is sensible, though)
This is funny. Until I read your comment, I was misreading the original quote; I didn't notice the "inversely" part. I was implicitly thinking that the quote was claiming that the farther the thought experiment is from reality, the more useful it is. I guess my physicist biases are showing.
I think that's my point! It sounds just as profound without the 'inversely'.
Nassim Taleb
This seems false in physics. Prestige of your institution matters. Prestige of the journal matters, too. Arxiv is fine, Physical Review is better, PRL is better yet. Nature/Science is so high that if you publish something that is not perceived as top-quality, you may get resented by others for status jumping. And there are plenty of journals which only get to publish second- and third-rate results.
Of course, the usual countersignaling caveat applies: once you have enough status, posting on Arxiv is enough, you will get read. Not submitting to journals can be seen as a sign of status, though I don't think the field is there (yet).
My understanding is that this effect is a lot smaller in physics than in the humanities.
I think, by this standard, law is a BS discipline. But I'm not sure what to make of that.
Well - law is, in a strict sense, entirely about convincing other humans that your interpretation is correct.
Whether or not it actually is correct in a formal sense is entirely screened off by that prime requirement, and so you probably shouldn't be surprised that all methods used by humans to convince other humans, in the absence of absolute truth, are applied. :)
Would that include drafting a fire code for buildings? Would it include negotiating a purchase and sale agreement for a business? Would it include filing a lawsuit for unpaid wages? Would it include advising a client about the possible consequences of taking a particular tax deduction?
It's hard to see how it would, and yet all of these things are regularly done by lawyers in the course of their work.
Those are, indeed, all examples of persuading human beings.
The other two are excellent points.
"persuading human beings" is not exactly the same thing as "convincing other humans that your interpretation is correct."
Besides, in negotiating an agreement much of the attorney's job consists of (1) advising his client of issues which are likely to arise; (2) helping the client to understand which issues are more important and which are less important; and (3) drafting language to address those issues. Yes, persuasion comes into it sometimes, but it's usually not primary.
Filing a lawsuit for unpaid wages can be seen as persuasion in a general sense. If Baughn wants to claim that in a strict sense, litigation is about getting other people to do stuff, then I would agree.
Thank you.
Interesting. There are famous cases of self-taught lawyers from previous centuries.
I wonder if this says something bad about the modern legal system. Maybe the modern legal system is less about making arguments based on how the law works (or should work) than about the lawyer signaling high status to the judge so that he rules in your favor.
There are famous cases of self-taught specialists in scientific fields, too. There aren't so many of them nowadays. That's because both the law and science are in a state where a practitioner must know a lot of details that didn't exist as part of the field in earlier days.
I don't think I have good reason to think this is the case. At any rate, it's clear enough that the prestige bit seems to come in heavily in hiring decisions, so let's just talk about that. How, in the ideal case, do you think lawyers would be evaluated for jobs? Off hand, I can't think of anything a lawyer could produce to show that she's a good hire.
I'm not a lawyer, and English law is different from American, but I reckon that I can tell the difference between good and bad lawyers by talking to them for a while about various cases in their speciality and listening to them explain the various arguments and counter-arguments.
I've heard people who make a good living from the law make incoherent wishful-thinking type arguments about which way a case should have gone, when I can see perfectly well how the judge was compelled to the conclusion that he came to. I wouldn't want such a person defending me.
Presumably if you are yourself a good lawyer, it shouldn't be too difficult to do this. The law is fairly logical and rigorous.
Well, if his "reality distortion field" was powerful enough to also affect judges.
I think Spooner got it right:
-Lysander Spooner from "An Essay on the Trial by Jury"
There is legitimate law, but not once law is licensed, and the system has been recursively destroyed by sociopaths, as our current system of law has been. At such a point in time, perverse incentives and the punishment of virtue attracts sociopaths to the study and practice of law, and drives out all moral and decent empaths from its practice. If not driven out, it renders them ineffective defenders of the good, while enabling the prosecutors who hold the power of "voir dire" jury-stacking to be effective promoters of the bad.
The empathy-favoring nature of unanimous, proper (randomly-selected) juries trends toward punishment only in cases where 99.9% of society nearly-unanimously agree on the punishment, making punishment rare. ...As it should be in enlightened civilizations.
What exactly is meant by the phrase "BS discipline"? Is the claim that most scholarship in law is meaningless nonsense? Or is the claim that there is no societal value at all in law? Or is it something else?
I suppose a discipline is BS if, in the case of a science, it fails to systematically track the realities of its object of study. In the case of a trade, like business management or welding, it's a BS discipline if it fails to make its practitioners more successful than those outside the discipline. I'm not sure what kind of a discipline law is.
Taleb's thought, I suppose, is that a discipline is likely to be BS if, instead of directly measuring the capabilities of its practitioners, we tend to measure only indirectly. This only implies that direct measurement is costly enough to outweigh its benefits, however. One reason for its being so may be that there's nothing to measure directly (i.e. the discipline is BS), but another might be that the discipline is so specialized that very few people are competent to judge any given applicant. Yet a third might be that its subject matter is subject to a lot of mind-killing, so that one cannot confidently judge an applicant without bias.
I agree that it's difficult to tell how good a lawyer is, which leads to a lot of nonsense like firms spending a lot of money on impressive offices and spending hours and hours of time chasing down every last grammatical error before filing court papers.
This is true for a lot of professions. Most of them don't have the problem you're describing.
Would you mind giving me three examples? This would help me think about what you are saying. TIA.
By that standard, all academic disciplines are BS disciplines.
I believe that is the intended meaning, yes.
~J. Stanton, "The Paleo Identity Crisis: What Is The Paleo Diet, Anyway?"
But the answers might be specific to each individual because the biochemistry of humans is not exactly the same.
In that case, the questions have complicated answers. The best dieting advice might be "first sequence your personal microbiome then consult this lookup table..."
Individuals being different from each other shouldn't necessarily diminish the significance of biochemistry. Biochemistry should explain not just our similarities but overarching principles that organize and explain the differences.
My point wasn't that biochemistry is not important. My point was that the answers you get from biochemistry might be complicated and limited in application.
It's not at all clear that someone who knows all the biochemistry will outperform someone who's good at feeling what goes on in his body.
In the absence of good measurement instruments, feelings allow you to respond to specific situations much better than theoretical understanding does.
I am told that the natural feeling for gravity and balance is worse than useless to a pilot.
I am told this as well.
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
Hans Moravec, Wikipedia/Moravec's Paradox
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Steven Pinker, Wikipedia/Moravec's Paradox
What was the ratio of phone time spent talking to human vs computer receptionists when Pinker published this quote in 2007? For that matter, how much non-phone time was being spent using a website to perform a transaction that would have previously required interaction with a human receptionist?
Pinker understood AI correctly (it's still way too hard to handle arbitrary interactions with customers), yet he failed to predict the present, much less the future, because he misunderstood the economics. Most interactions with customers are very non-arbitrary. If 10% need human intervention, then you put a human in the loop after the other 90% have been taken care of by much-cheaper software.
If you were to say "a machine can't do everything a horse can do", you'd be right, even today, but that isn't a refutation of the effect of automation on the economic prospects of equine labor.
Except that in exponentially-increasing computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has, isn't long.
Let's hope that we're not still paying rent then, or we might find ourselves homeless.
Yuval Levin in the National Review
To the extent that we can overcome our current limits, we have to understand them first. We should beware false humility and rationalization of existing limits (e.g. deathism).
Clifford Truesdell
This is beautiful: I can't turn it into equations. Does that refute it or support it?
Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
I don't see why an equation can't be nonsensical. Perhaps the nonsense is easier to spot when expressed in symbols, or then again perhaps not.
Terry Coxon
I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.
Alan Turing (from "Computing Machinery and Intelligence")
A particularly relevant quote, given Yvain's recent http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/
Can you provide some context? I don't understand: the claim that the evidence for telepathy is very strong is surely wrong, so is this sarcasm? A wordplay?
Turing's 1950 paper asks, "Can machines think?"
After introducing the Turing Test as a possible way to answer the question (in, he expects, the positive), he presents nine possible objections, and explains why he thinks each either doesn't apply or can be worked around. These objections deal with such topics as souls, Gödel's theorem, consciousness, and so on. Psychic powers are the last of these possible objections: if an interrogator can read the mind of a human, they can identify a human; if they can psychokinetically control the output of a computer, they can manipulate it.
From the context, it does seem that Turing gives some credence to the existence of psychic powers. This doesn't seem all that surprising for a British government mathematician in 1950. This was the era after the Rhines' apparently positive telepathy research — and well before major organized debunking of parapsychology as a pseudoscience (which started in the '70s with Randi and CSICOP). Governments including the US, UK, and USSR were putting actual money into ESP research.
Yes, but also remember that Turing's English, shy, and from King's College, home of a certain archness and dry wit. I think he's taking the piss, but the very ambiguity of it was why it appealed as a rationality quote. He's facing the evidence squarely, declaring his biases, taking the objection seriously, and yet there's still a profound feeling that he's defying the data. Or maybe not. Maybe I just read it that way because I don't buy telepathy.
Hodges claims that Turing at least had some interest in telepathy and prophecies:
Alan Turing: The Enigma (Chapter 7)
I think Turing's willingness to take all comers seriously is something to emulate.
Raising Steam, Terry Pratchett
Regarding the first steam engine in Pratchett's fictional world.
Relevant is the Amtal Rule on this same page: http://lesswrong.com/r/lesswrong/lw/jzn/rationality_quotes_april_2014/as28
-- Penn Jillette in "Penn Jillette Is Willing to Be a Guest on Adolf Hitler's Talk Show", Vanity Fair, June 17, 2010
This quote seems like it's lumping every process for arriving at beliefs besides reason into one. "If you don't follow the process I understand and is guaranteed not to produce beliefs like that, then I can't guarantee you won't produce beliefs like that!" But there are many such processes besides reason, that could be going on in their "hearts" to produce their beliefs. Because they are all opaque and non-negotiable and not this particular one you trust not to make people murder Sharon Tate, does not mean that they all have the same probability of producing plane-flying-into-building beliefs.
Consider the following made-up quote: "when you say you believe something is acceptable for some reason other than the Bible said so, you have completely justified Stalin's planned famines. You have justified Pol Pot. If it's acceptable for you, why isn't it acceptable for them? Why are you different? If you say 'I believe that gays should not be stoned to death and the Bible doesn't support me but I believe it in my heart', then it's perfectly okay to believe in your heart that dissidents should be sent to be worked to death in Siberia. It's perfectly okay to believe, because your secular morality says so, that all the intellectuals in your country need to be killed."
I would respond to it: "Stop lumping all moralities into two classes, your morality, and all others. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually condone gulags"
And likewise I respond to Penn Jilette's quote: "Stop lumping all epistemologies into two classes, yours, and the one where people draw beliefs from their 'hearts'. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually result in beliefs that drive them to fly planes into buildings."
The wishful-thinking new-age "all powerful force of love" faith epistemology is actually pretty safe in terms of not driving people to violence who wouldn't already be inclined to it. That belief wouldn't make them feel good. Though of course, faith plus ancient texts which condone violence can be more dangerous, though as we know empirically, for some reason, people driven to violence by their religions are rare these days, even coming from religions like that.
I don't think it's lumping everything together. It's criticizing the rule "Act on what you feel in your heart." That applies to a lot of people's beliefs, but it certainly isn't the epistemology of everyone who doesn't agree with Penn Jillette.
The problem with "Act on what you feel in your heart" is that it's too generalizable. It proves too much, because of course someone else might feel something different and some of those things might be horrible. But if my epistemology is an appeal to an external source (which I guess in this context would be a religious book but I'm going to use "believe whatever Rameses II believed" because I think that's funnier), then that doesn't necessarily have the same problem.
You can criticize my choice of Rameses II, and you probably should. But now my epistemology is based on an external source and not just my feelings. Unless you reduce me to saying I trust Rameses because I Just Feel that he's trustworthy, this epistemology does not have the same problem as the one criticized in the quote.
All this to say, Jillette is not unfairly lumping things together and there exist types of morality/epistemology that can be wrong without having this argument apply.
'Act on an external standard' is just as generalizable - because you can choose just about anything as your standard. You might choose to consistently act like Gandhi, or like Hitler, or like Zeus, or like a certain book suggests, or like my cat Peter who enjoys killing things and scratching cardboard boxes. If the only thing I know about you is that you consistently behave like someone else, but I don't know like whom, then I can't actually predict your behavior at all.
The more important question is: if you act on what you feel in your heart, what determines or changes what is in your heart? And if you act on an external standard, what makes you choose or change your standard?
From the outside, it looks like there's all this undefined behavior and demons coming out of the nose, because you aren't looking at the exact details of what's going on with the feelings that are choosing the beliefs. Though a C compiler given an undefined construct may cause your program to crash, it will never literally cause demons to come out of your nose, and you could figure this out if you looked at the implementation of the compiler. It's still deterministic.
As an atheistic meta-ethical anti-realist, my utility function is basically whatever I want it to be. It's entirely internal. From the outside, from someone who has a system where they follow something external and clearly specified, they could shout "Nasal demons!", but demons will never come out my nose, and my internal, ever so frighteningly non-negotiable desires are never going to include planned famines. It has reliable internal structure.
The mistake is looking at a particular kind of specification that defines all the behavior, and then looking at a system not covered by that specification, but which is controlled by another specification you haven't bothered to understand, and saying "Who can possibly say what that system will do?"
Some processors (even x86) have instructions (such as bit rotate) which are useful for significant performance boosts in stuff like cryptography, and yet aren't accessible from C or C++, and to use it you have to perform hacks like writing the machine code out as bytes, casting its address to a function pointer and calling it. That's undefined behavior with respect to the C/C++ standard. But it's perfectly predictable if you know what platform you're on.
The utility functions of people who aren't meta-ethical anti-realists are not really negotiable either. You can't really give them a valid argument that will convince them not to do something evil if they happen to be psychopaths. They just have internal desires and things they care about, and they care a lot more than I do about having a morality which sounds logical when argued for.
And if you actually examine what's going on with the feelings of people with feeling-driven epistemology that makes them believe things, instead of just shouting "Nasal demons! Unspecified behavior! Infinitely beyond the reach of understanding!" you will see that the non-psychopathic ones have mostly-deterministic internal structure to their feelings that prevents them from believing that they should murder Sharon Tate. And psychopaths won't be made ethical by reasoning with them anyway. I don't believe the 9/11 hijackers were psychopaths, but that's the holy book problem I mentioned, and a rare case.
In most cases of undefined C constructs, there isn't another carefully-tuned structure that's doing the job of the C standard in making the behavior something you want, so you crash. And faith-epistemology does behave like this (crashing, rather than running hacky cryptographic code that uses the rotate instructions) when it comes to generating beliefs that don't have obvious consequences to the user. So it would have been a fair criticism to say "You believe something because you believe it in your heart, and you've justified not signing your children up for cryonics because you believe in an afterlife," because (A) they actually do that, (B) it's a result of them having an epistemology which doesn't track the truth.
Disclaimer: I'm not signed up for cryonics, though if I had kids, they would be.
I very much doubt that. At least with present technology you cannot self-modify to prefer dead babies over live ones; and there's presumably no technological advance that can make you want to.
If utility functions are those constructed by the VNM theorem, your utility function is your wants; it is not something you can have wants about. There is nothing in the machinery of the theorem that allows for a utility function to talk about itself, to have wants about wants. Utility functions and the lotteries that they evaluate belong to different worlds.
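For reference, a minimal statement of the representation being invoked (this is the standard VNM form, not anything specific to this thread):

```latex
% Von Neumann--Morgenstern: if a preference relation \succsim over
% lotteries satisfies completeness, transitivity, continuity, and
% independence, then
\exists\, u : X \to \mathbb{R} \quad \text{such that} \quad
L \succsim M \iff \mathbb{E}_{L}[u(x)] \ge \mathbb{E}_{M}[u(x)],
% with u unique up to positive affine transformation
% u \mapsto a u + b,\ a > 0.
% Note that u is defined over outcomes x \in X, not over utility
% functions; this is the sense in which the construction provides
% no "wants about wants".
```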
Are there theorems about the existence and construction of self-inspecting utility functions?
-- Rational!Quirrell, HPMoR chapter 20
In other words: how else can you justify a moral belief and consequent actions, except by saying that you really truly believe in your heart that you're Right?
We should not confuse the fact that almost all people other than Manson think he was morally wrong with the fact that his justification for his actions seems to me to be of the same kind as the justifications anyone else ever gives for their moral beliefs and actions.
Unlike Quirrell, Penn Jillette is not referring to "knowing in your heart" that your moral values are correct, but to "knowing in your heart" some matters of fact (which may then serve as a justification for having some moral values, or directly for some action).
In what way is "deserve" a matter of fact?
If you're a moral realist, and you think moral opinions are statements of fact (which may be right or wrong), then you think it's possible to "know in your heart" moral "facts".
If you're a moral anti-realist (like me), and you think moral opinions are statements of preferences (in other words, statements of fact about your own preferences and your own brain-wiring), then all moral opinions are such. And then surely Manson's statement of his preferences has the same status as anyone else's, and the only difference is that most people disagree with Manson.
What else is there?
However, it's true that Jillette talks about factual amoral beliefs like fairies and gods. So my comment was somewhat misdirected. I still think it's partly relevant, because people who believe in gods (i.e. most people) usually tie them closely to their moral opinions. It's impossible to discuss morals (of most humans) without discussing religious beliefs.
Edited OP to make it clear that you can provide a link to the place you found the quote, rather than needing to track down an authoritative original source.
It has come to be accepted practice in introducing new physical quantities that they shall be regarded as defined by the series of measuring operations and calculations of which they are the result. Those who associate with the result a mental picture of some entity disporting itself in a metaphysical realm of existence do so at their own risk; physics can accept no responsibility for this embellishment.
Sir Arthur Eddington, 1939, The Philosophy of Physical Science
Eight Ways to Build Collaborative Teams by Lynda Gratton and Tamara J. Erickson
This seems applicable as the LessWrong community is "large, virtual, diverse, and composed of highly educated specialists" and the community wants to solve challenging projects.
Source: http://www.prequeladventure.com/2014/05/3391/
thank you for posting this - now I have something new to read!
-- Tom Stoppard, The Real Thing
Plutarch, "De Auditu" (On Listening), a chapter of his Moralia.
This essay is also the original source of the much-quoted line "The mind is not a pot to be filled, but a fire to be ignited." It is variously attributed, but is a fair distillation of the original passage, which comes directly before the quote above:
On thrust work, drag work, and why creative work is perpetually frustrating --
"Each individual creative episode is unsustainable by its very nature. As a given episode accelerates, surpassing the sustainable long term trajectory, the thrust engine overwhelms the available supporting capabilities. ... Just as momentum builds to truly exciting levels…some new limitation appears squelching that momentum. ...The problem is that you outran your supporting capabilities and that deficit became a source of drag. Perhaps you didn’t have systems in place to capture leads. Perhaps you lacked the bandwidth necessary to follow up on all the new opportunities. Perhaps, due to lack of experience, you pursued the wrong opportunities. Perhaps you just didn’t know what to do next – you outran your existing knowledge base. In one way or another new varieties of drag emerge. The accelerating curve you had been riding becomes unsustainable and you find yourself mired in the slow build of the next episode. This is what we experience as anti-climax and temporary stagnation." -- Greg Rader, from his essay "A Pilgrimage Through Stagnation and Acceleration"
The whole piece is worth reading, it's really good -- http://onthespiral.com/pilgrimage-through-stagnation-acceleration
Hat tip to Zach Obront for linking me to it originally.
Nassim Taleb
-- Reagan and Scipio debate the nature of definitions. From Templar, Arizona
Whilst arguing that uncertainty is best measured using numbers and probabilities:
[missing the point]
On the contrary, combining adverbs is easy. If X is very uncertain, and Y is very uncertain, then X - Y is very, very uncertain. [/missing the point]
^_^
Nassim Taleb
flying vs aeroplanes?
Scott Adams on consciously controlling your own moods and feelings
Donald Knuth on the difference between theory and practice.
Duplicate.
Or with smart people who profit at the state's expense when it rescues fools from their mistakes. If it's known that folly has no adverse results, people will take more risks.
While this is true, it may also be the case that humans in the default state don't take enough risks. Indeed, an inventor or entrepreneur bears all the costs of bankruptcy but captures only some of the benefits of a new business. By classical economic logic, then, risk-taking is a public good, and undersupplied. Which said, admittedly, not all risk-taking is created equal.
That's exactly wrong. Bankruptcy releases the entrepreneur from his obligations and transfers the costs to his creditors.
Not to say that the bankruptcy is painless, but its purpose is precisely to lessen the consequences of failure.
This premise doesn't seem true (for all that the conclusion is accurate). Our entire notion of bankruptcy serves the purpose of putting limits on the cost of those risks, transferring the burden onto creditors. An example of an alternate cultural construct that comes closer to making the entrepreneur bear all the costs of the risk is debt slavery. Others include various forms of formal or informal corporal or capital punishment applied to those who cannot pay their debts.
(Edited to add context)
Context: The speakers work for a railroad. An important customer has just fired them in favor of a competitor, the Phoenix-Durango Railroad.
It gets at the idea talked about here sometimes that reality has no obligation to give you tests you can pass; sometimes you just fail and that's it.
ETA: On reflection, what I think the quote really gets at is that Taggart cannot understand that his terminal goals may be only someone else's instrumental goals, that other people are not extensions of himself. Taggart's terminal goal is to run as many trains as possible. If he can help a customer, then the customer is happy to have Taggart carry his freight, and Taggart's terminal goal aligns with the customer's instrumental goal. But the customer's terminal goal is not to give Taggart Inc. business, but just to get his freight shipped. If the customer can find a better alternative, like a competing railroad, he'll switch. For Taggart, of course, that is not a better alternative at all, hence his anger and confusion.
(Apologies for lack of context initially).
Without context, it's a bit difficult to see how this is a rationality quote. Not everyone here has read Atlas Shrugged...
I've read AS a while ago, and I still don't remember enough of the context to interpret this quote...
-- Meta --
Shouldn't this be in Main rather than Discussion? I PM'ed the author, but didn't get a response.
EDIT: Thanks.
"Did many people die?"
"Three thousand four hundred and ninety-two."
"A small proportion."
"It is always one hundred percent for the individual concerned."
"Still..."
"No, no still."
-Iain M. Banks, Look to Windward
Does this quote have any rationalist content beyond the usual anti-deathism applause light?
And here I looked at that and saw an example of how not to "shut up and multiply", though I suppose it could also be a warning about scope insensitivity / psychophysical numbing if the risk at hand required an absolute payment to stave off, rather than a per-capita payment, since in the former case only absolute numbers matter, and in the latter case per-capita risks matter.
Maybe I need to include more context. This conversation occurs after the multiplication was done. This was discussing the aftermath, which had been minimized as much as the minds in question could manage. I took it to mean that, once you have made the best decision you can, there is no guarantee that you will be happy with the outcome, just that it would likely have been worse had you made any other decision.
I think the failure to include that context and make your interpretation clear means that it's a bad rationality quote, because it's far too easily taken as a "consequentialism boo!" quote.