Rationality Quotes December 2014
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments (440)
G. K. Chesterton
I think all the work here is done by determining what actually constitutes a precipice.
"Don’t let anybody discourage you or tell you that intelligence doesn’t pay or that success in life has to be achieved through dishonesty or through sheer blind luck. That is not true. Real success is never accidental and real happiness cannot be found except by the honest use of your intelligence."
Ayn Rand
Too strong.
Nobody EVER got successful from luck? Not even people born billionaires or royalty?
Nobody can EVER be happy without using intelligence? Only if you're using some definition of happiness that includes a term like "Philosophical fulfillment" or some such, which makes the issue tautological.
I don't think you're applying the negation correctly; "not every success was from luck" means "at least one success was not from luck." Similarly, if you broaden your viewpoint to before the moment of someone's birth, it seems silly to claim that it's an accident that they were born a billionaire or royalty; it's not like their ancestors put no planning into acquiring their wealth or their titles.
Not really; this is a nontrivial empirical claim that turns out to be correct. People with solid philosophical grounding are measurably happier (on standard psychological surveys of happiness) than people without.
I didn't read that as a negation of "success in life has to be achieved... through sheer blind luck" but rather of "real success is never accidental". Both, of course, are descriptively false (at least for values of "real" that don't bake in the conclusion), though as a normative statement I'd rate the former as much more problematic.
That was the impression I had. Yes, Rand is making the normative claim that 'accidental' success is not 'real,' and that 'happiness' acquired in ways other than 'honest use of your intelligence' is not 'real,' but those seem like fine normative claims to me.
They sound like no true Scotsman to me. And they make the whole thing tautological. Would you consider it worth quoting if she said "nobody ever achieves anything by luck, except for the times they get lucky"? Or "happiness is only achieved through honest use of your intelligence if it's achieved through honest use of your intelligence"?
Did you read what you linked to?
Where is the counterexample? Success refers to an abstract concept. Luck and success are different things. Luck usually contributes to success, but luck usually implies undeserved success. So successful people get lucky, but on average everybody gets lucky sometimes. The quote encourages people to focus on the things in which luck plays a minor factor. That's what intelligence is for, intelligence is not for optimizing luck.
And yes, that does make it tautological. So what?
Some people hold the view that all normative claims are either tautological or false. Does that describe you, or can you provide an example of a normative statement that you consider true and non-tautological?
In the second case, I'm happy to discuss underlying value systems and the similarities or differences. In the first, I don't think I'm interested in discussing whether or not value systems should be communicated through normative claims.
-Richard Hamming, Mathematics on a Distant Planet
I agree with the quote, but don't really see any point or importance to it.
It's actually called Mathematics on a Distant Planet.
Thanks! I've made the change.
G. K. Chesterton, The Everlasting Man
Chesterton was talking about Neoreaction, right?
ETA: A note of clarification for those in need of it: I am not actually claiming that Chesterton was talking about Neoreaction.
A quote from my son (just turned eleven years):
This sounds trite but I think it is actually the correct (or most sensible) answer. I was kind of impressed. Maybe we should ask children more of these grand questions and take their factual answers at face value instead of reading more depth into them than is there.
Indeed, I suppose their worldview is much clearer and in some ways less biased than ours. When a child is born, he sees the world as it is, not through the many prisms of our subjective value judgements.
I prefer:
Kurt Vonnegut, Breakfast Of Champions
https://www.youtube.com/watch?v=SvMiXk2gGSk
Insightful? I give him credit for his epistemic humility, at least.
“Never confuse honor with stupidity!” ― R.A. Salvatore, The Crystal Shard
Publius Cornelius Scipio Nasica Corculum
Since you're probably aware that one Roman senator (Cato) ended his speeches with "Carthage must be destroyed," you should also know that another responded with the opposite.
-- Douglas Hofstadter, Gödel, Escher, Bach
It is a good quote in general, but not quite a rationality quote.
I thought it was a nice illustration of the distinction between map and territory, or between different maps of the same territory. In other words, JFK and the speaker's uncle were very close together by a certain map, but that doesn't mean they were very similar in real life.
Better, but make sure you keep the stuff you don't want quoted in a separate paragraph.
G. K. Chesterton, Orthodoxy.
This seems like Chesterton is making it up completely. Most progressives base the impulse on the hope that things could be better; dealing with the decay of conservatism is not a hypothesis that even enters in their minds. The 'truth of conservatism' (at least, the straw-conservatism defined by Chesterton here) is taken for granted by most people: if things keep on going like this, they'll keep on being like this.
No one has ever become a feminist by saying 'my god! if we leave things alone, the patriarchy will keep becoming even more oppressive and brutal with each year! We need to fight this slide of the status quo, and incidentally, it would be nice if we could not just repair the rot but also yank the status quo towards feminism and get women the vote and stuff like that'.
No, it tends to be more like 'the status quo is awful! Let's try to move it towards getting women the vote and stuff like that'.
– Jimmy Kimmel
That's just not true. Death rate, as the name implies, is a rate - the population that died in this year divided by the average total population. If "death rate" is 100%, then "birth rate" is 100% by the same reasoning, because 100% of people were born.
That depends on whether fetuses are people ...
If yes, the actual birth rate is around 80%. http://www.cdc.gov/reproductivehealth/Data_Stats/Abortion.htm
It's actually only about 45 percent. The death rate for the world as a whole is about 93 percent.
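The disagreement here is really between a per-year rate and a lifetime probability, which a few lines of arithmetic make concrete (the numbers below are hypothetical round figures, not real demographic data):

```python
# Hypothetical round numbers for illustration only:
# a population of 1,000,000 with 8,000 deaths and 12,000 births in one year.
population = 1_000_000
deaths_this_year = 8_000
births_this_year = 12_000

# A "rate" in the demographic sense is per period, not per lifetime:
crude_death_rate = deaths_this_year / population   # 0.008, i.e. 0.8% per year
crude_birth_rate = births_this_year / population   # 0.012, i.e. 1.2% per year

# The "100%" figure is a different quantity entirely: the lifetime
# probability of dying, with everyone-ever-born as the denominator.
print(crude_death_rate, crude_birth_rate)  # → 0.008 0.012
```

Conflating the two is what lets both "the death rate is 100%" and "the birth rate is 100%" sound true at once.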
"As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation—or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind’s wings should have grown."
Ayn Rand
You lost me at "junk heap." There is no conscious choice available to a layperson ignorant of philosophy and logic, and such ways of life are perfectly copacetic with small-enough communities. If anything, it is the careful thinker who is more shackled by self-doubt, better understood as the Dunning-Kruger effect, but Ayn Rand has made it obvious she never picked up any primary literature on cognitive science so it's not surprising to see her confusion here.
Quote from 1971's The Romantic Manifesto.
Sorry you're so averse to negative descriptions of the average person's philosophy.
Yes there is, they can choose what music, TV, movies, videos etc to buy/view/play.
Do you mean communities where the leader knows about philosophy and can order people around?
It's reasonable to doubt certain things, but if learning increases your self-doubt then you're doing it wrong.
She was associated with Nathaniel Branden, a well-regarded psychologist. Cognitive science is a relatively new field.
I don't think she's confused, she's saying something you disagree with. If you think you've refuted it, I think you're the confused one.
A. False dichotomy - there are other choices. We might choose to compartmentalize our rationality, for example.
B. False dichotomy in a different sense - we actually don't have access to this choice. No matter how hard we work, our brains are going to be biased and our philosophies are going to be sloppy. It's a question of making one's brain marginally more organized or less disorganized, not of jumping from insanity onto reason. I'm suspicious that working with the insanity and trying to guide its flow is a better strategy than trying to destroy it.
C. Although not having a philosophy leaves us open to bias, having a philosophy can sometimes expose us to bias even further. It's about comparative advantage. Agnosticism has wiggle room that sometimes can be a place for bias to hide, but conversely ideology without self-doubt often serves to crush the truth.
A. How would you implement that choice?
B. We is a loaded term, speak for yourself. There's benefit to realizing that as a human you have bias. There's no benefit to declaring that you can't overcome some of this bias.
C Wouldn't that depend on your philosophy?
C. Yes.
B. Agreed that there's benefit to realizing we have bias, disagree that there's no benefit to declaring some biases aren't overcomeable. Trying to overcome biases takes effort. Wasted effort is bad. It's better to pursue mixed strategies that aim at instrumental rationality than to aim at the perfection described in the Rand quotation. Thoughts that seem complex or messy should not be something we shy away from, reality is complicated and our brains are imperfect.
A. I don't know how to describe how to do it, but I do it all the time. It's something humans have to fight against to avoid doing, as it's essentially automatic under normal conditions.
I think you are assuming hyperbolic discounting/short time preference. It requires a lot of effort to overcome bias, perhaps years. But there are times when it is worth it.
What perfection? Choosing philosophy? You can always update your philosophy.
There are also times when it's not worth it, in my opinion.
Rand contrasts "a conscious, rational, disciplined process of thought and scrupulously logical deliberation" with "a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain".
I think it's possible to avoid becoming such a disgrace without scrupulously logical deliberation. Most people are severely biased but are not as unhappy or helpless as Rand's argument would imply. Trimming the excesses of our biases seems more reasonable than eliminating them, to me.
If the bolded pair of words were struck, I'd agree completely. Different people will have different balls and chains.
This quote was from a speech given to West Point cadets. By no means are they identical, but it would be hard to find a more homogeneous group: same gender, roughly the same age, same nationality, and same general ideology.
AlyssaRowan On Hacker News
Schneier on Security blog post
The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.
-- Bertrand Russell
Duplicate.
Abigail
(Is self-reference ok? This struck me.)
"There is no such thing as uncharted waters. You may not have the chart on hand to show you how to navigate these waters, but the charts exist. Google them."
Joe Queenan, WSJ 11/30/14
Too strong to be literally true but still
Think it's false, both literally and figuratively. Moreover, the guy needs to get out of his cubicle and go to interesting places :-)
As far as literal charts of literal bodies of water on the surface of the earth go, satellite photography has actually pretty much solved that problem.
As far as metaphorical waters, human civilization is larger than most people really think, and consists disproportionately of people finding and publishing answers to interesting questions. "Don't assume the waters are uncharted until you've done at least a cursory search for the charts" is sound advice.
95% of the people, 95% of the time is a less good standard when dealing with interesting people, isn't it ;)
EDIT: Downvote for... accepting a different opinion? Duly noted; will do so more quietly in future.
There's a law about that :-P
-- Lisa Bradley, a character in Brennan Lee Mulligan & Molly Ostertag's Strong Female Protagonist
Actually, well I suppose it depends on what you mean by "met".
There's no such things as gods.
I think this is about the only scenario on LW that someone can be justifiably downvoted for that statement.
I up-voted it for dissenting against sloppy thinking disguised as being deep or clever. Twisting the word 'god' to include other things that do not fit the original, literal or intended meaning of the term results in useless equivocation.
I don't see why. Non-agents simply don't fit the definition of "god", so equivocating on the definition of "god" from "world-changingly powerful agent" to "abstract personification of causality itself" does not really shed any light on anything.
It isn't meant to be some rigorous account of how the world works, it's a deliberate mythology. I'm not entirely convinced as to whether it's a good idea, but aspie criticisms that amount to "god don't real" are missing the point entirely.
http://www.moreright.net/postrat-religion/
Actually, upon reading that article you've linked, I've found it to be cogent and well-written but emotionally toxic, tenuous in its connection to facts, and philosophically/existentially filled to the brim with lost purposes. To give examples, the obsession with preserving "European civilization" and the admiration for the internet's cult of ultra-masculinity (which should really be called pseudo-masculinity since it so exaggerates the present day's Masculinity Tropes that it dramatically misses other modes of masculinity, despite their actual historicity) portray the writer as chiefly, bizarrely concerned with present-day cultural trends rather than with the kind of good-in-themselves terminal values around which one could design a society from scratch if necessary.
I mean, sorry to be uncharitable in my reading, but I just don't see why I should want to build white European Christian or post-Christian society, in the first place. I know that reactionary and conservative communities give immense weight and worry to cultural goal-drift away from whatever weird version of white Christian/post-Christian society it is they actually like (derisive tone because it often seems they like The Silmarillion more than Actually Existing Europe), but it seems to me that the only way to really avoid random drift is to ground one's worldview in things that are actually, verifiably, literally true. Only an epistemic thought process will obtain consistent, nonrandom, meaningful results.
And since there is a truth of the matter when it comes to human beings' emotional and existential needs, it seems you couldn't get anywhere by doing anything but anchoring yourself to that truth and drawing as close as possible. Any deviation into lost purposes, ill-posed questions, and fallacious reasoning will be punished.
If you attach yourself to some invented image of some particular time-period in European history and try to pump all the entropy out of it, try to optimize everything to forcibly fit that image you've got in your head, you will only succeed in destroying everything else that you aren't acknowledging you care about. And since that image isn't even a terminal goal, a good-in-itself, the everything else will just be more-or-less everything.
If you separate Myth from Truth, Truth will burn you in hellfire. There is no escape.
(Also, citing an imageboard as a source of information about mythology and religion is just embarrassingly bad scholarship.)
Says the guy citing a deliberately informal wiki as a source of information about historical cultures :P
Fine, but Dungeons and Dragons is also a constructed, deliberate mythology, and you wouldn't respond to a quote about "You haven't met gods" by saying, "Actually, I role-played encountering Boccob the Uncaring, God of Magic, just last Tuesday."
Well actually, I would respond that way, but as a joke. I would not expect to be taken seriously.
Let's look at why we are asking the question. The relevant property in this discussion is "will punish you for being 'uppity'". Being an agent isn't directly relevant to that.
But causality can't punish you for being uppity. You basically just cannot be uppity against causality.
Why are you arguing about taste? People adapt metaphors to help them think and act effectively. Human brains like agent-metaphors a lot: witness the popularity of the Moloch essay.
Your problem with classical religion might be that a lot of silly people are classically religious.
"But is the metaphor true" is kind of a silly question, imo.
Also, if there is an agenty God, it/she/he made sure to construct a world where nudges here and there are hard to trace.
No, my actual problem here is that these metaphors are not useful for making predictions.
Is that your line for good language use, prediction effectiveness? Do you have an issue with Scott's Moloch metaphor also? What about poetic language more generally?
Look: I am not a major fan of using poetic language to describe real life. Really. Just don't like it. And the problem with Scott's "metaphor" is that it wasn't a metaphor: he actually explicitly tagged the post as having an epistemic status of Fanciful Visionary Visions. It wasn't supposed to be anything approaching a useful sociological analysis that cuts reality at the joints. It wasn't supposed to be a rational way to think about the world.
But because it told a colorful story that stirs the emotions, people remember it far more prominently than any of Scott's writing on mere statistics that actually addresses reality, and now I have to put up with people pretending there's a demon at work in the world.
Fair enough. Why insist others share this preference? I like poetry (T. S. Eliot for example).
A ton of math is about metaphors (Lakoff wrote a book about this).
This is false. Not only does the LW wiki have a definition of "god" that is a non-agent, the study of theology points one to numerous gods that people believe in that are non-agents. There's a reason that many of the popular monotheisms refer to their god as a personal god; it stands in contrast to the heresy of a non-personal (i.e., non-agent) god.
I'm not sure this is very rational. Assuming that you are more competent than you really are -- which seems to be a matter of hubris -- is indeed capable of destroying you.
Yes, but more favorable outcomes are also possible, like becoming the [e.g. 43rd] President.
I think the way it works is that people are built to have hubris for signalling purposes, and then they're built to be lazy and risk-averse to counter the dangers of hubris. If you don't get rid of risk-aversion and akrasia but you do get rid of hubris, that can be problematic.
Or by physics. Not all consequences for overconfidence are social.
— George R. R. Martin, Wikiquote, audio interview source
(Changed from an earlier quote I decided I'd keep for later.)
Wow. I am, uh, embarrassed to say that I somehow managed to get caught up in the replies to this comment without ever actually seeing the quote itself until now. (In my defense, I did get here through the Recent Comments sidebar, but still... yeah, not one of my prouder moments.) So, now that I've finally gotten around to reading the quote, uh...
...Maybe I'm dense, but I'm not quite understanding this one. I mean, I understand that it's an explanation of Martin's philosophy of writing, but I'm not really seeing the rationality tie-in. I could probably shoehorn in an explanation for why and how it relates, but the problem with such an explanation is that it would be exactly that: shoehorned in. I feel as though advice of this sort would be much better suited to a writing thread than to a rationality quotes thread. Could someone explain this one to me? Thanks in advance.
Fair point. To be honest, I just got this quote from Martin's Wikiquote page after I decided to save the original and needed something to replace it. (I suppose I could've done something like change the whole post to "[DELETED]" and then retract it, but this seemed good enough at the time.)
I can't really make a rigorous case for this quote's appropriateness here, what actually drove my decision to use this was basically a hunch. My after-the-fact rationalization is that maybe this quote sort of touched on the Beyond the Reach of God sense that death is allowed to happen to anyone, at any time, and especially in dangerous situations, as opposed to most fiction which would only allow the hero to die in some big heroic sacrifice?
For an after-the-fact rationalization, that's actually not bad. On the other hand, I think Martin might actually push it a little too far; reality isn't as pretty as most fiction writers make it out to be, true, but it isn't actively out to get you, either. The universe is just neutral. While it doesn't prevent people from suffering or dying, neither does it go out of its way to make sure they do. In ASoIaF, on the other hand, it's as though events are conspiring to screw everyone over, almost as if Martin is trying to show that he isn't like those other writers who are too "soft" on their characters. In doing so, however, I feel he fell into the opposite trap: that of making his world too hostile. Everything went wrong for the characters, which broke my suspension of disbelief every bit as badly as it would have if everything had gone right.
His reputation as a "bloody minded bastard" aside, Martin has creznaragyl xvyyrq bss n tenaq gbgny bs bar CBI punenpgre va gur ebhtuyl svir gubhfnaq phzhyngvir cntrf bs gur NFbVnS frevrf fb sne (abg pbhagvat cebybthr/rcvybthr punenpgref, jubz ab bar rkcrpgf gb fheivir sbe zber guna bar puncgre). Gur raqvat bs gur zbfg erprag obbx yrnirf bar CBI punenpgre'f sngr hapyrne, ohg gur infg znwbevgl bs gur snaqbz rkcrpgf uvz gb or onpx va fbzr sbez be nabgure. (Aba-CBI graq gb qebc yvxr syvrf, ohg gur nhqvrapr vf yrff nggnpurq gb gurz.)
For me, it's not just a problem of suspension of disbelief, it's a problem of destroying involvement in the story. If too much bad happens to the characters, I'm less likely to be emotionally invested in them. Martin's "The Princess and the Queen" (a prequel to ASoIaF) in Dangerous Women is especially awful that way, though the characters aren't developed very much, either. I'm hoping he does a better job in the main series.
Prediction: 30% chance it's a Christmas related quote.
Nope, just saving my first choice of quote for the beginning of the next thread. I figure if I post a good quote now, people will mostly only see it from the recent comment and recent quote feeds, and after a few others get posted, people will mostly forget about it and not, if they were to like it, upvote it. Whereas if it were one of the first posts in a thread, and people liked it and started upvoting it, it would stay high on the page and gather even more attention and upvotes, creating a positive feedback loop which would give me karma.
Machiavellian, isn't it? I doubt it'll work out that well, but I figure it's worth a shot.
^Everyone should upvote this in an ironic celebration of your honesty.
I think that we use "Best" (which is a complicated thing other than "absolute points") rather than "Top" (absolute points) precisely to reduce the effectiveness of that strategy.
That's interesting. What criterion/criteria does "Best" use, then?
And on a different but related note: does it really negate the strategy? I note that, despite using the "Best" setting, this page still tends to display higher-karma comments near the top; furthermore, most of those high-karma comments seem to have been posted pretty early in the month. That suggests to me that Gondolinian's strategy may still have a shot.
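For what it's worth, LessWrong's codebase descends from Reddit's, whose "Best" sort ranks comments by the lower bound of the Wilson score confidence interval on the upvote fraction rather than by net karma. Whether this site runs exactly that formula is an assumption on my part, but a sketch of the idea looks like:

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the upvote fraction.

    This is the "confidence" sort in Reddit's open-source code; assuming
    (unverified) that LW's "Best" uses the same ranking.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n          # observed upvote fraction
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# A unanimous +10/-0 comment outranks a contested +60/-40 one,
# even though the latter has twice the net karma:
print(wilson_lower_bound(10, 0) > wilson_lower_bound(60, 40))  # → True
```

If that is the criterion, it does blunt the early-poster strategy somewhat: piling up absolute votes helps less than keeping the upvote fraction high, though early comments still accumulate votes faster and so shrink their confidence interval sooner.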
Raymond Smullyan, This Book Needs No Title, taking joy in the merely real
Frequency is not importance. I think this quote has more humorous than practical merit.
Ayn Rand, to a Catholic Priest.
Philosophers have played a game going way back where they believe that popular religion comes in handy as a fiction for keeping the mob in line, but they view themselves as god-optional. The philosophes in the Enlightenment started the experiment of letting the mob in on the truth, and the experiment has apparently gone so far in parts of Europe like Estonia that some populations have lost familiarity with Christian beliefs, or even how to pronounce Jesus' name in their own language. Or so Phil Zuckerman claims:
https://books.google.com/books?id=C-glNscSpiUC&lpg=PP1&dq=phil%20zuckerman&pg=PA96#v=onepage&q=estonia&f=false
The mob is pretty well educated these days, and the standard of living is so high that there's much less incentive to step out of line. I don't think we can compare modern nations to historical nations to make any claim about whether religion keeps people in line.
The claim that people can't pronounce Jesus' name might apply to former Soviet Union countries, but I doubt it applies anywhere else in Europe.
Do you know that Jesus's actual name is Yeshua?
We don't know that. It was likely some variant of the name commonly translated as "Joshua" in English. It could have been Yeshua or Yehoshua or a variety of slightly Aramacized variants of that.
But the English "Jesus" is still far off.
Sure, but I fail to see how that's relevant to the point in question.
-- Pope Francis, Open Mind, Faithful Heart: Reflections on Following Jesus
I assume this is a pro-cryonics quote, but I don't quite see how it relates to rationality. Its point is, quite clearly, "Accept Jesus as your personal savior and gain the gift of Eternal Life".
It seems to me it's anti-death rather than pro-cryonics; the two aren't quite the same, and in particular being anti-death no more implies being pro-cryonics than it implies being pro-Jesus. And while no doubt Bergoglio's (= Pope Francis's) anti-death-ism is tightly tied up with his pro-Jesus-ism, what he's written here can stand on its own as an expression of an anti-death attitude.
(I'm not sure being strongly opposed to death should really qualify something as a Rationality Quote either, but that's a different complaint from "it's really all about Jesus".)
I think it's worth clarifying that Pope Francis and Jorge Mario Bergoglio are one and the same person.
Lesson learned: do not just copy-paste from Amazon.
--Marcel Proust
"You should never bet against anything in science at odds of more than about 10^12 to 1 against."
Alas, as nice a quote as it is, it seems to be bogus:
Saul Alinsky, in his Rules for Radicals.
there is a familiar phenomenon here, in which a certain kind of would-be economic expert loves to cite the supposed lessons of economic experiences that are in the distant past, and where we actually have only a faint grasp of what really happened. Harding 1921 “works” only because people don’t know much about it; you have to navigate through some fairly obscure sources to figure out [what actually happened]. And the same goes even more strongly — let’s say, XII times as strongly — when, say, [Name] starts telling us about the Emperor Diocletian. The point is that the vagueness of the information, and even more so what most people [think they] know about it, lets such people project their prejudices onto the past and then claim that they’re discussing the lessons of experience.
Paul Krugman on the use of examples to obscure rather than clarify
What's the alternative? Cite what's currently going on in other countries (people generally aren't too familiar with that either)? Generalize from one example (where people don't necessarily know all the details either)?
Yes. Because both of those have actual data, and are thus useful - your reasoning can be tested against reality.
We just really don't know very much about the Roman economy, and are unlikely to find out much more than we currently do. Generalizing from one example isn't good science, logic, or argument, but it's better than generalizing from the fog of history. Not a lot better - economics only very barely qualifies as a science on a good day - but Krugman is completely correct to call people out for going in this direction, because doing so just outright reduces it to storytelling.
On the other hand we do know a lot about what happened in 1921, Krugman just wishes we didn't because it appears to contradict his theories.
Um, no. History contains evidence, it's not particularly clean evidence, but evidence nonetheless and we shouldn't be throwing it away.
-- Adam Cadre
This seems like explaining vs. explaining away. The process by which better players pick up wins is by winning the "contest of athletic prowess." The game itself is interesting to watch because we like to see competent people play, and when upsets happen, they often happen for reasons that are easily displayed and engaged with in terms of the mechanics of the game.
This is similar to choosing strict determinism over compatibilism. Which players are the "best" depends on each of those players' individual efforts during the game. You could extend the idea to the executives too, anyway--which groups of executives acquire better players is largely a function of which have the best executives.
Efforts are only one variable here, and the quote did say "largely a function of". That being said, look at how often teams replay each other during a season with a different winner.
--Shunryu Suzuki
I think this is a very important sentiment. I'm however not sure how to get others to adopt it.
It's the wisdom that comes with age. Doctors call it Alzheimer's.
:-D
Are you saying that because you don't understand the point that the original quote wants to make, or are you using it to try to make an unrelated joke?
I'm using it to make a related joke.
Alzheimer's first attacks short term memory before long-term memory. It makes learning harder. It has little to do with being open to new learning.
The quote doesn't talk about easier learning. Alzheimer's makes it easier to approach problems as "a beginner", with "a fresh mind" :-P
Tough crowd.
Or, in ChristianKl's case, tough Kraut. Since IIRC he's a Berliner (an actual one, not like JFK).
Tyler Cowen
--Eugene Volokh, "Liberty, safety, and Benjamin Franklin"
A good example of the risk of reading too much into slogans that are basically just applause lights. Also reminds me of "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which."
The quote always annoyed me too. People bring it up for ANY infringement on liberty, often leaving off the words "Essential" and "Temporary", making a much stronger version of the quote (and, of course, an obviously wrong one).
Tangentially, Sword of Good was my introduction to Yudkowsky, and by extension, LW.
-- Oliver Burkeman, The Guardian, May 21, 2014
I enjoyed this quote, and have had a great number of self-deprecating laughs with other young professionals about how we were totally winging it.
But it is not true.
There are those winging it, but they are faking it until they make it, and make up a smaller group than represented above. The much larger group is made from a rainbow of wrong! Biases, ignorance, bad information, misinformation, conflicting agendas, the list goes on.
The group of people just winging it, pushing their limits, faking it until they make it, are only one piece of the bigger picture of stuff done wrong. It is not fair to overrepresent their influence. Although, it is always a comfort to know there are others out there in the same boat, just winging it.
George S. Patton
Ideally, everyone should be thinking alike. How about
Humans have bounded rationality, different available data sets, and different sets of accumulated experience (which is frequently labeled as part of intuition).
Twenty art students are drawing the same life model. They are all thinking about the task; they will produce twenty different drawings. In what world would it be ideal for them to produce identical drawings?
Twenty animators apply for the same job at Pixar. They put a great deal of thought into their applications, and submit twenty different demo reels. In what world would it be ideal for them to produce identical demo reels?
Twenty designers compete to design the new logo for a company. In what world would it be ideal for them to come up with identical logos?
Twenty would-be startup founders come up with ideas for new products. In what world would it be ideal for them to come up with the same idea?
Twenty students take the same exam. In what world would it be ideal for them to give the same answers?
Twenty people thinking alike lynch an innocent man. Does this happen in an ideal world?
In 1 and 2, the thinking is not the type being referred to in the quote. In 3, assuming only one of theirs gets chosen, then there are 19 failures, hence 19 non-thinkers or non-sufficient thinking. In 4, they're not all trying to answer the same question "what's the best way to make money", but the question "what's a good way to make money". (That may also apply to 3.) I touched on the difference in another thread. In 5, yes, every test-taker should give the correct answer to every question. Obvious for multiple choice tests, and even other tests usually only have one really correct answer, even if there may be more than one way to phrase it.
In 6, first of all, your example is isomorphic to its complement; where 20 people decide not to lynch an innocent man. If you defend the original quote, then some of them must not be thinking. And the actual answer is that my quoted version is one-sided; agreement doesn't imply idealism, idealism implies agreement.
I could add a disclaimer: everyone should be thinking alike in cases referred to by the first quote. I don't have a good way to narrow down exactly what that is off-hand right now; it's kind of intuitive. Do you have an example where my claim conflicts directly with what the first quote would say, and where you think it's obvious in that scenario that they are right and not me?
You are invited by a friend to what he calls a "cool organization". You walk into the building, and are promptly greeted by around twenty different people, all using variations on the same welcome phrase. You ask what the main point of the organization is, and several different people chime in at the same time, all answering, "Politics." You ask what kind of politics. Every single one of them proceeds to endorse the idea that abortion is unconditionally bad. Now feeling rather creeped out, you ask them for their reasoning. Several of them give answers, but all of those answers are variations of the same argument, and the way in which they say it gives you the feeling as though they are reciting this argument from memory.
Would you be inclined to stay at this "cool organization" a moment longer than you have to?
Now substitute "abortion is unconditionally bad" with "creationism should not be taught as science in public schools".
If you would still be creeped out by that, then your creep detector is miscalibrated; that would mean nobody can have an organization dedicated to a cause without creeping you out.
If you would not be creeped out by that, then your initial reaction to the abortion example was probably being mindkilled by abortion, not being creeped out by the fact that a lot of people agreed on something.
Just because I agree with their ideas doesn't mean I won't find it creepy. A cult is a cult, regardless of what it promotes. If I wanted to join an anti-creationist community, I certainly wouldn't join that one, and there are plenty such communities that manage to get their message across without coming off as cultish.
The example is supposed to sound cultist because the people think alike. But I have a hard time seeing how a non-cultist anti-creationist group would produce different arguments against creationism.
The non-cultist group could of course not all use the same welcome phrase, but that's not really the heart of what the example is supposed to illustrate.
There are multiple anti-creationist arguments out there, so if they all immediately jump to the same one, I'd be suspicious. But even beyond that, it's natural for humans to disagree about stuff, because we're not perfect Bayesians. If you see a bunch of humans agreeing completely, you should immediately think "cult", or at the very least "these people don't think for themselves". (I'd be much less suspicious if we replace humans with Bayesian superintelligences, however, because those actually follow Aumann's Agreement Theorem.)
Yes, actually, and I don't see why it is creepy despite your repeated assertions that it is.
And if they gave completely different arguments, you'd complain about the remarkable co-incidence that all these arguments suggest the same policy.
Difference of opinion, then. I would find it creepy as all hell.
I probably would, yes, but I would still prefer that world to the one in which they gave only one argument.
Now you're just arguing from creepiness.
Just because people should reach the same conclusions does not imply they should always do the same thing; e.g. some versions of chicken have an optimal solution where both players have the same options but should do different things. (In a one-off with binding precommitments (or TDT done right), where the sum of outcomes from their doing different things is higher than any symmetrical outcome, they should commit to choosing randomly in coordination.)
This example looks similar to me; the cool cultists don't know how to assign turns. Even if I had several clones, we wouldn't all be doing the same things; not because we would disagree on what was important, but because it's unnecessary to do some things more than once.
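The chicken digression above can be made concrete with a quick expected-value check. The payoff numbers below are illustrative assumptions (not from the thread), chosen so that the asymmetric outcomes sum higher than any symmetric one, which is the case the comment describes:

```python
# Toy payoff matrix for Chicken: (row player's payoff, column player's payoff).
# Assumed numbers: both swerve -> (3, 3); one swerves -> swerver 2, driver 5;
# both drive straight -> (0, 0).
payoffs = {
    ("swerve", "swerve"): (3, 3),
    ("swerve", "straight"): (2, 5),
    ("straight", "swerve"): (5, 2),
    ("straight", "straight"): (0, 0),
}

# Best symmetric outcome: both swerve, 3 each.
both_swerve = payoffs[("swerve", "swerve")][0]

# Binding precommitment: flip a fair coin to decide who drives straight.
# Each player's expected payoff under coordinated randomization:
coordinated = (0.5 * payoffs[("swerve", "straight")][0]
               + 0.5 * payoffs[("straight", "swerve")][0])

print(both_swerve, coordinated)  # 3 vs 3.5: randomizing in coordination wins
```

Under these payoffs, agreeing on the conclusion ("randomize who goes straight") while taking different actions beats everyone doing the same thing.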
Also, this organization sounds really cool! Where can I join? (Seriously, I've never been in a cult before and would love to have the experience.)
You really don't want that.
edit: A concrete useful suggestion is to reorganize your life in such a way that you have better things to do with your time than be a tourist in other people's misery and ruin.
The quote is without a provenance that I can discover. If authentic, I presume that Patton was referring to military planning. I don't see a line separating that type of thinking from cases (1)-(4) and some of (5). Ideas must be found or created to achieve results that are not totally ordered. Thinking better is helpful but thinking alike is not.
Only if you redefine "thinking better" to retroactively mean "won". But that is not what the word "thinking" means.
I doubt any of those entrepreneurs are indifferent between a given level of success and 10 times that level.
Perhaps you are thinking only of a limited type of exam. There is only one correct answer to "what is 23 times 87?"[1] Not all exams are like that.
Philosophy:
Ancient history (from here):
The link also provides the marking criteria for the question. The ideal result can only be described as "twenty students giving the same answer" if, as in case (3), "the same answer" is redefined to mean "anything that gets top marks", in which case it becomes tautological.
I reject both of those. Agreement doesn't imply ideal, of course (case 6 was just a test to see if people were thinking). But neither does ideal imply agreement, except by definitional shenanigans. And your version of Patton's quote doesn't include the hypothesis of ideality anyway. Neither does Patton's. We are, or should be, talking about the real world.
What are those cases? Military planning, I am assuming, on the basis of who Patton was. Twenty generals gather to decide how to address the present juncture of a war. All will have ideas; these ideas will not all be the same. They will bring different backgrounds of knowledge and experience to the matter. In that situation, if they all agree at once on what to do, I believe Patton's version applies.
(1) Ubj znal crbcyr'f svefg gubhtug ba ernqvat gung jnf "nun, urknqrpvzny!" Whfg...qba'g.
Saul Alinsky, in his Rules for Radicals.
This one hit home for me. Got a haircut yesterday. :P
And a thousand female metalheads shall weep.
And you end up like this.
"Murphy's Laws of Combat"
One of my former fencing instructors had this as a sort of catchphrase. Needless to say, he was a pretty cool guy.
the map is not the territory. if it's stupid and it works, update your map.
Paul Graham
The situation is far worse than that. At least with a compiled program you can add more memory or run it on a faster computer, disassemble the code and see at which step things go wrong, rewind if there's a problem, interface with programs you've written, etc. If compiled programs really were that bad, hackers would have already won (as security researchers wouldn't be able to take apart malware), DRM would work, and no emulators for undocumented devices would exist.
The state of the mind is many orders of magnitude worse.
Also, I'd quibble with "we don't know why". The word I'd use is how. We know why, perhaps not in detail (although we sort of know how, in even less detail).
Plutarch, from Life of Theseus.
Lois McMaster Bujold
The less you care about "the respect" others show towards you, the less power idiots can exert over you. The trick is differentiating whose opinion actually matters (say, in a professional context) and whose does not (say, your neighbors').
Due to being social animals, we're prone to rationalize caring about what anyone thinks of us (say, strangers in a supermarket when your kid is having a tantrum -- "they must think I'm a terrible mom!" -- or in the neighbors case "who knows, I might one day need to rely on them, better put some effort into fitting in"). Only very few people's opinions actually impact you in a tangible / not-just-social-posturing way. (The standard answer on /r/relationships should be "why do you care about what those idiots think, even in the unlikely case they actually want to help your situation, as opposed to reinforcing their make-believe fool's paradise travesty of a world view".)
Interestingly, internalizing such an IDGAF attitude usually does a good job of signalling high status, in most settings. Sigh, damned if you do and damned if you don't.
"It’s much better to live in a place like Switzerland where the problems are complex and the solutions are unclear, rather than North Korea where the problems are simple and the solutions are straightforward."
Scott Sumner, A time for nuance
That simply means that Switzerland has already solved the easier problems North Korea struggles with. To paraphrase, an absence of low-hanging fruit on a well-tended tree means you're probably in a garden.
The problems in North Korea are not so simple with straightforward solutions, when we look at them from the perspective of the actors involved.
For the average citizen in North Korea, there are no clear avenues to political influence that don't increase rather than decrease personal risk. For the people in North Korea who do have significant political influence, from a self-serving perspective, there are no "problems" with how North Korea is run.
North Korea's problems might be simple to solve from the perspective of an altruistic Supreme Leader, but they're hard as coordination problems. Some of our societal problems in the developed world are also simple from the perspective of an altruistic Supreme Leader, but hard as coordination problems. Some of the more salient differences are that those problems didn't occur due to the actions of non-altruistic or incompetent Supreme Leaders in the first place, and aren't causing mass subsistence-level poverty.
-- Ferrett Steinmetz
But is it only a human behavior? I'd think anything with cached thoughts/results/computations would be similarly vulnerable.
That's true of most frequently referenced elements of human nature, if not all of them.
Even Love.
~The Homo Sapiens Class has a trusted computing override that enables it to lock itself into a state of heightened agreeability towards a particular target unit. More to the point: it can signal this shift in modes in a way that is both recognizable to other units, and which the implementation makes very difficult for it to forge. The Love feature then provides HS units on either side of a reciprocated Love signalling a means of safely cooperating in extremely high-stakes PD scenarios without violating their superrationality circumvention architecture.
Hmm. On reflection, one would hope that most effective designs for time-constrained intelligent (decentralized, replication-obsessed) agents would not override superrationality ("override": is it reasonable to talk about it as a natural consequence of intelligence?), and that, then, the love override may not occur.
Hard to say.
Thomas J. McKay, Reasons, Explanations and Decisions
I don't know if I agree with this. Suppose the stock market is driven by runaway herd behavior. If that's the case, then an inexplicably bad random perturbation might have cascading effects. Saying that the initial slump in the market is driving further decline seems accurate to me.
That would be a slump in the market caused by a decline in stock prices :)
I don't understand what you're trying to say. As used in the original quote they are interchangeable synonyms.
I was poking fun at that.
--Rudyard Kipling, "Dane-Geld"
A nice reminder about the value of one-boxing, especially in light of current events.
Well, when this capitulation happened in 2012 no one except a few "right-wing nuts" seemed to care.
This was definitely not the right link to use, at all - how about wikipedia instead? Nor am I sure what point you want to make besides scoring political points - how about specific recommendations?
You don't see that last link as a publicity stunt? I tentatively suspect that it is - though maybe I should put that under 50% - with a lot of the remaining probability going to blackmail of some individual(s).
Jeff Bezos
E. T. Jaynes, Probability: The Logic of Science
But, as compiler optimizations exploit increasingly recondite properties of the programming language definition, we find ourselves having to program as if the compiler were our ex-wife’s or ex-husband’s divorce lawyer, lest it introduce security bugs into our kernels, as happened with FreeBSD a couple of years back with a function erroneously annotated as noreturn, and as is happening now with bounds checks depending on signed overflow behavior.
Hacker News comment
Wouldn't something good happening correctly result in a Bayesian update on the probability that you are a genius, and something bad a Bayesian update on the probability that someone is an idiot? (Perhaps even you.)
Not without a causal link, the absence of which is conspicuous.
Not necessarily. Causation might not be present, true, but causation is not necessary for correlation, and statistical correlation is what Bayes is all about. Correlation often implies causation, and even when it doesn't, it should still be respected as a real statistical phenomenon. All Jiro's update would require is that P(success|genius) > P(success|~genius), which I don't think is too hard to grant. It might not update enough to make the hypothesis the dominant hypothesis, true, but the update definitely occurs.
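The inequality in the comment above, P(success|genius) > P(success|~genius), can be turned into an explicit Bayes update. The prior and likelihoods below are made-up numbers for illustration only:

```python
# Assumed numbers: a low prior on genius, and a higher success rate for
# geniuses than for everyone else (luck still lets non-geniuses succeed).
p_genius = 0.01
p_success_given_genius = 0.5
p_success_given_not = 0.1

# Bayes' rule: P(genius | success)
p_success = (p_genius * p_success_given_genius
             + (1 - p_genius) * p_success_given_not)
posterior = p_genius * p_success_given_genius / p_success

# The update is real but modest: the posterior rises above the prior
# without making "genius" the dominant hypothesis.
print(round(posterior, 4))
```

With these numbers the posterior lands under 5%, which matches the comment's point: the update definitely occurs, but it need not make the flattering hypothesis dominant.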
"Because" (in the original quote) is about causality. Your inequality implies nothing causal without a lot of assumptions. I don't understand what your setup is for increasing belief about a causal link based on an observed correlation (not saying it is impossible, but I think it would be helpful to be precise here).
Jiro's comment is correct but a non-sequitur because he was (correctly) pointing out there is a dependence between success and genius that you can exploit to update. But that is not what the original quote was talking about at all, it was talking about an incorrect, self-serving assignment of a causal link in a complicated situation.
Yes, naturally. I suppose I should have made myself a little clearer there; I was not making any reference to the original quote, but rather to Jiro's comment, which makes no mention of causation, only Bayesian updates.
Because P(causation|correlation) > P(causation|~correlation). That is, it's more likely that a causal link exists if you see a correlation than if you don't see a correlation.
As for your second paragraph, Jiro himself/herself has come to clarify, so I don't think it's necessary (for me) to continue that particular discussion.
Where are you getting this? What are the numerical values of those probabilities?
You can have presence or absence of a correlation between A and B, coexisting with presence or absence of a causal arrow between A and B. All four combinations occur in ordinary, everyday phenomena.
I cannot see how to define, let alone measure, probabilities P(causation|correlation) and P(causation|~correlation) over all possible phenomena.
I also don't know what distinction you intend in other comments in this thread between "correlation" and "real correlation". This is what I understand by "correlation", and there is nothing I would contrast with this and call "real correlation".
Do you think it is literally equally likely that causation exists if you observe a correlation, and if you don't? That observing the presence or absence of a correlation should not change your probability estimate of a causal link at all? If not, then you acknowledge that P(causation|correlation) != P(causation|~correlation). Then it's just a question of which probability is greater. I assert that, intuitively, the former seems likely to be greater.
By "real correlation" I mean a correlation that is not simply an artifact of your statistical analysis, but is actually "present in the data", so to speak. Let me know if you still find this unclear. (For some examples of "unreal" correlations, take a look here.)
I think I have no way of assigning numbers to the quantities P(causation|correlation) and P(causation|~correlation) assessed over all examples of pairs of variables. If you do, tell me what numbers you get.
I asked why and you have said "intuition", which means that you don't know why.
My belief is different, but I also know why I hold it. Leaping from correlation to causation is never justified without reasons other than the correlation itself, reasons specific to the particular quantities being studied. Examples such as the one you just linked to illustrate why. There is no end of correlations that exist without a causal arrow between the two quantities. Merely observing a correlation tells you nothing about whether such an arrow exists. For what it's worth, I believe that is in accordance with the views of statisticians generally. If you want to overturn basic knowledge in statistics, you will need a lot more than a pronouncement of your intuition.
A correlation (or any other measure of statistical dependence) is something computed from the data. There is no such thing as a correlation not "present in the data".
What I think you mean by a "real correlation" seems to be an actual causal link, but that reduces your claim that "real correlation" implies causation to a tautology. What observations would you undertake to determine whether a correlation is, in your terms, a "real" correlation?
My original question was whether you think the probabilities are equal. This reply does not appear to address that question. Even if you have no way of assigning numbers, that does not imply that the three possibilities (>, =, <) are equally likely. Let's say we somehow did find those probabilities. Would you be willing to say, right now, that they would turn out to be equal (with high probability)?
Okay, here's my reasoning (which I thought was intuitively obvious, hence the talk of "intuition", but illusion of transparency, I guess):
The presence of a correlation between two variables means (among other things) that those two variables are statistically dependent. There are many ways for variables to be dependent, one of which is causation. When you observe that a correlation is present, you are effectively eliminating the possibility that the variables are independent. With this possibility gone, the remaining possibilities must increase in probability mass, i.e. become more likely, if we still want the total to sum to 1. This includes the possibility of causation. Thus, the probability of some causal link existing is higher after we observe a correlation than before: P(causation|correlation) > P(causation|~correlation).
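The argument above can be checked numerically on a toy generative model. The structure probabilities below are assumptions chosen only to illustrate the mechanism, not estimates of real-world rates:

```python
# Toy model: a causal link exists with some prior probability; correlation is
# observed more often when the link exists than when it doesn't (confounding
# and chance can still produce correlation without causation).
p_causal = 0.3              # assumed prior on a causal link
p_corr_given_causal = 0.9   # assumed: causation usually shows up as correlation
p_corr_given_not = 0.2      # assumed: spurious / confounded correlation rate

p_corr = (p_causal * p_corr_given_causal
          + (1 - p_causal) * p_corr_given_not)

p_causal_given_corr = p_causal * p_corr_given_causal / p_corr
p_causal_given_not_corr = p_causal * (1 - p_corr_given_causal) / (1 - p_corr)

# Observing a correlation raises the probability of a causal link above the
# prior; observing no correlation lowers it below the prior.
print(round(p_causal_given_corr, 3), round(p_causal_given_not_corr, 3))
```

Any choice of numbers with P(corr|causal) > P(corr|~causal) produces the same ordering, which is all the comment's inequality claims; it says nothing about whether the posterior is high enough to justify inferring causation in a particular case.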
If you are using a flawed or unsuitable analysis method, it is very possible for you to (seemingly) get a correlation when in fact no such correlation exists. An example of such a flawed method may be found here, where a correlation is found between ratios of quantities despite those quantities being statistically independent, thus giving the false impression that a correlation is present when it is actually not.
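The ratio artifact mentioned above (a version of Pearson's "spurious correlation of ratios") is easy to reproduce with a quick simulation; the distributions and sample size are arbitrary choices for illustration:

```python
import random
import statistics

# Three independent positive variables; dividing two of them by the same
# third variable manufactures a correlation absent from the raw data.
random.seed(0)
n = 50_000
x = [random.gauss(10, 1) for _ in range(n)]
y = [random.gauss(10, 1) for _ in range(n)]
z = [random.gauss(10, 1) for _ in range(n)]

def pearson(a, b):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r_raw = pearson(x, y)  # near zero: x and y really are independent
r_ratio = pearson([xi / zi for xi, zi in zip(x, z)],
                  [yi / zi for yi, zi in zip(y, z)])  # clearly positive: shared denominator

print(round(r_raw, 3), round(r_ratio, 3))
```

The ratios x/z and y/z correlate substantially (around 0.5 with these parameters) even though x, y, and z are independent, which is the kind of "unreal" correlation the comment is pointing at.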
As I suggested in my reply to Lumifer, redundancy helps.
Yes, but if something good happens you have to update on the probability that someone besides you is a genius, and if something bad happens you have to update on the probability that you're the idiot. The problem is people only update the parts that make them look better.
Yes, but the issue is whether or not those are the dominant hypotheses that come to mind. It's better to see success and failure as results of plans and facts than of innate ability or disability.