Rationality Quotes December 2014
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments (440)
The race is not always to the swift, nor the battle to the strong, but that's the way to bet.
-Damon Runyon
Damon Runyon clearly has not considered point spreads.
--Rudyard Kipling, "Dane-Geld"
A nice reminder about the value of one-boxing, especially in light of current events.
Well, when this capitulation happened in 2012 no one except a few "right-wing nuts" seemed to care.
This was definitely not the right link to use, at all - how about wikipedia instead? Nor am I sure what point you want to make besides scoring political points - how about specific recommendations?
Jeff Bezos
Henry St John, Viscount Bolingbroke, Reflections on Exile
Cf. Tolstoy: all happy families are alike, but every unhappy family is unhappy in its own way.
What happens twice probably happens more than twice: are there other notable expressions of this idea?
(There's a well-known principle in software development that's pretty close, though I can't find a Famous Quotation of it right now: when you're choosing a name for a variable or function or whatever, avoid abbreviations: there's only one way to spell a word right, and lots of ways to spell it wrong. Though this is not always good advice.)
Biblical verse on the asymmetry of error: "Enter through the narrow gate. For wide is the gate and broad is the road that leads to destruction, and many enter through it."
Thomas J. McKay, Reasons, Explanations and Decisions
-- Ferrett Steinmetz
But is it only a human behavior? I'd think anything with cached thoughts/results/computations would be similarly vulnerable.
That's true of most frequently referenced elements of human nature, if not all of them.
Even Love.
~The Homo Sapiens Class has a trusted computing override that enables it to lock itself into a state of heightened agreeability towards a particular target unit. More to the point: it can signal this shift in modes in a way that is both recognizable to other units, and which the implementation makes very difficult for it to forge. The Love feature then provides HS units on either side of a reciprocated Love signalling a means of safely cooperating in extremely high-stakes PD scenarios without violating their superrationality circumvention architecture.
Hmm. On reflection, one would hope that most effective designs for time-constrained intelligent (decentralized, replication-obsessed) agents would not override superrationality ("override": is it reasonable to talk about it as a natural consequence of intelligence?), and that, in that case, the Love override may not occur.
Hard to say.
Wouldn't something good happening correctly result in a Bayesian update on the probability that you are a genius, and something bad in a Bayesian update on the probability that someone is an idiot? (Perhaps even you.)
Yes, but if something good happens you have to update on the probability that someone besides you is a genius, and if something bad happens you have to update on the probability that you're the idiot. The problem is people only update the parts that make them look better.
Yes, but the issue is whether or not those are the dominant hypotheses that come to mind. It's better to see success and failure as results of plans and facts than innate ability or disability.
Not without a causal link, the absence of which is conspicuous.
Not necessarily. Causation might not be present, true, but causation is not necessary for correlation, and statistical correlation is what Bayes is all about. Correlation often implies causation, and even when it doesn't, it should still be respected as a real statistical phenomenon. All Jiro's update would require is that P(success|genius) > P(success|~genius), which I don't think is too hard to grant. It might not update enough to make the hypothesis the dominant hypothesis, true, but the update definitely occurs.
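Jiro's inequality can be turned into a toy Bayes computation. All the numbers below are invented purely for illustration:

```python
# Toy Bayes update: if P(success | genius) > P(success | ~genius),
# then observing a success raises the probability of "genius" above
# its prior, however slightly. All numbers here are made up.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' rule, given P(H), P(E | H), P(E | ~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior_genius = 0.05       # assumed prior on "this person is a genius"
p_success_if_genius = 0.6  # assumed likelihoods
p_success_if_not = 0.2

post = posterior(prior_genius, p_success_if_genius, p_success_if_not)
print(f"prior {prior_genius:.3f} -> posterior {post:.3f}")
```

With these numbers the posterior rises from 0.05 to about 0.14: the update occurs, but it need not make "genius" the dominant hypothesis, which is exactly the distinction being drawn here.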
"Because" (in the original quote) is about causality. Your inequality implies nothing causal without a lot of assumptions. I don't understand what your setup is for increasing belief about a causal link based on an observed correlation (not saying it is impossible, but I think it would be helpful to be precise here).
Jiro's comment is correct but a non-sequitur because he was (correctly) pointing out there is a dependence between success and genius that you can exploit to update. But that is not what the original quote was talking about at all, it was talking about an incorrect, self-serving assignment of a causal link in a complicated situation.
Yes, naturally. I suppose I should have made myself a little clearer there; I was not making any reference to the original quote, but rather to Jiro's comment, which makes no mention of causation, only Bayesian updates.
Because P(causation|correlation) > P(causation|~correlation). That is, it's more likely that a causal link exists if you see a correlation than if you don't see a correlation.
As for your second paragraph, Jiro himself/herself has come to clarify, so I don't think it's necessary (for me) to continue that particular discussion.
Where are you getting this? What are the numerical values of those probabilities?
You can have presence or absence of a correlation between A and B, coexisting with presence or absence of a causal arrow between A and B. All four combinations occur in ordinary, everyday phenomena.
I cannot see how to define, let alone measure, probabilities P(causation|correlation) and P(causation|~correlation) over all possible phenomena.
I also don't know what distinction you intend in other comments in this thread between "correlation" and "real correlation". This is what I understand by "correlation", and there is nothing I would contrast with this and call "real correlation".
Do you think it is literally equally likely that causation exists if you observe a correlation, and if you don't? That observing the presence or absence of a correlation should not change your probability estimate of a causal link at all? If not, then you acknowledge that P(causation|correlation) != P(causation|~correlation). Then it's just a question of which probability is greater. I assert that, intuitively, the former seems likely to be greater.
By "real correlation" I mean a correlation that is not simply an artifact of your statistical analysis, but is actually "present in the data", so to speak. Let me know if you still find this unclear. (For some examples of "unreal" correlations, take a look here.)
I think I have no way of assigning numbers to the quantities P(causation|correlation) and P(causation|~correlation) assessed over all examples of pairs of variables. If you do, tell me what numbers you get.
I asked why and you have said "intuition", which means that you don't know why.
My belief is different, but I also know why I hold it. Leaping from correlation to causation is never justified without reasons other than the correlation itself, reasons specific to the particular quantities being studied. Examples such as the one you just linked to illustrate why. There is no end of correlations that exist without a causal arrow between the two quantities. Merely observing a correlation tells you nothing about whether such an arrow exists. For what it's worth, I believe that is in accordance with the views of statisticians generally. If you want to overturn basic knowledge in statistics, you will need a lot more than a pronouncement of your intuition.
A correlation (or any other measure of statistical dependence) is something computed from the data. There is no such thing as a correlation not "present in the data".
What I think you mean by a "real correlation" seems to be an actual causal link, but that reduces your claim that "real correlation" implies causation to a tautology. What observations would you undertake to determine whether a correlation is, in your terms, a "real" correlation?
My original question was whether you think the probabilities are equal. This reply does not appear to address that question. Even if you have no way of assigning numbers, that does not imply that the three possibilities (>, =, <) are equally likely. Let's say we somehow did find those probabilities. Would you be willing to say, right now, that they would turn out to be equal (with high probability)?
Okay, here's my reasoning (which I thought was intuitively obvious, hence the talk of "intuition", but illusion of transparency, I guess):
The presence of a correlation between two variables means (among other things) that those two variables are statistically dependent. There are many ways for variables to be dependent, one of which is causation. When you observe that a correlation is present, you are effectively eliminating the possibility that the variables are independent. With this possibility gone, the remaining possibilities must increase in probability mass, i.e. become more likely, if we still want the total to sum to 1. This includes the possibility of causation. Thus, the probability of some causal link existing is higher after we observe a correlation than before: P(causation|correlation) > P(causation|~correlation).
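A numerical sketch of this renormalization argument, using an invented three-hypothesis space with invented priors and likelihoods:

```python
# Three mutually exclusive hypotheses about variables A and B, with
# made-up priors: no dependence, a causal link, or a common cause.
priors = {"independent": 0.5, "causal": 0.2, "common_cause": 0.3}

# Made-up likelihoods of observing a correlation under each hypothesis;
# independence makes observing a correlation very unlikely.
p_corr = {"independent": 0.05, "causal": 0.9, "common_cause": 0.9}

def update(priors, likelihoods):
    """Posterior over hypotheses after conditioning on an observation."""
    z = sum(likelihoods[h] * priors[h] for h in priors)
    return {h: likelihoods[h] * priors[h] / z for h in priors}

seen_corr = update(priors, p_corr)
seen_none = update(priors, {h: 1 - p_corr[h] for h in priors})

print(f"P(causal | correlation)  = {seen_corr['causal']:.3f}")
print(f"P(causal | ~correlation) = {seen_none['causal']:.3f}")
```

Observing the correlation strips probability mass from "independent" and redistributes it, so "causal" rises (here from 0.2 to about 0.38), even though the correlation alone cannot say which of the dependence hypotheses is the right one.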
If you are using a flawed or unsuitable analysis method, it is very possible for you to (seemingly) get a correlation when in fact no such correlation exists. An example of such a flawed method may be found here, where a correlation is found between ratios of quantities despite those quantities being statistically independent, thus giving the false impression that a correlation is present when it is actually not.
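This ratio artifact (Pearson's classic "spurious correlation of ratios") is easy to reproduce in a simulation; the distributions and sample size below are arbitrary:

```python
# X, Y, Z are generated independently, yet X/Z and Y/Z correlate
# because they share the denominator Z. Standard library only.
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

random.seed(0)
n = 50_000
x = [random.uniform(1, 2) for _ in range(n)]
y = [random.uniform(1, 2) for _ in range(n)]
z = [random.uniform(1, 2) for _ in range(n)]

print(f"corr(X, Y)     = {pearson(x, y):+.3f}")    # near zero
ratio_corr = pearson([a / c for a, c in zip(x, z)],
                     [b / c for b, c in zip(y, z)])
print(f"corr(X/Z, Y/Z) = {ratio_corr:+.3f}")       # clearly positive
```

Note that the ratio variables really are statistically dependent (they share Z), which is why the sample correlation is reliably positive; the artifact lies in reading that dependence as saying anything about X and Y themselves.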
As I suggested in my reply to Lumifer, redundancy helps.
E. T. Jaynes, Probability: The Logic of Science
Lois McMaster Bujold
The less you care about "the respect" others show towards you, the less power idiots can exert over you. The trick is differentiating whose opinion actually matters (say, in a professional context) and whose does not (say, your neighbors').
Due to being social animals, we're prone to rationalize caring about what anyone thinks of us (say, strangers in a supermarket when your kid is having a tantrum -- "they must think I'm a terrible mom!" -- or in the neighbors' case, "who knows, I might one day need to rely on them, better put some effort into fitting in"). Only very few people's opinions actually impact you in a tangible / not-just-social-posturing way. (The standard answer on /r/relationships should be "why do you care about what those idiots think, even in the unlikely case they actually want to help your situation, as opposed to reinforcing their make-believe fool's paradise travesty of a world view".)
Interestingly, internalizing such an IDGAF attitude usually does a good job of signalling high status, in most settings. Sigh, damned if you do and damned if you don't.
"It’s much better to live in a place like Switzerland where the problems are complex and the solutions are unclear, rather than North Korea where the problems are simple and the solutions are straightforward."
Scott Sumner, A time for nuance
The problems in North Korea are not so simple with straightforward solutions, when we look at them from the perspective of the actors involved.
For the average citizen in North Korea, there are no clear avenues to political influence that don't increase rather than decrease personal risk. For the people in North Korea who do have significant political influence, from a self-serving perspective, there are no "problems" with how North Korea is run.
North Korea's problems might be simple to solve from the perspective of an altruistic Supreme Leader, but they're hard as coordination problems. Some of our societal problems in the developed world are also simple from the perspective of an altruistic Supreme Leader, but hard as coordination problems. Some of the more salient differences are that those problems didn't occur due to the actions of non-altruistic or incompetent Supreme Leaders in the first place, and aren't causing mass subsistence-level poverty.
I do think North Korea's leaders would prefer a state of affairs where they could educate their own elite instead of sending the kids to Switzerland to get a real education.
North Korea's military would like to have capable engineers that can produce working technology.
On the other hand a simple act like giving the population access to internet might produce a chain reaction that blows up the whole state.
Jang Sung-taek was someone in North Korea with a lot of political power. According to Wikipedia, South Korea believed that Jang Sung-taek was the de facto leader of North Korea in 2008.
Last year the North Korean state television announced his execution. His extended family might also have gotten executed.
One of the charges was that he "made no scruple of committing such act of treachery in May last as selling off the land of the Rason economic and trade zone to a foreign country..."
It's worth noting that Western countries did engage in policies to block Jang Sung-taek's efforts to create economic change in North Korea.
That simply means that Switzerland has already solved the easier problems North Korea struggles with. To paraphrase, an absence of low-hanging fruit on a well-tended tree means you're probably in a garden.
Isn't that the point of the quote?
But, as compiler optimizations exploit increasingly recondite properties of the programming language definition, we find ourselves having to program as if the compiler were our ex-wife’s or ex-husband’s divorce lawyer, lest it introduce security bugs into our kernels, as happened with FreeBSD a couple of years back with a function erroneously annotated as noreturn, and as is happening now with bounds checks depending on signed overflow behavior.
Hacker News comment
Saul Alinsky, in his Rules for Radicals.
This one hit home for me. Got a haircut yesterday. :P
If I could convince Aubrey de Grey to cut off his beard, it would increase everyone's expected longevity more than any other accomplishment I'm capable of.
This I'm not actually sure about. I think the guru look might be a net positive in his particular situation.
Agreed. His fundraising might be benefiting from a strategy that increases the variance of people's opinions of him even if it also lowers the mean.
I wasn't familiar with the name, so I looked it up. There are some pretty strong criticisms of him here: http://www2.technologyreview.com/sens/docs/estepetal.pdf
Looks like pseudoscience.
I have seen him speak a couple of times and he addressed many of these criticisms in the talks. You might want to read his response to these criticisms before assuming they are valid.
A lot of this comes from a lack of appreciation of the difference between science and engineering. In engineering you just have to find something that works. You don't need to understand everything.
Some debate here and you can easily find his talks online:
http://www2.technologyreview.com/sens/
In his talks I did not get the sense that he is positioning himself as a great misunderstood maverick. He does say that in his opinion much ageing research is unproductive because it is aimed at understanding the problem rather than fixing it.
For example, rather than tweak metabolic processes to produce slightly smaller amounts of toxic substances, remove those substances by various means, or replace the cells grown old from said toxic substances.
His solution to cancer is to remove the telomerase genes. This way cancer cells will die after X divisions. Of course this creates the problem that stem cells will not work. So we will need to replenish stem cell lines in the immune system, stomach walls, skin, etc.
These are "dumb" strategies and rarely of interest to scientists perhaps for that reason.
There is a similar issue in nanotechnology, discussed in Drexler's book "Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization". For example, you do not need to solve the protein folding problem in full generality in order to design proteins that have specific shapes. You just need to find a set of patterns that allow you to build proteins of specific shapes.
(edited for typos)
I will definitely be going over this, it looks very helpful. Thank you for making this.
I haven't finished the document yet, but I noticed it keeps on using the word "unscientific", which sounds problematic as one of its aims is to define pseudoscience.
There's no such thing as evidence-based decision-making on strategies for research funding. Nobody really knows good criteria for deciding which research should get grants to be carried out.
Aubrey de Grey, among other things, makes the argument that it's good to put out prizes for research groups that get mice to a certain increased lifespan. That's the Methuselah Foundation's Mprize.
The Methuselah Foundation has now worked to set up the New Organ Liver Prize, which gives $1 million to the first team that creates a regenerative or bioengineered solution that keeps a large animal alive for 90 days without native liver function.
Funding that kind of research is useful whether or not certain arguments Aubrey de Grey made about “Whole Body Interdiction of Lengthening of Telomeres” are correct. In science there's room for people proposing ideas that turn out to be wrong.
The authors provide more arguments than ones about telomeres. Further, they charge that he's misrepresenting evidence systematically, not just making specific proposals that turn out to be wrong. I agree giving prizes for increasing the lifespan of mice is a good idea, but that's not a very strong reason to support him. Do you have examples of novel scientific ideas he's had that have turned out to be useful?
Why exactly?
The SENS website lists 42 published papers that were funded with SENS grant money. The foundation has a yearly budget of $4 million that it uses to award grants to science that's publishable. A lot of that money comes out of de Grey's own pocket and Peter Thiel's pocket. Other money comes from private donations. It's mainly additional money for the subject that wouldn't be there without Aubrey de Grey's activism.
Aubrey de Grey may very well represent a picture of aging that underestimates the difficulties. However, the resulting effect is that a company like Google has now started a project, Calico, that's specifically targeted at curing aging.
If you want to convince Silicon Valley's billionaires to pay for more anti-aging research, Aubrey de Grey might simply be making the right moves when scientists who are more conservative about possible success can't convince donors to put up money.
Because most advances in mouse models don't carry over into humans.
While mouse models aren't perfect, they do produce new knowledge, and you simply can't do some exploratory research in humans.
I should distinguish between "supporting him as an activist" and "supporting him as a legitimate scientific researcher". I think that the fact he provides prizes to others is a decent reason to support him in the first category but not a reason to support him in the second. Even if we collapse the two categories, the mice thing doesn't seem like enough to outweigh misrepresenting research to the public.
Mostly, I was wondering whether you knew of any innovations or discoveries he found as a scientist. Because as the above link describes it, even if he has been a good activist he has been a poor scientist, not finding anything new and misleading people about the old.
This sounds like Dark Arts, which would make it deserve the label pseudoscience. If your argument is that there's a legitimate place for "marketing" like that, I see your point but I'm reluctant to agree.
And you end up like this.
Seems to have worked for them.
For whom? For the Mormon Church or for the specific individuals? :-/
And a thousand female metalheads shall weep.
"Murphy's Laws of Combat"
This is what survivorship bias looks like from the inside.
Paul Graham
The situation is far worse than that. At least with a compiled program you can add more memory or run it on a faster computer, disassemble the code and see at which step things go wrong, rewind if there's a problem, interface with programs you've written, etc. If compiled programs really were that bad, hackers would have already won (since security researchers wouldn't be able to take apart malware), DRM would work, and no emulators for undocumented devices would exist.
The state of the mind is many orders of magnitude worse.
Also, I'd quibble with "we don't know why". The word I'd use is "how". We know why, perhaps not in detail (although we sort of know how, in even less detail).
Plutarch, from Life of Theseus.
This makes me think that some of this practice might have been motivated by professional pride on the part of the mapmakers. Such as, "oh, the only reason I didn't go farther was because of the ravenous beasts, and my rival would never be able to push the boundaries farther either so you might as well buy/trust in my mapmaking"
You may be right, but I'm also inclined to include that it's fun to draw monsters.
--Eugene Volokh, "Liberty, safety, and Benjamin Franklin"
A good example of the risk of reading too much into slogans that are basically just applause lights. Also reminds me of "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which."
I mostly agree, but I think the slogan (like, I think, many others about which similar things could be said) has some value none the less.
A logically correct but uninspiring version would go like this:
-- Not Benjamin Franklin
Franklin's slogan serves as a sort of reminder that (1) there is a frequent temptation to "give up essential Liberty, to purchase a little temporary Safety" and (2) this is likely a bad idea. Indeed, the actual work of figuring out when the slogan is appropriate still needs to be done, but the reminder can still be useful. And (3) because it's a Famous Saying of a Famous Historical Figure, one can fairly safely draw attention to it and maybe even be taken seriously, even in times when the powers that be are trying to portray any refusal to be terrorized as unpatriotic.
Of course Volokh is aware of the "reminder" function (as he says: "The slogan might work as a reminder") but I think he undervalues it. (He says the "real difficulty" is deciding which tradeoffs to make, but actually just noticing that there's an important tradeoff being proposed is often a real difficulty.) And, alas, its Famous Saying nature is pretty important too.
It strikes me that the original Franklin quote really identifies a specific case of the availability heuristic. That is, when you're focused on safety, you tend to adopt policies that increase safety, without even considering other values such as liberty.
There may also be an issue of externalities here. This is really, really common in law enforcement. For example, consider civil asset forfeiture. It is an additional legal tool that enables police to catch and punish more criminals, more easily. That it also harms a lot of innocent people is simply not considered, because there is no penalty to the police for doing so. All the cost is borne by people who are irrelevant to them.
The quote always annoyed me too. People bring it up for ANY infringement on liberty, often leaving off the words "Essential" and "Temporary", making a much stronger version of the quote (and one that is, of course, obviously wrong).
Tangentially, Sword of Good was my introduction to Yudkowsky, and by extension, LW.
George S. Patton
Ideally, everyone should be thinking alike. How about
I think the intended meaning (phrased in LessWrong terminology) is something more along the lines of the following:
Humans are not perfect Bayesians, and even if they were, they don't start from the same priors and encounter the same evidence. Therefore, Aumann's Agreement Theorem does not hold for human beings; thus, if a large number of human beings is observed to agree on the truth of a proposition, you should be suspicious. It's far more likely that they are signalling tribal agreement or, worse yet, accepting the proposition without thinking it through for themselves, than that they have each individually thought it through and independently reached identical conclusions. In general, then, civilized disagreement is a strong indicator of a healthy rationalist community; look at how often people disagree with each other on LW, for example. If everyone on LW was chanting, "Yes, Many Worlds is true, you should prefer torture to dust specks, mainstream philosophy is worthless," then that would be worrying, even if it is true. (I am not claiming that it is, nor am I claiming that it is not; such topics are, I feel, beyond the scope of this discussion and were brought up purely as examples.)
Why? Thinking is not limited to answering well-defined questions about empirical reality.
As a practical matter, I think lack of diversity in thinking is a bigger problem than too much diversity.
Twenty art students are drawing the same life model. They are all thinking about the task; they will produce twenty different drawings. In what world would it be ideal for them to produce identical drawings?
Twenty animators apply for the same job at Pixar. They put a great deal of thought into their applications, and submit twenty different demo reels. In what world would it be ideal for them to produce identical demo reels?
Twenty designers compete to design the new logo for a company. In what world would it be ideal for them to come up with identical logos?
Twenty would-be startup founders come up with ideas for new products. In what world would it be ideal for them to come up with the same idea?
Twenty students take the same exam. In what world would it be ideal for them to give the same answers?
Twenty people thinking alike lynch an innocent man. Does this happen in an ideal world?
In 1 and 2, the thinking is not the type being referred to in the quote. In 3, assuming only one of their designs gets chosen, there are 19 failures, hence 19 non-thinkers or 19 cases of insufficient thinking. In 4, they're not all trying to answer the same question, "what's the best way to make money", but rather "what's a good way to make money". (That may also apply to 3.) I touched on the difference in another thread. In 5, yes, every test-taker should give the correct answer to every question. That's obvious for multiple-choice tests, and even other tests usually have only one really correct answer, even if there may be more than one way to phrase it.
In 6, first of all, your example is isomorphic to its complement, in which 20 people decide not to lynch an innocent man. If you defend the original quote, then some of them must not be thinking. And the actual answer is that my quoted version is one-sided: agreement doesn't imply idealism; idealism implies agreement.
I could add a disclaimer: everyone should be thinking alike in the cases referred to by the first quote. I don't have a good way to narrow down exactly what those cases are offhand right now; it's somewhat intuitive. Do you have an example where my claim conflicts directly with what the first quote would say, and where you think it's obvious in that scenario that they are right and not me?
You are invited by a friend to what he calls a "cool organization". You walk into the building, and are promptly greeted by around twenty different people, all using variations on the same welcome phrase. You ask what the main point of the organization is, and several different people chime in at the same time, all answering, "Politics." You ask what kind of politics. Every single one of them proceeds to endorse the idea that abortion is unconditionally bad. Now feeling rather creeped out, you ask them for their reasoning. Several of them give answers, but all of those answers are variations of the same argument, and the way in which they say it gives you the feeling as though they are reciting this argument from memory.
Would you be inclined to stay at this "cool organization" a moment longer than you have to?
Yes, actually, and I don't see why it is creepy despite your repeated assertions that it is.
And if they gave completely different arguments, you'd complain about the remarkable co-incidence that all these arguments suggest the same policy.
Difference of opinion, then. I would find it creepy as all hell.
I probably would, yes, but I would still prefer that world to the one in which they gave only one argument.
Now substitute "abortion is unconditionally bad" with "creationism should not be taught as science in public schools".
If you would still be creeped out by that, then your creep detector is miscalibrated; that would mean nobody can have an organization dedicated to a cause without creeping you out.
If you would not be creeped out by that, then your initial reaction to the abortion example was probably being mindkilled by abortion, not being creeped out by the fact that a lot of people agreed on something.
Just because I agree with their ideas doesn't mean I won't find it creepy. A cult is a cult, regardless of what it promotes. If I wanted to join an anti-creationist community, I certainly wouldn't join that one, and there are plenty such communities that manage to get their message across without coming off as cultish.
The example is supposed to sound cultist because the people think alike. But I have a hard time seeing how a non-cultist anti-creationist group would produce different arguments against creationism.
The non-cultist group could of course not all use the same welcome phrase, but that's not really the heart of what the example is supposed to illustrate.
There are multiple anti-creationist arguments out there, so if they all immediately jump to the same one, I'd be suspicious. But even beyond that, it's natural for humans to disagree about stuff, because we're not perfect Bayesians. If you see a bunch of humans agreeing completely, you should immediately think "cult", or at the very least "these people don't think for themselves". (I'd be much less suspicious if we replace humans with Bayesian superintelligences, however, because those actually follow Aumann's Agreement Theorem.)
The quote is without a provenance that I can discover. If authentic, I presume that Patton was referring to military planning. I don't see a line separating that type of thinking from cases (1)-(4) and some of (5). Ideas must be found or created to achieve results that are not totally ordered. Thinking better is helpful but thinking alike is not.
Only if you take "thinking better" to retroactively mean "won". But that is not what the word "thinking" means.
I doubt any of those entrepreneurs are indifferent between a given level of success and 10 times that level.
Perhaps you are thinking only of a limited type of exam. There is only one correct answer to "what is 23 times 87?"[1] Not all exams are like that.
Philosophy:
Ancient history (from here):
The link also provides the marking criteria for the question. The ideal result can only be described as "twenty students giving the same answer" if, as in case (3), "the same answer" is redefined to mean "anything that gets top marks", in which case it becomes tautological.
I reject both of those. Agreement doesn't imply ideal, of course (case 6 was just a test to see if people were thinking). But neither does ideal imply agreement, except by definitional shenanigans. And your version of Patton's quote doesn't include the hypothesis of ideality anyway. Neither does Patton's. We are, or should be, talking about the real world.
What are those cases? Military planning, I am assuming, on the basis of who Patton was. Twenty generals gather to decide how to address the present juncture of a war. All will have ideas; these ideas will not all be the same. They will bring different backgrounds of knowledge and experience to the matter. In that situation, if they all agree at once on what to do, I believe Patton's version applies.
(1) Ubj znal crbcyr'f svefg gubhtug ba ernqvat gung jnf "nun, urknqrpvzny!" Whfg...qba'g.
-- Oliver Burkeman, The Guardian, May 21, 2014
I enjoyed this quote, and have had a great number of self-deprecating laughs with other young professionals about how we were totally winging it.
But it is not true.
There are those winging it, but they are faking it until they make it, and make up a smaller group than represented above. The much larger group is made from a rainbow of wrong! Biases, ignorance, bad information, misinformation, conflicting agendas, the list goes on.
The group of people just winging it, pushing their limits, faking it until they make it, are only a piece of the bigger picture of stuff done wrong. It is not fair to overrepresent their influence. Although, it is always a comfort to know there are others out there in the same boat, just winging it.
there is a familiar phenomenon here, in which a certain kind of would-be economic expert loves to cite the supposed lessons of economic experiences that are in the distant past, and where we actually have only a faint grasp of what really happened. Harding 1921 “works” only because people don’t know much about it; you have to navigate through some fairly obscure sources to figure out [what actually happened]. And the same goes even more strongly — let’s say, XII times as strongly — when, say, [Name] starts telling us about the Emperor Diocletian. The point is that the vagueness of the information, and even more so what most people [think they] know about it, lets such people project their prejudices onto the past and then claim that they’re discussing the lessons of experience.
Paul Krugman on the use of examples to obscure rather than clarify
What's the alternative? Cite what's currently going on in other countries (people generally aren't too familiar with that either)? Generalize from one example (where people don't necessarily know all the details either)?
Yes. Because both of those have actual data, and are thus useful - your reasoning can be tested against reality.
We just really don't know very much about the Roman economy, and are unlikely to find out much more than we currently do. Generalizing from one example isn't good science, logic, or argument. But it's better than generalizing from the fog of history. Not a lot better - economics only very barely qualifies as a science on a good day - but Krugman is completely correct to call people out for going in this direction, because doing so just outright reduces it to storytelling.
On the other hand we do know a lot about what happened in 1921, Krugman just wishes we didn't because it appears to contradict his theories.
Um, no. History contains evidence, it's not particularly clean evidence, but evidence nonetheless and we shouldn't be throwing it away.
-- Adam Cadre
This seems like explaining vs. explaining away. The process by which better players pick up wins is by winning the "contest of athletic prowess." The game itself is interesting to watch because we like to see competent people play, and when upsets happen, they often happen for reasons that are easily displayed and engaged with in terms of the mechanics of the game.
This is similar to choosing strict determinism over compatibilism. Which players are the "best" depends on each of those players' individual efforts during the game. You could extend the idea to the executives too, anyway--which groups of executives acquire better players is largely a function of which have the best executives.
Efforts are only one variable here, and the quote did say "largely a function of". That being said, look at how often teams replay each other during a season with a different winner.
--Shunryu Suzuki
Tyler Cowen
We also confuse "what is important" with "what is interesting" fairly often.
Saul Alinsky, in his Rules for Radicals.
--Marcel Proust
"You should never bet against anything in science at odds of more than about 10^12 to 1 against."
Frequency is not importance. I think this quote has more humorous than practical merit.
But frequency can be strong evidence of importance.
I suspect many people would experience significant psychological trauma if they were unable to rationalize for a week.
-- Pope Francis, Open Mind, Faithful Heart: Reflections on Following Jesus
I think it's worth clarifying that Pope Francis and Jorge Mario Bergoglio are one and the same person.
Ayn Rand, to a Catholic Priest.
Philosophers have played a game going way back where they believe that popular religion comes in handy as a fiction for keeping the mob in line, but they view themselves as god-optional. The philosophes in the Enlightenment started the experiment of letting the mob in on the truth, and the experiment has apparently gone so far in parts of Europe like Estonia that some populations have lost familiarity with Christian beliefs, or even with how to pronounce Jesus' name in their own language. Or so Phil Zuckerman claims:
https://books.google.com/books?id=C-glNscSpiUC&lpg=PP1&dq=phil%20zuckerman&pg=PA96#v=onepage&q=estonia&f=false
The mob is pretty well educated these days, and the standard of living is so high that there's much less incentive to step out of line. I don't think we can compare modern nations to historical nations to make any claim about whether religion keeps people in line.
The claim that people can't pronounce Jesus' name might apply to former Soviet Union countries, but I doubt it applies anywhere else in Europe.
Do you know that Jesus's actual name is Yeshua?
We don't know that. It was likely some variant of the name commonly translated as "Joshua" in English. It could have been Yeshua or Yehoshua or a variety of slightly Aramacized variants of that.
But English language's "Jesus" is still far off.
-- Lisa Bradley, a character in Brennan Lee Mulligan & Molly Ostertag's Strong Female Protagonist
Or by physics. Not all consequences for overconfidence are social.
I'm not sure this is very rational. Assuming that you are more competent than you really are -- which seems to be a matter of hubris -- is indeed capable of destroying you.
Actually, well I suppose it depends on what you mean by "met".
There's no such things as gods.
I think this is about the only scenario on LW that someone can be justifiably downvoted for that statement.
I don't see why. Non-agents simply don't fit the definition of "god", so equivocating on the definition of "god" from "world-changingly powerful agent" to "abstract personification of causality itself" does not really shed any light on anything.
This is false. Not only does the LW wiki have a definition of "god" that is a non-agent, the study of theology points one to numerous gods that people believe in that are non-agents. There's a reason that many of the popular monotheisms refer to their god as a personal god; it stands in contrast to the heresy of a non-personal (i.e., non-agent) god.
Let's look at why we are asking the question. The relevant property in this discussion is "will punish you for being 'uppity'". Being an agent isn't directly relevant to that.
But causality can't punish you for being uppity. You basically just cannot be uppity against causality.
It isn't meant to be some rigorous account of how the world works, it's a deliberate mythology. I'm not entirely convinced as to whether it's a good idea, but aspie criticisms that amount to "god don't real" are missing the point entirely.
http://www.moreright.net/postrat-religion/
Fine, but Dungeons and Dragons is also a constructed, deliberate mythology, and you wouldn't respond to a quote about "You haven't met gods" by saying, "Actually, I role-played encountering Boccob the Uncaring, God of Magic, just last Tuesday."
Well actually, I would respond that way, but as a joke. I would not expect to be taken seriously.
Actually, upon reading that article you've linked, I've found it to be cogent and well-written but emotionally toxic, tenuous in its connection to facts, and philosophically/existentially filled to the brim with lost purposes. To give examples, the obsession with preserving "European civilization" and the admiration for the internet's cult of ultra-masculinity (which should really be called pseudo-masculinity since it so exaggerates the present day's Masculinity Tropes that it dramatically misses other modes of masculinity, despite their actual historicity) portray the writer as chiefly, bizarrely concerned with present-day cultural trends rather than with the kind of good-in-themselves terminal values around which one could design a society from scratch if necessary.
I mean, sorry to be uncharitable in my reading, but I just don't see why I should want to build white European Christian or post-Christian society, in the first place. I know that reactionary and conservative communities give immense weight and worry to cultural goal-drift away from whatever weird version of white Christian/post-Christian society it is they actually like (derisive tone because it often seems they like The Silmarillion more than Actually Existing Europe), but it seems to me that the only way to really avoid random drift is to ground one's worldview in things that are actually, verifiably, literally true. Only an epistemic thought process will obtain consistent, nonrandom, meaningful results.
And since there is a truth of the matter when it comes to human beings' emotional and existential needs, it seems you couldn't get anywhere by doing anything but anchoring yourself to that truth and drawing as close as possible. Any deviation into lost purposes, ill-posed questions, and fallacious reasoning will be punished.
If you attach yourself to some invented image of some particular time-period in European history and try to pump all the entropy out of it, try to optimize everything to forcibly fit that image you've got in your head, you will only succeed in destroying everything else that you aren't acknowledging you care about. And since that image isn't even a terminal goal, a good-in-itself, the everything else will just be more-or-less everything.
If you separate Myth from Truth, Truth will burn you in hellfire. There is no escape.
(Also, citing an imageboard as a source of information about mythology and religion is just embarrassingly bad scholarship.)
Says the guy citing a deliberately informal wiki as a source of information about historical cultures :P
I up-voted it for dissenting against sloppy thinking disguised as being deep or clever. Twisting the word 'god' to include other things that do not fit the original, literal or intended meaning of the term results in useless equivocation.
— George R. R. Martin, Wikiquote, audio interview source
(Changed from an earlier quote I decided I'd keep for later.)
Wow. I am, uh, embarrassed to say that I somehow managed to get caught up in the replies to this comment without ever actually seeing the quote itself until now. (In my defense, I did get here through the Recent Comments sidebar, but still... yeah, not one of my prouder moments.) So, now that I've finally gotten around to reading the quote, uh...
...Maybe I'm dense, but I'm not quite understanding this one. I mean, I understand that it's an explanation of Martin's philosophy of writing, but I'm not really seeing the rationality tie-in. I could probably shoehorn in an explanation for why and how it relates, but the problem with such an explanation is that it would be exactly that: shoehorned in. I feel as though advice of this sort would be much better suited to a writing thread than to a rationality quotes thread. Could someone explain this one to me? Thanks in advance.
Fair point. To be honest, I just got this quote from Martin's Wikiquote page after I decided to save the original and needed something to replace it. (I suppose I could've done something like change the whole post to "[DELETED]" and then retract it, but this seemed good enough at the time.)
I can't really make a rigorous case for this quote's appropriateness here, what actually drove my decision to use this was basically a hunch. My after-the-fact rationalization is that maybe this quote sort of touched on the Beyond the Reach of God sense that death is allowed to happen to anyone, at any time, and especially in dangerous situations, as opposed to most fiction which would only allow the hero to die in some big heroic sacrifice?
For an after-the-fact rationalization, that's actually not bad. On the other hand, I think Martin might actually push it a little too far; reality isn't as pretty as most fiction writers make it out to be, true, but it isn't actively out to get you, either. The universe is just neutral. While it doesn't prevent people from suffering or dying, neither does it go out of its way to make sure they do. In ASoIaF, on the other hand, it's as though events are conspiring to screw everyone over, almost as if Martin is trying to show that he isn't like those other writers who are too "soft" on their characters. In doing so, however, I feel he fell into the opposite trap: that of making his world too hostile. Everything went wrong for the characters, which broke my suspension of disbelief every bit as badly as it would have if everything had gone right.
For me, it's not just a problem of suspension of disbelief, it's a problem of destroying involvement in the story. If too much bad happens to the characters, I'm less likely to be emotionally invested in them. Martin's "The Princess and the Queen" (a prequel to ASoIaF) in Dangerous Women is especially awful that way, though the characters aren't developed very much, either. I'm hoping he does a better job in the main series.
AlyssaRowan On Hacker News
Schneier on Security blog post
"As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation—or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind’s wings should have grown."
Ayn Rand
If the bolded pair of words were struck, I'd agree completely. Different people will have different balls and chains.
A. False dichotomy - there are other choices. We might choose to compartmentalize our rationality, for example.
B. False dichotomy in a different sense - we actually don't have access to this choice. No matter how hard we work, our brains are going to be biased and our philosophies are going to be sloppy. It's a question of making one's brain marginally more organized or less disorganized, not of jumping from insanity onto reason. I'm suspicious that working with the insanity and trying to guide its flow is a better strategy than trying to destroy it.
C. Although not having a philosophy leaves us open to bias, having a philosophy can sometimes expose us to bias even further. It's about comparative advantage. Agnosticism has wiggle room that sometimes can be a place for bias to hide, but conversely ideology without self-doubt often serves to crush the truth.
"There is no such thing as uncharted waters. You may not have the chart on hand to show you how to navigate these waters, but the charts exist. Google them."
Joe Queenan, WSJ 11/30/14
Too strong to be literally true but still
Think it's false, both literally and figuratively. Moreover, the guy needs to get out of his cubicle and go to interesting places :-)
95% of the people, 95% of the time is a less good standard when dealing with interesting people, isn't it ;)
EDIT: Downvote for... accepting a different opinion? Duly noted; will do so more quietly in future.
There's a law about that :-P
As far as literal charts of literal bodies of water on the surface of the earth, satellite photography actually has pretty much solved that problem.
As far as metaphorical waters, human civilization is larger than most people really think, and consists disproportionately of people finding and publishing answers to interesting questions. "Don't assume the waters are uncharted until you've done at least a cursory search for the charts" is sound advice.
Ahem. Do you really think that a picture of water surface which looks pretty much the same anywhere is equivalent to a nautical chart?
Proper nautical charts are very information-dense (take a look) and some of the more important bits refer to things underwater.
Raymond Smullyan, This Book Needs No Title, taking joy in the merely real
Duplicate
Whoops! Thank you.
G. K. Chesterton, Orthodoxy.
I am reminded of:
@stevenkaas
In trying to find the above quote by wildcard searching on Google, I stumbled upon another quote of this nature by the dog's owner himself: "I want to love my neighbour not because he is I, but precisely because he is not I." There appears to be another one about science being bad not because it encourages doubt, but because it encourages credulity, but I'm unable to find the exact quote.
Who could have imagined that Zizek was so derivative! Oh wait...
Zizek himself lampshades the method here.
As does Chesterton, less explicitly:
and at length.
I get the impression that he (thankfully!) eased off on that particular template as time went on.
I'm inclined to think that non-ideological autocracy (we're in charge because we're us and you're you) is the human default. Anything better or worse takes work to maintain.
I'm not sure about that. In fact, I can't think of any actually non-ideologically autocratic society in history. Are you sure you're not confusing "non-ideological" with "having an ideology I don't find at all convincing"?
Publius Cornelius Scipio Nasica Corculum
Since you're probably aware that one Roman senator (Cato) ended his speeches with "Carthage must be destroyed," you should also know that another responded with the opposite.
How is this a rationality quote?
Accurate beliefs, efficient altruism, and giving historical credit to the good guys. What does it say about us that (I would guess) most well educated westerners know about the "Carthage must be destroyed" quote but not the "Carthage must be saved" one?
It says that we care about the real as opposed to the imaginary. That is entirely to our credit.
Regardless of what may be considered moral, Carthage was destroyed. Educated people who wish to understand ancient history therefore naturally wish to learn of Cato's anti-Carthaginian campaign, precisely because it was successful. In addition, Cato the Elder was considered a model of behaviour by subsequent generations of Romans, in a way that Corculum was not, therefore to understand ancient Rome we have to understand the behaviour they valourised.
Similarly, Fumimaro Konoe is not nearly as famous as Hideki Tojo. This is not because educated Westerners favour Tojo's foreign policy, but because Tojo won the debate and Japan went to war.
While I agree with the overall sentiment, I think it's important not to overdo this approach. Let me explain.
Consider the situation where you have a stochastic process which generates values -- for example, you're drawing random values from a certain distribution. So you draw a number and let's say it is 17.
On the one hand you did draw 17 -- that number is "real" and the rest of the distribution which didn't get realized is only "imaginary". You should care about that 17 and not about what did not happen.
On the other hand, if we're interested not just in a single sample, but in the whole process and the distribution underlying it, that number 17 is almost irrelevant. We want to understand the entire distribution and that involves parts which did not get realized but had potential to be realized. We care about them because they inform our understanding of what might happen if the process runs again and generates another value.
Similarly, if you treat history as a sequence of one-off events, you should pay attention only to what actually happened and ignore what did not. But if you want to see history as a set of long-term processes which generate many events, you're probably interested in estimating the entire shape of these processes and that includes "invisible" parts which did not actually happen but could have happened.
There are obvious methodological pitfalls here and I would recommend wielding Occam's Razor with abandon, but that should not conceal the underlying epistemic point that what did not happen could be important, too.
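The sample-versus-distribution distinction above can be sketched in a few lines of Python (the uniform distribution and all numbers here are purely illustrative, not anything from the discussion):

```python
import random

random.seed(0)

# A stochastic process: draws from a discrete uniform distribution on 1..100.
def draw():
    return random.randint(1, 100)

sample = draw()  # one realized value -- the "17" that actually happened

# To understand the process itself, we consider many hypothetical runs,
# including outcomes that never occurred in our single observed history.
runs = [draw() for _ in range(100_000)]
mean = sum(runs) / len(runs)

print(sample)          # a single historical outcome
print(round(mean, 1))  # a property of the whole distribution (near 50.5)
```

The single `sample` is all that "really happened", but it tells you almost nothing about what the process will produce next time; the estimate of the distribution does.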
You make a good point.
Why is Publius Scipio Nasica a "good guy"? His opposition to Carthage's destruction was based on his idea that without a strong external enemy Rome would descend into decadence (see Plutarch). That, to me, tentatively places him into the "pain builds character so I will make sure you will have lots of pain" camp, which is not quite the good guys camp.
Well, it did.
Forgive my fulfilling of Godwin's Law, but if a Nazi leader repeatedly told Hitler "Don't kill the Jews because struggling against them in the economic marketplace will make Germans stronger" would you consider this leader a "good guy"?
No, I would not.
And the equivalent position, actually, would be "Do not kill all the Jews at once, keep on killing them for a long time because the struggle will keep the Germans morally pure".
The intent matters.
G. K. Chesterton, The Everlasting Man
Pathological counter example: "Passive propulsion in vortex wakes" by Beal et al. PDF
Chesterton was talking about Neoreaction, right?
ETA: A note of clarification for those in need of it: I am not actually claiming that Chesterton was talking about Neoreaction.
What Chesterton was actually talking about was a reaction against liberal Protestantism in favour of more traditional Catholicism (represented e.g. by the "Oxford Movement" in the Church of England), against a prevailing tide in the direction of greater liberalism within Christianity and greater skepticism about Christianity.
So: not literally neoreaction, obviously, but something with a thing or two in common with neoreaction.
If RichardKennaway's meaning is "ha ha, Azathoth123 is using this as support for a neoreactionary view, when in fact Chesterton had something entirely different in mind" then I don't think that's altogether fair.
(I have no idea whether the people who have downvoted Richard did so because they thought he was saying that and that Chesterton really was talking about (something like) neoreaction, or because they thought he was actually claiming that Chesterton was talking about neoreaction when really he wasn't. This is one of the problems with downvoting as opposed to disagreeing. On the other hand, perhaps they downvoted because RK's comment could be taken either way with roughly equal plausibility and was therefore needlessly unclear.)
Something along those lines. What makes Azathoth123's recent Chesterton quotes rationality quotes? Chesterton wrote:
and of course neoreaction is something that is going against a stream, and in its eyes progressivism is, well, I'm not sure how the metaphor works out from here, because what neoreaction is going against is progressivism, which makes progressivism the stream itself, rather than a dead thing floating down some other stream. But anyway, why should we take Chesterton's quote as real wisdom? As a literal statement about the physics of floating bodies, it is true but uninteresting. As a metaphor, he is applying it to Christianity, or to his preferred form of it, everything else being either the stream or the flotsam (the metaphor has the same problem here).
So just what truth is being asserted here, that for Chesterton supports Catholicism and for Azathoth123 supports neoreaction? Contrarianism, the view that the majority is always wrong? That is all the metaphor amounts to. This sits oddly with the contention, also made by neoreactionaries, that their preferred view of society is actually the great stream within which progressivism is a historical anomaly, a trifling eddy that will not last (e.g. Anissimov and advancedatheist in recent comments on LW). I'm sure that if that metaphor had suited Chesterton in making some point, he would have elaborated it at no less length. But a metaphor proves nothing: it is a method of presentation, not of argument.
So now that you've made your argument more explicit, I largely agree, but let me defend Chesterton (and maybe to some extent Azathoth123) just a little. Specifically, I'll argue (1) that his metaphor is a bit more coherent than you give it credit for, (2) that he isn't claiming anything as silly as that the majority is always wrong, and (3) that if he's understood right then what he says isn't as hard to reconcile with the neoreactionaries' claims as you suggest. But I agree (4) that he isn't making any very interesting factual statement, (5) that it doesn't offer all that much support for neoreaction, and (6) that he's engaging in rhetoric rather than anything much like reasoned argument.
Chesterton's argument, so far as there is one, seems to go like this. "Everyone thought traditional Christianity was dying or dead, washed away by a great stream of enlightenment and skepticism and liberalism. Some of its parts might stay in place for a while before eventually being eroded away, but the ultimate outcome seemed in little doubt. But now we see something -- let's call it neocatholicism -- not merely hanging on as the tide rushes past but actually heading in the opposite direction, getting more traditional over time rather than less. This shows that neocatholicism, and by extension traditional Christianity generally, is still alive and kicking. Let us put this in the context of these other times I've described, when Christianity appeared to be dead or dying but then regained ascendancy; this is just another example of the same phenomenon, even though we haven't yet quite reached the bit where the Church triumphs again."
So Chesterton is saying that his Great Thingy is enduring where what presently looks like a great all-consuming stream is in fact ephemeral; give it another century or so, he says, and the Church will still be there stronger than ever while the stream of skeptical enlightenment dwindles and is eventually forgotten. So I don't see the difference you do with the neoreactionaries' view of their model of society.
I don't think the claim here is that the majority is always wrong. Nor that "only a living thing can go against the stream" proved Christianity right or proves neoreaction right. I think it's intended to be something less ambitious: the emergence in Chesterton's time of a Catholic reaction showed (according to Chesterton) that Catholic Christianity was a still-somewhat-vigorous living thing and shouldn't be written off, and the emergence of neoreaction in our time shows (according to Azathoth123, if I'm interpreting him right) that a reactionary view of society is a still-somewhat-vigorous living thing and shouldn't be written off.
(So the actual alleged truth being asserted would be something like this: "Something that goes against the current of popular opinion, not merely holding on but pushing in the opposite direction, must have some vigour to it, and it may turn out to win in the end." Which, like the literal statement about floating bodies, is probably true but not very interesting.)
To support his stronger thesis that traditional Catholic Christianity is (not merely not quite dead yet, but) everlasting and always ultimately triumphant, Chesterton appeals to a bigger historical context in which (so he says) it has repeatedly seemed dead but returned greater and more terrible than ever before. I'm not sure whether the neoreactionaries make a similar claim.
There's still very little here in the way of actual argument, but that's how Chesterton rolls. Some clever and counterintuitive ideas, a paradoxical way of presenting them, lots of rhetorical fireworks and ingenious metaphors, and his work is done. If you get as far as thinking carefully about whether what he says is actually correct then his verbal pyrotechnics obviously weren't impressive enough.
Well progressivism self-identifies as "being on the right side of history".
Indeed it does. It sees itself as the stream and the tide, not dead flotsam. At least, when it is not casting its enemies as the stream and itself as the living thing valiantly fighting against oppression. Chesterton, progressivism, and neoreaction all have that equivocation in common, casting their favoured ideology as either the tide or as fighting against the tide, as it suits their rhetoric.
"Don’t let anybody discourage you or tell you that intelligence doesn’t pay or that success in life has to be achieved through dishonesty or through sheer blind luck. That is not true. Real success is never accidental and real happiness cannot be found except by the honest use of your intelligence."
Ayn Rand
Too strong.
Nobody EVER got successful from luck? Not even people born billionaires or royalty?
Nobody can EVER be happy without using intelligence? Only if you're using some definition of happiness that includes a term like "Philosophical fulfillment" or some such, which makes the issue tautological.
I don't think you're applying the negation correctly; "not every success was from luck" means "at least one success was not from luck." Similarly, if you broaden your viewpoint to before the moment of someone's birth, it seems silly to claim that it's an accident that they were born a billionaire or royalty; it's not like their ancestors put no planning into acquiring their wealth or their titles.
Not really; this is a nontrivial empirical claim that turns out to be correct. People with solid philosophical grounding are measurably happier (on standard psychological surveys of happiness) than people without.
I didn't read that as a negation of "success in life has to be achieved... through sheer blind luck" but rather of "real success is never accidental". Both, of course, are descriptively false (at least for values of "real" that don't bake in the conclusion), though as a normative statement I'd rate the former as much more problematic.
That was the impression I had. Yes, Rand is making the normative claim that 'accidental' success is not 'real,' and that 'happiness' acquired in ways other than 'honest use of your intelligence' is not 'real,' but those seem like fine normative claims to me.
They sound like no true Scotsman to me. And they make the whole thing tautological. Would you consider it worth quoting if she said "nobody ever achieves anything by luck, except for the times they get lucky"? Or "happiness is only achieved through honest use of your intelligence if it's achieved through honest use of your intelligence"?
Some people hold the view that all normative claims are either tautological or false. Does that describe you, or can you provide an example of a normative statement that you consider true and non-tautological?
In the second case, I'm happy to discuss underlying value systems and the similarities or differences. In the first, I don't think I'm interested in discussing whether or not value systems should be communicated through normative claims.
– Jimmy Kimmel
It's actually only about 45 percent. The death rate for the world as a whole is about 93 percent.
That's just not true. Death rate, as the name implies, is a rate - the population that died in this year divided by the average total population. If "death rate" is 100%, then "birth rate" is 100% by the same reasoning, because 100% of people were born.
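The rate-versus-cumulative-fraction distinction is easy to check with rough, illustrative numbers (these are ballpark figures for the sake of the arithmetic, not precise demographic data):

```python
# Crude death rate: deaths in a year divided by the average population.
deaths_this_year = 60_000_000        # rough illustrative figure
world_population = 7_000_000_000     # rough illustrative figure
crude_death_rate = deaths_this_year / world_population
print(f"{crude_death_rate:.2%}")     # under 1% per year, nowhere near 93%

# The "93%" figure instead answers a cumulative question:
# what fraction of everyone ever born is now dead?
ever_born = 108_000_000_000          # commonly cited rough estimate
alive_now = 7_000_000_000
fraction_dead = (ever_born - alive_now) / ever_born
print(f"{fraction_dead:.1%}")        # a cumulative fraction, not a rate
```

By this accounting the "93%" is a fraction of all people ever born, so calling it a "death rate for the world" conflates two different quantities.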
G. K. Chesterton
... the lateral thinker who finds a new route forward, the hedonist who bungee jumps off the edge, and the engineer who builds a bridge.
(Of course, there might not be another route to find, the bungee jumping could get you killed, and a bridge might not be cost-effective, but I'd like to at least consider a third way out of a dilemma)
I think all the work here is done by determining what actually constitutes a precipice.
A quote from my son (who just turned eleven):
This sounds trite, but I think it is actually the correct (or most sensible) answer. I was kind of impressed. Maybe we should ask children more of these grand questions and take their answers as factual instead of reading them as deeper than they are.
I prefer:
Kurt Vonnegut, Breakfast Of Champions
Indeed, I suppose their worldviews are much clearer and in some ways less biased than ours. When a child is born, he sees the world as it is, not through the many prisms of our subjective value judgements.
-Richard Hamming, Mathematics on a Distant Planet
It's actually called Mathematics on a Distant Planet.
“Never confuse honor with stupidity!” ― R.A. Salvatore, The Crystal Shard
-- Douglas Hofstadter, Gödel, Escher, Bach
I think
are generally preferred on Rationality Quotes threads. You can make blockquotes by typing a greater-than symbol (>) followed by a space before each paragraph in your quote (no need for quotation marks).
Also, that specific quote doesn't make much sense to me without context.
After a joint American-Soviet mission to Mars, the astronauts return home and refuse to say who was the first to set foot on the planet. Everybody pesters them, but they say they did it together (though they really couldn't have). The Soviet astronaut is drinking with a new friend, whom he has known for only a few hours, and the friend says it is impossible that Harrison will claim the honour - and so gets dubbed 'a Martian' himself. 'Martians' here is really a name for humans to whom petty things don't matter, who work for all mankind.
Supporters of the Soviets were keen on moral equivalency.
Imagine if that was done with Nazis. "Petty things like the difference between people who burn others in ovens, and people who don't, don't matter".
I think that the quote means "petty things like who stepped out of the spaceship first don't matter", not "petty things like the difference between us and those capitalist pigs don't matter".
It's also true that the line between "American" and "Soviet" (or, for that matter, between "American" and "1940s German") is not drawn in remotely the same way as the line between "burns others in ovens" and "doesn't": it is mainly indicative of which part of the world you were born in. I have much greater sympathy for moral equivalency in the first case than in the second.
The line between a random American and a random Soviet person depends mostly on what part of the world they were born in. A person who lands on Mars is not random; they couldn't get to Mars without enthusiastically participating in the system. The people who praise the astronauts are aware of this too, and will treat the astronauts' successes as a success of the system, not mainly as the success of an individual astronaut.
Ok, how about the difference between "sends people to the gulags on trumped up charges" and "doesn't", or "engineers famines" and "doesn't"?
Offend with substance, don't offend with style.
Fixing broken windows is useful even if you don't care about the actual window.