Rationality Quotes December 2014
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments (440)
Saul Alinsky, in his Rules for Radicals.
This one hit home for me. Got a haircut yesterday. :P
If I could convince Aubrey de Grey to cut off his beard it would increase everyone's expected longevity more than any other accomplishment I'm capable of.
This I'm not actually sure about. I think the guru look might be a net positive in his particular situation.
Agreed. His fundraising might be benefiting from a strategy that increases the variance of people's opinions of him even if it also lowers the mean.
I wasn't familiar with the name, so I looked it up. There are some pretty strong criticisms of him here: http://www2.technologyreview.com/sens/docs/estepetal.pdf
Looks like pseudoscience.
There's no such thing as evidence-based decision-making about strategies for research funding. Nobody really knows good criteria for deciding which research should get grants.
Aubrey de Grey, among other things, argues that it's good to put out prizes for research groups that get mice to a certain increased lifespan. That's the Methuselah Foundation's Mprize.
The Methuselah Foundation has since worked to set up the New Organ Liver Prize, which awards $1 million to the first team that creates a regenerative or bioengineered solution that keeps a large animal alive for 90 days without native liver function.
Funding that kind of research is useful whether or not certain arguments Aubrey de Grey made about “Whole Body Interdiction of Lengthening of Telomeres” are correct. In science there's room for people proposing ideas that turn out to be wrong.
I have seen him speak a couple of times and he addressed many of these criticisms in the talks. You might want to read his response to these criticisms before assuming they are valid.
A lot of this comes from a lack of appreciation of the difference between science and engineering. In engineering you just have to find something that works. You don't need to understand everything.
Some debate here and you can easily find his talks online:
http://www2.technologyreview.com/sens/
In his talks I did not get the sense that he is positioning himself as a great misunderstood maverick. He does say that in his opinion much ageing research is unproductive because it is aimed at understanding the problem rather than fixing it.
For example, rather than tweak metabolic processes to produce slightly smaller amounts of toxic substances, remove those substances by various means, or replace the cells grown old from said toxic substances.
His solution to cancer is to remove the telomerase genes. This way cancer cells will die after X divisions. Of course this creates the problem that stem cells will not work. So we will need to replenish germ lines in the immune system, stomach walls, skin etc.
These are "dumb" strategies and rarely of interest to scientists perhaps for that reason.
There is a similar issue in nanotechnology, discussed in Drexler's book "Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization". For example, you do not need to solve the protein folding problem in full generality in order to design proteins that have specific shapes. You just need to find a set of patterns that lets you build proteins of specific shapes.
(edited for typos)
I will definitely be going over this, it looks very helpful. Thank you for making this.
I haven't finished the document yet, but I noticed it keeps on using the word "unscientific", which sounds problematic as one of its aims is to define pseudoscience.
?
They explicitly say that there is no rigid definition distinguishing pseudoscience from legitimate science. They claim that in order to distinguish between them it's necessary to point at specific instances of misleading behaviors, and they enumerate these behaviors at the very beginning of the paper.
But in that list of problems, they keep saying "Unscientifically simplified", "Unscientifically claimed", etc., which is a problem unless they define science. They clearly haven't learned how to taboo words like "science", and it shows here.
His girlfriend, or one of his girlfriends (I'm not sure how many he had at the time) told me she thinks the beard is really hot.
There might be a bit of selection bias there.
And you end up like this.
Seems to have worked for them.
And a thousand female metalheads shall weep.
it's fun to contemplate alternative methods for avoiding/removing these barriers
The race is not always to the swift, nor the battle to the strong, but that's the way to bet.
-Damon Runyon
Damon Runyon clearly has not considered point spreads.
"Murphy's Laws of Combat"
This is what survivorship bias looks like from the inside.
the map is not the territory. if it's stupid and it works, update your map.
Thomas J. McKay, Reasons, Explanations and Decisions
I guess technically if a lot of stocks paid their dividend on the same day (went ex-divvie) you could get a 0.5-1% fall in stock prices (depending on the dividend yield at the time) without there being a slump - the value of those dividends, now paid out, is simply no longer part of the market. But I agree wholeheartedly with the sentiment.
--Eugene Volokh, "Liberty, safety, and Benjamin Franklin"
A good example of the risk of reading too much into slogans that are basically just applause lights. Also reminds me of "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which."
I mostly agree, but I think the slogan (like, I think, many others about which similar things could be said) has some value none the less.
A logically correct but uninspiring version would go like this:
-- Not Benjamin Franklin
Franklin's slogan serves as a sort of reminder that (1) there is a frequent temptation to "give up essential Liberty, to purchase a little temporary Safety" and (2) this is likely a bad idea. Indeed, the actual work of figuring out when the slogan is appropriate still needs to be done, but the reminder can still be useful. And (3) because it's a Famous Saying of a Famous Historical Figure, one can fairly safely draw attention to it and maybe even be taken seriously, even in times when the powers that be are trying to portray any refusal to be terrorized as unpatriotic.
Of course Volokh is aware of the "reminder" function (as he says: "The slogan might work as a reminder") but I think he undervalues it. (He says the "real difficulty" is deciding which tradeoffs to make, but actually just noticing that there's an important tradeoff being proposed is often a real difficulty.) And, alas, its Famous Saying nature is pretty important too.
It strikes me that the original Franklin quote really identifies a specific case of the availability heuristic. That is, when you're focused on safety, you tend to adopt policies that increase safety, without even considering other values such as liberty.
There may also be an issue of externalities here. This is really, really common in law enforcement. For example, consider civil asset forfeiture. It is an additional legal tool that enables police to catch and punish more criminals, more easily. That it also harms a lot of innocent people is simply not considered, because there is no penalty to the police for doing so. All the cost is borne by people who are irrelevant to them.
The quote has always annoyed me too. People bring it up for ANY infringement on liberty, often leaving off the words "essential" and "temporary", making a much stronger version of the quote (and one that is, of course, obviously wrong).
Tangentially, Sword of Good was my introduction to Yudkowsky, and by extension, LW.
-- Ferrett Steinmetz
But is it only a human behavior? I'd think anything with cached thoughts/results/computations would be similarly vulnerable.
That's true of most frequently referenced elements of human nature, if not all of them.
Even Love.
~The Homo Sapiens Class has a trusted computing override that enables it to lock itself into a state of heightened agreeability towards a particular target unit. More to the point: it can signal this shift in modes in a way that is both recognizable to other units, and which the implementation makes very difficult for it to forge. The Love feature then provides HS units on either side of a reciprocated Love signalling a means of safely cooperating in extremely high-stakes PD scenarios without violating their superrationality circumvention architecture.
Hmm.. On reflection, one would hope that most effective designs for time-constrained intelligent(decentralized, replication-obsessed) agents would not override superrationality("override": Is it reasonable to talk about it like a natural consequence of intelligence?), and that, then, the love override may not occur.
Hard to say.
Lois McMaster Bujold
The less you care about "the respect" others show towards you, the less power idiots can exert over you. The trick is differentiating whose opinion actually matters (say, in a professional context) and whose does not (say, your neighbors').
Due to being social animals, we're prone to rationalize caring about what anyone thinks of us (say, strangers in a supermarket when your kid is having a tantrum -- "they must think I'm a terrible mom!" -- or in the neighbors' case, "who knows, I might one day need to rely on them, better put some effort into fitting in"). Only very few people's opinions actually impact you in a tangible / not-just-social-posturing way. (The standard answer on /r/relationships should be "why do you care about what those idiots think, even in the unlikely case they actually want to help your situation, as opposed to reinforcing their make-believe fool's-paradise travesty of a world view".)
Interestingly, internalizing such an IDGAF attitude usually does a good job of signalling high status, in most settings. Sigh, damned if you do and damned if you don't.
I am having trouble understanding the message here... and consequently how this is a good rationality quote.
Is this trying to say "don't bother trying to please people in childhood"?
Is it "don't bother trying to earn respect as an adult"?
Both are poor advice, in general, IMO.
I think it means something more like, "don't expect the behaviors that pleased adults when you were a child, to get you anywhere as an adult. Children are considered pleasing when they're submissive and dependent, but adults are respected for pleasing themselves first."
The rationality connection is, well, winning.
"It’s much better to live in a place like Switzerland where the problems are complex and the solutions are unclear, rather than North Korea where the problems are simple and the solutions are straightforward."
Scott Sumner, A time for nuance
The problems in North Korea are not so simple with straightforward solutions, when we look at them from the perspective of the actors involved.
For the average citizen in North Korea, there are no clear avenues to political influence that don't increase rather than decrease personal risk. For the people in North Korea who do have significant political influence, from a self-serving perspective, there are no "problems" with how North Korea is run.
North Korea's problems might be simple to solve from the perspective of an altruistic Supreme Leader, but they're hard as coordination problems. Some of our societal problems in the developed world are also simple from the perspective of an altruistic Supreme Leader, but hard as coordination problems. Some of the more salient differences are that those problems didn't occur due to the actions of non altruistic or incompetent Supreme Leaders in the first place, and aren't causing mass subsistence level poverty.
I do think North Korea's leaders would prefer a state of affairs where they could educate their own elite instead of sending the kids to Switzerland to get a real education.
North Korea's military would like to have capable engineers that can produce working technology.
On the other hand a simple act like giving the population access to internet might produce a chain reaction that blows up the whole state.
Jang Sung-taek was someone in North Korea with a lot of political power. According to Wikipedia, South Korean officials believed that Jang Sung-taek was the de facto leader of North Korea in 2008.
Last year North Korean state television announced his execution. His extended family might also have been executed.
One of the charges was that he "made no scruple of committing such act of treachery in May last as selling off the land of the Rason economic and trade zone to a foreign country..."
It's worth noting that Western countries did engage in policies to block Jang Sung-taek's efforts to create economic change in North Korea.
That simply means that Switzerland has already solved the easier problems North Korea struggles with. To paraphrase, an absence of low-hanging fruit on a well-tended tree means you're probably in a garden.
Isn't that the point of the quote?
Maybe, but if so the quote is ineffective at conveying it.
Paul Graham
The situation is far worse than that. With a compiled program, at least, you can add more memory or run it on a faster computer, disassemble the code and see at which step things go wrong, rewind if there's a problem, interface with programs you've written, and so on. If compiled programs really were that bad, hackers would have already won (since security researchers wouldn't be able to take apart malware), DRM would work, and no emulators for undocumented devices would exist.
The state of the mind is many orders of magnitude worse.
Also, I'd quibble with "we don't know why". The word I'd use is "how". We know why, perhaps not in detail (although we sort of know how, in even less detail).
i largely agree in context, but i think it's not an entirely accurate picture of reality.
there are definite, well known, documented methods for increasing available resources for the brain, as well as doing the equivalent of decompilation, debugging, etc... sure, the methods are a lot less reliable than what we have available for most simple computer programs.
also, once you get to debugging/adding resources to programming systems which even remotely approximate the complexity of the brain, though, that difference becomes much smaller than you'd expect. in theory you should be able to debug large, complex, computing systems - and figure out where to add which resource, or which portion to rewrite/replace; for most systems, though, i suspect the success rate is much lower than what we get for the brain.
try, for example, comparing success rates/timelines/etc... for psychotherapists helping broken brains rewrite themselves, vs. success rates for startups trying to correctly scale their computer systems without going bankrupt. and these rates are in the context of computer systems which are a lot less complex, in both implementation and function, than most brains. sure, the psychotherapy methods seem much more crude, and the rates are much lower than we'd like to admit them to be - but i wouldn't be surprised if they easily compete with success rates for fixing broken computer systems, if not outperform.
But startups seem to do that pretty routinely. One does not hear about the 'Dodo bird verdict' for startups trying to scale. Startups fail for many reasons, but I'm having a hard time thinking of any, ever, for which the explanation was insurmountable performance problems caused by scaling.
(Wait, I can think of one: Friendster's demise is usually blamed on the social network being so slow due to perpetual performance problems. On the other hand, I can probably go through the last few months of Hacker News and find a number of post-mortems blaming business factors, a platform screwing them over, bad leadership, lack of investment at key points, people just plain not liking their product...)
in retrospect, that's a highly in-field specific bit of information and difficult to obtain without significant exposure - it's probably a bad example.
for context:
friendster failed at 100m+ users - that's several orders of magnitude more attention than the vast majority of startups ever obtain before failing, and a very unusual point to fail due to scalability problems (with that much attention, and experience scaling, scaling should really be a function of adequate funding more than anything else).
there's a selection effect for startups, at least the ones i've seen so far: ones that fail to adequately scale, almost never make it into the public eye. since failing to scale is a very embarrassing bit of information to admit publicly after the fact - the info is unlikely to be publicly known unless the problem gets independently, externally, publicized, for any startup.
i'd expect any startup that makes it past the O(1m active users) point and then proceeds to noticeably be impeded by performance problems to be unusual - maybe they make it there by cleverly pivoting around their scalability problems (or otherwise dancing around them/putting them off), with the hope of buying (or getting bought) out of the problems later on.
Plutarch, from Life of Theseus.
This makes me think that some of this practice might have been motivated by professional pride on the part of the mapmakers. Such as, "oh, the only reason I didn't go farther was because of the ravenous beasts, and my rival would never be able to push the boundaries farther either so you might as well buy/trust in my mapmaking"
You may be right, but I'm also inclined to include that it's fun to draw monsters.
Saul Alinsky, in his Rules for Radicals.
there is a familiar phenomenon here, in which a certain kind of would-be economic expert loves to cite the supposed lessons of economic experiences that are in the distant past, and where we actually have only a faint grasp of what really happened. Harding 1921 “works” only because people don’t know much about it; you have to navigate through some fairly obscure sources to figure out [what actually happened]. And the same goes even more strongly — let’s say, XII times as strongly — when, say, [Name] starts telling us about the Emperor Diocletian. The point is that the vagueness of the information, and even more so what most people [think they] know about it, lets such people project their prejudices onto the past and then claim that they’re discussing the lessons of experience.
Paul Krugman on the use of examples to obscure rather than clarify
What's the alternative? Cite what's currently going on in other countries (people generally aren't too familiar with that either)? Generalize from one example (where people don't necessarily know all the details either)?
Yes. Because both of those have actual data, and are thus useful - your reasoning can be tested against reality.
We just really don't know very much about the Roman economy, and we are unlikely to find out much more than we currently know. Generalizing from one example isn't good science, logic, or argument. But it's better than generalizing from the fog of history. Not a lot better - economics only barely qualifies as a science on a good day - but Krugman is completely correct to call people out for going in this direction, because doing so just outright reduces it to storytelling.
On the other hand we do know a lot about what happened in 1921, Krugman just wishes we didn't because it appears to contradict his theories.
Um, no. History contains evidence, it's not particularly clean evidence, but evidence nonetheless and we shouldn't be throwing it away.
Frequency is not importance. I think this quote has more humorous than practical merit.
But frequency can be strong evidence of importance.
I suspect many people would experience significant psychological trauma if they were unable to rationalize for a week.
Yes. But probably not above the importance of sex...
Interesting. This suggests a method or measure of the importance of compartmentalization. Maybe rationalization is even necessary for dealing rationally with real life (the word kind of gives it away). Could it be that it is needed (in one way or another) for AI to work in the face of uncertainty?
Only in the sense that lying can be called "truthization".
I read that. I agree with the argument. But it doesn't really address my intuition behind my argument.
The idea is that you have concurrent processes creating partial models of partial but overlapping aspects of reality. These models a) help make predictions for each aspect (descriptively), b) may help with acting in the context of the aspect (operationally/prescriptively), and c) may be inconsistent on the symbolic layer.
Do you want to kick out all the benefits to gain consistency? It could be that you can't achieve consistency of overlapping models at all without some super all encompassing model. Or it could be that such a super-model is horribly big and slow.
If we're going to be building a Seed AI, I really don't think a good design would involve the AI reasoning using multiple, partially overlapping, possibly inconsistent models, especially since I'm not sure how the AI would go about updating those models if it made contradictory observations. For example, upon receiving contradictory evidence, which of its models would it update? One? Two? All of them? If you decide to work with ad hoc hypotheses that contradict not only reality, but each other, just because it's useful to do so, the price you pay is throwing the entire idea of updating out the window.
If it's uncertainty you're concerned about, you don't need to go to the trouble of having multiple models; good old Bayesian reasoning is designed to deal with uncertainties in reasoning--no overlapping models required. Moreover, I have a difficult time believing that a sufficiently intelligent AI would face much of an issue with regard to processing speed or memory capacity; if anything, working with multiple models might actually take longer in some situations, e.g. when dealing with a scenario in which several different models could apply. In short, the "super all encompassing model" would seem to work just fine.
Bayesianism works well with known unknowns. But it doesn't work any better than any other system with unknown unknowns. I would say that while Bayesian reasoning can deal well with risk, it's not great with uncertainty - that's not to say uncertainty invalidates Bayesianism, only that Bayesianism is not so spectacularly strong that it is able to overwhelm such fundamental difficulties of epistemology.
To my mind, using multiple models of reality is more or less essential. My reasons for thinking this are difficult to articulate because they're mired in deep intuitions of mine I don't understand very well, but an analogy might help somewhat.
Think of the universe's workings as a large and enormously complicated jigsaw puzzle. At least for human beings, when trying to solve a jigsaw puzzle, focusing exclusively on the overall picture and how each individual puzzle piece integrates into it is an inefficient process. You're better off thinking of the puzzle as several separate puzzles instead, and working with clusters of pieces.
By doing this, you'll make mistakes - one of your clusters might actually be upside down or sideways, in a way that won't be consistent with the overall picture's orientation. However, this drawback can be countered as long as you don't look at the puzzle exclusively in terms of the individual clustered pieces. A mixed view is best.
Maybe a sufficiently advanced AI would be able to most efficiently sort through the puzzle of the universe in a more rigid manner. But IMO, what evidence we currently have about intelligence suggests the opposite. AI that's worthy of the name will probably heuristically optimize on multiple levels at once, as that capability's one of the greatest strengths machine-learning has so far offered us.
--Marcel Proust
--Rudyard Kipling, "Dane-Geld"
A nice reminder about the value of one-boxing, especially in light of current events.
Well, when this capitulation happened in 2012 no one except a few "right-wing nuts" seemed to care.
This was definitely not the right link to use, at all - how about wikipedia instead? Nor am I sure what point you want to make besides scoring political points - how about specific recommendations?
You don't see that last link as a publicity stunt? I tentatively suspect that it is - though maybe I should put that under 50% - with a lot of the remaining probability going to blackmail of some individual(s).
Jeff Bezos
George S. Patton
Henry St John, Viscount Bolingbroke, Reflections on Exile
Cf. Tolstoy: all happy families are alike, but every unhappy family is unhappy in its own way.
What happens twice probably happens more than twice: are there other notable expressions of this idea?
(There's a well-known principle in software development that's pretty close, though I can't find a Famous Quotation of it right now: when you're choosing a name for a variable or function or whatever, avoid abbreviations: there's only one way to spell a word right, and lots of ways to spell it wrong. Though this is not always good advice.)
Biblical verse on the asymmetry of error: "Enter through the narrow gate. For wide is the gate and broad is the road that leads to destruction, and many enter through it."
That's an interesting comparison. I always took the broad/narrow contrast to be about how easy each path is, and about how many take them, rather than how varied each is, but clearly the ideas are related.
"You should never bet against anything in science at odds of more than about 10^12 to 1 against."
Alas, as nice a quote as it is, it seems to be bogus:
The neutrino anomaly was about 5*10^6 to 1 against. Not quite 10^12 to 1, but I still think it shows that odds that small aren't what they're cracked up to be.
-- Oliver Burkeman, The Guardian, May 21, 2014
I enjoyed this quote, and have had a great number of self-deprecating laughs with other young professionals about how we were totally winging it.
But it is not true.
There are those winging it, but they are faking it until they make it, and they make up a smaller group than represented above. The much larger group is made from a rainbow of wrong: biases, ignorance, bad information, misinformation, conflicting agendas - the list goes on.
The group of people just winging it, pushing their limits, faking it until they make it, are only a piece of the bigger picture of stuff done wrong. It is not fair to overrepresent their influence. Although it is always a comfort to know there are others out there in the same boat, just winging it.
"As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation—or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind’s wings should have grown."
Ayn Rand
If the bolded pair of words were struck, I'd agree completely. Different people will have different balls and chains.
This quote was from a speech given to West Point cadets. By no means are they identical, but it would be relatively hard to find a group of people more alike (from the perspective of being of the same gender, same age within a few years, same nationality, and same general ideology).
A. False dichotomy - there are other choices. We might choose to compartmentalize our rationality, for example.
B. False dichotomy in a different sense - we actually don't have access to this choice. No matter how hard we work, our brains are going to be biased and our philosophies are going to be sloppy. It's a question of making one's brain marginally more organized or less disorganized, not of jumping from insanity onto reason. I'm suspicious that working with the insanity and trying to guide its flow is a better strategy than trying to destroy it.
C. Although not having a philosophy leaves us open to bias, having a philosophy can sometimes expose us to bias even further. It's about comparative advantage. Agnosticism has wiggle room that sometimes can be a place for bias to hide, but conversely ideology without self-doubt often serves to crush the truth.
A. How would you implement that choice?
B. We is a loaded term, speak for yourself. There's benefit to realizing that as a human you have bias. There's no benefit to declaring that you can't overcome some of this bias.
C Wouldn't that depend on your philosophy?
Wouldn't something good happening correctly result in a Bayesian update on the probability that you are a genius, and something bad in a Bayesian update on the probability that someone is an idiot? (perhaps even you)
Yes, but if something good happens you have to update on the probability that someone besides you is a genius, and if something bad happens you have to update on the probability that you're the idiot. The problem is people only update the parts that make them look better.
Yes, but the issue is whether or not those are the dominant hypotheses that come to mind. It's better to see success and failure as results of plans and facts than innate ability or disability.
Not without a causal link, the absence of which is conspicuous.
Not necessarily. Causation might not be present, true, but causation is not necessary for correlation, and statistical correlation is what Bayes is all about. Correlation often implies causation, and even when it doesn't, it should still be respected as a real statistical phenomenon. All Jiro's update would require is that P(success|genius) > P(success|~genius), which I don't think is too hard to grant. It might not update enough to make the hypothesis the dominant hypothesis, true, but the update definitely occurs.
"Because" (in the original quote) is about causality. Your inequality implies nothing causal without a lot of assumptions. I don't understand what your setup is for increasing belief about a causal link based on an observed correlation (not saying it is impossible, but I think it would be helpful to be precise here).
Jiro's comment is correct but a non-sequitur because he was (correctly) pointing out there is a dependence between success and genius that you can exploit to update. But that is not what the original quote was talking about at all, it was talking about an incorrect, self-serving assignment of a causal link in a complicated situation.
-- Pope Francis, Open Mind, Faithful Heart: Reflections on Following Jesus
I think it's worth clarifying that Pope Francis and Jorge Mario Bergoglio are one and the same person.
Lesson learned: do not just copy-paste from Amazon.
-- Adam Cadre
This seems like explaining vs. explaining away. The process by which better players pick up wins is by winning the "contest of athletic prowess." The game itself is interesting to watch because we like to see competent people play, and when upsets happen, they often happen for reasons that are easily displayed and engaged with in terms of the mechanics of the game.
This is similar to choosing strict determinism over compatibilism. Which players are the "best" depends on each of those players' individual efforts during the game. You could extend the idea to the executives too, anyway: which teams acquire the better players is largely a function of which teams have the best executives.
Efforts are only one variable here, and the quote did say "largely a function of". That being said, look at how often teams replay each other during a season with a different winner.
E. T. Jaynes, Probability: The Logic of Science
--Shunryu Suzuki
I think this is a very important sentiment. I'm however not sure how to get others to adopt it.
Tyler Cowen
We also confuse "what is important" with "what is interesting" fairly often.
But, as compiler optimizations exploit increasingly recondite properties of the programming language definition, we find ourselves having to program as if the compiler were our ex-wife’s or ex-husband’s divorce lawyer, lest it introduce security bugs into our kernels, as happened with FreeBSD a couple of years back with a function erroneously annotated as noreturn, and as is happening now with bounds checks depending on signed overflow behavior.
Hacker News comment
G. K. Chesterton, Orthodoxy.
I am reminded of:
@stevenkaas
In trying to find the above quote by wildcard searching on Google, I stumbled upon another quote of this nature by the dog's owner himself: "I want to love my neighbour not because he is I, but precisely because he is not I." There appears to be another one about science being bad not because it encourages doubt, but because it encourages credulity, but I'm unable to find the exact quote.
Who could have imagined that Zizek was so derivative! Oh wait...
Zizek himself lampshades the method here.
As does Chesterton, less explicitly:
and at length.
I get the impression that he (thankfully!) eased off on that particular template as time went on.
I'm inclined to think that non-ideological autocracy (we're in charge because we're us and you're you) is the human default. Anything better or worse takes work to maintain.
I'm not sure about that. In fact, I can't think of any actually non-ideologically autocratic society in history. Are you sure you're not confusing "non-ideological" with "having an ideology I don't find at all convincing"?
I seem to remember reading that tribes were more egalitarian than modern society, although it's possible the author was just romanticising the noble savage.
There's reason to believe that foragers were more materially egalitarian than farmers, just because material wealth was harder to store. But it's not obvious that they were more egalitarian when it comes to political power or ability to do violence.
When the most powerful weapon is a mounted knight in full plate mail, it's easy for a small minority to dominate. When the most powerful weapon is the pointed stick...
The medieval period is pretty late in the history of farming; I had in mind the early period of farming, when foraging and farming were more competitive.
But I think this focuses too much on visible organized violence and not enough on total violence. Were forager men more or less likely to beat their wives than farmer men? Forager parents vs. farmer parents? It seems possible that a larger percentage of the male forager population had potential access to rape through raids than the percentage of the male farmer population that had potential access to rape through soldiering, but I would want a lot of anthropological data before I made that claim confidently, which is why I don't think it's obvious.
This is a bit of a change in topic from the original comparison- tribal hunter-gatherers to modern society- but I think that the sorts of things people use violence and political power for are so different that they can't be compared that directly. As the saying goes, God created man but Sam Colt made them equal: in America it's not that uncommon for individual losers to shoot the most politically powerful man in the country, often leading to his death. I suspect the rate of losers in tribes murdering the local chief is much lower. But maybe what we want to compare is not 'ability to do violence' but 'ability to get away with doing violence,' but even then I don't think we have the data to make a good comparison. Was the ability of tribals to go on the run to escape vengeance better or worse than the ability of moderns? It seems like there are multiple dimensions with different directions for that comparison.
An interesting read, but I was not claiming that a more egalitarian distribution of physical power decreases violence - if anything, having one dominant power leads to peace because no-one challenges them, while as you say, the levelling power of firearms means that anyone can inflict violence.
AFAIK many tribal societies were much more violent - I read somewhere that in some tribes the majority of adult male deaths were due to homicide.
Skill is at a large premium. Thus those who have the free time to practice can end up dominating.
Actually, one thing that I noticed while reading this book is that despite engaging in violence far more frequently than people in non-tribal cultures, the Yanomamo don't really seem to have a conception of martial arts or weapons skills, aside from skill with a bow. The takeaway I got was that in small tribal groups like the ones they live in, there isn't really the sort of labor differentiation necessary to support a warrior class. Rather, it seems that while all men are expected to be available for forays into violence, nobody seems to practice combat skills, except for archery which is also used for food acquisition. While many men were spoken of as being particularly dangerous, in all cases discussed in the book, it was because of their ferocity, physical strength, and quickness to resort to violence. In fact, some of the most common forms of violent confrontation within tribes are forms of "fighting" where the participants simply take turns hitting each other, without being allowed to attempt to defend or evade, in order to demonstrate who's physically tougher.
I'm not sure how representative the Yanomamo are of small tribal societies as a whole, but it may be that serious differentiation of martial skill didn't come until later forms of societal organization.
This seems like Chesterton is making it up completely. Most progressives base the impulse on the hope that things could be better; dealing with the decay of conservatism is not a hypothesis that even enters in their minds. The 'truth of conservatism' (at least, the straw-conservatism defined by Chesterton here) is taken for granted by most people: if things keep on going like this, they'll keep on being like this.
No one has ever become a feminist by saying 'my god! if we leave things alone, the patriarchy will keep becoming even more oppressive and brutal with each year! We need to fight this slide of the status quo, and incidentally, it would be nice if we could not just repair the rot but also yank the status quo towards feminism and get women the vote and stuff like that'.
No, it tends to be more like 'the status quo is awful! Let's try to move it towards getting women the vote and stuff like that'.
AlyssaRowan, on Hacker News
Schneier on Security blog post
Ayn Rand, to a Catholic Priest.
Philosophers have played a game going way back where they believe that popular religion comes in handy as a fiction for keeping the mob in line, but they view themselves as god-optional. The philosophes in the Enlightenment started the experiment of letting the mob in on the truth, and the experiment has apparently gone so far in parts of Europe like Estonia that some populations have lost familiarity with Christian beliefs, or even how to pronounce Jesus' name in their own language. Or so Phil Zuckerman claims:
https://books.google.com/books?id=C-glNscSpiUC&lpg=PP1&dq=phil%20zuckerman&pg=PA96#v=onepage&q=estonia&f=false
The mob is pretty well educated these days, and the standard of living is so high that there's much less incentive to step out of line. I don't think we can compare modern nations to historical nations to make any claim about whether religion keeps people in line.
The claim that people can't pronounce Jesus' name might apply to former Soviet Union countries, but I doubt it applies anywhere else in Europe.
Do you know that Jesus's actual name is Yeshua?
We don't know that. It was likely some variant of the name commonly translated as "Joshua" in English. It could have been Yeshua or Yehoshua or a variety of slightly Aramacized variants of that.
But the English "Jesus" is still far off.
Sure, but I fail to see how that's relevant to the point in question.
-- Lisa Bradley, a character in Brennan Lee Mulligan & Molly Ostertag's Strong Female Protagonist
Or by physics. Not all consequences for overconfidence are social.
Actually, well I suppose it depends on what you mean by "met".
There's no such thing as gods.
I think this is about the only scenario on LW that someone can be justifiably downvoted for that statement.
I don't see why. Non-agents simply don't fit the definition of "god", so equivocating on the definition of "god" from "world-changingly powerful agent" to "abstract personification of causality itself" does not really shed any light on anything.
Let's look at why we are asking the question. The relevant property in this discussion is "will punish you for being 'uppity'". Being an agent isn't directly relevant to that.
But causality can't punish you for being uppity. You basically just cannot be uppity against causality.
This is false. Not only does the LW wiki have a definition of "god" that is a non-agent, the study of theology points one to numerous gods that people believe in that are non-agents. There's a reason that many of the popular monotheisms refer to their god as a personal god; it stands in contrast to the heresy of a non-personal (i.e., non-agent) god.
It isn't meant to be some rigorous account of how the world works, it's a deliberate mythology. I'm not entirely convinced as to whether it's a good idea, but aspie criticisms that amount to "god don't real" are missing the point entirely.
http://www.moreright.net/postrat-religion/
Actually, upon reading that article you've linked, I've found it to be cogent and well-written but emotionally toxic, tenuous in its connection to facts, and philosophically/existentially filled to the brim with lost purposes. To give examples, the obsession with preserving "European civilization" and the admiration for the internet's cult of ultra-masculinity (which should really be called pseudo-masculinity since it so exaggerates the present day's Masculinity Tropes that it dramatically misses other modes of masculinity, despite their actual historicity) portray the writer as chiefly, bizarrely concerned with present-day cultural trends rather than with the kind of good-in-themselves terminal values around which one could design a society from scratch if necessary.
I mean, sorry to be uncharitable in my reading, but I just don't see why I should want to build white European Christian or post-Christian society, in the first place. I know that reactionary and conservative communities give immense weight and worry to cultural goal-drift away from whatever weird version of white Christian/post-Christian society it is they actually like (derisive tone because it often seems they like The Silmarillion more than Actually Existing Europe), but it seems to me that the only way to really avoid random drift is to ground one's worldview in things that are actually, verifiably, literally true. Only an epistemic thought process will obtain consistent, nonrandom, meaningful results.
And since there is a truth of the matter when it comes to human beings' emotional and existential needs, it seems you couldn't get anywhere by doing anything but anchoring yourself to that truth and drawing as close as possible. Any deviation into lost purposes, ill-posed questions, and fallacious reasoning will be punished.
If you attach yourself to some invented image of some particular time-period in European history and try to pump all the entropy out of it, try to optimize everything to forcibly fit that image you've got in your head, you will only succeed in destroying everything else that you aren't acknowledging you care about. And since that image isn't even a terminal goal, a good-in-itself, the everything else will just be more-or-less everything.
If you separate Myth from Truth, Truth will burn you in hellfire. There is no escape.
(Also, citing an imageboard as a source of information about mythology and religion is just embarrassingly bad scholarship.)
Says the guy citing a deliberately informal wiki as a source of information about historical cultures :P
Fine, but Dungeons and Dragons is also a constructed, deliberate mythology, and you wouldn't respond to a quote about "You haven't met gods" by saying, "Actually, I role-played encountering Boccob the Uncaring, God of Magic, just last Tuesday."
Well actually, I would respond that way, but as a joke. I would not expect to be taken seriously.
Why are you arguing about taste? People adapt metaphors to help them think and act effectively. Human brains like agent-metaphors a lot: witness the popularity of the Moloch essay.
Your problem with classical religion might be that a lot of silly people are classically religious.
"But is the metaphor true" is kind of a silly question, imo.
Also, if there is an agenty God, it/she/he made sure to construct a world where nudges here and there are hard to trace.
No, my actual problem here is that these metaphors are not useful for making predictions.
Is that your line for good language use, prediction effectiveness? Do you have an issue with Scott's Moloch metaphor also? What about poetic language more generally?
Look: I am not a major fan of using poetic language to describe real life. Really. Just don't like it. And the problem with Scott's "metaphor" is that it wasn't a metaphor: he actually explicitly tagged the post as having an epistemic status of Fanciful Visionary Visions. It wasn't supposed to be anything approaching a useful sociological analysis that cuts reality at the joints. It wasn't supposed to be a rational way to think about the world.
But because it told a colorful story that stirs the emotions, people remember it far more prominently than any of Scott's writing on mere statistics that actually addresses reality, and now I have to put up with people pretending there's a demon at work in the world.
Fair enough. Why insist others share this preference? I like poetry (T. S. Eliot for example).
A ton of math is about metaphors (Lakoff wrote a book about this).
I up-voted it for dissenting against sloppy thinking disguised as being deep or clever. Twisting the word 'god' to include other things that do not fit the original, literal or intended meaning of the term results in useless equivocation.
I'm not sure this is very rational. Assuming that you are more competent than you really are -- which seems to be a matter of hubris -- is indeed capable of destroying you.
— George R. R. Martin, Wikiquote, audio interview source
(Changed from an earlier quote I decided I'd keep for later.)
Wow. I am, uh, embarrassed to say that I somehow managed to get caught up in the replies to this comment without ever actually seeing the quote itself until now. (In my defense, I did get here through the Recent Comments sidebar, but still... yeah, not one of my prouder moments.) So, now that I've finally gotten around to reading the quote, uh...
...Maybe I'm dense, but I'm not quite understanding this one. I mean, I understand that it's an explanation of Martin's philosophy of writing, but I'm not really seeing the rationality tie-in. I could probably shoehorn in an explanation for why and how it relates, but the problem with such an explanation is that it would be exactly that: shoehorned in. I feel as though advice of this sort would be much better suited to a writing thread than to a rationality quotes thread. Could someone explain this one to me? Thanks in advance.
Fair point. To be honest, I just got this quote from Martin's Wikiquote page after I decided to save the original and needed something to replace it. (I suppose I could've done something like change the whole post to "[DELETED]" and then retract it, but this seemed good enough at the time.)
I can't really make a rigorous case for this quote's appropriateness here; what actually drove my decision to use this was basically a hunch. My after-the-fact rationalization is that maybe this quote sort of touched on the Beyond the Reach of God sense that death is allowed to happen to anyone, at any time, and especially in dangerous situations, as opposed to most fiction which would only allow the hero to die in some big heroic sacrifice?
For an after-the-fact rationalization, that's actually not bad. On the other hand, I think Martin might actually push it a little too far; reality isn't as pretty as most fiction writers make it out to be, true, but it isn't actively out to get you, either. The universe is just neutral. While it doesn't prevent people from suffering or dying, neither does it go out of its way to make sure they do. In ASoIaF, on the other hand, it's as though events are conspiring to screw everyone over, almost as if Martin is trying to show that he isn't like those other writers who are too "soft" on their characters. In doing so, however, I feel he fell into the opposite trap: that of making his world too hostile. Everything went wrong for the characters, which broke my suspension of disbelief every bit as badly as it would have if everything had gone right.
For me, it's not just a problem of suspension of disbelief, it's a problem of destroying involvement in the story. If too much bad happens to the characters, I'm less likely to be emotionally invested in them. Martin's "The Princess and the Queen" (a prequel to ASoIaF) in Dangerous Women is especially awful that way, though the characters aren't developed very much, either. I'm hoping he does a better job in the main series.
His reputation as a "bloody minded bastard" aside, Martin has creznaragyl xvyyrq bss n tenaq gbgny bs bar CBI punenpgre va gur ebhtuyl svir gubhfnaq phzhyngvir cntrf bs gur NFbVnS frevrf fb sne (abg pbhagvat cebybthr/rcvybthr punenpgref, jubz ab bar rkcrpgf gb fheivir sbe zber guna bar puncgre). Gur raqvat bs gur zbfg erprag obbx yrnirf bar CBI punenpgre'f sngr hapyrne, ohg gur infg znwbevgl bs gur snaqbz rkcrpgf uvz gb or onpx va fbzr sbez be nabgure. (Aba-CBI graq gb qebc yvxr syvrf, ohg gur nhqvrapr vf yrff nggnpurq gb gurz.)
Prediction: 30% chance it's a Christmas related quote.
Nope, just saving my first choice of quote for the beginning of the next thread. I figure if I post a good quote now, people will mostly only see it from the recent comment and recent quote feeds, and after a few others get posted, people will mostly forget about it and not, if they were to like it, upvote it. Whereas if it were one of the first posts in a thread, and people liked it and started upvoting it, it would stay high on the page and gather even more attention and upvotes, creating a positive feedback loop which would give me karma.
Machiavellian, isn't it? I doubt it'll work out that well, but I figure it's worth a shot.
^Everyone should upvote this in an ironic celebration of your honesty.
I think that we use "Best" (which is a complicated thing other than "absolute points") rather than "Top" (absolute points) precisely to reduce the effectiveness of that strategy.
That's interesting. What criterion/criteria does "Best" use, then?
And on a different but related note: does it really negate the strategy? I note that, despite using the "Best" setting, this page still tends to display higher-karma comments near the top; furthermore, most of those high-karma comments seem to have been posted pretty early in the month. That suggests to me that Gondolinian's strategy may still have a shot.
Technical explanation
Non-technical explanation
All right, thanks. So, I gave both articles a read-through, and I think that as described, the system implemented won't necessarily negate the strategy (though it may somewhat reduce said strategy's effectiveness). Really, it all depends on how awesome Gondolinian's quote is; if it's awesome enough to get a rating that's 100% positive, then the display order will be organized by confidence level, which in practice just means a greater number of votes most of the time (more votes → less uncertainty), which in turn means it'll need to be posted earlier, which brings us back to the original situation, blah blah blah etc. (A single downvote, however, would be sufficient to screw up the entire affair, so there's that.) I guess that's why you originally said it would only reduce the strategy's effectiveness, not eliminate it entirely.
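For reference, the "Best" sorting being discussed (which this site inherited from Reddit) is commonly described as ranking comments by the lower bound of the Wilson score confidence interval on the upvote proportion; fewer total votes mean a wider interval and hence a lower bound. A minimal sketch, assuming the standard formulation of that interval:

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval for the true
    upvote proportion, at ~95% confidence (z = 1.96)."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - spread) / denom

# A 100%-positive comment with few votes can still rank below a
# slightly-contested comment with many votes, which is the feature
# that blunts the post-early-and-snowball strategy.
print(wilson_lower_bound(5, 0) < wilson_lower_bound(50, 2))  # True
```

This also shows why a single downvote doesn't wreck a well-voted comment under "Best": the ranking depends on the whole vote count, not just on whether the score is 100% positive.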
That's awesome. My metaphorical hat is off to Gondolinian for figuring out a way to game the system--and crucially, take the second step: countering akrasia and actually doing it. Instrumental rationality at its finest.
Don't bet on it. :)
"There is no such thing as uncharted waters. You may not have the chart on hand to show you how to navigate these waters, but the charts exist. Google them."
Joe Queenan, WSJ 11/30/14
Too strong to be literally true but still
Think it's false, both literally and figuratively. Moreover, the guy needs to get out of his cubicle and go to interesting places :-)
95% of the people, 95% of the time is a less good standard when dealing with interesting people, isn't it ;)
EDIT: Downvote for... accepting a different opinion? Duly noted; will do so more quietly in future.
There's a law about that :-P
As far as literal charts of literal bodies of water on the surface of the earth, satellite photography actually has pretty much solved that problem.
As far as metaphorical waters, human civilization is larger than most people really think, and consists disproportionately of people finding and publishing answers to interesting questions. "Don't assume the waters are uncharted until you've done at least a cursory search for the charts" is sound advice.
Ahem. Do you really think that a picture of water surface which looks pretty much the same anywhere is equivalent to a nautical chart?
Proper nautical charts are very information-dense (take a look) and some of the more important bits refer to things underwater.
Raymond Smullyan, This Book Needs No Title, taking joy in the merely real
Duplicate
Whoops! Thank you.
Publius Cornelius Scipio Nasica Corculum
Since you're probably aware that one Roman senator (Cato) ended his speeches with "Carthage must be destroyed," you should also know that another responded with the opposite.
How is this a rationality quote?
Accurate beliefs, efficient altruism, and giving historical credit to the good guys. What does it say about us that (I would guess) most well educated westerners know about the "Carthage must be destroyed" quote but not the "Carthage must be saved" one?
It says that we care about the real as opposed to the imaginary. That is entirely to our credit.
Regardless of what may be considered moral, Carthage was destroyed. Educated people who wish to understand ancient history therefore naturally wish to learn of Cato's anti-Carthaginian campaign, precisely because it was successful. In addition, Cato the Elder was considered a model of behaviour by subsequent generations of Romans, in a way that Corculum was not, therefore to understand ancient Rome we have to understand the behaviour they valourised.
Similarly, Fumimaro Konoe is not nearly as famous as Hideki Tojo. This is not because educated Westerners favour Tojo's foreign policy, but because Tojo won the debate and Japan went to war.
While I agree with the overall sentiment, I think it's important not to overdo this approach. Let me explain.
Consider the situation where you have a stochastic process which generates values -- for example, you're drawing random values from a certain distribution. So you draw a number and let's say it is 17.
On the one hand you did draw 17 -- that number is "real" and the rest of the distribution which didn't get realized is only "imaginary". You should care about that 17 and not about what did not happen.
On the other hand, if we're interested not just in a single sample, but in the whole process and the distribution underlying it, that number 17 is almost irrelevant. We want to understand the entire distribution and that involves parts which did not get realized but had potential to be realized. We care about them because they inform our understanding of what might happen if the process runs again and generates another value.
Similarly, if you treat history as a sequence of one-off events, you should pay attention only to what actually happened and ignore what did not. But if you want to see history as a set of long-term processes which generate many events, you're probably interested in estimating the entire shape of these processes and that includes "invisible" parts which did not actually happen but could have happened.
There are obvious methodological pitfalls here and I would recommend wielding Occam's Razor with abandon, but that should not conceal the underlying epistemic point that what did not happen could be important, too.
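The sample-versus-distribution point above can be made concrete with a toy simulation (an illustrative sketch, not anything from the comment itself): a single realized draw tells you almost nothing about the process, while the distribution, including all the values that didn't happen to be drawn, is what you need to predict the next run.

```python
import random

random.seed(0)  # deterministic for the example

# One realization of the process: suppose we drew, say, 17.
draw = random.randint(1, 20)

# To characterize the process itself you need the unrealized
# possibilities too: every value 1..20 was equally likely,
# whatever we happened to draw this time.
samples = [random.randint(1, 20) for _ in range(10_000)]
freq = {v: samples.count(v) / len(samples) for v in range(1, 21)}
# Each empirical frequency approaches the true probability 1/20 = 0.05,
# including for values that a short history might never have shown us.
```

The historical analogue: a single draw is the event that actually happened; the distribution is the long-term process, and estimating its shape requires taking the counterfactual outcomes seriously.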
You make a good point.
Good point.
Why is Publius Scipio Nasica a "good guy"? His opposition to Carthage's destruction was based on his idea that without a strong external enemy Rome would descend into decadence (see Plutarch). That, to me, tentatively places him in the "pain builds character so I will make sure you have lots of pain" camp, which is not quite the good guys' camp.
Well, it did.
That's an awesome response.
Forgive my fulfilling of Godwin's Law, but if a Nazi leader repeatedly told Hitler "Don't kill the Jews because struggling against them in the economic marketplace will make Germans stronger" would you consider this leader a "good guy"?
No, I would not.
And the equivalent position, actually, would be "Do not kill all the Jews at once, keep on killing them for a long time because the struggle will keep the Germans morally pure".
The intent matters.
– Jimmy Kimmel
It's actually only about 45 percent. The death rate for the world as a whole is about 93 percent.
That's just not true. Death rate, as the name implies, is a rate - the population that died in this year divided by the average total population. If "death rate" is 100%, then "birth rate" is 100% by the same reasoning, because 100% of people were born.
That depends on whether fetuses are people ...
If yes, the actual birth rate is around 80%. http://www.cdc.gov/reproductivehealth/Data_Stats/Abortion.htm
I wouldn't consider abortion a "birth", per se.
Exactly, so only people who aren't aborted count as born, in which case the birth rate is 80%.
Ah, "actual" threw me off. So you mean something close to "The lifetime projected probability of being born(/dying) for people who came into existence during the last year".
G. K. Chesterton, The Everlasting Man
Pathological counter example: "Passive propulsion in vortex wakes" by Beal et al. PDF
Chesterton was talking about Neoreaction, right?
ETA: A note of clarification for those in need of it: I am not actually claiming that Chesterton was talking about Neoreaction.
What Chesterton was actually talking about was a reaction against liberal Protestantism in favour of more traditional Catholicism (represented e.g. by the "Oxford Movement" in the Church of England), against a prevailing tide in the direction of greater liberalism within Christianity and greater skepticism about Christianity.
So: not literally neoreaction, obviously, but something with a thing or two in common with neoreaction.
If RichardKennaway's meaning is "ha ha, Azathoth123 is using this as support for a neoreactionary view, when in fact Chesterton had something entirely different in mind" then I don't think that's altogether fair.
(I have no idea whether the people who have downvoted Richard did so because they thought he was saying that and that Chesterton really was talking about (something like) neoreaction, or because they thought he was actually claiming that Chesterton was talking about neoreaction when really he wasn't. This is one of the problems with downvoting as opposed to disagreeing. On the other hand, perhaps they downvoted because RK's comment could be taken either way with roughly equal plausibility and was therefore needlessly unclear.)
Something along those lines. What makes Azathoth123's recent Chesterton quotes rationality quotes? Chesterton wrote:
and of course neoreaction is something that is going against a stream, and in its eyes progressivism is, well, I'm not sure how the metaphor works out from here, because what neoreaction is going against is progressivism, which makes progressivism the stream itself, rather than a dead thing floating down some other stream. But anyway, why should we take Chesterton's quote as real wisdom? As a literal statement about the physics of floating bodies, it is true but uninteresting. As a metaphor, he is applying it to Christianity, or to his preferred form of it, everything else being either the stream or the flotsam (the metaphor has the same problem here).
So just what truth is being asserted here, that for Chesterton supports Catholicism and for Azathoth123 supports neoreaction? Contrarianism, the view that the majority is always wrong? That is all the metaphor amounts to. This sits oddly with the contention, also made by neoreactionaries, that their preferred view of society is actually the great stream within which progressivism is a historical anomaly, a trifling eddy that will not last (e.g. Anissimov and advancedatheist in recent comments on LW). I'm sure that if that metaphor had suited Chesterton in making some point, he would have elaborated it at no less length. But a metaphor proves nothing: it is a method of presentation, not of argument.
So now that you've made your argument more explicit, I largely agree, but let me defend Chesterton (and maybe to some extent Azathoth123) just a little. Specifically, I'll argue (1) that his metaphor is a bit more coherent than you give it credit for, (2) that he isn't claiming anything as silly as that the majority is always wrong, and (3) that if he's understood right then what he says isn't as hard to reconcile with the neoreactionaries' claims as you suggest. But I agree (4) that he isn't making any very interesting factual statement, (5) that it doesn't offer all that much support for neoreaction, and (6) that he's engaging in rhetoric rather than anything much like reasoned argument.
Chesterton's argument, so far as there is one, seems to go like this. "Everyone thought traditional Christianity was dying or dead, washed away by a great stream of enlightenment and skepticism and liberalism. Some of its parts might stay in place for a while before eventually being eroded away, but the ultimate outcome seemed in little doubt. But now we see something -- let's call it neocatholicism -- not merely hanging on as the tide rushes past but actually heading in the opposite direction, getting more traditional over time rather than less. This shows that neocatholicism, and by extension traditional Christianity generally, is still alive and kicking. Let us put this in the context of these other times I've described, when Christianity appeared to be dead or dying but then regained ascendancy; this is just another example of the same phenomenon, even though we haven't yet quite reached the bit where the Church triumphs again."
So Chesterton is saying that his Great Thingy is enduring where what presently looks like a great all-consuming stream is in fact ephemeral; give it another century or so, he says, and the Church will still be there stronger than ever while the stream of skeptical enlightenment dwindles and is eventually forgotten. So I don't see the difference you do with the neoreactionaries' view of their model of society.
I don't think the claim here is that the majority is always wrong. Nor that "only a living thing can go against the stream" proved Christianity right or proves neoreaction right. I think it's intended to be something less ambitious: the emergence in Chesterton's time of a Catholic reaction showed (according to Chesterton) that Catholic Christianity was a still-somewhat-vigorous living thing and shouldn't be written off, and the emergence of neoreaction in our time shows (according to Azathoth123, if I'm interpreting him right) that a reactionary view of society is a still-somewhat-vigorous living thing and shouldn't be written off.
(So the actual alleged truth being asserted would be something like this: "Something that goes against the current of popular opinion, not merely holding on but pushing in the opposite direction, must have some vigour to it, and it may turn out to win in the end." Which, like the literal statement about floating bodies, is probably true but not very interesting.)
To support his stronger thesis that traditional Catholic Christianity is (not merely not quite dead yet, but) everlasting and always ultimately triumphant, Chesterton appeals to a bigger historical context in which (so he says) it has repeatedly seemed dead but returned greater and more terrible than ever before. I'm not sure whether the neoreactionaries make a similar claim.
There's still very little here in the way of actual argument, but that's how Chesterton rolls. Some clever and counterintuitive ideas, a paradoxical way of presenting them, lots of rhetorical fireworks and ingenious metaphors, and his work is done. If you get as far as thinking carefully about whether what he says is actually correct then his verbal pyrotechnics obviously weren't impressive enough.
"Don’t let anybody discourage you or tell you that intelligence doesn’t pay or that success in life has to be achieved through dishonesty or through sheer blind luck. That is not true. Real success is never accidental and real happiness cannot be found except by the honest use of your intelligence."
Ayn Rand
Too strong.
Nobody EVER became successful through luck? Not even people born billionaires or royalty?
Nobody can EVER be happy without using intelligence? Only if you're using some definition of happiness that builds in a term like "philosophical fulfillment" or some such, which makes the claim tautological.
-- Douglas Hofstadter, Gödel, Escher, Bach
It is a good quote in general, but not quite a rationality quote.
I thought it was a nice illustration of the distinction between map and territory, or between different maps of the same territory. In other words, JFK and the speaker's uncle were very close together by a certain map, but that doesn't mean they were very similar in real life.
Abigail
(Is self-reference ok? This struck me.)