Rationality quotes: May 2010
This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
Samuel Florman
Whoops, posted this in the wrong quotes thread.
"71-hour Ahmed was not superstitious. He was substitious, which put him in a minority among humans. He didn’t believe in the things everyone believed in but which nevertheless weren’t true. He believed instead in the things that were true in which no one else believed. There are many such substitions, ranging from ‘It’ll get better if you don’t pick at it’ all the way up to ‘Sometimes things just happen.’" --Terry Pratchett, Jingo
Any ideas for other substitions along the same lines? I came up with these:
Your first impressions of people are frequently wrong.
You should buy low and sell high, instead of the opposite.
Random things are random, even when the same number comes up three times in a row.
"Everything is vague to a degree you do not realize till you have tried to make it precise" - Bertrand Russell, “The Philosophy of Logical Atomism” (Part of the full sentence)
Ooh, ignore my note about duplication - yours has a better citation than the previous appearance of the quote.
--Eliot Z. Cohen, The Four Emotions of Tai Chi, The Ultimate Guide to Tai Chi.
Duplicate.
Wikiquote has this as:
Thanks. Given that most sources I found referenced the 'documentary' Zeitgeist, I am inclined to believe any other source above it. Provenance aside, I thought the quote deserved a mention.
---Portal (emph. mine)
Relevance: rationalists should win, importance of saying "Oops"
Makes me think of the FAI problem... As does this:
Errol Morris
(Clearly this isn't an actual definition, but it works pretty well if you reframe it as evidence rather than as a necessary or sufficient condition.)
-- Michael Bishop (50 Years of Successful Predictive Modeling Should Be Enough: Lessons for Philosophy of Science).
-- Allan Cromer
-- Mark Twain
--Samuel Johnson
This seems at odds with our notion of subjective probability, where we assume that significant lingering doubt after confidently assigning a 99%+ probability is evidence that your calibration is poor, and your estimate should have been lower.
Does the man really believe the voyage is, all things considered, a good one?
I'm not sure I understand what you mean when you say
The most confusing part about this is the part about poor calibration.
As for the rest, I don't deny that the fact that the man is unwilling to undertake the voyage is evidence that he doesn't think it is worthwhile, at least in ordinary contexts. But I think there is little to recommend the view that acting against your best reflective judgment is impossible or even extremely rare.
Is Samuel Johnson's quote a valid or true statement? I understand your central thrust--the inability to do something personally (such as control one's sexual urges) and the disposition to encourage others to overcome that inability are not necessarily contradictory--indeed, they may fall together naturally.
However, in Samuel Johnson's world, and the world in which this "issue" comes up the most, politics, we might imagine that there exist two types of people: sociopathic individuals hungry for power, and individuals who are sincere.
If sociopathic individuals hungry for power are more often hypocrites, then we might, as an efficient rule of thumb (not being able to distinguish the two save through their observable actions!) condemn hypocrites because they are likely to be power-hungry individuals.
As a Bayesian update, in the world of politics, we expect that hypocrites are more likely to be power-hungry or sociopathic. I see Samuel Johnson's quote as potentially true, but ignoring a world of imperfect information and signaling.
Fair enough. Maybe it is typically reasonable to charge people with hypocrisy when they neglect to follow their professed ethical codes.
I still like the quote, even if it is hyperbolic. It is useful to be reminded that there are important cases where failure to live up to one's professed code does not warrant this kind of criticism. Being overly concerned with hypocrisy can make you be unconcerned with living up to a meaningful ethical code. This is especially important in the context of consequentialist morality. This is just a hunch, but I think there are a fair number of intelligent people who shy away from a demanding code for fear of being charged with hypocrisy. But there need be no genuine hypocrisy, at least in any deeply regrettable sense, in professing a demanding ethical code and failing to live up to it. Better to try to live up to a demanding code and fail than meet the demands of an uninspiring and mundane one. (In this kind of case, of course, you aren't just professing the code to curry political favor.)
"Jews don't read books: We study them." (Rabbi Arnold Jacob Wolf)
"He who dies with the most toys is nonetheless dead." --anonymous
Unless one of the toys in question is a cryostat. Then there's still hope.
"If you can't explain it simply, you don't understand it well enough." Albert Einstein
This relates well to my earlier frustration about the cop-out of vaguely appealing to life experience in an argument, without actually explaining anything.
Does the length of his sequences imply that Eliezer doesn't understand their subject matter, or that the universe is sometimes actually complicated?
Simplicity and concision are independent. I don't find Eliezer's sequences complicated. They are long, but simple all the way through.
Simplicity and grandmother-explainability are also not the same thing. I'd reject the grandmother quote, but this one I don't have a problem with, even if Einstein never said it.
Something I tell students when I'm teaching programming is "What is not clearly said was never clearly thought."
By the way, Eliezer has already explicitly rejected a similar quote attributed to Einstein.
They're both of dubious authenticity anyway. (I searched around for this version too, and the earliest mention of it I could find was in a 1977 Reader's Digest, and that's only according to a citation in a 2006 book.) That has nothing to do with whether it's true, of course — if a vague maxim like this can count as a rationality quote at all, then that is independent of whether or not Einstein said it.
Maybe this detracts from my previous agreement with the quote, but there's a difference between explaining in person, vs. explaining in writing for a general audience. With the former, you can get immediate feedback as to which parts you're not explaining well and appropriately redirect your focus, while in the latter you have to cover all the possible confusions.
This phenomenon was revealed most starkly in one of the articles in the quantum physics sequences, when I replied to the article by saying,
And Eliezer Yudkowsky said in response:
The fact that something can be explained simply doesn't deny the problem of inferential distance, in my view; it just means that each step is simple, not that there won't be many steps depending on how much of the listener's knowledge you can build on.
"Simply" doesn't necessarily mean "concisely" (outside of mathematical formalizations of Occam's Razor). Conciseness is preferable when possible, but being too terse can start impacting comprehensibility. (Think of three programs that all do the same thing: a 1000-line C program, a 100-line Python program, and a 20-line Perl program. The length decreases with each one, but readability probably peaks with the Python program.)
The quote says "If you can't explain it simply", not "If you don't explain it simply". In this case, even if we do switch to "concisely" I think it checks out. Indeed, most of the major points Eliezer makes in the sequences could be stated much more briefly, but I get the sense that his goal in writing them is more than just transmitting his conclusions and his reasoning. No, it seems he's writing with the goal of making his points not just intellectually comprehensible but obvious, intuitive, and second-nature. (Of course any intuition-pumpery, analogies, and anecdotes are used to complement good reasoning, not to replace it.) But I have little doubt that, if he really wanted to, he could boil them down to their essential points, at the potential cost of much of the richness of his style of explanation.
(In any case, I'm not convinced that this quote is specific enough to serve as a usable norm. How simple? How much is "well enough"? Everyone will automatically assign their own preferred values to those variables, but then you're just putting words in Einstein's mouth, or rather, putting meanings in his words; you're taking whatever rule you already follow and projecting it onto him. Fittingly, this is a case where a longer explanation would have been simpler (i.e. more understandable).)
Edit: I think I remember Eliezer once writing something like "Generally, half of all the words I write are superfluous. Unfortunately, each reader finds that it's a different half." That seems relevant as well. (Anyone remember the source of that?)
The latter. On the other hand, the sequences could greatly benefit from some ruthless editing.
EDIT: 5 minutes after I wrote this comment, I googled a part of it, because I was not sure about my English. (I'm Hungarian.) This comment was already indexed by Google.
I've noticed that things on LW get indexed by Google really quickly. Wonder why that is. Maybe because LW uses a Google Custom Search, Google pays especially close attention to changes on it?
I think Google pays close attention to anything with a feed (maybe "Anything Google Reader users have subscribed to", since they're necessarily processing the data anyway?). Whenever I post to my own blog, not particularly notable in an absolute sense, the post shows up nearly instantly in Google Alerts.
I think it is a combination of the Digg engine's Recent Posts feature directly interfaced by Google, and LW's high page rank.
This is something I actually struggle with a lot. I read something that strikes me as profound, and that I agree with, but as soon as I try to explain it it's all gone, and I'm left with bits and pieces that don't make much sense to anyone else.
I'm not sure if this is a failure on my part to understand, simplify an idea, or explain it.
Whenever I'm reading things that I want to actually learn and retain, I read with pencil and notebook and write down all the important points in my own words. I've found this to be helpful because it forces me to slow down and think about what I'm reading and how each new piece of information relates to everything that came before it. I've also found that having pencil and paper close at hand encourages picture drawing, which is often helpful when learning something (though it depends on what you're reading).
I had a similar problem when I read Feynman's QED. His explanation felt so simple and easy to understand when I read it, but when I tried to explain it to someone else I couldn't make it make sense.
It means that you had a deep understanding for a few seconds, and then lost it. Or that you got trapped in the same confusion as the author, absorbed what made it seem appealing, and then "corrected away" the confusion.
To determine which one happened, try the following:
Eventually, you should be able to either gain the understanding, or recognize where the error is.
This is an excellent diagnosis, and those are excellent suggestions for really learning the material.
Right on. I'm thinking about writing an "explain yourself" series that shows how you can overcome the supposed barriers to explaining your position if there's actual substance to it to begin with.
ETA: 5 upvotes so far -- sounds like a vote of confidence for such an article.
ETA2: Message heard loud and clear! I'm working on an article for submission, which may expand into a series.
How is that related to rationality?
Probably the close similarity to this site's oft-quoted "Shut up and multiply."
-Voltaire
(The phrase was written by Evelyn Beatrice Hall as a summary of Voltaire's attitude toward free speech. Since then, people started attributing it to Voltaire himself, and the myth has spread far and wide, as nobody really checks to see if he actually said that. Hearing something somewhere is plenty of evidence for most people, most of the time, and the conviction gets more solid over time. Which brings me to my second rationality quote, from Winston Churchill: "A lie gets halfway around the world before the truth has a chance to get its pants on.")
An older version: A lie is halfway round the world before the truth can get its boots on.
There is an excellent Terry Pratchett book, "The Truth," which features that phrase as a major plot point.
I looked up the book "Gems from Spurgeon" cited in that link. Here's the whole book.
--Review of The Art of Choosing, by Sheena Iyengar
All liquids, not just drinks? ...I wonder when Coca-Cola will start making liquid soaps, fuel, and lubricants.
One thing is for sure, Coca-Cola corp is definitely losing the overall fermion market to more streamlined business models.
That should be "Iyengar" with an i.
Thanks =)
From Thomas Macaulay's 1848 History of England.
-Vorpal
A rationality quote from Space Battles? Really?
(I regard many posts in Non-sci-fi as examples of politics as a mind killer. Edit: I'm pretty sure even most of the denizens there wouldn't claim it's really about debate, at any rate. But... the quote is actually pretty funny)
Yeah, I used to post a fair bit on Space Battles years and years ago as a teenager, but in retrospect the vast majority of debates there (both the Star Trek vs. Star Wars type stuff and the political stuff) were about trying to "win" the argument high-school-debate style, rather than trying to come to a reasonable conclusion or discover something new. It was fun until I realized that everyone was just arguing around in circles though :).
ps Star Trek > Star Wars 4 lyfe
-- Scott Atran
That's not an 'extreme' case, it's a misleading one. What kind of idiot tries to make a point about the means used to achieve the goal of "Advancing reason" by pointing out that the same means won't work for rescuing a hostage?
You have not made the case that the point is idiotic. Are you under the impression that the idiocy is self-evident to this audience?
Um, yes. That action A is bad for goal X isn't evidence that it's bad for goal Y, unless Y is very similar to X. "Saving the hostage" and "Advancing reason" aren't similar goals.
Leaving aside your claim that Atran's analogy is idiotic, the evidence seems to point away from your claim that this idiocy is self-evident to this audience. The Atran quote stands at 7 votes, while your comment stands at –2 votes.
That isn't proof, of course. Upvoting doesn't necessarily imply agreement, though the upvoters—and I am one—probably consider the quote to be at least non-idiotic. And your comment's downvotes (none of which are mine) may be due more to its strong language than to disagreement. But still, isn't this strong evidence that the quote's idiocy is not self-evident to the rest of this audience?
That quote is valued for more than its objectively shoddy analogy. Its larger point is plausible and potentially useful. However, I'd like to see some experimental evidence that shows how well mocking+shaming people with dumb beliefs works; polite persuasion is definitely pretty ineffective.
Also, on average, more people read and vote on a parent comment than its reply. Without seeing the number of total (approximate) views and downvotes, you can't be sure what people think of it.
Furcas is right: the only sense in which the hostage-takers are an extreme case is this: suppose they have especially irrational beliefs, and that your goal is to make them more rational with high-rudeness persuasion/shaming; then they are more likely to become extremely angry (decapitating a hostage) than to be persuaded. If that's what Atran intended, he communicated it unclearly. Likely, it's just illogical emotional rhetoric.
It's definitely obvious upon weighing that the "extreme case" analogy is flawed; still, Furcas could have saved the world (but not himself) time by laying out the case before being challenged.
I agree that such experimental evidence would be valuable. My guess is that the effectiveness is determined primarily by the respective statuses of the mocker and the mockee within the mockee's own tribe. If the mockee doesn't consider the mocker to be sufficiently high-status in that tribe, then the mockee will elect to gain status within the tribe by counter-mocking the mocker.
The problem with mocking religious extremists is that we are low-status in their tribes. To get our mocking to work, we need to gain status in their tribes first. By starting off with the mocking, we are just giving the extremists opportunities to gain status, and lessen our own, by mocking us.
It's useful to remember that mocking is a very cheap signal. Pretty much anyone with a certain minimum of free time and verbal wit can do it. Even successful mocking (that is, mockery that increases your status and decreases the mockee's within your tribe) doesn't correlate strongly with being right in most tribes. This is especially the case in the tribes where religious extremism has a lot of purchase.
Árni Magnússon
There are people who do both, unintentionally or not. And the last line strikes me as cynicism for cynicism's sake.
-- Sam Harris (emphasis in original)
I would love to agree with the sentiment in that quote, but offhand, I can't think of any examples.
Certainly the day-to-day job of the scientist is to prove himself or herself wrong in as many ways as possible, so as not to leave that job to others. But what eventually yields prestige is being right.
One possible counter-example I can think of is the Michelson-Morley experiment, the "most celebrated null experiment in the history of science" to quote one short-breathed biographer. But by several accounts I have read it only became "the most celebrated" thirty-odd years later, once the significance of Einstein's work had sunk in. Before that it seems to have been possible at least to regard it as an anomaly to explain away, for instance via "ether drag" theories.
So even this attempt to prove myself wrong doesn't reach as far as I should hope.
The first person to come to mind for me was Friedrich Ludwig Gottlob Frege who is famous for basically inventing symbolic logic (specifically, predicate logic with quantified variables). He spent an enormous amount of time working on the thesis that the results of mathematics flow rather directly from little more than the rules of logic plus set theory. He aimed to provide a constructive proof of this thesis.
Bertrand Russell discovered a logical flaw (now called Russell's paradox) in Frege's first book containing the constructive proof when the second book in his series was already in press and communicated it to Frege. Russell wrote of Frege's reaction in a bit of text I recall reading in a textbook on symbolic logic but found duplicated in this document with more details from which I quote:
I don't think science generally lives up to its own ideals... but as I grow older and more cynical I find myself admiring the mere fact that it has those ideals and that every so often I find examples of people living up to them :-)
Also, my understanding is that neither Michelson nor Morley ever stopped believing in a luminiferous aether and spent much of their remaining careers trying to show there was one.
"A free man thinks of death least of all things; and his wisdom is a meditation not of death but of life."
-Baruch Spinoza
Does that mean "a free person thinks that death is the worst of all things" or "a free person thinks less often about death than about any other thing"?
(The former doesn't seem to have that much to do with freedom, so I'm guessing he meant the latter... in which case I agree with him, but probably not in the way he intended: yes, we won't think about death very often once we're free from it.)
I think he means that it is irrational to ponder death when those moments can be spent living life productively. Not sure if I agree -- doesn't the thought of one's death often propel us to great action, while lack of such thoughts leads to complacency? Anyways here is the proof from the Ethics:
Proof.— (67:1) A free man is one who lives under the guidance of reason, who is not led by fear (IV:lxiii.), but who directly desires that which is good (IV:lxiii.Coroll.), in other words (IV:xxiv.), who strives to act, to live, and to preserve his being on the basis of seeking his own true advantage; wherefore such an one thinks of nothing less than of death, but his wisdom is a meditation of life.
Jane Jacobs
Reminds me of non-atheists who try to give advice to those of us who would like to make atheism into a significant movement ("New Atheists" etc.) under the implausible pretense that they're just trying to help us be more effective.
Your point seems to have a valid core, but perhaps making the observation right next to this quote is not... um... is not what one might do after reflecting for a while about moral symmetry and the content of the message that suggests that we should generally try to focus on positive outcomes and the unrealized potential in things we already love?
You're right.
Niels Henrik David Bohr (1885-1962)
"Society begins to appear much less unreasonable when one realizes its true function. It is there to help everyone to keep their minds off reality." Celia Green, The Human Evasion.
http://deoxy.org/evasion/4.htm
And Rumours of War, time-travel story on the Ynglinga Saga blog.
I was deeply confused for a moment, since I know that no such passage appears in the Ynglinga Saga and that the Icelandic prose style means no such passage ever could; perhaps clarify that that is something entirely different?
Good point. Clarified.
Discussion of how not to get lost in the woods
Only slightly less interesting in the same comment:
This matter of case studies is intensely valuable.
"He remembered the pride filled glow that had swamped Gyoko's face and he wondered again at the bewildering gullibility of people. How baffling it was that even the most cunning and clever people would frequently see only what they wanted to see, and would rarely look beyond the thinnest of facades. Or they would ignore reality, dismissing it as the facade. And then, when their whole world fell to pieces and they were on their knees slitting their bellies or cutting their throats, or cast out into the freezing world, they would tear their topknots or rend their clothes and bewail their karma, blaming gods or kami or luck or their lords or husbands or vassals—anything or anyone—but never themselves."
-Shogun
-- Imre Lakatos, "What Does a Mathematical Proof Prove?"
ETA: When I first read this remark, I couldn't decide whether it was terrifying, or just a very abstract specification of a deep technical problem. I currently think it's both of those things.
Link appears to be broken.
Fixed, thanks.
--Alan Perlis, Epigrams in Programming
Cox's theorem seems to reduce the gap between the formal and the informal, by deriving probability theory from axioms that seem easier to informally assess.
Yes, and that is to me one of the main attractions of Bayesianism; but nevertheless, there is still a jump there between our informal considerations and formal means, and that ineradicable jump is what Perlis is talking about.
-- John Stuart Mill
-- Alfred North Whitehead
-- Bertrand Russell
As it turns out, it is perfectly rational to inspect evidence that contradicts your beliefs more closely than evidence that confirms them. If evidence goes against your beliefs it is more likely to be fake evidence. As the saying goes, your strength as a rationalist is your ability to detect fabricated evidence.
EDIT: As Ben Elliot points out in this post, this argument really only applies if you happen to be a perfect Bayesian (and you aren't). In real life you're biased toward confirmatory evidence, and so often you really should check it over disconfirmatory evidence. However, it is worth keeping track of which things you're doing to compensate for a human bias, and which things you're doing because they're optimal for Bayesians.
No. This is an easy mistake to make, but it has bad consequences. Detecting fake evidence is only an instrumental goal, in service to the more important goal of maximizing the accuracy of your beliefs. There is a common trap, where all the evidence you see, on both sides, would fall if you challenged it, but you only challenge the pieces you disagree with; and then you add up the remaining, also-invalid evidence, and conclude that you were right all along. It is theoretically possible to counter this bias directly, by discounting confirmatory evidence that you haven't taken the time to challenge according to a well-calibrated prior probability that it's invalid. However, I don't know if anyone can actually do this in practice, and it seems like it would be very difficult to do without carefully formalizing and writing down everything.
I was worried by my own conclusion, so I built a mathematical model to check it*:
Suppose that there is an urn with 100 balls. You're 99% sure that there are 99 white balls and 1 black, but there's a 1% chance that there are 99 black balls and 1 white.
You're about to be scored on the probability you assign to the correct state of the urn, using a logarithmic scoring method, but before that happens your friend takes a ball from the urn, looks at it, and puts it back. Your friend then tells you what colour it was.
Your prior that your friend would lie is 10%.
Suppose you are given the chance to check the colour of the ball your friend drew. How much are you willing to pay for this knowledge? Will you pay more or less if your friend said that the ball was black?
By my calculations** the expected utility
if your friend said "white", and you don't check is -0.013581774
if your friend said "white", and you check is -0.0037359
if your friend said "black", and you don't check is -0.391529169
if your friend said "black", and you check is -0.155101993
So you will pay 0.0098 to check if your friend said "white" but 0.2364 if they said "black".
You try harder to investigate unexpected evidence!
If you're really sure of your conclusion, my maths is probably wrong somewhere. If you think the model itself is inappropriate, please point out how.
EDIT: Of course, if your friend lies 50% of the time, you care just as much about checking confirmatory evidence as disconfirmatory evidence, but then your friend isn't really "offering you a fact".
*And if I wasn't worried, I wouldn't have built a mathematical model!
**Using base 2 logs.
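The four expected utilities above can be reproduced with a short script. This is a sketch of the model as I understand it from the comment (all names are mine): a 99%/1% prior over the two urn compositions, a friend who misreports the drawn ball's colour 10% of the time, and a base-2 logarithmic score on the probability you assign to the true urn state.

```python
from math import log2

P_WHITE_URN = 0.99   # prior that the urn is 99 white / 1 black
P_LIE = 0.10         # chance the friend misreports the drawn ball's colour
P_BALL_WHITE = {"white_urn": 0.99, "black_urn": 0.01}

def joint(report):
    """P(urn, ball | friend reports `report`), via Bayes over the four joint cases."""
    probs = {}
    for urn, p_urn in (("white_urn", P_WHITE_URN), ("black_urn", 1 - P_WHITE_URN)):
        for ball in ("white", "black"):
            p_ball = P_BALL_WHITE[urn] if ball == "white" else 1 - P_BALL_WHITE[urn]
            p_report = (1 - P_LIE) if report == ball else P_LIE
            probs[(urn, ball)] = p_urn * p_ball * p_report
    total = sum(probs.values())
    return {k: v / total for k, v in probs.items()}

def expected_log_score(p):
    """Expected log2 score when the true state has probability p and you report p."""
    return p * log2(p) + (1 - p) * log2(1 - p)

def score_no_check(report):
    # Score on the posterior over urns given only the friend's report.
    j = joint(report)
    p_white = j[("white_urn", "white")] + j[("white_urn", "black")]
    return expected_log_score(p_white)

def score_check(report):
    # You also learn the ball's true colour, then score on the updated posterior,
    # averaged over which colour the ball turns out to be.
    j = joint(report)
    total = 0.0
    for ball in ("white", "black"):
        p_ball = j[("white_urn", ball)] + j[("black_urn", ball)]
        total += p_ball * expected_log_score(j[("white_urn", ball)] / p_ball)
    return total

for report in ("white", "black"):
    gain = score_check(report) - score_no_check(report)
    print(f'friend said "{report}": no check {score_no_check(report):.6f}, '
          f"check {score_check(report):.6f}, value of checking {gain:.4f}")
```

Running it reproduces the four expected utilities listed above, and the value of checking comes out roughly 0.0098 after "white" versus roughly 0.2364 after "black", matching the conclusion that unexpected evidence is worth far more to investigate.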
jimrandomh's point might have been that you don't try at all to investigate expected evidence. So you'd pay 0.2364 to check if they said 'black', but if your friend said 'white' you wouldn't pay to check anything, you'd scoff at the suggestion.
Your result is important, though. Equal checking is wrong, lopsided checking is wrong. Only checking exactly as much as is required by the mathematics is okay.
I dig what you've done, but as a mathematician, my instinct would be to do calculations like this with variables instead of made-up numbers... that way you can be sure that your results hold In General.
Sure, I just couldn't be bothered. If you ignore the complicating factor of having black balls in the white urn and vice versa (i.e. we look at the problem where the urn is 100% white or 100% black), then the maths is easier, and we can quickly see that this always holds whatever variables we put in.
In fact, it also works for many different scoring algorithms. If our scoring algorithm is f(x), then the proof works whenever xf(x)+(1-x)f(1-x) is increasing on [0.5,1]. As far as I can tell this happens whenever f(x) is increasing on [0,1].
But do keep in mind that that instinct is often strong evidence, especially in Near domains.
-- Stephen J. Gould
-- John Dewey
-L. A. Rollins, Lucifer's Lexicon
Vote this up! What could be more rational than to be skeptical about skepticism?
Being skeptical about skepticism about skepticism?
I was disappointed to find that Ambrose Bierce's Devil's Dictionary didn't have such a term. I had thought it would be a similar definition, and an ironically close name, though searching again showed me that the Lexicon is based on the Dictionary.
That what you were thinking of?
I was looking for skeptic specifically, but that's a close second.
(In a thread where people were asked whether or not they had a religious experience of "feeling God"):
-- Axiomatic
Is this really different from the mentality that says people permanently dying is a good thing because it's a feature of atheism, which is a good belief system because it's true?
It's a shame the idea that "god" is a person with a personality has competed-out other ways of thinking of god. Is there a deep mystery that our own consciousness even exists? Are we connected in that mystery with the billions of other consciousnesses around us? In ignorance of what even consciousness is, are we sure it inheres in our bodies and not somewhere else?
If god is the label for consciousness beyond your own consciousness, AND you admit the probability that god is not an angry-father-like personality that wants to help some people, hurt other people, and COULD fix everything if he wanted to, the world gets a lot more interesting.
In my opinion, the experience described in this quote is a classic mystical experience. That it leads you away from the god-as-angry-father picture of god is likely true of every other mystic, especially the famous ones.
Read the Mysterious Answers to Mysterious Questions sequence and the Twelve Virtues (especially that of Curiosity). We can't be "connected in that mystery" because the feeling of mysteriousness is a type of ignorance, and ignorance of some phenomenon is a fact about our minds, not about the phenomenon. When something seems mysterious to us, the proper thing to do is to think about how to solve it, not to worship our ignorance.
If God means all that, then you've just changed the definition so much that there's no point in calling it "God" anymore. To make sure you're not just sneaking in connotations, try describing whatever it is you're calling "God" but giving it a different label — say, "spruckel". "Spruckel is the consciousness beyond your own consciousness". Does that feel different to you than "God is the consciousness beyond your own consciousness"? If so, you need to consider what the word "God" is doing in your mind when you hear it, and specifically notice that it's something the word is doing rather than anything about what you claim to be defining it as. If not, then... well, then you won't mind henceforth using the word "spruckel" for this thing you're describing instead.
"Spruckel" is my new go-to nonsense word. It sounds like it should be a three-inch-tall woodland creature of some kind.
According to Richard Verstigan's Restitution of Decayed Intelligence (1605), the ancient Saxons called the month of February "Sprout-kele":
Mr. Axiomatic is fortunate to have exercised Original Sight to arrive at the realization that mere reality is worthy of joy. I was oblivious to this until I started reading all the popular atheists saying how great reality is, and then Eliezer handled it in a sequence with the utmost clarity and depth.
In the spirit of full disclosure, not all religions were possessed by tawdry fantasies. Some embraced the regularity and beauty of physical law as a sign of Bog's greatness. Unfortunately this little glitch contributed to my being stuck for 20 years thinking that Judaism was actually rational. I stopped thinking too early.
"R. Simeon b. Pazzi said in the name of R. Joshua b. Levi on the authority of Bar Kappara: He who knows how to calculate the cycles and planetary courses, but does not, of him Scripture saith, but they regard not the work of the Lord, neither have they considered the operation of his hands." (Babylonian Talmud, Sabbath 75, about 1700 years back)
Right, and this is what I was used to as well, though I wasn't familiar with that quote. ("Bog" is handy. I like that.)
As for the "glory" -- yes, I've felt it too. Exactly, exactly the same way. "The world is sufficient." But that sense of joy can't be enough to keep you going, because sometimes the world is horrible, and it is not sufficient, not for me, not as long as I have the capacity to love people and worry for them. Joy is there, but it's not the whole story.
Got Bog from Heinlein. A nice positive side effect of shedding mental handcuffs is that I restarted my sci-fi reading career, and being out for 20 years left me with a huge green pasture ;)
I also think my own break with religion started with an emotional experience, or perhaps the experience just broke the dam of all the mental incoherence I had piled up under the carpet. I saw pics from Haiti of medical workers piling up children's bodies; I 'knew' then that if god exists he does not give a crap about the things I care about; I was never 'religious' enough to think that me and my children are any 'better' than what I saw in front of me. The rest was a trivial exercise in comparison (mostly historical research and some logic).
In general the problem with religion is that it's a web of beliefs, and people cannot extricate themselves one strand at a time; the strands simply tend to regrow (though weaker, I think). You need a powerful emotional experience to pull enough threads all at once.
Incidentally, this is a big benefit of the 'something to protect' emphasis here.
You probably know this, but Bog is the Russian (similar in other Slavic languages) word for God.
Funny, of course I know it - Russian was my first language, but somehow I parsed it as being a whimsical made up word; I knew I was out of practice, but not this much!
The exact opposite happened to me: I read a bunch of sci fi, and since very few of the authors I read were religious, I was essentially getting an atheistic worldview through books. That conflicted with my religious beliefs, and God lost.
I would question that this is a rationality quote. It's a quote about how atheism is better for aesthetic reasons.
On the surface, yes.
It's an anecdote that the "numinous" feelings that the religious sometimes cite as evidence of God can equally well be interpreted the opposite way. We can pull out Bayes' Theorem to show that these numinous feelings really don't make belief in God more rational. This isn't a hugely controversial point here, but I think what this says about seizing on how evidence supports one's side without considering the ramifications for the other is worth remembering.
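The Bayesian point can be sketched with made-up numbers (none of these probabilities come from the thread): if a numinous feeling is about as likely with a god as without one, observing it leaves the posterior where the prior was.

```python
# Hypothetical probabilities, chosen purely for illustration.
prior = 0.5            # P(God) before the experience
p_given_god = 0.30     # P(numinous feeling | God)
p_given_no_god = 0.30  # P(numinous feeling | no God)

# Bayes' Theorem: P(God | feeling) = P(feeling | God) * P(God) / P(feeling)
evidence = prior * p_given_god + (1 - prior) * p_given_no_god
posterior = prior * p_given_god / evidence
print(posterior)  # 0.5: equal likelihoods leave the prior unchanged
```

With a likelihood ratio of 1 the feeling is not evidence either way; it would only favor one hypothesis if that hypothesis predicted the feeling more strongly than its rival.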
True, but I had the feeling that some readers here would like it anyway. (I view this as more of a "quotes LW readers would like" thread than a literal "rationality quotes" thread.)
Also, it does fit into the joy in the merely real ethos, which in turn makes it emotionally easier to accept rationalism and reductionism.
Immanuel Kant, Critique of Pure Reason (trans. Norman Kemp Smith), p. A5/B8.
Hmm, this would be cooler if not for the fact that light does move faster in a vacuum.
"the light dove." :P
Oh wow, I completely missed the point. Thanks =)
Duplicate.
It's wise to make a habit of hitting the search bar before posting quotes.
Duplicate.
Sorry about that.
Duplicate.
-Richard A. Posner, Catastrophe: Risk and Response, p. 13
Most "doomsday predictions" do not actually predict the total annihilation of the human race.
It might be postulated that we don't have records of most correct doomsday predictions because the predictor and anyone listening met with doom.
He is assuming that there will be a doomsday, also known as begging the question (http://www.don-lindsay-archive.org/skeptic/arguments.html#begging). It is also quite possible that no doomsday predictions are true. This is one of my gripes with existential risk theories: all the ones I have read depend on the assumption that eventually there will be an end.
No, I don't think so. He is making a claim about what implications follow from a certain fact. That fact is the definition of "a doomsday prediction". All that follows from that definition is that all but one will be false. Of course, even that last one (so to speak) might be false, but, even if this is so, it doesn't follow from the definition.
This is not a case of begging the question. It is just being clear about what implies what.
I got nothing from my tracking system until I used it as a source of critical perspective, not on my performance but on my assumptions about what was important to track.
-- Gary Wolf
No problem can stand the assault of sustained thinking.
--Voltaire
Someone just threw you off the Golden Gate Bridge.
There's one problem thinking won't much help with.
But then again, to make that point I had to reach for a problem nothing could be done about.
There are problems which happen so quickly that you can't do sustained thinking while you're in the middle of them, but sustained thinking might help install good reflexes for the general case.
For example, I fell safely on ice for the first time this past winter. I'm reasonably sure that the Five Tibetans (a sort of cross between yoga and calisthenics) strengthened the muscles around my knees and possibly had other good effects such that I didn't twist my knee.
Thinking about how to fall so as to take the least possible damage is still a potential way out.
At the least, thinking increases your odds of surviving whatever situation you are thrown into.
How many people died needlessly of choking, when they could have invented a self-administered Heimlich maneuver but failed to do so?
Well, unless I've remembered it wrong, only two or three people have ever survived that fall. If I'm wrong, substitute a plane. Or a personal unprotected atmospheric re-entry.
Sometimes there really are problems that can't be helped.
Falling toward a black hole would do. No way out, except in the form of Hawking radiation, long after your death.
But don't give up even then! Schwarzschild could be wrong. Think hard in any circumstances!
Alas, rigorous truth is the constant enemy of the aphorism.
Q: How much does the smoke weigh?
A: Subtract from the weight of the wood that was burned the weight of the ashes that remain, and you will have the weight of the smoke.
--Immanuel Kant
He left out the weight of the air...
Huh?
Edit: Never mind. Googled.
I agree, of course. But don't be too harsh on Immanuel Kant, who had no knowledge of modern chemistry but was able to understand that Aristotle was essentially wrong in his views about the "natural places" of light things up in the sky and heavy things down here on Earth.
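The point about the air can be sketched as a mass balance with invented figures (the quantities below are hypothetical, not from Kant): conservation of mass has to count the oxygen drawn from the air as well as the solids.

```python
# All figures in grams, invented purely for illustration.
wood = 1000.0
oxygen_from_air = 1400.0  # consumed by combustion (hypothetical value)
ash = 20.0

# The answer Kant quotes ignores the air on both sides of the balance:
smoke_naive = wood - ash
# The full balance: everything in must come out as ash, smoke, or other gases.
smoke_and_gases = wood + oxygen_from_air - ash
print(smoke_naive, smoke_and_gases)  # 980.0 2380.0
```

So the naive answer understates the mass of the combustion products, since the smoke and gases carry away the oxygen that combined with the wood, not just the wood minus the ash.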
People understood that Aristotle's understanding of natural place didn't work long before Kant. As early as the 1300s, Oresme laid out problems with this view. The work of Galileo and others made it clear that it didn't make sense. Newton removed any remaining doubts about this. And Newton died about when Kant was born. That Kant knew that Aristotle was wrong is no credit to Kant.
As to the chemistry matter, I'm not completely sure but I think that idea also was around before Kant. Robert Boyle wrote The Skeptical Chemist about 70 years before Kant was born and he touches on the idea of conservation of mass. Hooke also died before Kant was born and did work involving mass loss in chemical reactions. I don't think this can be substantially credited to Kant either.
Kant "quoted a philosopher" in his book: an unnamed philosopher, who answers in the way described.
Kant promoted the idea of his predecessors and contemporaries against the still-popular views of Aristotle in his time.
Even today, you can hear a lot of "five elements" and "the fifth element" and so forth. An Aristotelian myth, very much alive even today.
What?
Is this another non-US thing? I never hear talk of the "four elements" except in mocking of how foolish the ancients must have been to think that (for instance) wood contains fire. And it was hardly an Aristotelian original.
I recall being taught them (as in, the teacher said "these are the 4 elements: earth, fire, wind, and water" and had us each make a full page drawing to plaster on the wall; no mention that it was an antiquated Greek model or anything) in kindergarten and/or elementary school in Peru. Aether was also mentioned as the 5th element, but it was handwaved as being too advanced for us or something. Frankly, I don't think they had any idea what the hell they were talking about; somebody just told them that those were the elements and they passed it on.
That is one of the most deeply fascinating and frightening anecdotes I have ever heard on LW.
Could you please elaborate on why you regard it as such? I can think of a couple of things to take away from it that would be frightening (that people will repeat "earth, fire, wind, and water" as easily as they will "carbon, oxygen, nitrogen, and fluoride", for instance), but I feel like I must be missing something because I wasn't expecting that kind of response.
Part of it is that. But it's also that someone could be so divorced from modern science that it wouldn't occur to them that earth varies in nature. Or that they hadn't heard that water is hydrogen and oxygen. Or, if they have heard that, they didn't try to reconcile it at all with the claim that water is elemental. The notion that there are people out there who are that uncritical, not for any motivated reason (as some religious individuals are) but out of simple humdrum everyday lack of thinking. And that such people would then go on to teach other people?
I guess I shouldn't have found this as disturbing as I did. But I generally have a low opinion of humans, and it seems like no matter how cynical or pessimistic I am, I'm still surprised by their behavior.
I had a science teacher in about 5th grade who told us that
She was surprised and skeptical when I told her that cells were made of atoms.
Not quite as extreme, but I had a science teacher (iirc in junior high, i.e. 7th through 9th grade) who said very firmly that the sun is not a star, the sun is the sun.
Also, the kids are pretty into Avatar: The Last Airbender.
(Though I realize you were talking about serious belief)
The four elements are still really popular in new-agey circles. I believe my element is "air" and it has something to do with my birthday or astrological sign. The four-element thing is really central to Wiccan practice, or at least it was in middle school when I learned this stuff (doing spells and shit was at that time very popular among 14-year-old girls and I was a 14-year-old boy).
I had never heard of quintessence until I studied Aristotle, though.
Doreen Valiente talks about the four elements and Spirit as a fifth element in An ABC of Witchcraft (1973); see for example the "Pentagram" entry. (I was into Wicca when I was a 14-year-old boy too!)
In what sense did the Wiccans believe in the four (or five) elements?
Wikipedia is more trustworthy than I am (they say the same about it being more about ritual than belief). I don't remember anything about the 5th element but Wikipedia says that is there too. What we did was apparently "casting the circle"; I sat on the east side (or facing east, I can't remember) and read something about wind and sylphs. I recall thinking that being born with air as my element should imply that I had more control over the air/wind, but that's probably the 14-year-old version.
Sorry, I was automatically not counting the Wiccans as 'serious' which is probably unfair.
I'm a somewhat casual Neo-pagan-- I enjoy the rituals.
As far as I can tell, the four elements are viewed as a convenient source of symbolism, but not believed in literally.
I don't know about Wiccans, but Neo-paganism is a community of practice, not belief. Neo-pagans cover the range from atheism to literal belief.
Another example: I don't know if Eric Raymond would self-describe as atheist, but he is a neopagan with, as far as I can tell, a naturalistic worldview.
Edit -- a key quote:
I shouldn't speak for actual Wiccans, my experience was mostly love spells and giggling. I did sit in a circle once and "call" the air element after which people did the same for fire, earth and water. Then someone stole some of my hair to make me fall in love with them and we all smoked cinnamon sticks.
Really?
Ah. I didn't realize you were speaking of Aether specifically.
It's a reasonable hypothesis that Kant came up with, but until he's tested it -- or at least thought of a way to test it -- he should have been more tentative about it.
Really? Why is the fact that you've thought of a way to test something a reason to be more confident of it?
I agree that if he had actually tested it that would have been reason for more confidence, but intention to experiment is not Bayesian evidence.
Hmm. What do we mean by weight? Mass * g?
We live in an age of uncertainty, complexity, and paranoia. Uncertainty because, for the past few centuries, there has simply been far too much knowledge out there for any one human being to get their brains around; we are all ignorant, if you dig far enough. Complexity multiplies because our areas of ignorance and our blind spots intersect in unpredictable ways - the most benign projects have unforeseen side effects. And paranoia is the emergent spawn of those side effects; the world is not as it seems, and indeed we may never be able to comprehend the world-as-it-is, without the comforting filter lenses of our preconceptions and our mass media.
-- Charles Stross (Afterword: Inside the Fear Factory)
--Autonomous Technology: Technics-Out-Of-Control (1989), Langdon Winner
Not to mention that if all but a few people were destroyed and there was a need to rebuild technology and set up society again, the basic skills needed to do so would be nonexistent in the general public. Things like chemistry, electronics, and mechanics, which we base our lives on today, are not common knowledge, and we wouldn't be able to rebuild what we have today.
What -are- you talking about?
We have massively literate societies and a culture in which knowledge is shared massively. After a crisis, the remaining few would have to pick up a lot of skills they lacked before the crisis, but they would have the means to do so in said stores of knowledge, plus the immense advantage of knowing that the things destroyed are possible. The general public -is- capable of learning.
Hunter-gatherers had no knowledge of chemistry, electronics, and mechanics, nor any concept that the things we do with them were possible.
Quoting myself, but since this is a reply maybe I can get away with it. I left this as a comment several months ago about a danger in the current recession that most commentators seem to miss:
You can see the same kinds of problems with people being unable to do basic home repair, like fixing a faucet or a porch railing. I remember in the late 1970s there were a lot of people doing their own remodeling and stuff, partially because of the sucky economy at the time.
If the economy doesn't really start to improve, we could be looking at a situation worse than the Great Depression, even if none of the financial indicators get as bad, simply because people are much more dependent on buying services through the economy and less able to do for themselves than any previous "hard times".
At least we have the Internet, so we are better able to find directions on how to do something we've never done by ourselves before.
The Internet sucks for learning. See my short post http://williambswift.blogspot.com/2009/04/web-is-still-not-adequate-for-serious.html . Plus what you need for actually doing things are skills which you cannot pick up by reading, even with decent sources.
Two comments:
1) Magic: the Gathering strategy was developed and refined almost entirely through the Internet. If you want to be a competitive Magic player, you need the Internet.
2) If you need narrow advice - "how to fix a broken faucet" is pretty narrow - then the Internet works pretty well. If you want to learn to be a plumber, yeah, the Internet kinda sucks, but if you have relatively limited needs, it works.
Yes, it's an interesting topic. I find interesting the general phenomenon that economic development seems to make economies ever more fragile and liable to collapse; http://globalguerrillas.typepad.com/ is the best source I know of for this kind of thinking.
Actually, I think modern economies have more redundancy and are less prone to a catastrophic collapse than more primitive ones. My point was that people seem to have become lazier, especially intellectually, over the last few decades, which could cost them dearly in a prolonged economic contraction.
More redundancy? I don't see that at all.
Where's the redundancy in your water supply? 'Bottled water at my local Walmart' doesn't count. Where's the redundancy in your shelter? You don't know how to build one, even if you had the saws and whatnot to make use of the trees in your yard (assuming you have a yard with trees in it and aren't - like millions - an apartment dweller). There's no redundancy in your food supply; even rural dwellers might no longer have some chickens in the yard which could be eaten, or a solid vegetable garden. And so on.
And 'lazier' is a cop-out. If modern economies lead to mental laziness, and that reduces resiliency/redundancy, modern economies reduce resiliency-redundancy! The exact mechanism doesn't matter - the ultimate result does. 'The operation was a success; unfortunately, the patient died.'
They have more redundancy at least to the extent that they operate with a greater surplus of wealth above subsistence. The greater interconnectedness and surplus wealth of modern economies also allows for resources to be quickly re-allocated across large geographical distances in response to a localized disaster.
In primitive economies the majority of the population are often living very close to a subsistence level and are able to accumulate little in the way of savings or capital to fall back on in hard times. In wealthier modern economies a disruption may cause dramatic swings in relative wealth but starting from a much higher level means there is a considerable cushion before facing a life threatening situation.
How do you propose to measure redundancy? One possible way to attempt to quantify redundancy might be to look at how modern vs. primitive economies cope with natural disasters. Modern economies usually see greater damage in dollar terms than primitive economies but much less loss of life as a percentage of the affected population. The lower casualty rates can be attributed to a number of properties of modern economies that derive from their greater wealth and interconnectedness. This includes things like higher quality, more robust buildings; greater stocks of non-perishable food, clean water and medicine; better trained, funded and equipped emergency services; quicker and better resourced rescue efforts from outside the worst affected area; a population that is not starting out in a state of malnourishment or ill health and greater individual resources enabling many to get out of the worst affected area.
Another possible test of redundancy would be to look at how modern economies cope with large scale warfare. Both Japan and Germany were more advanced pre-WWII than many poor countries today. Both countries lost a major military conflict which involved extensive destruction of infrastructure and massive civilian and military casualties. Both countries recovered over time and there are few if any examples of countries which started out with less modern economies, suffered comparable levels of damage due to warfare and demonstrated greater resiliency by recovering faster.
So in what sense are primitive economies more resilient than modern economies? You might argue that they suffer less dramatic swings in wealth in response to disruption than wealthier modern economies but in a disaster situation I would suggest the really important thing is not the magnitude of the change in wealth but whether it takes you 'below zero' and leads to individual deaths or total societal collapse. On this measure the historical record suggests to me that modern economies are more robust than primitive ones.
Another possible meaning might be that while no individual primitive economy is more robust than a modern one they are less interconnected and so failure in one does not cause a cascade to others. This sounds plausible in theory but I don't see strong historical evidence in this direction.
Finally I suppose you may be claiming that modern economies are more vulnerable to some black swan event beyond anything that appears in the historical record. This is obviously a hard theory to test. My feeling however is that a disaster of unprecedented type or scale would not be qualitatively different to previous disasters. You might see greater swings in 'dollar damage' or even relative wealth but the modern economies would still do better in absolute terms before and after such an event than primitive economies.
This is obvious, but it seems like little of the surplus is devoted to distributing infrastructure and resources or defending against rare contingencies, as a highly specialized and interdependent society must. Let's say that America is, per capita, $20,000 above subsistence thanks to specialization and interdependence; what fraction of that goes to the previously listed needs? FEMA, for example, is a few billion a year, or ~$17 per capita; even adding in all the other disaster-preparedness services such as the strategic petroleum reserves, does it compensate enough?
I don't. That's far above my pay-grade. It's an interesting area of thinking and like most interesting areas, doesn't have all the answers rigorously worked out - any more than SIAI has all the details of AI worked out, and much of which thinking relies on us finding certain propositions plausible.
Rare examples of nation-building gone right. How's Haiti working out? Or Argentina? Both used to be among the richest countries in the world. I've heard Iran was depressed for centuries after the Mongols destroyed their extremely elaborate agricultural systems. Primitive places like Afghanistan just keep on trucking.
And then there are examples of highly advanced economies sabotaging themselves. The Mayans come to mind, as does the 'Fertile Crescent' - thanks to salinization caused by millennia of agriculture, not so fertile any more!
The Great Depression. The Asian currency crisis. Recent events.
I think we need to clarify a lot of our underlying assumptions and terminology if we are to bridge the yawning epistemic gap that appears to lie between us. Let me try and clarify my interpretation of some of the terms we are using and see if we are on the same page.
You originally said: "I find interesting the general phenomenon that economic development seems to make economies ever more fragile and liable to collapse". There's at least three terms we could be disagreeing on here: economic development, fragile and collapse. By economic development I understood 'a trend towards greater complexity, interdependence and specialization'. By fragile I understood 'easily broken or destroyed' rather than merely volatile or erratic. By collapse I understood 'cease to function due to a sudden breakdown' rather than merely impaired function. I dispute the strong interpretation of this sentence implied by the definitions I give here but do not necessarily dispute a weaker interpretation.
The other area that needs clarification is covered by your question 'does it compensate enough?'. I certainly think that economic development will tend to make societies better off in absolute terms under essentially all disaster situations that we have a historical precedent for. If you measured volatility of wealth by some measures you might find modern economies more volatile but in the context of concern for 'collapse' or existential risk it is not volatility in itself that is dangerous but the potential for going 'below zero' - being wiped out in investment terms. If you are at subsistent level a 10% drop in wealth (in the broadest sense) could be fatal. In a modern economy losing 50% of your wealth is painful but completely survivable for most people. In other words it is possible for modern economies to be both more volatile from some perspectives and less prone to collapse because of the much greater buffer provided by the extra wealth they create.
I feel that if you are going to make the claim and wish to defend it then it is incumbent upon you to at least attempt to propose some measure by which the truth of your claim might be judged. Otherwise you are merely engaging in wordplay and not rationality.
Haiti's problems are deep rooted. It has nothing that can be described as a modern economy and that is part of its problem. Argentina is a very different case. It has had a history of economic mismanagement and financial crises but it is in a completely different league to Haiti (10x GDP per capita and vastly better off by any measure of economic or social development). Argentina is actually something of a success story in Latin America at the moment after its troubles at the turn of the century and Buenos Aires is considered a 'hot' destination.
But we could get into a long and involved discussion of history and debate interpretations and how they support or contradict your theory. I'd rather hold off on that until we can establish the exact nature of any disagreement we have.
I consider these support for my view in that all were examples of great volatility but not of anything approximating collapse. They in no way canceled out the benefits of the periods of economic growth that preceded them (and followed in the first two cases).
I actually think people tend to underestimate the frequency and severity of crises of various kinds but overestimate the long term impact. I am much more pessimistic than average about the current economic situation (see my New Year's predictions here for example) but much more optimistic about how things will ultimately turn out than most people would be if they expected the same level of disruption.
All I was doing was raising an interesting theory and linking to those who do defend it.
I feel as if I had mentioned that the sky looked kind of blue today and wasn't that kind of interesting, and the person standing next to me immediately said, 'oh, blue - physiologically or by wave-length? Are you taking into account Rayleigh scattering? For that matter, is it blue by absorption or reflection? Let's see your numbers, chap, I think you may be having me on.'
You ask some interesting questions, but I see now that any reply is just going to lead to an in-depth argument/discussion which I am not equipped for and don't really feel like having now. If you want to argue about it, I've already pointed to a relevant forum.
EDIT: To the downvoters: consider what you're saying by downvoting - 'I disapprove of someone explicitly withdrawing from a conversation, and would rather that one side simply never reply and leave the other person hanging.' Is that really what you would prefer?
Is bad government a sort of disaster which should be considered in this discussion?
West Germany bounced back a lot more than East Germany.
More primitive societies don't have centralized government, so they don't have the risk of government going bad on a grand scale.
The canonical example here is, I think, China. Going from the impressive Renaissance-like period of 100 Schools of Thought during the Warring States period, to Zheng He, then to stultification.
Possibly. I wasn't considering it because I took 'modern economies' to imply (more or less) liberal democracies with (more or less) free markets. I interpreted the original comment to be in reference to the theory that the increasing interconnectedness, globalization and specialization we observe within such economies is making them more vulnerable to catastrophic collapse. Bad government is certainly a problem but I hadn't seen it as a major component of this line of thinking.
It is an interesting question whether more complex economies (in the sense I describe above) must necessarily go hand in hand with more centralized government. I don't think that is the case and I certainly hope it is not the case (because it implies that complex economies must inevitably self-destruct) but it is a disturbing possibility.
The Soviet Union and the Third Reich were more like a "modern economy" than like hunter-gatherers or primitive agriculturalists, and (though it doesn't seem likely so far) a modern economy is more likely to have a government that goes bad than it is to turn into h-g or p.a.
When I was talking about centralized government, I didn't mean central economic planning. (Did you?) I just meant that modern governments have well-defined centralized control over (usually) a good-sized region and population.
I'm not sure if economic development actually makes economies more fragile and prone to collapse - if a very undeveloped economy collapses, do we not notice it because we label it something else, e.g. famine or civil unrest?