Rationality quotes: October 2010
This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
Comments (472)
If capitalism is the evolutionary engine that leads to AI, then the advent of AI cannot be separated from the larger economic consequences of AI. In my judgment, the single most realistic way to design God-AI that is friendly is to evolve such AI directly out of the economy that succeeds human capitalism, i.e. as an economic servant to human needs. While this is not a guarantee of friendly AI in itself, any attempt to make AI friendly purely on the basis of absolute, unchanging principles is doomed to ultimate failure because this is exactly how human intelligence, at its best, does not work.
Mitchell Heisman, Suicide Note p315
The problem is that understanding the economy is probably harder than understanding human intelligence. After all, the global economy is the product of over 6 billion human brains interacting with each other and their environment.
I looked at that site. The guy writes like a crackpot.
Specific quote is a bit reminiscent of Social Justice Warriors who oppose capitalism, but see capitalism as being defined by oppression, inequality, and other bad stuff rather than by capital.
... as opposed to Libertarian Warriors who support capitalism, but see capitalism as being defined by freedom of speech, at-will employment, and legalized drugs and prostitution rather than by capital?
(Blue, Green, let's call the whole thing ao.)
Yeah, kinda.
The Battle for Capitalism has always seemed to be a bit unusual here, though. Especially as it kind of looks like he just did try to make an AI with unchanging principles.
Seconding CronoDAS, that's an awful book, but I found one funny sentence in it:
David Letterman, To Bill O'Reilly, in discussion about the supposed War on Christmas, as quoted in "In Letterman appearance, O'Reilly repeated false claim that school changed 'Silent Night' lyrics", Media Matters for America, (2006-01-04) (From Wikiquote)
I imagine this is getting up-voted here in response to the sentiment, and I'm not going to vote it down. But this approach is more often used by deists against rationalists, and the next step is book-burning.
This quote, for me, shows two ideas: the "I defy the data" that khafra mentions below, as well as, on Letterman's side, an ability to accurately detect BS and dismiss it without having spent significant resources on formal debate. That ability seems incredibly useful to me, and definitely worth cultivating.
I associate the second idea with the Prior Information Chapter of HPMoR
I saw it as a real-life example of I defy the data.
Do deists really go around telling people how unintelligent they are? Around here they tend to be insecure about their intelligence and try hard to act smart. But the intellectual status of religious belief is something that varies by culture.
Paul K. Feyerabend [The final chapter of Against Method, 1975]
The virtue which is nameless?
Actually, no. In the rest of "Against Method" it's pretty obvious that Feyerabend is saying that since even our methodologies and standards of evidence are paradigm-dependent, none of them really allow us to objectively connect with reality. As a result, epistemology and science are "anything goes"; any standard of evidence is acceptable no matter what it is. So he's closer to a relativist than a rationalist.
(Edited for clarity)
"[H]e who commands thirty legions is the most learned of all"
Favorinus explaining why he admitted that Emperor Hadrian had won their debate.
"Won't you stop citing laws to us who have our swords by our sides?" Pompey
Dexter's Laboratory
Serion Ironcroft
That philosophy is not Psilocybin Complete. :)
No philosophy survives sticking a crowbar into your own brain.
Hmm... The complaint was that psilocybin (for example, amongst other everyday occurrences) causes one to see things that one should not believe in. I'm not sure where the analogy holds with a crowbar, or else what point you were trying to make.
The philosophy does not survive in the sense that it is instantiated in one's brain and the brain has been destroyed. But the crowbar experiment does not thus show that the beliefs thus destroyed are false.
I don't know - Philip K. Dick seemed to do all right. And I heard of at least one schizophrenic who tried to record the voices in her head on a tape and figured out they weren't real that way.
Great quote, but what's with the quotation marks?
It looks like this is actually a quote from Carolyn Merchant's The Death of Nature; only the parts in quotation marks are Bacon's words, taken from "The Great Instauration", "The Masculine Birth of Time", and "De Dignitate".
Confirmed from the linked Amazon.com page by searching the preview for "searchers and spies of nature" (no quotes).
That's exactly what I did! (And looked up the sources in the endnotes.)
From a hacker news thread on the difficulty of finding or making food that's fast, cheap and healthy.
"Former poet laureate of the US, Charles Simic says, the secret to happiness begins with learning how to cook." -- pfarrell
Reply: "Well, I'm sure there's some economics laureate out there who says that the secret to efficiency begins with comparative advantage." -- Eliezer Yudkowsky
I don't understand this one. A poetry guy says something practical (and completely unrelated to poetry) is a valuable thing, and Eliezer replies that an economics guy would say something about economics?
The message eludes me.
My take: Comparative advantage as I understand it is about specializing and being better off for it (in simplistic terms).
So Eliezer is hinting that you should become good at thing X where X isn't cooking and pay for someone who has specialized in cooking to cook for you, and you'll both be better off.
Edit: I think he phrased it the way he did (economics laureate etc.) as parody and to highlight the appeal to authority in the original (why should a poet laureate, any more than a normal poet or any other person, know what the secret to happiness is?).
</ humour destruction through explanation>
Jacques Ellul, "The Technological Society"
William T. Powers
-- Metallica
-- tocomment, in a Hacker News post
Brilliant. And if they did make it into a tshirt (as per reply) I'd quite possibly buy one!
Appropriate, since it's about as wise as the average T-shirt slogan.
This sounds like a bad idea.
A real-world example of Parfit's Hitchhiker was prominently in the news recently, about firefighters who watched a guy's house burn down because he didn't buy a subscription, even though he offered to pay when they arrived at the scene (which I assume means with all the penalties for serving a non-member, etc.). The parallel to PH became clear from this exchange with a writer on Salon:
Obviously, this doesn't carry over the "perfect predictor" aspect, but I'm guessing the FD's decision maker could do much better than chance in guessing whether they'd be able to recover the money -- and the homeowner suffered as a result of not being able to credibly tell the FD (which, of course, has its own subjunctive decision-theoretic concerns about "if I put out the fires of non-payers when they ...") that he would pay later.
(Sorry if this has been posted already, and let me know if this belongs somewhere else like the new discussion forum.)
Update: Okay, it looks like details are in dispute -- by some accounts, he wasn't offering the penalty rate, and people dispute whether the nonpayment was deliberate or an oversight (and the evidence strongly favors the former). "You'll say anything", indeed.
Wow. I just felt a surge of patriotism. I had no idea that sort of system was in place in any first world country. I'm sure it's all Right, True, and Capitalistic but I must say I prefer the system here.
In fact, in rural areas (where I grew up) most firefighters are actually volunteers. Those that I knew considered the drastic enhancement to sexual attractiveness to be more than enough payment. ;)
It is very likely that this is an issue of a particular locality and that plenty of places in the U.S. are sane about matters like this. (You'll also note that it made the news, suggesting people may not have realized this kind of thing was possible.)
From what I know, it's utterly common for several different fire departments to respond to a single call that happens to be near, even if not in, their specific jurisdictions, and I was utterly shocked to read this story.
It's a government-run fire station, so it's not all that capitalistic.
Really? Going for a 'worst of both worlds' approach it would seem. ;)
If you are going to make fire fighting a pay for individual service system instead of a cooperation problem handled by central authority and taxation then you may as well at least get the efficiency benefits of competition in private industry. In fact a completely capitalistic organisation with no interest in public welfare would probably not have had a problem like we see in this instance. The organisation would have set up payment contingencies such that they can sell their services at a penalty rate to those who didn't buy according to the preferred subscription/insurance model.
For some strange reason a lot of US policy in particular seems to fall into the "worst of both worlds" camp (I would consider their health insurance system as an example). As I'm not an American I don't know why this is the case.
Neither do Americans.
Sure we do. It's all the other party's fault.
I agree with this statement. Either extreme would probably be better than what we actually ended up with.
Could you give other examples? I certainly accept health insurance and this particular fire department, but I don't think it is a representative fire department. Is the common theme the word "insurance"?
"Too big to fail" banks: they profit when their gambles pay off, we bail them out when they don't. Also arguably telecommunications carriers that have quasi-natural quasi-monopolies.
I'd go along with both of those examples (though the US has a history of corporate bailouts that extends far beyond current events). Also rent control (it has significant perverse effects on rental markets and often hurts the poor).
That's not to say other countries don't have their problems, I don't think the US is a uniquely bad policy maker, but there is something about the way the US government makes policy that seems to want to have its cake and eat it too. When they try that it usually doesn't end well.
What little I know of that system scares me.
I'm an economist and it makes no sense to me at all. It seems almost like someone carefully identified the efforts insurance markets make to mitigate the failures in health markets and then crippled them. I actually have trouble convincing some of my colleagues that I'm serious when I describe the regulatory structure.
Could you expand on the specific details of what went wrong?
The essential problem is the way health insurance works in the US. The basic function of insurance is to protect people from strongly adverse events that would put them into financial distress. Insurance companies have to charge more than an actuarially fair rate for insurance in order to make a profit. This means that it is inefficient to run small or high-probability expenses through an insurance scheme. The only reasons this happens in the US are the tax deductibility of insurance and the mandates on coverage in some states. This turns health insurance into an inefficient health savings scheme.
Furthermore community rating produces very adverse outcomes. By preventing insurance companies from pricing insurance policies at a different rate for each customer (thus creating an expected profit from each customer), the insurance company has an incentive to refuse cover to high risk people (i.e. those that need insurance the most) or drive them away by making their life a misery every time they try to lodge a claim. To the extent they can't do this it drives low risk people out of the market, which leave them exposed if they suddenly need emergency health care (this is especially problematic since low risk people are generally young and therefore have little savings) and insurance companies have to raise premiums further to make up for the loss of the highly profitable young people.
My advice to the US government would be to end community rating, guaranteed issue and mandated coverage. I would suggest eliminating the tax deductibility of insurance (or failing that, make putting money into a Health Savings Account tax deductible). Medicare and Medicaid should be discontinued and replaced with a system of income support where poor or unusually sick people would receive extra money in a health savings account that could be spent on healthcare or health insurance. If you have to include old people in the scheme explicitly to make it politically possible, that would be OK as a second-best solution.
The basic principle in this is to let market mechanisms work in the absence of a clear market failure and then deal with people who can't afford vital services by helping them directly. To what extent you provide that help is a terminal values question so I won't venture an opinion here, but however much or little you want to help, this system should result in cheaper insurance for most people and essential coverage for the poor or those in need of extraordinary levels of health care. It should also arrest the escalating health costs of the US government.
This idea seems to involve people negotiating their health care expenses with providers directly, which doesn't work. Or rather, it only works for the routine expenses, and not the unexpected ones. Some fraction of health care decisions are made under conditions that are literally "buy this or die", and a large fraction of the remainder are made by people who are in no condition to negotiate, so either some form of collective bargaining, or else direct regulation of prices, is required.
I assumed that the firefighters didn't accept the offer to pay them on the spot because that would send the signal to all the other houseowners that they could skip the regular fire department fee and then make an emergency payment when their house catches fire.
I see here a Newcomb-like situation, but in the reverse direction - the fire department didn't help the guy out to counterfactually make him pay his $75.
On the same theme as the previous one:
George Carlin
William T. Powers
I think that this quote misses an important point - and am in agreement with Academician.
Although the particular social etiquette habits of different cultures vary widely, many of them serve similar, underlying purposes.
Kurt Vonnegut makes my case beautifully, and as gently as always in 'Cat's Cradle'. Without going into the plot, there is a 'holy man' (actually, a rationalist in an impossible situation, IMHO); followers of this holy man, when they meet each other, undertake a ritual called "the meeting of souls" (or similar) :- they remove their shoes and socks, and sit down, legs extended, foot to foot.
Abstract: Ritual forms of social etiquette are human and beneficial (if not essential): the form that they take is non-essential.
There is a higher order of information in this than in the assumption that all rituals are simply arbitrary game-playing.
I'd be concerned that this phrasing would raise more sociopaths... because that's how they think about morality.
The idea of teaching relativism for moral specifics is good, but consider that there are aspects of morality common to all sustainable cultures. Powers' framing would describe these as "common game elements" or "aspects common to all these different games". I think they should be emphasized/emotionalized as a little more than that (even if they aren't), so to avoid sociopathy (if that's even possible).
Less specifically, and with more confidence: emotional intelligence is a thing, and children need to be taught that, too. Perhaps Powers could achieve this by teaching kids that "feeling good about doing good things" is part of the game, and maybe one of the objectives of the game.
Care to name a few that I cannot counter with some European culture of the last 3,000 years, without even going any further?
"It is generally undesirable for members of my own culture / social class to murder each other without just cause."
Before you respond, note that "Person X committed murder in Society Y and it was okay," is not a counterexample. You will need to present an entire culture which was
... and, I guess if you're still going for it, one which existed in Europe since 1000BC.
Sociopaths and mature adults share that conception. Both of these groups of people tend to have also discovered that it is usually not in their best interest to discuss the subject with people who do not share their maturity or sociopathic nature respectively.
The reason a sociopath must arrive at the insight Powers proposes we teach earlier is that they cannot survive without it. Where a normal individual can survive (but not thrive) with a naive morality a sociopath cannot rely on the training wheels of guilt or shame to protect them from the most vicious players in the game before they work things out.
I predict that Powers' curriculum would produce no more sociopaths, make those sociopaths that are inevitable do less damage and result in a whole heap less burnt out, anti-social (or no longer pro-social) idealists.
The analogy I use in my head instead of games is languages. They both have rules, but "games" implies something fake, not productive, and not to be taken seriously. "Languages" are tools we're accustomed to using for everyday functional reasons, and it's clearer that breaking their rules arbitrarily has a more immediate detrimental effect on their purpose (communication).
The most common way I use the metaphor explicitly is during a misunderstanding with a friend. "Wait--what does X mean in Sammish? Z? Ohh, now I get it. In Relsquish, X means Y. That's why I thought you were talking about Y."
The nice thing about this model is that, in a game, you expect everyone to know the rules before you sit down to play. If someone doesn't follow them, they're either too ignorant to play or cheating. When you're talking to someone who speaks a different language from you (even if they're just different versions of English, like Sammish and Relsquish are), occasional confusion is a matter of course. When you misunderstand each other, no one has "broken" the rules; it's just a mismatch. You identify that, explain in other words, and move on, with much fewer hard feelings or blame.
That is very fun to say. Rel-squish!
Haha. Yes, it is. I don't get to say it much, because I'm Fizz to almost everybody who knows me in person--so I refer to myself as speaking Fizzish instead. I didn't think it was worth the trouble of explaining that for the sake of the example, though. :)
Wrong. It's an idiotic game. Makes no sense at all.
Eh? On looking it up [1], it seems about as sensible as any other children's game. It encourages dexterity and fitness, it's spontaneously played by children, and it only needs a stick of chalk and a pebble. Whence this burst of antipathy to a game mentioned only in passing?
[1] It was played when I was a boy, but in the culture I grew up in, it was exclusively a girls' game. I never figured out what the rules were just from seeing it played in the street.
You know what children's game is wrong? Elbow Tag. It requires a large group, but at any given moment, only two people are actually playing. If the chasee is faster than the chaser, there is an equilibrium state that lasts until the chasee has mercy on the rest of the group and voluntarily lets someone else play...but even if that happens, since the new chasee is rested and the chaser is the same, you're normally back in the same boat.
Ugh. I have no idea what wedrifid has against hopscotch, but I empathize with the sentiment.
-- Jack T. Chick
Compare:
-- Richard Dawkins
I was about to stand and applaud, until I realized...
Let's say I like flying, I like the earth's ecology, I think large-scale flying is killing the earth's ecology, I think my individual flying is not capable of making a difference to the planet's ecology, and I think technologically advanced cultures capable of sustaining commercial human flight only appear superior because they're able to offload the costs of their advancement to the rest of the earth's population [1].
And I'm at 30,000 feet. Am I a hypocrite?
Worse, am I Richard Dawkins, once you clip off the last item in the first paragraph?
[1] Not my actual beliefs. Except one.
I think you may have misunderstood the point Dawkins was making. It wasn't "if you're in an aeroplane, you aren't entitled to denigrate the society whose achievements made that possible". It was "If you're in an aeroplane, you aren't entitled to claim that all truth is relative, because the fact that the aeroplane stays in the air is dependent on a very particular set of notions about truth, which demonstrably work better than their rivals -- as demonstrated by the fact that our aeroplanes actually fly."
Some context that may be helpful.
Okay, point taken. But to nitpick, that sounds more like epistemological relativism than cultural -- though he can be forgiven for not expecting his audience to be sensitive to the difference. And the context makes it clear too.
No, they don't want a dogmatic and intolerant pilot. They want an empirical pilot who trusts his observations and instruments and uses them to make the best judgement regarding how to operate the plane.
On the other hand, a dogmatic, absolutist pilot who is absolutely sure as to the best way to land the airplane under all conditions, ignores his instruments, weather conditions and data from the control towers, and never listens to his flight crew... is a recipe for disaster.
Dogmatic absolutists mistake observation, skepticism, tolerance and empiricism for "fuzzy thinking". They don't realize that their own thinking is the very opposite of scientific thinking- which is based on observation, not fixed dogmas.
I agree! And I think atheist writers, in their worst moments, fall into the same trap.
Could you give an example?
Well, the obvious one is the Dawkins quote on the airplane, already treated in ways I agree with by SilasBarta. More generally, I am troubled by atheist attacks on the idea of religious tolerance - Sam Harris says it's "driving us toward the abyss". I mean, really, if you find yourself nodding along to a pro-intolerance rant from Jack Chick then maybe you want to ask yourself some questions.
Even so, I, like Sam Harris and Jack Chick, think that Islam is awful and needs to be resisted.
Edit: Bleh, this comment came out wrong - it's more condescending than helpful. The subject is probably too complicated to deal with here. Basically I think religious tolerance has a fairly good track record and I'd want to be very careful in tinkering with it.
I agree with your last sentence. But I don't think you've provided an example of any of these writers doing any of the things attributed to "dogmatic absolutists" in N_MacDonald's last paragraph.
-- old Sufi parable
Hm, how about...
Beliefs are justified by their Solomonoff-nature. That is the core of Bayesianism. Science is bookkeeping.
It bothers me that "bookkeeping" is given a disparaging tone.
Perhaps because it is easy to ritualize bookkeeping? I think to remember that is to keep within the spirit of the twelfth virtue, the void.
That's because it's easy to misvalue assets if you're disconnected from the production process. So when you have specialized bookkeepers, others will typically see them as ignorant of the true value of the assets, and associate this with bookkeeping per se, rather than bookkeeping with a screwy incentive structure and/or knowledge flows. Because this is the context in which most people interface with accountants, they tend to be associated with misvaluing assets. And thus:
"Beancounters didn't think a soldier's life was worth 300 [thousand dollars]." -- Batman Begins
Edit: Sorry, I forgot to translate all that: P(observe "accountant" | believe accountant misvalued assets) > P(observe "accountant" | ~believe accountant misvalued assets)
...
...
...reason #7 I love LessWrong: when they want to improve audience comprehension, people have to translate from English to mathematical formulas instead of the reverse.
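Going the other way, the inequality above is just the condition for hearing "accountant" (used dismissively) to count as Bayesian evidence for the belief. A minimal sketch, with all probabilities invented for illustration:

```python
# Bayes reading of the likelihood inequality, with invented numbers.
# H = "this person believes the accountant misvalued the assets"
p_h = 0.3                 # prior P(H)
p_obs_given_h = 0.6       # assumed P(observe dismissive "accountant" | H)
p_obs_given_not_h = 0.2   # assumed P(observe dismissive "accountant" | ~H)

# Law of total probability, then Bayes' theorem.
p_obs = p_obs_given_h * p_h + p_obs_given_not_h * (1 - p_h)
posterior = p_obs_given_h * p_h / p_obs   # = 0.18 / 0.32, i.e. ~0.5625

# Because P(obs|H) > P(obs|~H), the observation raises P(H):
assert posterior > p_h
```

Any pair of likelihoods satisfying the stated inequality gives the same qualitative result: the posterior exceeds the prior.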
If I could just recruit another equally capable soldier for $299,000 or less with no ill consequences, then this seems like a shut-up-and-multiply situation that accountants are trained for.
Hell, from a utilitarian perspective, if I saved a single soldier with that money instead of feeding and housing let's say, 300 African children for 10 years, then I made a stupid decision.
I think the accountant got things just about right.
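For what it's worth, the implied cost assumption in that comparison checks out as arithmetic (the per-child figure here is purely illustrative, back-solved from the comment's own numbers):

```python
# Back-of-the-envelope check of the trade-off above (figures illustrative).
suit_cost = 300_000          # dollars for one soldier's protection
cost_per_child_year = 100    # assumed dollars to feed/house one child per year
years = 10

children_supported = suit_cost // (cost_per_child_year * years)
print(children_supported)  # 300 -- "300 children for 10 years" implicitly
                           # prices a child-year at about $100
```

The point of the exercise is only that the comparison is sensitive to that assumed child-year cost; double it and the suit buys 150 child-decades instead.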
Good point, bad example -- that's probably a case where accountants have the best knowledge of the costs of losing a soldier, and the generals are best capable of communicating it. The military also provides a certain payout to the family for a death.
Still, I find it hard to believe that there aren't some US soldiers for which it's worth spending 300k for the level of protection that a high-tech kevlar bodysuit provides. Special Forces goes to pretty insane lengths to provide protection, although perhaps the $300k unit cost would only be with a bulk discount, etc.
(Of course, it's fictional evidence anyway...)
Usually military personnel who have received expensive enough training to justify that are called officers, but there are definitely some exceptions. I wouldn't disagree.
And, now that you mention it, I could imagine the payout being expensive enough that not paying the money would be flatly irrational, but I don't know the number.
That isn't a counterargument. "Officer" is a (category of) rank, not a job description. A whole lot of actual military "action" work is in fact performed by officers, particularly if it involves high levels of skill. (For example, pilots are usually officers.)
Yes, they are. But I've never heard a pilot called a soldier. This goes for most jobs performed by people in the O Ranks.
I am using Soldier to be interchangeable with Enlisted Man since I've seen and heard it used that way myself.
I assumed it was used that way in context, but maybe it wasn't.
No, "soldier", at least in U.S. military jargon, means "member of the Army" (as opposed to the other services). The Army chief-of-staff, a four-star general, will refer to themselves as a "soldier".
-- Proceedings of the 1968 NATO Conference on Software Engineering
"You can always reach me through my blog!" he panted. "Overpowering Falsehood dot com, the number one site for rational thinking about the future--"
Go ahead, down-vote me. It's still paradoxically-awesome to be burned in a Greg Egan novel...
What is the context of the quote? Is the OF.com guy a total dolt, an arrogant twat, a cloud cuckoolander, or what?
Thought this was worthy of its own thread in Discussion so interested people won't miss it: http://lesswrong.com/r/discussion/lw/2ti/greg_egan_disses_standins_for_overcoming_bias/
Found a couple of semi-spoilery reviews for Zendegi. Apparently it has stand-ins for Robin Hanson and SIAI as foils for the authorial message.
Oh, I see - the ref is to 'Overcoming Bias.com'. For a moment I was confused because overpoweringfalsehood.com doesn't work and I didn't see any URL in your profile and I thought you were talking about you being burned and not all of us.
Prompted by the discussion of Sam Harris's idea that science should provide for a universal moral code, I thought of this suitable reply given long ago:
(It also provides for some interesting perspective on the current epistemological state of various academic fields that are taken seriously as a source of guidance for government policy.)
The continuing controversy over well-established facts of evolution, even though the threat they pose to religious leaders' dominion is very indirect, would seem to prove Hobbes right.
Rene Descartes.
Ha! That's very clever and nicely phrased. (And true, sadly.)
— Marcus Aurelius
I've just been advised that he probably didn't say that.
Is there a general name for that shape of argument? It or something close to it seems to be a recurring pattern.
"People who can manage their lives will, despite MMOs." (People who lose time playing MMOs are bad at managing life, so they would've lost the time anyway.)
"There's only two possible outcomes for their relationship. They split, or they stay together forever. If they split, then the sooner it happens the better for everyone. If they stay, then my meddling won't matter."
"No one you would want to meet would find you boring."
(Edit: removed opening "also".)
False dilemma. Also false dichotomy, or possibly black-and-white thinking.
I've never heard a name for that, and it ought to have one. How about "the predestination fallacy"? They all seem to start with the assumption that something will go the same way no matter what, then conclude that therefore, pushing it in a bad direction is okay.
It's not always a fallacy. Examples:
You're trying to achieve some objective, and the difference between achieving it and not achieving it swamps all other differences between credible outcomes. It may then be rational to assume that your desired objective is achievable. (You have nasty symptoms, which can be caused by two diseases. One will kill you in a week whatever you do. One is treatable. If it's at all difficult to distinguish the two, you might as well assume you've got the treatable one.)
You're trying to achieve some objective, and you know it's achievable because others have achieved it, or because the situation you're in has been crafted to make it so. It's rational to assume it's achievable. (There's an example in J. E. Littlewood's "A Mathematician's Miscellany": he was climbing a mountain, he got to a certain point and couldn't see any way to make progress, and he reasoned thus: I know this is possible, and I know I've come the right way so far, so there must be a hidden hold somewhere around there ... and, indeed, there was.)
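The two-disease case is really a dominance argument in expected value. A sketch with invented numbers (survival utility 1, death 0):

```python
# Expected-survival sketch of the two-disease example (numbers invented).
# Disease A kills you in a week no matter what; disease B is treatable.
def expected_survival(p_treatable, act_as_if_treatable):
    # If it's A you die either way; if it's B, you survive iff you treat for B.
    return p_treatable * (1.0 if act_as_if_treatable else 0.0)

# Acting as if you have the treatable disease weakly dominates,
# whatever the (unknown) probability that you actually do:
for p in (0.01, 0.3, 0.9):
    assert expected_survival(p, True) >= expected_survival(p, False)

print(expected_survival(0.3, True))   # 0.3
print(expected_survival(0.3, False))  # 0.0
```

That's why it isn't a fallacy here: "assume the treatable one" wins for any nonzero probability, so you don't even need a good estimate of it.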
It looks like it's called Morton's fork.
For the first two "then"s, the conclusions seem plausible but far from the only possible ones if the possibility of (knowable) gods were taken seriously. It sounds like saying that if you live under an unjust government, you should act like it doesn't exist until you get arrested, rather than either accepting it or trying to fight it.
As Marcus Aurelius was a philosopher king, I get the feeling this quote is in the context of the gods being unknowable. The unjust government, on the other hand, is here and knowable.
(R. Diekstra, Haarlemmer Dagblad, 1993, cited by L. Derks & J. Hollander, Essenties van NLP (Utrecht: Servire, 1996), p. 58)
I think of this as a rationalist parable and not so much a quote. It has a lot of personal resonance since I often had dog biscuits with my tea when I was younger.
Philosopher: Can we ever be certain an observation is true?
Engineer: Yep.
Philosopher: How?
Engineer: Lookin'.
Scrollover of SMBC #1879
Sssso there AR threeeeeee Hariats. Toldem so Oh god, my head.
**Note: the dupes are on purpose
I'd say a good engineer would reply: No observation is true, but truth doesn't matter if it works.
In that case, I'd say you're using a much too binary definition of "true". I'm sure this has been posted a dozen times before, but it seems relevant:
"When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."
-Isaac Asimov
Exactly the sort of quote I was looking for. The philosopher is asking about absolute truth, the engineer only cares about finding parameters for a model of reality that works well enough for what you need it to do.
-- Rafael Sabatini, "The Sea-Hawk"
-- Will Smith
-- Ken Binmore, in a critique of Gauthier's "Morals by Agreement"
-- Samuel Bowles, Microeconomics, p. 39
-- Samuel Wilberforce
-- T. H. Huxley
From an 1860 Oxford evolution debate (quoted from Games, Groups, and the Global Good)
(Interestingly, the author, Robert May, after presenting these quotes, goes on to suggest that "Wilberforce, had he possessed an all-encompassing knowledge of the science of his day, could have won the debate. The Darwin–Wallace theory of evolution, at that time, had three huge problems.")
Oh what the heck, here are two of the problems that Robert May spoke of in the above quote:
...
Well, now I have to link this DC: http://dresdencodak.com/2009/08/06/youre-a-good-man-charlie-darwin-2/
Not a surprise at all. Most major new paradigms have huge gaping flaws; this is one of the core theses of Paul Feyerabend's brand of philosophy of science in works like Against Method (eg. look at his analyses of major flaws in Galileo).
(I was amazed this was not on the first three pages of a google search of the site.)
-- Richard Feynman, "Cargo Cult Science"
-- Larry Niven
I think there are legitimate questions about the advisability of marijuana, but this is a claim for which counterexamples are plentiful. Alan Moore springs to mind.
OTOH, "you only tell your story once, so do it on paper" also appears in Dorothea Brande's "Becoming a Writer".
Funny, I got the same advice sans drugs about NaNoWriMo a while back, and was just passing it on recently to someone else. The way it was put to me, though, was that "you can only tell your story once." Not literally, of course--you can relate what happens in it more than once--but you can only really tell it and put your heart into it once. Don't waste it talking to your friends about the idea. Get it on paper the way you feel it. Then tell your friends the lesser version afterwards, or just wait and let them read what you wrote down.
I am momentarily breaking hiatus specifically to say that you don't even need marijuana or alcohol to suffer from this. The normal human capacity for self-delusion and need for self-esteem are more than deadly enough all by themselves.
Personally, I'm still struggling to accept this lesson: that it's not enough to be a smart person who has good ideas; you need to do something that actually works. It is, in its own way, a highly counterintuitive idea, much like this notion that plausibility isn't enough, and beliefs should actually predict experimental results. I keep wanting to protest that I was morally right. Well, say that I was. In order for that moral rightness to change anything, I still need a method that really actually works, not just morally works.
Truth does not demand belief. Scientists do not join hands every Sunday, singing 'Yes, gravity is real! I will have faith! I will be strong! I believe in my heart that what goes up, up, up must come down, down, down. Amen!' If they did, we would think they were pretty insecure about it.
...research is, after all, asking the universe silly questions and getting silly answers until neither question nor answer is silly any more
From a discussion of autodidacticism which may be of general interest.
Walpola Rahula
-- Jean le Rond d'Alembert on infinitesimals (as quoted in Mathematics: The Loss of Certainty, by Morris Kline)
AI makes philosophy honest
-- Daniel Dennett
— Timothy Leary
Not sure if this will qualify as a rationalist quote, but these are the last few lines from the Creation Hymn in the Rig Veda, the oldest of the Hindu sacred texts & estimated to be composed around 1100 BC. I like the note of uncertainty, rather rare among religious texts.
In the original, the atheist Carvaka writings contained much verse (as Indian philosophy/theology usually does); see http://www.humanistictexts.org/Carvaka.htm. In translation, they almost sound like senryū:
Reminds me of the doctrine that some Christians have, where anybody who dies before a certain age automatically goes to heaven, while people above that age can go to hell. The question then becomes: why don't parents kill their children, thus saving them from the all-too-likely possibility of eternal torture?
(Fun fact: most people who believe in hell can be made very uncomfortable if you look at the unfortunate implications of what they believe.)
I was once in a debate in which I pursued that point at some length. I don't think most people who believe in Hell find that particular point more difficult to rationalize than most of their other religious beliefs, but I bring it up because it led to a quote which, while only tangentially relating to rationality, strikes me as pretty memorable.
"That seems like an awfully selfish reason not to kill a million babies."
Some wisdom on warm fuzzies: http://www.pbfcomics.com/?cid=PBF162-Executive_Decision.jpg
[Not a quote, but doesn't seem suitable for a discussion article.]
Might this imply that we still want open threads?
"Whereas the howto is, by definition, addressed to a lay audience, it currently takes an expert on howtos to know which title in the tangled mass will deliver the goods." -- Dwight Macdonald, 1954
Cited here in an article about recalls of dangerously inaccurate how-to books.
— The Simpsons, Season 22, Episode 3, "MoneyBART"
-- Richard Feynman, The Pleasure of Finding Things Out
-- Homer Simpson
"Because this is the Internet, every argument was spun in a centrifuge instantly and reduced down into two wholly enraged, radically incompatible contingents, as opposed to the natural gradient which human beings actually occupy." -Tycho, Penny Arcade
-- unknown
Compare:
Therefore, one-box. FOR THE EMPEROR.
Generally speaking, Warhammer 40k probably isn't a good source for philosophy.
I rate it above Descartes.
Above "invented analytic geometry" Descartes, or just above Meditations Descartes?
-- Brahma, Mahabharata