Rationality Quotes August 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
--Bertrand Russell, A History of Western Philosophy
I often find that I'm not well read enough, or perhaps not smart enough, to decipher the intricate language of these eminent philosophers. I'd like to know: is Russell talking about something akin to scientific empiricism? Can someone enlighten me? From my shallow understanding, though, it seems like what he is saying is almost common sense when it comes to building knowledge or beliefs about a problem domain.
The idea that one should philosophize keeping close contact with empirical facts, instead of basing a long chain of arguments on abstract "logical" principles like Leibniz's, may be almost common sense now, but it wasn't in the early modern period Russell was talking about. And when Russell wrote this (1940s) he was old enough to remember that these kinds of arguments were still prevalent in his youth (1880s-1890s) among absolute idealists like Bradley, as he describes in "Our Knowledge of the External World" (follow the link and do a Ctrl-F search for Bradley). So it did not seem to him a way of thinking so ancient and outdated as to be not worth arguing against.
Ah very good, in that context it makes perfect sense.
-- Nick Szabo
What about compression?
Do you mean lossy or lossless compression? If you mean lossy compression then that is precisely Szabo's point.
On the other hand, if you mean lossless, then if you had some way to losslessly compress a brain, this would only work if you were the only one with this compression scheme, since otherwise other people would apply it to their own brains and use the freed space to store more information.
You'll probably have more success losslessly compressing two brains than losslessly compressing one.
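The claim that two brains compress better together than separately is just the shared-redundancy property of lossless compressors, and it can be demonstrated with ordinary tools. A sketch in Python using `zlib`, where the "brains" are hypothetical byte blobs that share most of their content:

```python
import random
import zlib

random.seed(0)
# Two hypothetical "brains": a large shared chunk plus a small private tail.
shared = bytes(random.getrandbits(8) for _ in range(10_000))
brain_a = shared + b"A's private memories"
brain_b = shared + b"B's private memories"

one_alone = len(zlib.compress(brain_a))
both_together = len(zlib.compress(brain_a + brain_b))

# Compressing both together costs far less than twice the cost of one,
# because the compressor encodes the shared chunk as back-references
# the second time it appears.
print(both_together < 2 * one_alone)  # True
```

The shared chunk must fit inside the compressor's back-reference window (32 KB for zlib) for the savings to appear; the effect only breaks the quote's point if the brains really do overlap heavily.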
Still, I don't think you could compress the content of 1000 brains into one. (And I'm not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)
I argue that my brain right now contains a lossless copy of itself and itself two words ago!
Getting 1000 brains in here would take some creativity, but I'm sure I can figure something out...
But this is all rather facetious. Breaking the quote's point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed as them.
Which I can't.
For now.
I'd argue that your brain doesn't even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo's point about the limits of being able to model other people applies just as strongly to modelling oneself. I don't, and cannot, know all about myself -- past, current, or future, and that must have substantial implications about something or other that this lunch hour is too small to contain.
How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done -- for example, it is possible to write a program that prints out an exact copy of itself (without having access to the file that contains it), the proof of Gödel's theorem involves constructing a proposition that talks about itself, and TDT depends on agents being able to reason about their own and other agents' source codes. Are there mathematical limits to this?
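The self-printing program mentioned above is known as a quine. A minimal sketch of one in Python:

```python
# A quine: the next two lines print an exact copy of themselves,
# without ever reading their own source file.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is that the string contains a template for the whole program, and `%r` re-inserts the string into itself in quoted form; this is the same diagonal construction that underlies Gödel's self-referential proposition.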
I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.
But our memories discard huge amounts of information all the time. Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.
Certainly. I am suggesting that over sufficiently short timescales, though, you can deduce the previous structure from the current one. Maybe I should have said "epsilon" instead of "two words".
Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.
So, depending on your choice of two words, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer.
Actually, so long as the brain can be considered as operating independently from the outside world (which, given an appropriately chosen small interval of time, makes some amount of sense), a complete description at time t will imply a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too.
So I've made another error: I should have said that my brain contains a lossless copy of itself and itself two words later. (where "two words" = "epsilon")
See the pigeon-hole argument in the original quote.
If you can scan it, maybe you can simulate it? And if you can simulate one, wait some years and you can simulate 1000, probably connected in some way to form a single "thinking system".
But not on your own brain.
-- Nick Szabo
the overuse of "quantum" hurt my eyes. :(
M. C. Escher
-- Oscar Wilde
I like it, but what's it got to do with rationality?
To me at least, it captures the notion of how the perceived Truth/Falsity of a belief rests solely in our categorization of it as 'tribal' or 'non-tribal': weird or normal. Normal beliefs are true, weird beliefs are false.
We believe our friends more readily than experts.
And many charming people are also bad.
Thank you, Professor Quirrell.
It is absurd to divide people into charming or tedious. People either have familiar worldviews or unfamiliar worldviews.
It is absurd to divide people into familiar worldviews or unfamiliar worldviews. People either have closer environmental causality or farther environmental causality.
(anyone care to formalize the recursive tower?)
It's absurd to divide people into two categories and expect those two categories to be meaningful in more than a few contexts.
What about good vs bad humans?
Or humans who create paperclips versus those who don't?
I thought I just said that.
It is absurd to divide people. They tend to die if you do that.
It's absurd to divide. You tend to die if you do that.
It's absurd: You tend to die.
It's absurd to die.
It's bs to die.
Nobody alive has died yet.
“Males” and “females”. (OK, there are edge cases and stuff, but this doesn't mean the categories aren't meaningful, does it?)
On the face of it I would absolutely disagree with Wilde on that: to live a moral life one absolutely needs to distinguish between good and bad. Charm (in bad people) and tedium (in good people) get in the way of this.
On the other hand, was Wilde really just blowing a big raspberry at the moralisers of his day? Sort of saying "I care more about charm and tedium than what you call morality". I don't know enough about his context...
But is it necessary to divide people into good and bad? What if you were only to apply goodness and badness to consequences and to your own actions?
If your own action is to empower another person, understanding that person's goodness or badness is necessary to understanding the action's goodness or badness.
But that can be entirely reduced to the goodness or badness of consequences.
Since I can't be bothered to do real research, I'll just point out that this Yahoo answer says that the quote is spoken by Lord Darlington. Oscar Wilde was a humorist and an entertainer. He makes amusing characters. His characters say amusing things.
Do not read too much into this quote; without further evidence, I would not attribute this philosophy to Oscar Wilde himself.
(I haven't read Lady Windermere's Fan, where this is from, but this sounds very much like something Lord Henry from The Picture of Dorian Gray would say. And Lord Henry is one of the main causes of Dorian's fall from grace in that book; he's not exactly a very positive character, but certainly an entertainingly cynical one!)
I don't know that you can really classify people as X or ¬X. I mean, have you not seen individuals be X in certain situations and ¬X in other situations?
&c.
That's excellent advice for writing fiction. Audiences root for charming characters much more than for good ones. Especially useful when your world only contains villains. This is harder in real life, since your opponents can ignore your witty one-liners and emphasize your mass murders.
(This comment brought to you by House Lannister.)
Carl Sagan
— Nick Szabo, quoted elsewhere in this post. Fight!
Knowledge and information are different things. An audiobook takes up more hard disk space than an e-book, but they both convey the same knowledge.
This is one of the obvious facts that made me recoil in horror while reading Neuromancer. Their currency is BITS? Bits of what?
Are you sure you are thinking of the right novel? Searching this for the word "bit" did not find anything.
He may have been thinking of My Little Pony: Friendship is Magic.
Was the parent upvoted because people thought it was funny, or because they thought I had provided the correct answer, or because I mentioned ponies, or some other reason?
probably because you mentioned ponies.
Which got even more upvotes... [sigh]
Please don't become reddit!
Yes.
Apparently so! Then, which book was it?? Shoot.
I think this is just a misuse of the word "information". If the bits aren't equal value, clearly they do not have the same amount of information.
I think value was used meaning importance.
Clearly some bits have value 0, while others have value 1.
-- [Edit: Probably not] Albert Einstein
Do you have a source? Einstein gets quoted quite a lot for stuff he didn't say.
Hmm. There are hundreds of thousands of pages asserting that he said it, but for some reason I can't find a single reference to its context.
Thanks. Have edited the quote.
For future reference: wikiquote gives quotes with context.
Thanks, I already plugged them :)
Yes, and even more annoyingly, he gets quoted on things of which he is a non-expert and has nothing interesting to say (politics, psychology, ethics, etc...).
Geniuses seem to create problems. They prevent some in the process, and solve others, but that's not what they're in it for: it's not nearly as fun.
-- G. K. Chesterton, Orthodoxy
-Friedrich Nietzsche
-Kirill Yeskov, The Last Ringbearer, trans. Yisroel Markov
Context, please?
I dunno. I read The Last Ringbearer (pretty good, although I have mixed feelings about it in general), but it doesn't seem interesting to me either.
Mithril is described as an alloy with near-miraculous properties, produced in ancient times, which cannot be reproduced nowadays, despite the best efforts of modern metallurgy. The book is a work of fiction.
Alternatively, mithril is aluminum, almost unobtainable in ancient times and thus seen as miraculous. Think about that the next time you crush a soda can.
(belated...)
Incidentally, in many cases modern armor is made of aluminum, because aluminum (being less rigid) can dissipate more energy without failing. A suit of chain mail made of aircraft-grade aluminum would seem downright magical a few centuries ago.
Aluminum was entirely unobtainable in ancient times, I believe. It fuses with carbon as well as oxygen, so there was no way to refine it. And it would have made terrible armor, being quite a lot softer than steel. It also suffers from fatigue failures much more easily than steel. These are some of the reasons it makes a bad, though cheap, material for bikes.
Pure aluminum can be found without reducing it yourself, but it's very rare. You'd have to pluck it out of the interior of a volcano or the bottom of the sea- and so it seems possible that some could end up in the hands of a medieval smith, but very unlikely.
Oh, I don't know, one would say the same thing about meteoritic iron, and yet there are well documented uses of it.
(Although apparently the Sword of Attila wasn't really meteoritic and I got that from fiction.)
— Kirill Yeskov, The Last Ringbearer, trans. Yisroel Markov
Louis C.K.
-Adam Smith, The Theory of Moral Sentiments
Now that we are informed of disasters worldwide as soon as they happen, and can give at least money with a few mouse clicks, we can put this prediction to the test. What in fact we see is a very great public response to such disasters as the Japanese earthquake and tsunami.
True, but first of all, the situation posited is one in which China is "swallowed up". If a disaster occurred, and there was no clear way for the generous public to actually help, do you think you would see the same response? I'm sure you would still have the same loud proclamations of tragedy and sympathy, but would there be action to match it? I suppose it's possible that they would try to support the remaining Chinese who presumably survived by not being in China, but it seems unlikely to me that the same concerted aid efforts would exist.
Secondly, it seems to me that Smith is talking more about genuine emotional distress and lasting life changes than simply any kind of reaction. Yes, people donate money for disaster relief, but do they lose sleep over it? (Yes, there are some people who drop everything and relocate to physically help, but they are the exception.) Is a $5 donation to the Red Cross more indicative of genuine distress and significant change, or the kind of public sympathy that allows the person to return to their lives as soon as they've sent the text?
If help is not possible, obviously there will be no help. But in real disasters, there always is a way to help, and help is always forthcoming.
Even if help is not possible, there will be "help."
I was expecting the attribution to be to Mark Twain. I wonder if their style seems similar on account of being old, or if there's more to it.
Tentatively: rhetoric was studied formally, and Twain and Smith might have been working from similar models.
I think it means you're underread within that period, for what it's worth.
The voice in that quote differs from Twain's and sounds neither like a journalist, nor like a river-side-raised gentleman of the time, nor like a Nineteenth Century rural/cosmopolitan fusion written to gently mock both.
Though the voice isn't, the sentiment seems similar to something Twain would say. Though I'd expect a little more cynicism from him.
Why did people in olden times hate paragraphs so much?
Paragraphs cost lines, and when each line of paper on average costs five shillings, you use as many of them as you can get away with.
I propose all older works be therefore re-typeset as their creators obviously intended. It'll be like Ted Turner colorizing old movies, except the product in this case will become infinitely more consumable instead of slightly nauseating.
I support this motion, and further propose that formatting and other aesthetic considerations also be inferred from known data on the authors to fully reflect the manner in which they would have presented their work had they been aware of and capable of using all our current nice-book-writing technology.
...which sounds a lot like Eliezer's Friendly AI "first and final command". (I would link to the exact quote, but I've lost the bookmark. Will edit it in once found.)
I concur, with the proviso that "nice technology" must also include the idea compression style of Twitter.
Also, if paper was so expensive, why the hell did they overwrite so much? Status-driven fashion?
Some writers were paid by the word and/or line.
I think much of it is that brevity simply wasn't seen as a virtue back then. There were far fewer written works, so you had more time to go through each one.
I think it's the vagary of various times. All periods had pretty expensive media and some were, as one would expect, terse as hell. (Reading a book on Nagarjuna, I'm reminded that reading his Heart of the Middle Way was like trying to read a math book with nothing but theorems. And not even the proofs. 'Wait, could you go back and explain that? Or anything?') Latin prose could be very concise. Biblical literature likewise. I'm told much Chinese literature is similar (especially the classics), and I'd believe it from the translations I've read.
Some periods praised clarity and simplicity of prose. Others didn't, and gave us things like Thomas Browne's Urn Burial.
(We also need to remember that we read difficulty as complexity. Shakespeare is pretty easy to read... if you have a vocabulary so huge as to overcome the linguistic drift of 4 centuries and are used to his syntax. His contemporaries would not have had such problems.)
For context, the first paragraph-ish thing in Romance of the Three Kingdoms covers about two hundred years of history in about as many characters, in the meanwhile setting up the recurring theme of perpetual unification, division and subsequent reunification.
Why do some people so revile our passive feelings, and so venerate hypocrisy?
Because it helps coerce others into doing things that benefit us and reduces how much force is exercised upon us while trading off the minimal amount of altruistic action necessary. There wouldn't (usually) be much point having altruistic principles and publicly reviling them.
That's quite a theory. It's like the old fashioned elitist theory that hypocrisy is necessary to keep the hoi polloi in line, except apparently applied to everyone.
Or not? Do you think you are made more useful to yourself and others by reviling your feelings and being hypocritical about your values?
The standard one. I was stating the obvious, not being controversial.
I never said I did so. (And where did this 'useful to others' thing come in? That's certainly not something I'd try to argue for. The primary point of the hypocrisy is to reduce the amount that you actually spend helping others, for a given level of professed ideals.)
Sorry, I wasn't getting what you were saying.
People are hypocritical to send the signal that they are more altruistic than they are? I suppose some do. Do you really think most people are consciously hypocritical on this score?
I've wondered as much about a lot of peculiar social behavior, particularly the profession of certain beliefs - are most people consciously lying, and I just don't get the joke? Are the various crazy ideas people seem to have, where they seem to fail on epistemic grounds, just me mistaking what they consider instrumentally rational lies for epistemic mistakes?
According to GiveWell, you could save ten people with that much.
Explanations are all based on what makes it into our consciousness, but actions and the feelings happen before we are consciously aware of them—and most of them are the results of nonconscious processes, which will never make it into the explanations. The reality is, listening to people’s explanations of their actions is interesting—and in the case of politicians, entertaining—but often a waste of time.
-- Michael Gazzaniga
Does that apply to that explanation as well?
Does it apply to explanations made in advance of the actions? For example, this evening (it is presently morning) I intend buying groceries on my way home from work, because there's stuff I need and this is a convenient opportunity to get it. When I do it, that will be the explanation.
In the quoted article, the explanation he presents as a paradigmatic example of his general thesis is the reflex of jumping away from rustles in the grass. He presents an evolutionary just-so story to explain it, but one which fails to explain why I do not jump away from rustles in the grass, although surely I have much the same evolutionary background as he. I am more likely to peer closer to see what small creature is scurrying around in there. But then, I have never lived anywhere that snakes are a danger. He has.
And yet this, and split-brain experiments, are the examples he cites to say that "often", we shouldn't listen to anyone's explanations of their behaviour.
I smell crypto-dualism. "I thought there was a snake" seems to me a perfectly good description of the event, even given that I jumped way before I was conscious of the snake. (He has "I thought I'd seen a snake", but this is a fictional example, and I can make up fiction as well as he can.)
The article references his book. Anyone read it? The excerpts I've skimmed on Amazon just consist of more evidence that we are brains: the Libet experiments, the perceived simultaneity of perceptions whose neural signals aren't, TMS experiments, and so on. There are some digressions into emergence, chaos, and quantum randomness. Then -- this is his innovation, highlighted in the publisher's blurb -- he sees responsibility as arising from social interaction. Maybe I'm missing something in the full text, but is he saying that someone alone really is just an automaton, and only in company can one really be a person?
I believe there are people like that, who only feel alive in company and feel diminished when alone. Is this just an example of someone mistaking their idiosyncratic mental constitution for everybody's?
Obviously not, since Gazzaniga is not explaining his own actions.
He is, among other things, explaining some of his own actions: his actions of explaining his actions.
You seem to have failed to notice the key point. Here's a slight rephrasing of it: "explanations for actions will fail to reflect the actual causes of those actions to the extent that those actions are the results of nonconscious processes."
You ask, does Gazzaniga's explanation apply to explanations made in advance of the actions? The key point I've highlighted answers that question. In particular, your explanation of the actions you plan to take is (well, seems to me to be) the result of conscious processes. You consciously apprehended that you need groceries and consciously formulated a plan to fulfill that need.
It seems to me that in common usage, when a person says "I thought there was a snake" they mean something closer to, "I thought I consciously apprehended the presence of a snake," than, "some low-level perceptual processing pattern-matched 'snake' and sent motor signals for retreating before I had a chance to consider the matter consciously."
Yes, he says that. And then he says:
thus extending the anecdote of snakes in the grass to a parable that includes politicians' speeches.
Or perhaps they mean "I heard a sound that might be a snake". As long as we're just making up scenarios, we can slant them to favour any view of consciousness we want. This doesn't even rise to the level of anecdote.
There is a famous study that digs a bit deeper and convincingly demonstrates it: Telling more than we can know: Verbal reports on mental processes.
From the abstract:
It seems to me that "cognitive processes" could be replaced by "physical surroundings", and the resulting statement would still be true. I am not sure how significant these findings are. We have imperfect knowledge of ourselves, but we have imperfect knowledge of everything.
Did you in fact buy the groceries?
-- Benjamin Franklin
The sentiment is correct (diligence may be more important than brilliance) but I think "all amusements and other employments" might be too absolute an imperative for most people to even try to live by. Most people will break down if they try to work too hard for too long, and changes of activity can be very important in keeping people fresh.
It's possible that what Franklin meant by "amusements" didn't include leisure: in his time, when education was not as widespread, a gentleman might have described learning a second language as an "amusement".
I've heard this a lot, but it sounds a bit too convenient to me. When external (or internal) circumstances have forced me to spend lots of time on one specific, not particularly entertaining task, I've found that I actually become more interested and enthusiastic about that thing. For example, when I had to play chess for like 5 hours a day for a week once, or when I went on holiday and came back to 5000 anki reviews, or when I was on a maths camp that started every day with a problem set that took over 4 hours.
Re "breaking down": if you mean they'll have a breakdown of will and be unable to continue working, that's an easy problem to solve - just hire someone to watch you and whip you whenever your productivity declines. And/Or chew nicotine gum when at your most productive. Or something. If you mean some other kind of breakdown, that does sound like something to be cautious of, but I think the correct response isn't to surrender eighty percent of your productivity, but to increase the amount of discomfort you can endure, maybe through some sort of hormesis training.
Playing chess for 5 hours a day does not make chess your "sole study and business" unless you have some disorder forcing you to sleep for 19 hours a day. If you spent the rest of your waking time studying chess, playing practice games, and doing the minimal amount necessary to survive (eating, etc.), THEN chess is your "sole study and business"; otherwise, you spend less than 1/3 your waking life on it, which is less than people spend at a regular full time job (at least in the US).
In my model this strategy decreases productivity for some tasks, especially those which require thinking. Fear of punishment brings a "fight or flight" reaction, and both of these options are harmful for thinking.
I think that both you and Mr. Franklin are correct.
To wreak great changes one must stay focused and work diligently on one's goal. One needn't eliminate all pleasures from life, but I think you'll find that very, very few people can have a serious hobby and a world changing vocation.
Most of us of "tolerable" abilities cannot maintain the kind of focus and purity of dedication required. That is why the world changes as little as it does. If everyone who was, for example, to the right of center on the IQ curve could make great changes, then "great" would be redefined upwards (if most people could run a 10-second 100 meters, Mr. Bolt would only be a little special).
Furthermore...Oooohh...shiny....
Yes -- and to me, that's a perfect illustration of why experiments are relevant in the first place! More often than not, the only reason we need experiments is that we're not smart enough. After the experiment has been done, if we've learned anything worth knowing at all, then hopefully we've learned why the experiment wasn't necessary to begin with -- why it wouldn't have made sense for the world to be any other way. But we're too dumb to figure it out ourselves!
-- Scott Aaronson
Or at least confirmation bias makes it seem that way.
Also hindsight bias. But I still think the quote has a perfectly valid point.
Agreed.
“Ignorance killed the cat; curiosity was framed!” ― C.J. Cherryh
(not sure if that is who said it originally, but that's the first attribution I found)
Should we add a point to these quote posts, that before posting a quote you should check there is a reference to its original source or context? Not necessarily to add to the quote, but you should be able to find it if challenged.
wikiquote.org seems fairly diligent at sourcing quotes, but Google doesn't rank it highly in search results compared to all the misattributed, misquoted, or just plain made-up-on-the-spot nuggets of disinformation that have gone viral and colonized Googlespace, lying in wait to catch the unwary (such as, apparently, myself).
Yes, and also a point to check whether the quote has been posted to LW already.
-- Dirichlet
(Don't have a source, but the following paper quotes it: Prolegomena to Any Future Qualitative Physics)
-- Hermann Hesse, Demian
Douglas Hofstadter
That quote is supposed to be paired with another quote about holism.
Q: What did the strange loop say to the cow? A: MU!
-- Knock knock.
-- Who is it?
-- Interrupting koan.
-- Interrupting ko-
-- MU!!!
ADBOC. Literally, that's true (but tautologous), but it suggests that understanding the nature of their sum is simple, which it isn't. Knowing the Standard Model gives hardly any insight into sociology, even though societies are made of elementary particles.
Interviewer: How do you answer critics who suggest that your team is playing god here?
Craig Venter: Oh... we're not playing.
Fiction is a branch of neurology.
-- J. G. Ballard (in a "what I'm working on" essay from 1966.)
Noam Chomsky
Ballard does note later in the same essay "Neurology is a branch of fiction."
I am a strange loop and so can you!
http://xkcd.com/435/
Thomas Jefferson
-- The dullest blog in the world
Why do I find that funny?
-- The comments to that entry.
When I stumbled on that blog some years ago, it impressed me so much that I started trying to write and think in the same style.
-- Catelyn Stark, A Game of Thrones, George R. R. Martin
-Seth Godin
Could you add the link if it was a blog post, or name the book if the source was a book?
Done.
A common piece of advice from pro Magic: the Gathering players is "focus on what matters." The advice is mostly useless to many people, though, because the pros have made it to that level precisely because they know what matters to begin with.
Perhaps the better advice, then, is "when things aren't working, consider the possibility that it's because your efforts are not going into what matters, rather than assuming it is because you need to work harder on the issues you're already focusing on."
It does not! It does not! It does not! ... continued here
-- Erika Moen
I wonder how common it is for people to agentize accidents. I don't do that, but, annoyingly, lots of people around me do.
-- Niclas Berggren, source and HT to Tyler Cowen
Sounds like a job for...Will_Newsome!
EDIT: Why the downvotes? This seems like a fairly obvious case of researchers going insufficiently meta.
— Abraham Lincoln
--rickest on IRC
That's not what "reinventing the wheel" (when used as an insult) usually means. I guess that the inventor of the tyre was aware of the earlier types of wheel, their advantages, and their shortcomings. Conversely, the people who typically receive this insult don't even bother to research the prior art on whatever they are doing.
To go along with what army1987 said, "reinventing the wheel" isn't going from the wooden wheel to the rubber one. "Reinventing the wheel" is ignoring the rubber wheels that exist and spending months of R&D to make a wooden circle.
For example, trying to write a function to do date calculations, when there's a perfectly good library.
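To illustrate (a hypothetical sketch, not from the thread): Python's standard `datetime` library already knows about month lengths and leap years, which is exactly the knowledge a hand-rolled date function has to get right:

```python
from datetime import date, timedelta

# Leap-year handling comes for free: 2012-02-27 + 3 days crosses Feb 29.
print(date(2012, 2, 27) + timedelta(days=3))  # 2012-03-01

# Differences between dates are timedeltas; no month-length tables needed.
print((date(2012, 8, 31) - date(2012, 8, 1)).days)  # 30
```

The wooden-circle version of this is a loop over hard-coded month lengths that someone inevitably gets wrong for century years.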
Ta-nehisi Coates
Gary Drescher, Good and Real
-- A Softer World
[Meta] This post doesn't seem to be tagged 'quotes,' making it less convenient to move from it to the other quote threads.
— Arthur Conan Doyle, “The Hound of the Baskervilles”