Rationality Quotes: December 2010
Every month, Less Wrong has a thread where we post Deep Wisdom from the Masters. I saw that nobody had done this yet for December, so I figured I could do it myself.
* Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
* "Do not quote yourself." --Tiiba
* Do not quote comments/posts on LW/OB. That's like shooting fish in a barrel. :)
* No more than 5 quotes per person per monthly thread, please.
Comments (331)
Thank you for a wonderful and rich forum of ideas. Looking fwd. to offering something soon.
Welcome to Less Wrong!
"It's not having what you want, it's wanting what you've got" -- Sheryl Crow
Although I don't like dogpiles: the utility function is up for grabs?
It's both.
No, it's definitely having what you want.
Also, here.
I would much rather have what I want as well. Wanting what I’ve got would make me consistently accept suboptimal conditions instead of making an effort to achieve maximum utility.
"Fine phrases are the last resource of those who have run out of arguments." -- Peter Singer
— Lazarus Long (in Time Enough For Love by Robert Heinlein)
Spider Robinson usually says that in his podcast. And it was posted here a few days ago as a Robinson quote. How sure are you of your attribution?
I looked it up; it's from Lazarus Long in "Time Enough For Love".
It still wasn't technically said by Heinlein, then.
I haven't read the books and I don't know much about Heinlein, so I can't judge this myself, but I've heard Lazarus Long described as an Author Avatar several times, such that sayings attributed to him may as well be attributed to Heinlein.
While you may trust your own judgment in this case, do we want to promote a general rule that any quote can be attributed to someone as long as they "may as well" have said it?
Fair enough, I'll edit the post.
Thanks.
Tim Ferriss | The 4 hour body
George W. Bush (source)
And if you're one of those types of people that are always trying to figure out what region of the multiverse they're in, or how many identical copies of them have been created by an intergalactic superintelligent trickster, or what anthropic reference class they're in, or whether they're living in a computer simulation, or how their choices will impact maybe-logically-impossible counterfactual worlds — you know, one of those people — decision making can be really difficult. ;)
The notion of being one of those people who tries to figure out what reference class they are in is causing me to giggle uncontrollably right now.
I am so proud to be part of a community where that can happen.
Winston Churchill
I apologize if this is a duplicate, for I cannot find it with the search bar:
Time Enough for Love (1973) or The Notebooks of Lazarus Long (1978), Robert A. Heinlein
-Nestor
--Manuel Blum, "Advice to a Beginning Graduate Student"
Good quote, but the last sentence seems misleading - what the brother was saying was something like "there's something obvious you aren't noticing" (thus prompting Shannon to look again with fresh eyes), which isn't always true.
-- Ted Kaczynski
From chapter 4, #25 of The Unabomber Manifesto: Industrial Society And Its Future.
I was actually curious how that quote would be received. The quote itself is insightful and relevant yet the author is a source of negative affect, approximately a terrorist. I was pleasantly surprised with the outcome.
I'm not surprised. I think a number of us read Kevin Kelly's essay; he was a very smart guy and so avoids the most obvious errors; and even shares quite a few basic views with us - he just takes them in a different way ('one man's modus ponens is another man's modus tollens'). And I think he's been quoted and upvoted in the past.
Coping with radical novelty requires an orthogonal method. One must consider one's own past, the experiences collected, and the habits formed in it as an unfortunate accident of history, and one has to approach the radical novelty with a blank mind, consciously refusing to try to link it with what is already familiar, because the familiar is hopelessly inadequate. One has, with initially a kind of split personality, to come to grips with a radical novelty as a dissociated topic in its own right. Coming to grips with a radical novelty amounts to creating and learning a new foreign language that can not be translated into one's mother tongue. (Any one who has learned quantum mechanics knows what I am talking about.) Needless to say, adjusting to radical novelties is not a very popular activity, for it requires hard work. For the same reason, the radical novelties themselves are unwelcome.
-- Edsger W. Dijkstra, "On the cruelty of really teaching computing science" (EWD1036)
-- Ravel Puzzlewell in Planescape: Torment
Even when I feel ambivalent?
Dr. E. E. Peacock, Jr., quoted in Medical World News (September 1, 1972), p. 45, as quoted in Tufte's 1974 book Data Analysis for Politics and Policy; http://www.marginalrevolution.com/marginalrevolution/2010/12/the-ethics-of-random-clinical-trials.html
I like the message behind the quote, but surely in the case given a massive natural control exists in all patients prior to the introduction of the new surgery?
Patient groups and techniques change over time, assuming the data was even recorded in the first place. (E.g. a lot of data from the past would not be useful today as a direct comparison or control group, simply because diets have changed so much.)
I have empirically determined that this quote is excellent for reading aloud. 2/3 of the audience was moved to applause.
Cool! What audience was that?
3 coworkers at lunch. I used it for comparison with the (arguable) equivalent problem with deliberate experiments on law/government/society, which was the topic of discussion.
But my conclusion above is probably mostly due to the quote being written as a story; it even has text explicitly indicating tone of voice.
Never trust anything that can think for itself if you can't see where it keeps its brain.
--J. K. Rowling, Harry Potter and the Chamber of Secrets
That is racist against entities that think with things other than what we'd call brains.
Don't you mean sexist? ;)
Come now, that was below the belt.
It isn't racist, it's realistic. If an entity thinks with something that we don't even call a brain, we shouldn't trust it because we have no way of knowing its motivations.
Clippy is a perfect example. How can I trust it to be a paperclip maximizer rather than an entity that claims to be a paperclip maximizer? (Over 50% of the LessWrong members, I estimate, do not) If Clippy were human, I would be able to easily assess whether or not it is telling the truth (in this particular instance, the answer would probably be "no", because most humans I know do not make very good paperclip maximizers). If Clippy is not human, then I have no way to judge which points in mindspace make its actions most likely.
That category of things that we call racist does not exclude things simply because they are realistic. Political correctness isn't about being fair.
I would actually call a statement racist if it's primarily justified by racism (in which case it will be realistic only if it happens to be so accidentally). Since "racist" has a lot of negative connotations, it isn't useful to call something racist if you plan to agree with it, and therefore if I had to make a racially-based realistic statement, I'd call it something dumb like a racially-based realistic statement.
Yes, but it says "never trust", not "don't trust by default". It should be possible for non-brain-based beings to demonstrate their trustworthiness.
Edit: Also, you can't spell "REALISTIC" without "RACIST LIE". Proof by anagram. So there.
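For the skeptical, the anagram claim is mechanically verifiable. A minimal Python sanity check (the helper name is mine):

```python
# Two strings are anagrams if they use the same multiset of letters,
# ignoring case and spaces.
def is_anagram(a: str, b: str) -> bool:
    normalize = lambda s: sorted(s.replace(" ", "").lower())
    return normalize(a) == normalize(b)

print(is_anagram("REALISTIC", "RACIST LIE"))  # → True
```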
If we were going to be technical we'd have to start by considering whether or not race is involved at all. It is potentially prejudiced, but not racist.
Talk about underconfidence!
I estimate a 99.9+% likelihood that nobody on this site trusts Clippy to be a paperclip maximizer.
In fact, I'm pretty much incorrigible on this point... that is, I estimate the likelihood that people will mis-state their beliefs about Clippy to be significantly higher than the likelihood that they actually trust Clippy to be a paperclip maximizer.
I do understand that this is epistemically problematic, and I sort of wish it weren't so... I don't like to enter incorrigible states... but there it is.
What is your estimation of the likelihood that I was understating my beliefs about Clippy?
You haven't actually stated any beliefs about Clippy; you stated a belief about the readership of Less Wrong.
Regarding your beliefs about Clippy: as I said, I am incorrigibly certain that you believe Clippy to be human.
As for the likelihood that you were understating your beliefs about LW readers... hm. I don't have much of a model of you, but treating LW-members as a reference class, I'd give that ~85% confidence.
The remaining ~15% is mostly that you weren't understating them so much as not bothering to think explicitly about them at all, and used "over 50%" as a generic cached formula for "more confident than not." Arguably that's a distinction that makes no difference.
I estimate the likelihood that you actually disagree with me about LW readers, upon thinking about it, as ~0%.
Or a suggestion to generalize the concept of a "brain" for non-biological intelligences, such as paperclip maximizers.
I can't help but ask whether you've ever found this advice personally useful, and if so, how.
A much more concrete example is cloud computing. Granted, computers don't "think," but it's a close enough analogy.
You must always keep in mind that there is no magic "cloud"- only concrete machines that other people own and keep hidden from you. People who might have very different ideas than you on such matters, as for example, privacy rights.
This is the allusion I had in mind, but actually I've had occasion to quote this when talking about corporations and similar institutions. If an organization doesn't keep its brain inside a human skull (and I'm sure some do), it seems guaranteed to make bizarre decisions. Anthropomorphizing corporations can be a dangerous mistake (certainly has been for me more than once).
Actually my first thought upon reading that was "follow the improbability" -- be suspicious of elements of your world-model that seem particularly well optimized in some direction if you can't see the source of the optimization pressure.
The reasonable way to interpret this seems to be "don't trust something you don't understand/cannot predict." Not sure how seeing where it keeps its brain helps with that, though.
Telemarketers.
Talking to Clippy? As in, I don't.
Why not?
Never trust other thinking beings if you don't know the location of their intelligence center so that you can destroy it if necessary?
Never trust anyone unless you're talking in person? :p
Never trust another computational agent unless you can see its source code?
---Richard Jeffrey Newman
(emphasis added)
Sokka gets extra points for living in a world where magic undeniably exists, but still looking for rational explanations.
Though my other favourite quote from him, when he fails to explain something, is "That's avatar stuff, that doesn't count" (The Swamp). Not sure if that counts as compartmentalising or him acknowledging a lack of necessary expertise in a given area.
Great, I'd been trying to think of a quote from that show for this thread. Loved it.
Katara and Sokka's polar opposite reactions to the fortune teller both seem like good rationalist attitudes. Sokka's the sceptic in the quote. Katara corners her and asks her absolutely everything she can think of, just in case she's for real.
Oriana Fallaci as quoted in Rocket Men: The Epic Story of the First Men on the Moon, by Craig Nelson, which cites 'Fallici, Oriana If the Sun Dies. New York. Atheneum, 1967', seen on http://www.johndcook.com/blog/2010/12/11/after-two-days-id-turned-into-an-idiot/
Witching was turning out to be mostly hard work and really short on magic of the zap!-glingle-glingle-glingle variety. There was no school and nothing that was exactly like a lesson. But it wasn’t wise to try to learn witching all by yourself, especially if you had a natural talent. If you got it wrong, you could go from ignorant to cackling in a week ...
When you got right down to it, it was all about cackling. No one ever talked about this, though. Witches said things like “You can never be too old, too skinny, or too warty,” but they never mentioned the cackling. Not properly. They watched out for it, though, all the time.
It was all too easy to become a cackler. Most witches lived by themselves (cat optional) and might go for weeks without ever seeing another witch. In those times when people hated witches, they were often accused of talking to their cats. Of course they talked to their cats. After three weeks without an intelligent conversation that wasn’t about cows, you’d talk to the wall. And that was an early sign of cackling.
“Cackling,” to a witch, didn’t just mean nasty laughter. It meant your mind drifting away from its anchor. It meant you losing your grip. It meant loneliness and hard work and responsibility and other people’s problems driving you crazy a little bit at a time, each bit so small that you’d hardly notice it, until you thought that it was normal to stop washing and wear a kettle on your head. It meant you thinking that the fact you knew more than anyone else in your village made you better than them. It meant thinking that right and wrong were negotiable. And, in the end, it meant you “going to the dark,” as the witches said. That was a bad road. At the end of that road were poisoned spinning wheels and gingerbread cottages.
What stopped this was the habit of visiting. Witches visited other witches all the time, sometimes traveling quite a long way for a cup of tea and a bun. Partly this was for gossip, of course, because witches love gossip, especially if it’s more exciting than truthful. But mostly it was to keep an eye on one another.
Today Tiffany was visiting Granny Weatherwax, who was in the opinion of most witches (including Granny herself) the most powerful witch in the mountains. It was all very polite. No one said, “Not gone bats, then?” or “Certainly not! I’m as sharp as a spoon!” They didn’t need to. They understood what it was all about, so they talked of other things. But when she was in a mood, Granny Weatherwax could be hard work.
As soon as I saw "Witching was turning out to be..." in the "Recent Comments" bar, I said, "Hey, I bet that's a Pratchett quote".
kevinpet at Hacker News
— Carl E. Linderholm, "Mathematics Made Difficult"
(There are many more good quotes to be found in this book.)
“Complexity is a symptom of confusion, not a cause.” - Jeff Hawkins
Lies! I use complexity to cause confusion in my opponents all the time!
Arthur Schopenhauer
I hate that quote; it's completely backwards and depends entirely on selection effect.
Many ideas accepted as self-evident, both true and false, are first violently opposed. Many ideas violently opposed are first ridiculed. However, most ridiculed ideas stay ridiculed, and most violently opposed ideas stay violently opposed.
Similarly: If you win, before that they probably fought you. If they fight you, before that they probably laughed at you. And if they laugh at you, before that they probably ignored you.
True, but the quote itself doesn't contradict that. (Though, certainly, a lot of people do misuse quotes like that in the wrong direction to claim that (e.g.) they are right because they are being ridiculed, or that they will win because they are being ignored or laughed at.)
The only reason I have ever heard anyone say such a thing is when their ideas are not accepted as being self-evident (they haven't won) and they want to suggest that the opposition they are currently facing is simply one step in a natural progression towards success.
I completely agree. (Good counterquote from Carl Sagan: "The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright Brothers. But they also laughed at Bozo the Clown.") I was only pointing out that the quote itself isn't completely backwards, while agreeing that people mainly invoke it to make backwards claims like that.
...but even so, even if it's not taken to also be suggesting the obviously-fallacious converse, it may still not be correct. Not all truth is "violently opposed" before becoming accepted; not all truth is ridiculed before being taken seriously; and some truths never are accepted as self-evident (not that all truths should be; hindsight bias, etc.). So yeah, any way you look at it it's a pretty dumb quote. (It's a good thing Schopenhauer probably never said it anyway!)
With the caveat that P(Truth|observation of one or more stages) < P(observation of one or more stages|Truth)
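To put toy numbers on that caveat (all of them made up for illustration): by Bayes' theorem, if false fringe ideas get ridiculed and opposed about as often as true ones do, then observing the stages carries no evidence and the posterior stays at the (low) prior.

```python
# Hypothetical numbers for P(Truth | stages observed) via Bayes' theorem.
p_truth = 0.01                 # prior: fraction of fringe ideas that are correct
p_stages_given_truth = 0.90    # most correct fringe ideas get ridiculed/opposed
p_stages_given_false = 0.90    # ...but so do most incorrect ones

p_stages = (p_stages_given_truth * p_truth
            + p_stages_given_false * (1 - p_truth))
p_truth_given_stages = p_stages_given_truth * p_truth / p_stages

# Posterior equals the prior here, far below the 0.90 likelihood.
print(round(p_truth_given_stages, 6))  # → 0.01
```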
"Imagine being told you were made for a purpose, and that longevity and happiness are not in the list of design objectives." -David Eubanks, Life Artificial
Frankly it wasn't really that bad to be told that. After all, part of ensuring the design objectives were accomplished was making the thought "your purpose is to reproduce as much as possible" seem really really exciting.
A young boy walks into a barber shop and the barber whispers to his customer, “This is the dumbest kid in the world. Watch while I prove it to you.” The barber puts a dollar bill in one hand and two quarters in the other, then calls the boy over and asks, “Which do you want, son?” The boy takes the quarters and leaves. “What did I tell you?” said the barber. “That kid never learns!” Later, when the customer leaves, he sees the same young boy coming out of the ice cream store. “Hey, son! May I ask you a question? Why did you take the quarters instead of the dollar bill?” The boy licked his cone and replied, “Because the day I take the dollar, the game is over!”
Found on /r/funny
Theories have four stages of acceptance: (i) this is worthless nonsense; (ii) this is an interesting, but perverse, point of view; (iii) this is true, but quite unimportant; (iv) I always said so.
-- J.B.S. Haldane
"I don't think anyone should have to do anything educational in school if they don't want to." -- Cordelia's character, Buffy the Vampire Slayer
Montaigne
Strikes a blow against education as the source of reason, but also strikes a blow against reason requiring training. Ambivalent.
Or it means that people who acquire knowledge and reasoning skills practically are better than those who have been taught it by authority without needing to test it. (If we assume peasant to include intelligent and competent people in their fields like blacksmiths, officers etc; and education to be primarily rote learning of classics.)
"If you want to tell people the truth, make them laugh, otherwise they'll kill you." -Oscar Wilde
Seth Godin
Whatever happened to 'fake it till you make it'?
Duelling quotes!
Aristotle
Another translation
My experience in the circus bears this out.
To learn to juggle you have someone tell you what your mind and hands need to do when juggling, and you throw the balls in the direction you know they need to go, and you keep doing it (being corrected as often as you can find a better juggler) until you stop dropping them and can keep your pattern solid indefinitely.
To learn to handstand you get upside down and do whatever you can to find out what balancing feels like. You can't feel it unless you're doing it.
How cute. Also, on a related note:
He sees you when you're sleeping
He knows when you're awake
He knows if you've been bad or good
So be good for goodness sake
Oh, you better watch out
You better not cry
Better not pout
I'm telling you why
Santa Claus is coming to town
I.e. I think the quote is unhealthily idealistic: an exhortation to good behaviour by means of conveying a false model of reality.
HPMOR demonstrates:
1) People usually don't recognize faked genius as faked when they see it; they don't realize what's missing from "genius" characters in their fiction.
2) However, if you then show them real genius, they can recognize it as new, different, better, and important (though they may not realize what the added ingredient was).
This applies to the stereotypical fictional 'genius' when compared to an actually clever fictional character. Yet I'm not so sure it applies to gaining real-world reputation. In many fields it can be demonstrated that being recognized as a brilliant expert is not actually strongly correlated with domain performance but is instead determined by social factors.
If you want to get a reputation for being brilliant gain a solid baseline proficiency in an area and then actually become brilliant at politics. Or, of course, choose one of the few fields where objective performance is hard to hide from.
I knew you'd react to it that way.
I disagree.
Sure, but unless you registered it beforehand at somewhere like http://predictionbook.com/, I'm afraid it doesn't count. Sorry! Maybe next time.
You were thinking of me as you wrote that? I'm flattered. :)
Depends on what I was thinking. :-)
Surprisingly enough it doesn't.
--John Holt
Amusingly, the first time I read this I misread "scared" as "sacred." And it works either way.
And for an added twist I read it as "scarred"...
So what IS the best fighter?
The scared sacred scarred fighter, it would seem.
-- Laurens Van der Post
"Any fool can have an opinion; to know what one needs to know to have an opinion is wisdom; which is another way of saying that wisdom means knowing what questions to ask about knowledge."
--Neil Postman, "Building a Bridge to the 18th Century"
"The proper, wise balancing of one's whole life may depend upon the feasibility of a cup of tea at an unusual hour."
--Arnold Bennett, How to Live on 24 Hours a Day
Heh. I got to that line in the book and promptly tweeted it.
-George Bernard Shaw
"If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them." Henry David Thoreau
But be careful of writing your conclusion first!
Wait... I can read that two ways and they are both worth a quote - for entirely different reasons.
"Even though it is a path of 1,000 miles, you walk one step at a time. Consider this well." - Miyamoto Musashi
"they have attained [happiness] by realising that happiness does not spring from the procuring of physical or mental pleasure, but from the development of reason and the adjustment of conduct to principles.
Now, shall I blush, or will you?
Do not fear that I mean to thrust certain principles upon your attention. I care not (in this place) what your principles are. Your principles may induce you to believe in the righteousness of burglary. I don't mind. All I urge is that a life in which conduct does not fairly well accord with principles is a silly life; and that conduct can only be made to accord with principles by means of daily examination, reflection, and resolution. What leads to the permanent sorrowfulness of burglars is that their principles are contrary to burglary. If they genuinely believed in the moral excellence of burglary, penal servitude would simply mean so many happy years for them; all martyrs are happy, because their conduct and their principles agree.
As for reason (which makes conduct, and is not unconnected with the making of principles), it plays a far smaller part in our lives than we fancy. We are supposed to be reasonable but we are much more instinctive than reasonable. And the less we reflect, the less reasonable we shall be. The next time you get cross with the waiter because your steak is over-cooked, ask reason to step into the cabinet-room of your mind, and consult her. She will probably tell you that the waiter did not cook the steak, and had no control over the cooking of the steak; and that even if he alone was to blame, you accomplished nothing good by getting cross; you merely lost your dignity, looked a fool in the eyes of sensible men, and soured the waiter, while producing no effect whatever on the steak." - Arnold Bennett, How to Live on 24 Hours a Day.
That sounds all deep and wise... until you observe that it seems to be an arbitrary redefinition of 'happy', redefinition of 'genuinely believe in the moral excellence' or blatantly wrong as a matter of fact. The accuracy of the claim doesn't seem to be an important part of the intent, that is, it is bullshit.
Other parts of the excerpt are not bad - that part is just a point that people often try to take too far. The benefits of internal coherence and happiness are not tautological. Not even close.
"The second suggestion is to think as well as to read. I know people who read and read, and for all the good it does them they might just as well cut bread-and-butter. They take to reading as better men take to drink. They fly through the shires of literature on a motor-car, their sole object being motion. They will tell you how many books they have read in a year.
Unless you give at least forty-five minutes to careful, fatiguing reflection (it is an awful bore at first) upon what you are reading, your ninety minutes of a night are chiefly wasted. This means that your pace will be slow.
Never mind." - Arnold Bennett, How to Live on 24 Hours a Day.
"To lose one parent may be regarded as a misfortune... to lose both seems like carelessness." - Oscar Wilde (though he didn't mean it to refer to cryonics).
[Edit: correction, thanks ciphergoth]
Cryonics. Cryogenics is the science of making things cold.
Thanks for the explanation, wouldn't have thought about it from this angle without it. It's pretty good when read in this way. Upvoted.
-- Albert Einstein
"I don’t care to belong to any club that will have me as a member." -- Groucho Marx
This may be funny, but the actual context makes it a) less rationalist and b) a bit sad. There's some argument that he was actually alluding to the standard practice at the time of denying Jews access to the trendier clubs.
Interesting -- below I give the wikipedia take on it.
Groucho sent the quote to a club which he was a member of, that was founded by a Jew. I can see how one could infer an ironic reference to antisemitism from that. Interesting that the quote as often paraphrased drops the 'people like me' part.
It's funny, but NO NO NO! This is exactly why rationalists suck at forming socially cohesive groups! :)
That doesn't seem all that likely to me. It would seem somewhat more likely if the quote was 'will not'...
I rather immediately decided to see if this had been posted before. Google indexed this comment within 2 minutes.
Imagine my surprise when I once added a reference to a Wikipedia article and 20 seconds later googled it to see whether I had missed anything - and that WP article was prominent in the hits.
This site uses the google custom search (see sidebar), and it provides a feature for on-demand indexing. I suppose it shares the index it makes with google proper.
alongandunlikelystringtotesthypothesis
So far, this has been a failure -- the test string still isn't found by google, and the previous post doesn't even show up in the custom search yet.
Had to stop polling because google now thinks I'm a bot.
I found the posting easily enough by searching "google custom search lesswrong". Try your experiment again using a shorter string.
Google does seem to love this site! (I wonder if Google has specialised technology in place for handling reddit based sites.)
-- Kafka, The Trial
Mitch Hedberg on the distinction between labels and the things to which they are applied:
And more Mitch Hedberg, illustrating how redrawing the map won't alter the territory.
"When I start to wonder if black swans exist, I put down my copy of Mind and pick up my copy of Nature."
-- Ariadne (former columnist in New Scientist).
I pick up my spraypaint and find a swan. Soon I don't have to wonder anymore.
"When I hear the word 'culture' I reach for my yogurt"
Huh? I understand the original quote in its Nazi play context, but not this parody.
Yogurt is milk with a culture of bacteria.
OK, that makes it make a little more sense, but is there anything more to it than free association on the noun 'culture'? I dunno, something about consumerism or snacking or something?
I think it's a silly joke rather than a witty one. For what it's worth, I thought it was pretty funny.
I believe that silliness is actually more difficult to make work-- it's more delicately dependent on people's associations-- though that may simply mean that I'm better at wit.
This reminds me of "WWJD? JWRTFM!" [1] which I tried to interpret as a complex theological reference to the relationship between Jesus and the Christian bible, but which apparently is just a routine tech support joke.
[1] What Would Jesus Do? Jesus Would Read the F---ing Manual! [2]
[2] There was a recent request to keep overt profanity off LW. I have no idea whether cursing or veiled cursing is more annoying on the average.
Another reason that silliness is more difficult to do well is that it's a large search space compared to the target you're trying to hit — there are vastly more ways to be silly than ways to be simultaneously silly and actually funny, so most people attempting it end up just doing the former and thinking that it passes for humour. Example: almost all (alleged) comedy music.
(That applies even more so with absurdist humour. In that case, the search space is even larger — anything that doesn't make sense, pretty much — and, indeed, a lot of people first attempting absurd humour end up just being absurd but not humourous. I think this has something to do with positive bias — a person finds they enjoy some variety of absurd humour, and they decide they want to make their own, so they try to reverse-engineer the rule; they hypothesize that the rule is "it makes no sense" (or, within a particular genre, something more specific but still insufficient), and they observe that it fits the positive examples they know of, but fail to search for things that fit the hypothesized rule but which they don't find funny.)
FYI, regarding profanity - if you are talking about my comment about profanity, Alicorn et al. convinced me that my concerns did not have sufficient basis.
I thought it was just a silly joke referencing the original quote. I'm not sure if it's supposed to have any point deeper than that.
I dunno either. Maybe saying that the speaker is low-brow enough not to care about culture in the sense of art and only care about culture in the sense of food. But a low-brow person (stereotypically speaking) wouldn't know or care that yogurt is a culture of bacteria. So that doesn't really work.
I can imagine Dilbert speaking the quote credibly.
I pick up my copy of the Sibley Guide to Birds.
History of science is good stuff -- economists should try it some time. Once you start looking it's usually pretty easy to appreciate the wry maxim that scientific advances are usually named for the last person to "discover" them, not the first.
figleaf
Max Planck
p(double post | a quote is awesome and relevant) = 0.87
Which way do I need to update?
The quotes idea is pretty much wrong. And sadly sometimes used as an argument against life extension.
It took me a few minutes to see what you meant there. I read 'quotes' as a simple plural. Which leads to a parsing of your first sentence as a position of some merit purely by accident.
Really? Well, I suppose that would actually make sense according to a certain not-outright-insane value system.
It would be bad even if the premise were true. The pure idea of 'yeah, we have to let you all die because otherwise all the shiny new ideas would not prosper' is so far out of proportion. Most people do not even work in maintaining ideas, but do pretty mundane jobs, or moonlight as grandparents.
Over time I notice the occasional instance of ageism in young people. It is very easy, and in some cases harmful, to ignore the collected experience of others. It would be awesome to still have people around who lived through history. Instead, each generation to some degree forgets what came before.
It hurts me each time someone (my age or younger) claims he does not care about history at all, because -
The premise is true and generally accepted as such; a slightly more formal treatment was given by Kuhn, but it amounts roughly to "new scientists produce advancements, old scientists stick to dogma, the status of oldies is so powerful they have to die or retire for advancements to prosper."
Shortly after "Structure of Scientific Revolutions", there was a paradigm shift in geology: plate tectonics. Which went from fringe to scientific consensus in, as I understand it, well under a decade thanks to overwhelming evidence. Did unusually many geologists die that decade?
Kuhn did not say that. His notion of paradigm advancement had a lot to do with a lot of other things. His canonical example of paradigm change (the Copernican revolution) had people actively changing their minds even in his narrative. And there are a lot of problems with his story of how things went, see for example this essay.
Furthermore, in many other shifts where new theories came into play, the overall trend happened with many old people accepting the new theory. Thus for example, Einstein's special relativity was accepted by many older physicists.
...While Einstein himself rejected quantum mechanics!
(And, yes, I'm aware of the philosophical glitches in the Copenhagen Interpretation. But Einstein refused to accept QM on principle, and I'm not sure any evidence could have convinced him, which is rather poor form for one of the greatest thinkers of all time.)
This is probably wrong. If Einstein were transported to today, we could almost certainly convince him of the correctness of quantum mechanics. Not only that, the guy did a lot of important quantum mechanics research, which should suggest that it's not as simple as "he rejected it." Wikipedia says that he initially thought matrix mechanics was wrong, but became convinced of it when it was shown to be equivalent to the Schroedinger formulation.
You are probably right on with this comment, but I think I may have misunderstood you on one point. Did you mean "it's not as simple as 'he rejected it.' "? The way it is now looks like it contradicts the rest of the post.
Also, I recall that Einstein did change his mind on at least one important point, the existence of the "cosmological constant." So that implies he wasn't especially closed-minded.
I hope there have been some changes in the way scientists work since the 1960s. Also I hope that it depends on the specific field.
As a conclusion of the initial argument one could add time limits to tenure, but please let's not argue for killing off scientists just for being too old.
I completely forgot about a very important point. When rejuvenation actually works, it might also make the brain work better and younger. If it is true that great scientists do their most important work before reaching age X, then after rejuvenation they might be able to do even more, with their good-as-new brain plus more experience. Then it would not be a matter of getting rid of the holders of old ideas, but of finding a way to deal with people who have an unreachable head start that cannot be made up. It would be good for society to keep experienced minds at work.
Nice way to put it! To phrase it another way:
To argue in favor of mortality because of fears of entrenched conservatives is to demand capital punishment where term limits would suffice.
Thank you!
Try to get someone to put it in these words. Usually no one demands the killing of professors, or even admits to liking that old people die from neglect.
If someone boldly states that he wants all these old people to die to free up space, or whatever, then you have probably found a person you do not actually want to have a discussion with.
No real need to kill them off, as long as new ones are being born. Unanimity is nice, but simple majorities can usually get the job done.
As for your time limits idea, I might go further, and send everybody back to school to get a new PhD every 100 years: in a new field, at a different school, in a different language.
You're only going to give me 100 years to study mathematics, uninterrupted?
B-b-but! That's nowhere near enough time!
I am happy to see how it will turn out.
This might be the answer you are looking for.
And in middle aged people and old people too. :)
because - there are not enough elves and wizardesses in that genre of story?
No. It is more a case of 'history is old stuff that happened a long time ago, is done and over with, and does not matter any more'. Why care about the past when so much is happening right now?
I do not think the way history is dealt with is that much better; to some degree, visiting historic museums or sites is just signaling.
That is basically the concept behind 'costly signalling', that people will pay time and money to visit a museum in order to signal, and in doing so accidentally learn something about history.
thx for the reminder
Yes.
Oops. To redeem my tarnished honor, I propose an algorithmic solution to the duplicate quote problem: a full list of quotes indexed by author (of the quote). Checking whether a quote has already been posted would then be a fast operation.
The hard part would then be making that list algorithmically. An easier algorithmic method would be to do approximate string matches with previous quote threads, using something like the Smith-Waterman algorithm for pairwise local sequence alignment. This is what biologists do when they have a gene sequence and want to know if something like it is already in the databases, and there's no reason why the method shouldn't also apply just as well to English text.
The way this would look to users is just a text box where you paste in the quote, and it'll tell you if the quote has been posted before. Even easier to use than a full list of quotes.
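A rough sketch of how such a checker might work, using word-level Smith-Waterman alignment. The function names, the scoring parameters, and the 0.7 similarity threshold are all illustrative choices, not an existing LW feature:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between two token lists (no traceback)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def looks_like_duplicate(new_quote, old_quotes, threshold=0.7):
    """Flag new_quote if it aligns strongly with any previously posted quote."""
    words = new_quote.lower().split()
    for old in old_quotes:
        old_words = old.lower().split()
        # Normalize by the best possible score for the shorter quote.
        max_score = 2 * min(len(words), len(old_words))
        if max_score and smith_waterman(words, old_words) >= threshold * max_score:
            return True
    return False
```

Aligning on words rather than characters keeps the matrix small and tolerates the small rewordings that defeat an exact Google search.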
Your honour remains intact! I predicted that the quote had been used, based primarily on how much I like it. Google didn't find it in a quotes thread. I suppose that would mean my honour is tarnished. How much honour does one lose by assigning greater than 0.5 probability to something that turns out to be incorrect? Is there some kind of algorithm for that? ;)
You add the log of the probability you gave for what happened, so add ln(1-0.87) = -2.04 honor. Unfortunately, there's no way to make it go up, and it's pretty much guaranteed to go down a lot.
Just don't assign anything a probability of 0. If you're wrong, you lose infinite honor.
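As a sketch, the rule described above is just the natural log of the probability you assigned to the actual outcome (`honor_change` is a hypothetical helper, not any real LW mechanism):

```python
import math

def honor_change(p_outcome):
    """Log score: add ln of the probability you assigned to what actually
    happened. Always <= 0, and negative infinity if you said p = 0."""
    return math.log(p_outcome)

# Assigning 0.87 to "this quote is a duplicate" and being wrong means you
# gave 1 - 0.87 = 0.13 to what happened: about -2.04 honor.
print(honor_change(1 - 0.87))
```

A nice property of this rule is additivity: scoring a joint prediction is the same as scoring its pieces, since ln P(A)P(B|A) = ln P(A) + ln P(B|A).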
I like it, but that 'no way to make it go up' is a problem. It feels like we should have some sort of logarithmic representation of honour too, allowing for increasing honour if you get something right, mostly when your honour is currently low.
To what extent do we want 'honour' to be a measure of calibration and to what extent a measure of predictive power?
A naive suggestion could be to take log(x) - log(p), where p is the probability given by MAXENT. That is, honor is how much better you do than the "completely uninformed" maximal entropy predictor. This would enable better-than-average predictors to make their honor go up.
This of course has the shortcoming that maximal entropy may not be practical to actually calculate in many situations. It also may or may not produce incentives to strategically make certain predictions and not others. I haven't analysed that very much.
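For the simple case where the n outcomes have no known structure, the maximal-entropy predictor is just the uniform distribution, and the suggestion above can be sketched directly (the function name is illustrative):

```python
import math

def honor_vs_maxent(p_assigned, n_outcomes):
    """log(x) - log(p): your log score minus that of the maximum-entropy
    predictor, which assigns 1/n to each of n possible outcomes.
    Positive when you beat the uninformed baseline, negative when not."""
    return math.log(p_assigned) - math.log(1.0 / n_outcomes)

# Binary predictions, so the uninformed baseline gives 0.5 to each outcome:
print(honor_vs_maxent(0.9, 2))   # about +0.59 (better than the baseline)
print(honor_vs_maxent(0.13, 2))  # about -1.35 (worse than the baseline)
```

This makes honor go up exactly when you do better than chance, which addresses the "no way to make it go up" complaint; the hard part, as noted, is computing the maxent baseline in less trivial situations.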
I can't remember the Post I got that from. It wasn't talking about honor.
This is the only possible system in which you're rewarded most for giving the answers accurately, and your honor remains the same regardless of how you count it. For example, predicting A and B loses the same honor as predicting A and predicting B given A.
Technically, you can use a different log base, but that just amounts to a scaling factor.
I agree; the typical human brain balks and runs away when faced with a scale of merit whose max-point is 0.
Yes.
In other words, my honor as an epistemic rationalist should be a mix of calibration and predictive power. An amusing but arbitrary formula might be just to give yourself 2x honor when your binary prediction with probability x comes true and to dock yourself ln (1-x) honor when it doesn't. If you make 20 predictions each at p = 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, and 0.95 for a total of 200 predictions a day and you are perfectly calibrated, you would expect to lose about 3.4 honor each day.
There's gotta be a way to fix this so that a perfectly calibrated person would gain a tiny amount of honor each day rather than lose it. It might not be elegant, though. Got any ideas?
Zero does seem more appropriate either as a minimum or a midpoint. If everything is going to be negative then flip it around and say 'less is good'! But the main problem I have with only losing honor based on making predictions is that it essentially rewards never saying anything of importance that could be contradicted. That sounds a bit too much like real life for some reason. ;)
The tricky part is not so much making up the equations but in determining what criteria to rate the scale against. We would inevitably be injecting something arbitrary.