Rationality quotes: April 2010
This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
--Sarpedon, The Iliad, as quoted in Eric Drexler's Engines of Creation
PartiallyClips
"Wow! That seems…incredibly hard to believe. I’m not saying that just because it sounds crazy means its not true. Plenty of crazy things are true. But this claim is based on the results of just one study, conducted with the help of a biased author." --Jason Swett (my older brother)
Shai Simonson and Fernando Gouvea, "How to Read Mathematics"
Nassim Taleb
-- The Gods, XKCD
"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell
I prefer Yeats' phrasing:
By the way, I am uncertain as to how to think about the quantification (number / proportion / "ballpark estimate") of real people who fit the concept of Russell's "wiser people", or Yeats' "best".
How far off would I be if I were to estimate the quantity of such wiser and better people as "less than one third of the population of any given tribe" ?
Is anyone brave enough to say it should be thought of as a drastically smaller quantity?
Is anyone brave enough to realize how much they themselves actually fit the description for the "fools and fanatics" or "worst" -- and then, after realizing it, actually become the better?
Or am I perhaps better off to not pick at the idea?
I'm quite comfortable ballparking it at < 5%.
That is about my impression too. I'm less sure about the 'worst'. I'd go with up to a third but perhaps symmetry is intended.
Philip K. Dick
Welcome to Less Wrong, though! Introduce yourself.
Duplicate.
G. K. Chesterton
That quote had already been posted by Eliezer Yudkowsky on 2/2/2010.
-- Gautama Buddha
I like to point out that spreading this quote is an example of violating it: Buddha never said that. I'm not sure who did originally write it, but it's not found in any Buddhist primary source. "Do not believe in anything simply because it is spoken and rumored by many!"
I've heard it might be a rough paraphrase of a quote from the Kalama Sutta, but in its original form, it would not qualify as a "rationality quote"; it's more a defense of belief in belief, advising people to accept things as true based on whether believing it is true tends to increase one's happiness.
Edit: See RichardKennaway's reply; he is correct about this one. I think I was thinking of a different quote along similar lines.
What is a Buddhist primary source? None of the discourses were written down until some centuries after the Buddha's time. The discourses that we have do themselves exist and whatever their provenance before the earliest extant documents, they are part of the canon of Buddhism. The canon has accreted layers over the centuries, but the Kalama Sutta is part of the earliest layer, the Tripitaka.
You've heard? That it might be? :-)
It is readily available online in English translation. It attributes these words directly to the Buddha:
and in another translation:
If I had the time, I'd be tempted to annotate the passage with LessWrong links.
ETA: For the second translation, the corresponding paragraph is actually the one preceding the one I quoted. The sutta in fact contains three paragraphs listing these ten faulty sources of knowledge. Buddhist scriptures are full of repetitions and lists, probably to assist memorisation.
ETA2: Rationalist version: Do not rest on weak Bayesian evidence, but go forth and collect strong.
Great catch. Upvoted.
I actually don't think this is right though. I'm pretty sure the original form is about the importance of personal knowledge from direct experience. I think the wikipedia article makes this clear, actually. I suppose you're taking your reading from:
But the emphasis here should be on "when you yourselves know", not "these things lead to benefit and happiness". Keep in mind the kind of teachings being addressed are often strategies for happiness so it makes sense to be concerned with whether or not a teaching really does increase happiness.
I don't see why we can't take it as an injunction to trust only experiment and observation. It seems about right to me.
(ETA: Except of course he's talking about meditation not experiment and ignores self-deception, placebo effect, brain diversity and the all important intersubjective confirmation, but I'll take what I can get from the 5th century B.C.E.)
not trying to be glib here, but:
"• Do not quote yourself." -4wnoise
...what?
Well, I'm being a little postmodern, but how often have you heard people refer to themselves? Not quite 'I told you so' but in a similar vein. Pundits do this a lot: "Well, if you recall, last year I said xy&z, and look what happened." It's the fallacy related to the fact that, of all the possible outcomes, at least ONE person will probably be right. But that fact is purely casual/trivial. I just found it poetic that one of the rules of the thread is 'do not quote yourself'... Clearly that's an issue that not all people recognize.
If we were to have designated threads for self-quoting, I would imagine there would have to be some effective restrictions to keep the quality high - I would imagine a time limit that would have to expire, for example.
That is insightful and all, but now falls under:
;)
fair enough. Rats.
"Like any dogma, it is honored far more in the breach than in the observance."
-Benoit Mandelbrot
As is appropriate when the dogma in question is trivially observed yet distracting when breached. I would expect dogma to be encouraged with honor for observance more when the reverse is the case. For example, when a dogma pertains to a particularly noble or self-sacrificial action that is far rarer than the 'null' breach.
-Gilgamesh Wulfenbach
Voltaire
Edit: All right, then, here's another one:
Robert Heinlein
--Aristotle
Repeat.
Economists did something even better than predict the crisis. We correctly predicted that we would not be able to predict it.
-William Easterly
How is that better? That doesn't make sense. Predicting that you won't be able to predict something is equivalent to a maximum entropy probability distribution over the outcome. That's a state of zero knowledge. What is -William Easterly attempting to establish with -William Easterly's claim?
Also, what crisis?
I had never previously considered the possibility of James Callaghan uploading into an internet-connected paperclip maximizer, but I guess there's a first time for everything.
The great paperclip crash of 07
This happened much earlier than 2007. The advent of electronic databases and office networks surely led to rough times in the paperclip industry. Some analysts say that if we had paid more attention to the lesson of the '98 Paperclip Crisis we could have avoided all the problems we have today.
He is attempting to establish that William Easterly and other mainstream academic economists do not suck at their jobs and that modern macroeconomics has not been thoroughly discredited by the recent (ongoing) financial crisis. He attempts to do this by claiming that their failure to predict anything correctly is not an indictment of their intellectually bankrupt field but rather a ringing endorsement. In so doing he conveniently ignores those economists and investors who correctly predicted the crisis and explained in detail what was going to happen and why it was going to happen in the years before the crisis.
There's always someone predicting a financial crisis, and when it inevitably happens (and one will eventually come), someone probably predicted it. Was there anyone who predicted the crisis based on reliable methods that we could use to predict another crisis?
Easterly does have a point though - there are two ways to predict a crisis. Infer the implicit market prediction, or predict it yourself. The latter is extremely hard because as soon as you find some reliable method of predicting financial crises and tell the world about it, market prices will change to reflect this knowledge. On the other hand, as soon as the market knows about the crisis, the crisis is beginning (if people know the price of X is going to fall soon, then the price of X will fall now as they all sell it). So in some sense, a crisis has to come out of the blue.
It sounds like Easterly was being sarcastic - taking a jab at macroeconomists who DO try to predict crises.
The Greatest Trade Ever describes how John Paulson's hedge fund identified the coming sub-prime collapse and made $15 billion betting on it. It also covers several other investors who identified the same issues and made money, though most were not as lucky/smart with their timing as Paulson.
The crisis also looks a lot like a classic example of a credit crunch as described by Austrian business cycle theory. Peter Schiff is one of the best known commentators who predicted the broad outlines of the crisis before it really hit.
Now, I'm not saying that Austrian economists have all the answers or that there isn't some element of 'even a stopped clock tells the right time twice a day' with the predictions of disaster panning out but there were people out there telling a coherent story about why the economy faced major problems and how the crisis would play out. Some of them were quite accurate on the timing as well. You wouldn't know it from the pronouncements of most economists, bankers and politicians because they look much better if they can proclaim that 'nobody' saw or could have seen the problems coming. I'm a lot more impressed with the likes of Andrew Lahde bowing out with a 'f*ck you' and millions of dollars in profits from betting on disaster and being right than by William Easterly smugly proclaiming vindication of mainstream economics when his profession largely failed at making predictions or even understanding what was going on in the real economy.
I think the ABC theory, at least in the form that I understand it, is onto something, but I don't think it's quite right. I think there should be less attention on the fed and more attention on the decision making of investors. And someone should just mathematicize the damn thing already.
Some Austrian-influenced economists (but not Austrian) are convinced that the housing market is doomed to bubbles due to its structure and the structure of the human mind. Basically, once prices start rising for an extended period of time, the human mind treats these prices as if they will continue rising EVEN IF they know that the prices are way higher than they should be, given the actual value of the asset. Many experiments have borne this out. Here's a link.
ETA: Just to clarify a bit, I do think it's possible - though difficult - to predict that a crisis is going to happen, and even have a decent idea of the magnitude. The timing, on the other hand, I think is nearly impossible to get right with any precision.
If it is in fact true that a crisis is unpredictable (which it might be, I don't know), then knowledge of this fact is more valuable than an accidental prediction of one crisis.
At least so I understand W. Easterly.
Sir Arthur Conan Doyle, The Sign of Four, A Scandal in Bohemia
We must be careful who we let define what is sustainable.
Jason Stoddard in Shine, an anthology of near-future optimistic science fiction.
Do not sacrifice truth on the altar of comfort
-- Wayne Gretzky (but I've seen it attributed to Michael Jordan and Joe Ledbetter, HS coach)
Except that actually isn't right. You miss exactly 0% of the shots you don't take. And I'm not just being pedantic. In basketball this attitude can cost teams games. Any game of possessions (of which basketball is one) is won with efficiency. Shooting the ball means there is some chance of scoring but also some chance of missing and the ball being rebounded by the other team. When the latter happens you've lost your opportunity to score and you will never get it back. So the key to winning is to take high efficiency shots-- this means shots that are likely to go in and shots that are worth a lot of points. Now not shooting does increase the likelihood of a turnover and one can't go on not shooting forever. Moreover, quick shots before the defense is ready can often be very efficient shots. But the key is that the game is not about scoring a lot of points-- it's about scoring a lot of points efficiently. And to get good at that means cultivating a skill of waiting for the best shot, creating a better shot or deferring to more efficient teammates.
It might be that these aren't concerns in hockey: if all shots are more or less equally efficient or if a lot of points are scored off offensive rebounds, "keep shooting it" might be a good message. I don't know a lot about the sport. But even hockey players aren't shooting from the other side of the rink.
Outside sports there are occasions where 'missing' is worse than 'not shooting' and if the chances of 'missing' are high enough or the cost of 'missing' sufficiently high it can be a really bad idea to 'shoot'.
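The efficiency argument above can be made concrete with a toy expected-value calculation. The probabilities below are invented for illustration, not real basketball statistics:

```python
# Toy expected-points-per-possession comparison for different shot choices.
# All probabilities are made-up numbers for illustration only.
shots = {
    "contested long two": (0.35, 2),      # (probability the shot goes in, point value)
    "open three-pointer": (0.38, 3),
    "layup after extra pass": (0.60, 2),
}

for name, (p_make, value) in shots.items():
    ev = p_make * value  # expected points if this shot is taken
    print(f"{name}: {ev:.2f} expected points per attempt")
```

On these invented numbers, the quick contested shot does score sometimes, but it is the worst choice per possession (0.70 versus 1.14 and 1.20 expected points), which is the reply's point: the game rewards efficiency per possession, not shot volume.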
Seen on bumper sticker, via ^zhurnaly.
"It is the mark of an educated mind to be able to entertain a thought without accepting it." --- Aristotle
This is more important than it looks. Most people's beliefs are just recorded memes that bubbled up from their subconscious when someone pressed them for their beliefs. They wonder what they believe, their mind regurgitates some chatter they heard somewhere, and they go, "Aha, that must be what I believe." Unless they take special countermeasures, humans are extremely suggestible.
"Face the facts. Then act on them. It's the only mantra I know, the only doctrine I have to offer you, and it's harder than you'd think, because I swear humans seem hardwired to do anything but. Face the facts. Don't pray, don't wish, don't buy into centuries-old dogma and dead rhetoric. Don't give in to your conditioning or your visions or your fucked-up sense of... whatever. FACE THE FACTS. THEN act."
--- Quellcrist Falconer, speech before the assault on Millsport. (Richard Morgan, Broken Angels)
"It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." ~William Kingdon Clifford
This is the quote that got me thinking about rationality as something other than "a word you use to describe things you believe so that you can deride those who disagree with you."
One of the most insidious sources of confusion, I find, is the distinction between the meaning of a word and its most frequent uses. It ties into the whole "Applause Lights" phenomenon, particularly "Fake Norms".
P.S. Belatedly: Welcome to Less Wrong! Feel free to introduce yourself in that thread.
-Louis Aragon
--Sir Edmund Hillary (1919-2008) - New Zealand Mountaineer and First man to Climb Mt. Everest
Interesting in light of pjeby's distinction between "you" and "yourself."
-David Stevens
--via The Economist, "a saying of statisticians".
--von Neumann
I like it, but do you have an issue number?
Here is the piece I got it from:
http://www.economist.com/specialreports/displaystory.cfm?story_id=15557465
"A different game: Information is transforming traditional businesses", Feb 25th 2010 - thanks!
My father's been saying that as long as I can remember; he hasn't taken a statistics class since '82.
Never mind, then!
-- Clay Shirky
http://friendlyatheist.com/2008/02/29/complete-the-atheist-joke-1/
My initial response was to chuckle, but when my analytical capacities kicked in a moment later I was disappointed.
If his initial assumption was that he was walking into a bar, does that make him an atheist in this metaphor? Substitute "walked into a bar" with "believed there is a god", the thing I assume it is a metaphor for. You will see it makes no sense.
Many atheists were formerly theists.
Still, I suppose it might have been better as "A scientist walked into what he thought was a bar, but seeing no bartender, barstools, or drinks, he revised his initial assumption and decided he only walked into a room."
I think it makes sense, as a poke at atheists.
Think about it this way. You walk into a bar, and you see no bartender. In your mind, you say "anything that is a bar will have a bartender. No bartender, not a bar." Of course, the best thing to do before revising your assumptions is to wait for a bartender. Maybe he/she is in the bathroom.
Similarly, if you claim there is no evidence of god that I've seen in my lifetime, you are using the wrong measure. Why should god (if there is one) make itself obvious during the short period that is a human lifetime?
This is almost an "irrationality quote" instead of a rationality quote, but still enlightening.
I was with you up until the "similarly". After that you start privileging the hypothesis - you should expect a god to make itself obvious during a human lifetime, by any description of a god ever proposed in history.
I'm not sure I see how I'm privileging the hypothesis. Not saying that I'm not, but if you can explain how, I'd appreciate it.
Aside from that, I think you are using "god" to mean any of the gods discussed by any popular religion. By this definition, I'd probably agree with you.
I was using the word "god" in a much more general sense... not sure I can define it though, probably something similar to: any "being" that is omnipotent and omniscient, or maybe: any "being" that created reality as we know it. In either definition, there is not really a reason to expect god to make itself obvious to us on any timescale that we consider reasonable. There is no reason to believe that we are special enough that we'd get that kind of treatment.
There is no reason to propose such a being - privileging the hypothesis is when you consider a hypothesis before any evidence has forced you to raise that hypothesis to the level of consideration.
Unless you have a mountain of evidence (and I'm guessing it'll have to be cosmological to support a god that hasn't visibly intervened in the world) already driving you to argue that there might be a god, don't bother proposing the possibility.
Ah, I see what you are saying. Thanks for the explanation. And you are indeed correct.
--- Mark Liberman
"In the animal kingdom, the rule is, eat or be eaten; in the human kingdom, define or be defined."
Thomas Szasz
Are the winners the only ones actually writing the history? We need to disabuse ourselves of this habit of saying things because they sound good. ----- Ta-Nehisi Coates
Coates runs a popular culture, black issues, and history blog with a very strong rationalist approach.
Deleted as a repeat.
A repeat, but a good one.
William Thomson, Lord Kelvin
One I got while reading Jaynes's Probability Theory recently:
-- Laplace
"If A=B and B=C and C=D, then do not get a job proofreading." - Quid's Theorem
The wizard who reads a thousand books is powerful. The wizard who memorizes a thousand books is insane.
Bertrand Russell
Note: phaedrus has provided a citation to "The Philosophy of Logical Atomism", noting that this quote is only part of the sentence.
Oooh, thanks to RobinZ and phaedrus! I hadn't seen the second part, and didn't have the citation.
Thanks RobinZ, The full quote is "Everything is vague to a degree you do not realize till you have tried to make it precise, and everything precise is so remote from everything that we normally think, that you cannot for a moment suppose that is what we really mean when we say what we think."
But the partial quote is much more crisp.
Alfred North Whitehead
Freeman Dyson
Claude Lévi-Strauss
-- Ludwig von Mises, Epistemological Problems of Economics
This reminds me of B. F. Skinner's criticism of William James
Before he can add something of substance to the discussion of the epistemological problems of economics, Ludwig von Mises must look back in time, to previous events, and offer them as the explanation of why we want or desire things and why we also call those things agreeable or good.
I think Mises's point is rather that concepts like "good," "bad," "evil," "right," "wrong," "ought to" and "rights" all reduce back down to variations on "I desire it"/"It brings me pleasure" and the opposite. In other words, all ethical systems are dressed up (subjective) consequentialism and they only appear otherwise due to semantic confusion.
Imagine that you got no satisfaction at all from bringing pleasure to others, but you did it anyway. What would be the reason?
The response to that would be that you only do things that give others pleasure because the feeling of helping others is pleasurable to you or because you expect something in return, and that if neither of those were the case, you wouldn't do it. (I don't necessarily agree with that — I'm pretty sure I don't — but I do believe that's how they'd reduce it.)
Dale Carnegie, How to Win Friends and Influence People
Death by Löb's Theorem to this quote.
Rephrase that and it sounds nonsensical: "If you can't outperform the stock market, then how can you be sure of anything?" I think Carnegie was just looking for a glib rationalization for his advice to avoid contradicting people whom you want to like you.
What if I am right 9 times out of 10 when I say I am 90% sure of something, but I am never or very rarely more than 50% sure of propositions of the form "This stock's price will go up/down, over a relevant time frame"?
A side note: All three of the quotes I've posted are from Binmore's Rational Decisions, which I'm about a third of the way through and have found very interesting. It makes a great companion to Less Wrong -- and it's also quite quotable in spots.
Wow - I think I felt real physical pain in my eyes as I read that one.
"All things end badly - or else they wouldn't end"
Almost all relationships end in unhappiness or death. Or unhappiness leading to death.
Typo-hunt: should read "abandoning arithMetic" (without the capital of course)
Fixed.
I'm a big fan of Ken Binmore, and this quote captures a lot of my dissatisfaction with LW's directions of inquiry. For example, it's more or less taken for granted here that future superintelligent AIs should cooperate on the Prisoner's Dilemma, so some of us set out to create a general theory of "superintelligent AIs" (including ones built by aliens, etc.) that would give us the answer we like.
Would it be correct to say you mean "should" in the wishful thinking sense of "we really want this outcome," rather than something normative or probabilistic?
Good question. The answer's yes, but now I'm wondering whether we really should expect alien-built AIs to be cooperators. I know Eliezer thinks we should.
That is not the impression I got from the story.
The baby-eaters were cooperators, yes; they were also stated to be relatively similar to humanity except for their unfortunate tendency to eat preteens.
The other ones, though? I didn't see them do anything obviously cooperative, but I did see a few events that'd argue against it. The overall impression I got was that we really can't be sure, except that it might be unlikely for both sides of a contact to come out unscathed.
-- Marcel Proust, In Search of Lost Time
--Finite and Infinite Games
The author was transformed by reading "Behavior: The Control of Perception" (1973) and began a research program whose early years(?) seem to have been summarized in "Mind Readings: Experimental Studies of Purpose" (1992).
This has been discussed here before.
The problem is that Marken's models don't actually have predictive power; he just fits a function to the data using as many free parameters as he has data points, and marvels at the perfect fit thus derived. One doesn't need to think highly of the current state of psychology to realize that Marken is a crank, and that any recognition Marken has in the PCT community is a sign that they are bereft of actual experimental support if not basic scientific reasoning skills.
The interaction you linked to was interesting. I didn't realize there was already a back story within this community with positions staked out and such. I offered the quote because it seemed like a beautifully mathematical objection to existing work that was "up this community's alley", but I haven't worked through the actual mathematics or experiments themselves. For example, I hadn't purchased either of the books that I linked to, nor have I studied them - I simply assigned them high EV given the quality of the author's text.
Your comments, in the interaction you linked to, seem like good arguments against Marken's theory (specifically, the claim that his work involves more free parameters than data points appears to be a good argument against the theory, if true). However, in all of that back and forth, I noticed many links to "lesswrong heuristics" but I didn't notice any outside links to actual research papers detailing methodology.
I'm substantially more ignorant on the subject than either you or your previous interlocutor, and it took me a while to even understand that "PCT" was the theory Marken supports, that you two were taking pro and con positions on it, and that your text was mostly between each other with a substantial amount of knowledge assumed. I wish you had both linked more, because it would have been educational.
That said, I'd like to see such links if you know of any. If I can swiftly dismiss Marken's work without further thought, that would be a very efficient use of time. Can you direct me to the links showing an example of his experimental work so I can verify that his research program is crippled by mathematical overfitting? The best I could find was Perceptual organization of behavior: A hierarchical control model of coordinated action but it was pay-walled so I can't access it now to look into it myself.
The paper discussed in that interaction can be found here without a paywall.
As stated then (the conversation can be taken up from about here if not earlier), I think it's quite likely that simple control circuits can be found in facets of motor response; but Powers, Marken and Eby had been talking about control theory in cognitive domains (like akrasia) as if they could isolate simple circuits there, and my search for any kind of evidence turned up only this sort of embarrassing tripe.
And really, the math here is important— it's not a matter of disagreeing with interpretation, it's the plain fact that a generic model with 4 free parameters can be tweaked to precisely fit 4 data points, and it's clear from the paper that this is what Marken did. You simply need more data points than free parameters in order to generate any evidence in favor of a model; the fact that he never mentioned this, and instead crowed about the impressive fit of his model to the data, indicates either gross ignorance of how mathematical models work, or outright intent to mislead (coupled with an utterly incompetent peer review process.)
The gauntlet remains thrown, if anyone wants to point to an experimental study which demonstrates a discernible control circuit in a cognitive task (apart from tasks, like tracking a dot, which have an obvious motor component— in these, I do expect control circuits to be a good model for certain behavior). I would be surprised, but it would suffice to give credence to the theory in my eyes.
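The free-parameters point is easy to demonstrate numerically. The sketch below is not Marken's actual model, just generic polynomial interpolation, but it shows the same pathology: a model with as many free parameters as data points fits any data exactly, so a perfect fit carries no evidential weight.

```python
# Lagrange interpolation: with as many parameters as data points,
# the "model" reproduces the data exactly, no matter what the data are.
def lagrange(points, x):
    """Evaluate the unique degree-(n-1) polynomial through n points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Four arbitrary data points -- even pure noise is fit perfectly.
data = [(0.0, 1.3), (1.0, -0.7), (2.0, 2.9), (3.0, 0.2)]
for xi, yi in data:
    assert abs(lagrange(data, xi) - yi) < 1e-9  # zero residual at every point

# The perfect fit is guaranteed by the parameter count alone, so it is
# zero evidence for the model; predictions at new points are unconstrained.
print(lagrange(data, 4.0))  # about -20.7, an extrapolation the data never supported
```

The fit is perfect by construction, which is exactly why "look how well the model fits" is not an argument until the model is tested on data it was not tuned to.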
Through judicious abuse of my employer's resources, I have acquired a copy of the PDF - PM me an email address and I'll send it to you.
Thanks Robin! I have read this paper now, but it still doesn't seem to address the arguments that orthonormal linked to :-/
The 1986 study appeared to me to be basically well done, offering a fascinating paradigm that could be extended in many directions for further research with a reasonably strong result by itself. It basically confirmed the positive claims of Marken that hierarchical arrangements of negative feedback loop systems (designed, with a handful of optimized parameters, and then left alone) can roughly reproduce trained human behavior in a variety of dynamically changing toy domains, supporting the contention that whatever is operating in the human nervous system after a period of training is doing roughly the same effective computations as the model.
In the text, Marken addresses the "motor control literature" as making claims whose refutation was partly the purpose of his experiments.
It required a little more googling to figure out the claims he was trying to reject... but basically he seems to be objecting to the claim that mammals work as open loop controllers (that is, generating action signals based on an internal model of the world that are sent into the world with no verification loop or secondary corrections). This claim appears to have been founded mostly on things called "deafferentiation experiments"... which turned out to be aesthetically horrifying and also turned out to not actually prove the general case of "open loop" claims.
The most infamous of these experiments, (warning - kind of disturbing pictures) was basically:
The ability of monkeys mutilated in this fashion to (eventually?) move around purposively was taken as evidence that there was not a hierarchically arranged set of negative feedback motor control systems implemented in their nervous system. In practice (after the scientist was arrested for animal cruelty, PETA's request for custody was denied, and the monkeys were brain-scanned, euthanized, and autopsied) it turned out that the monkeys' brains had been massively re-wired by the experience. The practical upshot of the experiments seems to have been primarily to serve as dramatic evidence of adult primate brain plasticity (which they didn't believe in, back then?) rather than as confirmation of a negative feedback theory of motor control. (Probably there's more to it than that, but this is my first-draft understanding.)
Marken dismisses these experiments in part by pointing out the difficulty of preventing negative feedback control processes if there are many sub-controllers that can use measurements partially correlated to the measure being optimized, and concludes with falsification examples and criteria for the general theory and the particular model that are not subject to this objection:
In short, I'm still impressed by Marken. His reasoning seems clean; his experiment, robust; his criticisms of motor-control and trait-theory, well reasoned. My very broad impression is that there may be an over-arching background argument here between "accurate model in the head producing aim and fire success" versus "incremental goal accomplishment via well tuned reflexes and continuous effort"? If that back story is operative then I guess my posterior probability was just pushed a little more in the direction of "reflexes and effort" over "models and plans".
If there is some trick still lurking here, orthonormal, that you could point me to and spell out in detail rather than by reference to assertions and hand-waving rationality heuristics, that would be appreciated. The more time I spend on Marken's work, the more I find to appreciate. At this point, I've spent a day or two on this and I think the burden of proof is on you. If you take it up successfully I would be in your debt for rubbing a bit of sand out of my eyes :-)
Jennifer, here is where orthonormal seems to say where exactly Marken overfit the data.
(Orthonormal might not have seen your comment because you didn't post it in reply to one of his/hers.)
[ETA: Nevermind. Looks like the date of orthonormal's last comment is after yours, so he/she probably saw it.]
I don't understand what the quote is trying to say. What are the unrecognized consequences of the open-loop model?
It sounds like the author is upset that psychologists don't believe he has a model of behavior that explains 99% of some output variable using only one input variable. I'd have a hard time believing that too.
Tom Siegfried, Odds Are, It's Wrong, on the many failings of traditional statistics in modern science.
Wandering in a vast forest at night, I have only a faint light to guide me. A stranger appears and says to me: 'My friend, you should blow out your candle in order to find your way more clearly.' The stranger is a theologian.
But blowing out the candle actually would make it easier to find your way (the candle's light ruins your night vision).
Not if the forest is sufficiently dark that your night vision doesn't have enough light to work with.
That seems like an easy case to test, provided you have some way to re-light the candle.
You need to make two assumptions for the analogy.
1) You can't re-light the candle.
2) If you do things exactly right, you'll get out just before starving to death (or dying somehow); otherwise, you are dead.
--Voltaire
Source:
-- Schelling, The Strategy of Conflict, p. 144
[The book was mentioned a couple of times here on LW, and is a nice introduction to the use of game theory in geopolitics]
"Hypocrisy and dissimulation are what keeps social systems strong; it is intellectual honesty that destroys them."
Theodore Dalrymple, The New Vichy Syndrome, p. 26.
If the rationality quotes are intended to illustrate rationality, rather than themselves necessarily be rational, I think this is a fine quote.
This is true when the social systems in question are built on dishonest foundations. Observing whether or not intellectual honesty has this effect on a system has predictive value wrt the eventual fate of the society employing the system.
Voted up.
--Robert A. Heinlein
Sad, but true.
"Not evil, but longing for that which is better, more often directs the steps of the erring."
Theodore Dreiser, Sister Carrie
-- R Scott Bakker, Neuropath
You mean, like every Bayesian believes their prior is correct?
Prior can't be judged. It's not assumed to be "correct". It's just the way you happen to process new info and make decisions, and there is no procedure to change the way it is from inside the system.
I have heard some argue for adjusting priors as a way of dealing with deductive discoveries since we aren't logically omniscient. I think I like that solution. Realizing you forgot to carry a digit in a previous update isn't exactly new information about the belief. Obviously a perfect Bayesian wouldn't have this issue but I think we can feel free to evaluate priors given that we are so far away from that ideal.
But one man's prior is another man's posterior: I can use the belief that a medical test is 90% specific when using it to determine whether a patient has a disease, but I arrived at my beliefs about that medical test through Bayesian processes - either logical reasoning about the science behind the test, or more likely trying the test on a bunch of people and using statistics to estimate a specificity.
So it may be mathematically wrong to tell me my 90% prior is false, but the 90% prior from the first question is the same 90% posterior from the second question, and it's totally kosher to say that the 90% posterior from the second question is wrong (and by extension, I'm using the "wrong prior")
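The "prior as someone else's posterior" point can be made concrete with a bit of arithmetic. A minimal sketch, where the 90% specificity comes from the comment above but the 10% prevalence and 95% sensitivity are invented numbers for illustration:

```python
# One person's prior is another's posterior: the 90% specificity I use
# as an input when diagnosing a patient was itself estimated (as a
# posterior) from earlier data about the test. Illustrative numbers only.

def posterior_disease(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_pos = (prevalence * p_pos_given_disease
             + (1 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_disease / p_pos

# Assumed: 10% prevalence, 95% sensitivity, 90% specificity.
p = posterior_disease(0.10, 0.95, 0.90)
print(round(p, 3))  # 0.514
```

If the earlier estimate of that 90% figure was wrong, the error propagates into every posterior computed from it, which is the sense in which a "prior" can be criticized after all.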
The whole reflective consistency thing is that you shouldn't have "foundational priors" in the sense that they're not the posterior of anything. Every foundational prior gets checked by how well it accords with other things, and in that sense is sort of a posterior.
So I agree with cousin_it that it would be a problem if every Bayesian believed their prior to be correct (as in - they got the correct posterior yesterday to use as their prior today).
Vladimir is using "prior" to mean a map from streams of observations to probability distributions over streams of future observation, not the prior probability before updating. Follow the link in his comment.
Locked in, huh? Then I don't want to be a Bayesian.
If someone was locked in to a belief, then they'd use a point mass prior. All other priors express some uncertainty.
Since you are already locked in in some preference anyway, you should figure out how to compute within it best (build a FAI).
What makes you say that? It's not true. My preferences have changed many times.
Distinguish formal preference and likes. Formal preference is like prior: both current beliefs and procedure for updating the beliefs; beliefs change, but not the procedure. Likes are like beliefs: they change all the time, according to formal preference, in response to observations and reflection. Of course, we might consider jumping to a meta level, where the procedure for updating beliefs is itself subject to revision; this doesn't really change the game, you've just named some of the beliefs changing according to fixed prior "object-level priors", and named the process of revising those beliefs according to the fixed prior "process of changing object-level prior".
When formal preference changes, it by definition means that it changed not according to (former) formal preference, that is something undesirable happened. Humans are not able to hold their preference fixed, which means that their preferences do change, what I call "value drift".
You are locked in in some preference in normative sense, not factual. This means that value drift does change your preference, but it is actually desirable (for you) for your formal preference to never change.
I object to your talking about "formal preference" without having a formal definition. Until you invent one, please let's talk about what normal humans mean by "preference" instead.
I'm trying to find a formal understanding of a certain concept, and this concept is not what is normally called "preference", as in "likes". To distinguish from the word "preference", I used the label "formal preference" in the above comment to refer to this concept I don't fully understand. Maybe the adjective "formal" is inappropriate for something I can't formally define, but it's not an option to talk about a different concept, as I'm not interested in a different concept. Hence I'm confused about what you are really suggesting by
For the purposes of FAI, what I'm discussing as "formal preference", which is the same as "morality", is clearly more important than likes.
I'd be willing to bet money that any formalization of "preference" that you invent, short of encoding the whole world into it, will still describe a property that some humans do modify within themselves. So we aren't locked in, but your AIs will be.
What makes you say that Bayesians are locked in? It's not true. If they're presented with evidence for or against their beliefs, they'll change them.
You're talking about posteriors. They're talking about priors, presumably foundational priors that for some reason aren't posteriors for any computations. An important question is whether such priors exist.
But your beliefs are your posteriors, not your priors. If the only thing that's locked in is your priors, that's not a locking-in at all.
That's not obvious. You'd need to study many specific cases, and see if starting from different priors reliably predicts the final posteriors. There might be no way to "get there from here" for some priors.
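Whether different starting priors "get there from here" can be checked in miniature. A sketch using Beta-Bernoulli updating (the particular priors and coin-flip data are assumptions for illustration): two non-dogmatic priors are pulled toward the same answer by the evidence, while a point-mass prior can never move.

```python
# Beta(a, b) prior over a coin's bias; updating on flips is just
# adding counts (conjugate update).
def update(a, b, heads, tails):
    return a + heads, b + tails

def mean(a, b):
    return a / (a + b)

data = (70, 30)  # observed: 70 heads, 30 tails

# Two different open-minded priors...
optimist = update(8, 2, *data)   # prior mean 0.8
skeptic  = update(2, 8, *data)   # prior mean 0.2

# ...end up close together after the same evidence:
print(round(mean(*optimist), 2), round(mean(*skeptic), 2))  # 0.71 0.65

# A point-mass ("locked in") prior assigns zero probability to every
# other hypothesis, so no amount of evidence can ever move it.
```

With more data the two posteriors converge further; the point-mass prior is the only one for which "locked in" is literally true.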
When we speak of the values that an organism has, which are analogous to the priors an organism starts with, it's routine to speak of the role of the initial values as locking in a value system. Why do we treat these cases differently?
Bayesians don't believe they lucked into their priors. They have a reflectively consistent causal explanation for their priors.
Even if their explanation were correct, they would still have lucked into them. Others have different priors and no doubt different causes for their priors. So those Bayesians would have been lucky, in order to have the causes that would produce correct priors instead of incorrect ones.
Priors can't be correct or incorrect.
(Clarified in detail in this comment.)
This downvoting should be accompanied with discussion. I've answered the objections that were voiced, but naturally I can't refute an incredulous stare.
The normal way of understanding priors is that they are or can be expressed as joint probability distributions, which can be more or less well-calibrated. You're skipping over a lot of inferential steps.
Right. We could talk of quality of an approximation to a fixed object that is defined as the topic of a pursuit, even if we can't choose the fixed object in the process and thus there is no sense in having preferences about its properties.
I can't tell what you're talking about.
Say, you are trying to figure out what the mass on an electron is. As you develop your experimental techniques, there will be better or worse approximate answers along the way. It makes sense to characterize the approximations to the mass you seek to measure as more or less accurate, and characterize someone else's wild guesses about this value as correct or not correct at all.
On the other hand, it doesn't make sense to similarly characterize the actual mass of an electron. The actual mass of an electron can't be correct or incorrect, can't be more or less well-calibrated -- talking this way would indicate a conceptual confusion.
When I talked about prior or preference in the above comments, I meant the actual facts, not particular approximations to those facts, the concepts that we might want to approximate, not approximations. Characterizing these facts as correct or incorrect doesn't make sense for similar reasons.
Furthermore, since they are fixed elements of ideal decision-making algorithm, it doesn't make sense to ascribe preference to them (more or less useful, more or less preferable). This is a bit more subtle than with the example of the mass of an electron, since in that case we had a factual estimation process, and with decision-making we also have a moral estimation process. With factual estimation, the fact that we are approximating isn't itself an approximation, and so can't be more or less accurate. With moral estimation, we are approximating the true value of a decision (event), and the actual value of a decision (event) can't be too high or too low.
Sounds mysterious to me. Priors are not claims about the world?
Not quite. They are the way you process claims about the world. A claim has to come in context of a method for its evaluation, but prior can only be evaluated by comparing it to itself...
They can be more or less useful, though.
According to what criterion? You'd end up comparing a prior to the prior you hold, with the "best" prior for you just being the same as yours. Like with preference. Clearly not the concept Unknowns was assuming -- you don't need luck to satisfy a tautology.
Of being better at predicting what happens, of course.
You can't judge based on info you don't have. Based on what you do have, you can do no better than current prior.
But you can go and get info, and then judge, and say, "That prior that I held was wrong."
You're speaking as if all truth were relative. I don't know if you mean this, but your comments in this thread imply that there is no such thing as truth.
You've recently had other discussions about values and ethics, and the argument you're making here parallels your position in that argument. You may be trying to keep your beliefs about values, and about truths in general, in syntactic conformance. But rationally I hope you agree they're different.
I am in violent agreement.
But that still doesn't need to be luck. I got my priors offa evolution and they are capable of noticing when something works or doesn't work a hundred times in a row. True, if I had a different prior, I wouldn't care about that either. But even so, that I have this prior is not a question of luck.
It is luck in a sense - every way that your opinion differs from someone else, you believe that factors outside of your control (your intelligence, your education, et cetera) have blessed you in such a way that your mind has done better than that poor person's.
It's just that it's not a problem. Lottery winners got richer than everyone else by luck, but that doesn't mean they're deluded in believing that they're rich. But someone who had only weak evidence ze won the lottery should be very skeptical. The real point of this quote is that being much less wrong than average is an improbable state, and you need correspondingly strong evidence to support the possibility. I think many of the people on this site probably do have some of that evidence (things like higher than average IQ scores would be decent signs of higher than normal probability of being right) but it's still something worth worrying about.
I think I agree with that: There's nothing necessarily delusive about believing you got lucky, but it should generally require (at least) an amount of evidence proportional to the amount of purported luck.
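The "improbable state needs proportionally strong evidence" point can be put in odds form. A sketch with made-up numbers (the prior and the likelihood ratio are both assumptions for illustration):

```python
# Posterior odds = prior odds * likelihood ratio.
def posterior_prob(prior, likelihood_ratio):
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

prior_win = 1e-7      # assumed chance of having won the lottery
weak_evidence = 10.0  # assumed likelihood ratio of, say, a
                      # congratulatory email (spam exists!)

p = posterior_prob(prior_win, weak_evidence)
print(p)  # still roughly one in a million
```

Weak evidence moves a very small prior only a little; believing you won (or that you are much less wrong than average) calls for evidence whose likelihood ratio is comparable to the prior improbability.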
Then it would make sense to use some evolutionary thingy instead of Bayesianism as your basic theory of "correct behavior", as Shalizi has half-jokingly suggested.
Joe Biden, remarks delivered in Saint Clair Shores, MI, Monday, September 15, 2008
Of course, to really see what someone values you'd have to see their budget profile across a wide range of wealth levels.
Daniel Dennett, interview for TPM: The Philosopher's Magazine
If the point is to get them to answer or reason about the topic, then I think we should reject the statement that "there is no polite way of asking." We should find a way of asking politely, such as teaching them to process our questions instead of answering with cached thoughts. Being offensive doesn't win.
I also think it's a poorly phrased question, since it's easily brushed off with "yes/no", avoiding any of the deeper implications in an apparent effort to make it catchy and instantly polarizing.
If the point is to upset people, to feel righteous, or to signal tribal affiliation, then go right ahead.
This is not universally true, but I would support trying to create nonoffensive ways to deliver the message - the combination of direct and conciliatory methods is probably more powerful than either alone.
Yes, I considered that to be the primary statement under contention.
It's not a strategy I wish to use, so I decided to speak out against it even as I realize that's kind of the point, to have purists who can continue to show that there's further to go, and a spectrum of other positions to provide a more gradual path.
I recognize the potential usefulness of it even as I deride it; I am good cop.
Robert Pirsig, Zen and the Art of Motorcycle Maintenance
Gall's Law:
John Gall, "Systemantics"
Counterexample: a complex computer program designed and written from scratch.
I've written some of those. And every time, I test everything I write as I go, so that at every stage from the word go I have a working program. The big bang method, of writing everything first, then running it, never works.
The "big bang" sometimes happens to me when I write in Haskell. After I fix all the compiler errors, of course. I just wish there were a language with a type system that can detect almost as many errors as Haskell's without having quite such a restrictive, bondage-fetish feel to it.
But yeah, in general, only trivial programs work the first time you run them. That's a good definition of trivial, actually.
...and that worked the very first time? How often does that happen?
The quote is a rule of thumb and an admonition to rational humility, not a law of the universe.
Well "never works and cannot be made to work" does sound a bit strong to me.
I agree it's probably not a law of the universe, as I cannot rule out possible minds that could falsify it. However, I cannot from within my mind (human capabilities) see a case where a complex system could work before each of its parts had been made to work.
The "inverse proposition" given is actually the contrapositive of (i.e. is equivalent to) the original statement.
Counterexample: Space shuttle.
Really? I think only 6 of them were built, and 2 of those suffered catastrophic failure with all hands lost.
It doesn't qualify 100%, because there were little prototype shuttles. Still, you have a point. If we have good theories, we can build pretty big systems from scratch. Gall's law resonates especially strongly with programmers because much of programming doesn't have good theories, and large system-building endeavors fail all the time.
Even if there hadn't been prototype shuttles, the shuttle is still reducible to simpler components. Gall's Law just articulates that before you can successfully design something like the space shuttle you have to understand how all of its simpler components work.
If an engineer (or even transhuman AI) had sat down and started trying to design the space shuttle, without knowledge of rocketry, aerodynamics, circuits, springs, or screws, it would be pulling from a poorly constrained section of the space of possible designs, and is unlikely to get something that works.
The way this problem is solved is to work backwards until you get to simple components. The shuttle designer realizes his shuttle will need wings, so starts to design the wing, realizes the wing has a materials requirement, so starts to develop the material. He continues to work back until he gets to the screws and rivets that hold the wing together, and other simple machines.
In engineering, once you place the first atom in your design, you have already made a choice about atomic mass and charge. Complex patterns of atoms like space shuttles will include many subdivisions (components) that must be designed, and Gall's Law illustrates that they must be designed and understood before the designer has a decent chance of the space shuttle working.
I think you completely miss the point of Gall's law. It's not about understanding individual components. Big software projects still fail, even though we understand if-statements and for-loops pretty well.
I know that.
It's about an evolution from simpler systems to more complex systems. Various design phases of the space shuttle aren't what falsify that example. It's the evolution of rocket propulsion, aircraft, and spacecraft, and their components.
(EDIT: Also, at no point was I suggesting that understanding of components guarantees success in designing complex systems, but that it is necessary. For a complex system to work it must have all working components, reduced down to the level of simple machines. Big software projects would certainly fail if the engineers didn't have knowledge of if-statements and for-loops.)
In addition to NMJablonski's point, it is perhaps arguable just how well the Space Shuttle worked. In hindsight it seems that the same amount of orbital lift capacity could have been provided rather more cheaply.
It works for a job it isn't used for: launching into a polar orbit to emplace secret military satellites, and gliding a very long distance back to base without a need for a splashdown recovery that might risk its secrecy.
That's what gave it the wings, and once you have the wings the rest of the design follows.
Evolved from both simpler winged aircraft and simpler rockets.
All the base components that went into the space shuttle still existed on a line of technological progress from the basic to the advanced. Actually, the space shuttle followed Gall's Law precisely.
The lift mechanism was still vertically stacked chemical rockets of the sort that had already flown for decades. The shuttle unit was built from components perfected by the Gemini and Apollo programs, and packed into an aerodynamic form based on decades of aircraft design.
Reducing technologically, the shuttle still depends on simple systems like airfoils, rockets and nozzles, gears, and other known quantities.
The Columbia shuttle crew would still be with us if this were correct.