Rationality Quotes August 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Glenn Reynolds
Hindsight bias. It's only after the bust that you find out which boom things made sense after all and which didn't.
The boom produces a lot of stuff which is theoretically not the optimum stuff to produce using the resources used in the boom. However, to the extent the boom brings resources out of the woodwork that may not have been used to produce anything at all in the absence of the boom, it may not actually be a net loss compared to a realistic counterfactual.
The bust, accompanied by significant unemployment, is almost certainly producing less than any of the counterfactuals in which more people are employed. Of course it IS possible to employ some people digging holes and others to fill them in, but I think this is a strawman; generally, artificially increased employment produces something of value.
The Austrians may have it wrong because the obviousness of the bust being the unproductive distortion is lost to them in the intellectual excitement of realizing you can't have a bust without a boom, and so they mistakenly think it is the boom which is less productive.
Sometimes the obvious answer IS right. I think the fact that particularly intelligent people acting in groups miss this more often than is optimum should be one of the cognitive biases on our list of biases we study and stay aware of.
Unemployed people produce less than employed people. The odd construction of a corner case does not make this generally true statement generally false.
I'm downvoting this quote. Read at a basic level, it supports a particular economic theory rather than a larger point of rationality.
For the record, the Austrian Business Cycle Theory is not generally accepted by mainstream economists. This isn't the place to discuss why, and it isn't the place to give ABCT the illusion of a "rational" stamp of approval.
All true, but there are many booms which seem to produce crazy investments; the dot-com boom is the most obvious recent example. You don't need to accept ABCT to accept this, and I'd guess most people who do notice this don't accept ABCT.
-- Iain M. Banks
This seems like a poor strategy by simply considering temper tantrums, let alone all of the other holes in this. (The first half of the comment though, I can at least appreciate.)
I, too, support the cause of opposing every such cause.
I wonder if people here realize how anti-utilitarian this quote is :-)
You seem to be implying that people here should care about things being anti-utilitarian. They shouldn't. Utilitarianism refers to a group of largely abhorrent and arbitrary value systems.
It is also contrary to virtually all consequentialist value systems of the kind actually held by people here or extrapolatable from humans. All consequentialist systems that match the quote's criteria for not being 'Fucked' are abhorrent.
It is not. "Murder and children crying" here are not means to an end, they are consequences as well. Maybe not intended consequences, maybe side effects ("collateral damage"), but still consequences.
I see no self-contradiction in a consequentialist approach which simply declares certain consequences (e.g. "murder and children crying") to be unacceptable.
There is nothing about consequentialism which distinguishes means from ends. Anything that happens is an "end" of the series of actions which produced it, even if it is not a terminal step, even if it is not intended.
When wedrifid says that the quote is "anti-consequentialism", they are saying that it refuses to weigh all of the consequences - including the good ones. The negativity of children made to cry does not obliterate the positivity of children prevented from crying, but rather must be weighed against it, to produce a sum which can be negative or positive.
To declare a consequence "unacceptable" is to say that you refuse to be consequentialist where that particular outcome is involved; you are saying that such a consequence crashes your computation of value, as if it were infinitely negative and demanded some other method of valuation, which did not use such finicky things as numbers.
But even if there is a value which is negative, and 3^^^3 times greater in magnitude than any other value, positive or negative, its negation will always be of equal and opposite value, allowing things to be weighed against each other once again. In this example, a murder might be worth -3^^^3 utilons - but preventing two murders by committing one results in a net sum of +3^^^3 utilons.
The only possible world in which one could reject every possible cause which ends in murder or children crying is one in which it is conveniently impossible for such a cause to lead to positive consequences which outweigh the negative ones. And frankly, the world we live in is not so convenient as to divide itself perfectly into positive and negative acts in such a way.
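The weighing argument above can be sketched in a few lines. This is purely illustrative: the weights are invented, and `3 ** 3 ** 3` stands in for the vastly larger "3^^^3 utilons" of the comment (real tetration is far too large to compute).

```python
# Illustrative only: the weights here are made up, and 3 ** 3 ** 3
# (= 3^27) is a computable stand-in for the far larger "3^^^3".
MURDER = -(3 ** 3 ** 3)  # an enormously negative outcome

def total_utility(outcomes):
    """A consequentialist sums over *all* outcomes, good and bad alike."""
    return sum(outcomes)

# Committing one murder to prevent two: one murder occurs (-X),
# two murders are prevented (+2X), for a net of +X.
net = total_utility([MURDER, -2 * MURDER])
assert net == -MURDER  # net positive: the bad outcome was outweighed
```

The point of the sketch is only that as long as an outcome has a finite (even astronomically large) weight, its negation weighs exactly as much in the opposite direction, so comparison remains possible.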
Wikipedia: Consequentialism is the class of normative ethical theories holding that the consequences of one's conduct are the ultimate basis for any judgment about the rightness of that conduct. ... Consequentialism is usually distinguished from deontological ethics (or deontology), in that deontology derives the rightness or wrongness of one's conduct from the character of the behaviour itself rather than the outcomes of the conduct.
The "character of the behaviour" is means.
Consequentialism does not demand "computation of value". It only says that what matters is outcomes; it does not require that the outcomes be comparable or summable. I don't see how saying that certain outcomes are unacceptable, full stop (= have negative-infinity value), contradicts consequentialism.
You have a point, there are means and ends. I was using the term "means" as synonymous with "methods used to achieve instrumental ends", which I realize was vague and misleading. I suppose it would be better to say that consequentialism does not concern itself with means at all, and rather considers every outcome, including those which are the result of means, to be an end.
As for your other point, I'm afraid that I find it rather odd. Consequentialism does not need to be implemented as having implicitly summable values, much as rational assessment does not require the computation of exact probabilities, but any moral system must be able to implement comparisons of some kind. Even the simplest deontologies must be able to distinguish "good" from "bad" moral actions, even if all "good" actions are equal, and all "bad" actions likewise.
Without the ability to compare outcomes, there is no way to compare the goodness of choices and select a good plan of action, regardless of how one defines "good". And if a given outcome has infinitely negative value, then its negation must have infinitely positive value - which means that the negation is just as desirable as the original outcome is undesirable.
Your point is perfectly valid, I think. Every action-guiding set of principles is ultimately all about consequences. Deontologies can be "consequentialized", i.e. expressed only through a maximization (or minimization) rule of some goal-function, by a mere semantic transformation. The reason why this is rarely done is, I suspect, because people get confused by words, and perhaps also because consequentializing some deontologies makes it more obvious that the goals are arbitrary or silly.
The traditional distinction between consequentialism and non-consequentialism does not come down to the former only counting consequences -- both do! The difference is rather about what sort of consequences count. Deontology also counts how consequences are brought about, that becomes part of the "consequences" that matter, part of whatever you're trying to minimize. "Me murdering someone" gets a different weight than "someone else murdering someone", which in turn gets a different weight from "letting someone else die through 'natural causes' when it could be easily prevented".
And sometimes it gets even weirder: the doctrine of double effect, for instance, draws a morally significant line between a harmful consequence that is a means to the execution of your (well-intended) aim, and a "mere" foreseen -- but still certain(!) -- side-effect of it. So sometimes certain intentions, when acted upon, are flagged with negative value as well.
And as you note below, deontologies sometimes attribute infinite negative value to certain consequences.
Pardon me. I left off the technical qualifier for the sake of terseness. I have previously observed that all deontological value systems can be emulated by (suitably contrived) consequentialist value systems and vice-versa, so I certainly don't intend to imply that it is impossible to construct a consequentialist morality implementing this particular injunction. Edited to fix.
"Murder and children crying" aren't allowed to have negative weight in a utility function?
It's not about weight, it's about an absolute, discontinuous, hard limit -- regardless of how many utilons you can pile up on the other end of the scale.
Well, no. It's against the promise of how many utilons you can pile up on the other arm of the scale, which may well not pay off at all. I'm reminded of a post here at some point whose gist was "if your model tells you the chance of your being wrong is 1 in 3^^^3, it is more likely that your model is wrong than that you are really that reliable."
Yes, but the quote in no way concerns itself with the probability that such a plan will go wrong; rather, it explicitly includes even those with a wide margin of error, including "every" plan which ends in murder and children crying.
It's not a matter of "the plan might go wrong", it's a matter of "the plan might be wrong", and the universal part comes from "no, really, yours too, because you aren't remotely special."
This seems better-suited for MoreEmotional than LessWrong.
As much as I love Banks, this sounds like a massive set of applause lights, complete with sparkling Catherine wheels. Sometimes, you have to do shitty things to improve the world, and sometimes the shitty things are really shitty, because we're not smart enough to find a better option fast enough to avoid the awful things resulting from not improving at all. "The perfect must not be the enemy of the good" and so on.
I suppose I somewhat appreciate the sentiment. I note that labelling the killing 'murder' has already amounted to significant discretion. Killings that are approved of get to be labelled something nicer sounding.
David Chapman thinks that using LW-style Bayesianism as a theory of epistemology (as opposed to just probability) lumps together too many types of uncertainty; to wit:
I think he is correct, and LWers are overselling Bayesianism as a solution to too many problems (at the very least, without having shown it to be).
I do not see why any of Chapman's examples cannot be given appropriate distributions and modeled in a Bayesian analysis just like anything else:
Dynamical chaos? Very statistically modelable, in fact, you can't really deal with it at all without statistics, in areas like weather forecasting.
Inaccessibility? Very modelable; just a case of missing data & imputation. (I'm told that handling issues like censoring, truncation, rounding, or intervaling are considered one of the strengths of fully Bayesian methods and a good reason for using stuff like JAGS; in contrast, whenever I've tried to deal with one of those issues using regular maximum-likelihood approaches it has been... painful.)
Time-varying? Well, there's only a huge section of statistics devoted to the topic of time-series and forecasts...
Sensing/measurement error? Trivial; in fact, one of the best cases for statistical adjustment (see psychometrics), and arguably dealing with measurement error is the origin of modern statistics (the first instances of least squares coming from Gauss and other astronomers dealing with errors in astronomical measurement, and of course Laplace applied Bayesian methods to astronomy as well).
Model/abstraction error? See everything under the heading of 'model checking' and things like model-averaging; local favorite Bayesian statistician Andrew Gelman is very active in this area, no doubt he would be quite surprised to learn that he is misapplying Bayesian methods in that area.
One’s own cognitive/computational limitations? Not just beautifully handled by Bayesian methods + decision theory, but the former is actually offering insight into the latter, for example "Burn-in, bias, and the rationality of anchoring".
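The measurement-error point above has a textbook closed-form case worth sketching: with a Normal prior on the true value and Normal measurement noise, the Bayesian update is a precision-weighted average. All the numbers here are invented for illustration.

```python
# A minimal sketch of Bayesian handling of measurement error:
# conjugate Normal-Normal updating. Numbers are made up.

def normal_update(prior_mean, prior_var, measurement, noise_var):
    """Posterior over the true value after one noisy measurement."""
    precision = 1 / prior_var + 1 / noise_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + measurement / noise_var)
    return post_mean, post_var

# Prior belief: true value around 10, fairly uncertain (variance 4).
# Instrument reads 12 but is known to be noisy (variance 1).
mean, var = normal_update(10.0, 4.0, 12.0, 1.0)
# The posterior mean (11.6) sits between prior and measurement,
# closer to the more precise source, and the posterior variance (0.8)
# is smaller than either alone.
```

The same machinery extends to censored, truncated, or missing measurements in fully Bayesian tools like JAGS, which is the strength gwern refers to above.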
Note that I was speaking of "Bayesianism" as practiced on LW, not of Bayesian statistics the academic field. I do not believe these are the same.
I believe Chapman is writing a more detailed critique of what he sees here; I will be sure to link you to it when it comes.
I think that's absurd if that's what he really means. Just because we are not daily posting new research papers employing model-averaging or non-parametric Bayesian statistics does not mean that we do not think those techniques are useful and incorporated in our epistemology or that we would consider the standard answers correct, and this argument can be applied to any area of knowledge that LWers might draw upon or consider correct. If we criticize p-values as a form of building knowledge, is that not a part of 'Bayesian epistemology' because we are drawing arguments from Jaynes or Ioannidis and did not invent them ab initio?
'Your physics can't deal with modeling subatomic interactions, and so sadly your entire epistemology is erroneous.' '??? There's a huge and extremely successful area of physics devoted to that, and I have no freaking idea what you are talking about. Are you really as ignorant and superficial as you sound, listing as a weakness something which is actually a major strength of the physics viewpoint?' 'Oh, but I meant physics as practiced on LessWrong! Clearly that other physics is simply not relevant. Come back when LW has built its own LHC and replicated all the standard results in the field, and then I'll admit that particle physics as practiced on LW is the same thing as particle physics the academic field, because otherwise I refuse to believe they can be the same.'
I think you may be extrapolating much too far from the quote I posted. Also, my statistics level is well below both yours and Chapman's so I am not a good interlocutor for you.
I don't think I am. It's a very simple quote: "here is a list of n items Bayesian statistics and hence epistemology cannot handle; therefore, it cannot be right." And it's dead wrong because all n items are handled just fine.
I think you are being uncharitable. The list was of different types of uncertainty that Bayesians treat as the same, with a side of skepticism that they should be handled the same, not things you can't model with bayesian epistemology.
The question is not whether Bayes can handle those different types of uncertainty, it's whether they should be handled by a unified probability theory.
I think the position that we shouldn't (or don't yet) have a unified uncertainty model is wrong, but I don't think it's so stupid as to be worth getting heated about and being uncivil.
Did somebody solve the problem of logical uncertainty while I wasn't looking?
I disagree that Gwern is being uncivil. I don't think Chapman has any ground to criticize LW-style epistemology when he's made it abundantly clear he has no idea what it is supposed to be. (Indeed, that's his principal criticism: the people he's talked to about it tell him different things.)
It'd be like if Berkeley asked a bunch of Weierstrass' first students about their "supposed" fix for infinitesimals. Because the students hadn't completely grasped it yet, they gave Berkeley a rope, a rubber hose, and a burlap sack instead of giving him the elephant. Then Berkeley goes and writes a sequel to the Analyst disparaging this "new Calculus" for being incoherent.
In that world, I think Berkeley's the one being uncivil.
I think you're not being charitable again. Consider the difference between physics as practiced by quantum woo mystics, and physics as practiced by physicists or even engineers. I think that simplicio is referring to a similar (though less striking) tendency for the representative LWer to quasi-religiously misapply and oversell probability theory (which may or may not be the case, but should be argued with something other than uncharitable ridicule).
Unless there's been an enormous breakthrough in the past 2 years, I believe this is still a major unsolved problem. Also decision theory is about cooperating with other agents, not overcoming cognitive limitations.
Eric Raymond
Here's my thought process upon reading this. (Initially, I assumed “git 'er done” meant something like ‘women are unimportant except as sex objects’, and I misread “unwilling” as “willing”.)
(Anyway, if an adult woman complains because you called her a girl, the course of action that leaves you the most time to get stuff done is apologizing, not doing that again, and getting back to work, not endlessly whining about how ridiculous the PC crowd are.)
Not necessarily, it might just encourage further frivolous complaints.
As opposed to feeding trolls, which is widely known to be extremely effective in making them shut up?
In this context, the group you position as 'trolls' is described as frivolous complainers. You advocate apologising and complying. Eugine is correct in pointing out that this can represent a perverse incentive (both in theory and in often-observed practice).
I dunno... if someone's goal is to fuel a flamewar to discredit you, it would seem to me that ranting about that is more likely to make their day than just reacting as though they had pointed out you misspelled their name and then going back to your business.
Empirically, heaping scorn on everyone and seeing who sticks around leads to lots of time wasted on flame wars.
A relevant example:
http://arstechnica.com/information-technology/2013/07/linus-torvalds-defends-his-right-to-shame-linux-kernel-developers/
Linux kernel seems to me a quite well-managed operation (of herding cats, too!) that doesn't waste lots of time on flame wars.
I don't follow kernel development much. Recently, a colleague pointed me to the rdrand instruction. I was curious about Linux kernel support for it, and I found this thread: http://thread.gmane.org/gmane.linux.kernel/1173350
Notice that Linus spends a bunch of time (a) flaming people and (b) being wrong about how crypto works (even though the issue was not relevant to the patch).
Is this typical of the linux-kernel mailing list? I decided to look at the latest hundred messages. I saw some minor rudeness, but nothing at that level. Of course, none of these messages were from Linus. But I didn't have to go back more than a few days to find Linus saying things like, "some ass-wipe inside the android team." Imagine you were that Android developer, reading that email. Would that make you want to work on Linux? Or would that make you want to go find a project where the leader doesn't shit on people?
Here's a revealing quote from one recent message from Linus: "Otherwise I'll have to start shouting at people again." Notice that Linus perceives shouting as a punishment. He's right to do so, as that's how people take it. Sure, "don't get offended", "git 'er done", etc -- but realistically, developers are human and don't necessarily have time to do a bunch of CBT so that they can brush off insults.
Some people, I guess, can continue to be productive after their project leader insults them. The rest either have periodic drops in productivity, or choose to work on projects which are run by people willing to act professionally.
tl;dr: Would you put up with a boss who frequently called you an idiot in public?
Actually, that depends.
Mostly that depends on what the intent (and context) of calling me an idiot in public is. If the intent is, basically, power play -- the goal is to belittle me and elevate himself, reassert his alpha-ness, shift blame, provide an outlet for his desire to inflict pain on somebody -- then no, I'm not going to put up with it.
On the other hand, if this is all a part of a culturally normal back-and-forth, if all the boss wants is for me to sit up and take notice, if I can without repercussions reply to him in public pointing out that it's his fat head that gets into his way of understanding basic things like X, Y, and Z and that he's wrong -- I'm fine with that.
The microcultures of joking-around-with-insults exist for good reasons. Nobody forces you to like them, but you want to shut them down and that seems rather excessive to me.
I think it's pretty clear that Linus is more on the power-play end of the spectrum. Notice his comment above about the Android developer; that's not someone who is part of his microculture (the person in question was a developer on the Android email client, not a kernel hacker). And again, the shouting-as-punishment thing shows that Linus understands the effect that he has, but doesn't care.
Also, Linus, as the person in the position of power, isn't in a position to judge whether his culture is fun. Of course it's fun for him, because he's at the top. "I was just joking around" is always what bullies say when they get called out. The real question is whether it's fun for others. The recent discussion (that presumably sparked the quotes in this thread) was started by someone who didn't find it fun. So even if there are some "good reasons" (none of which you have named), they don't necessarily outweigh the reasons not to have such a culture.
That's not clear to me at all.
Note that management of any kind involves creating incentives for your employees/subordinates/those-who-listen-to-you. The incentives include both carrots and sticks and sticks are punishments and are meant to be so. If you want to talk about carrots-only management styles, well, that's a different discussion.
I disagree. You treat fun and enjoyment of working at some place as the ultimate, terminal value. It is not. The goal of working is to produce, to create, to make. Whether it's "fun" is subordinate to that. Sure, there are feedback loops, but organizations which exist for the benefit of their employees (to make their life comfortable and "fun") are not a good thing.
Punishments seem to have rapidly decreasing returns, especially given the availability of alternatives that are less abusive. Otherwise we'd threaten people when we wanted to make them more productive, rather than rewarding them - which most of the time we don't above a low level of performance.
For what it's worth, I've never worked at a place that successfully used aversive stimulus. And, since the job market for programmers is so hot, I can't imagine that anyone would willingly do so (outside the games industry, which is a weird case). This is especially true of kernel hackers, who are all highly qualified developers who could find work easily.
I would point out that Linus Torvalds's autobiography is called "Just for Fun". Also, Linus doesn't have employees. Yes, he does manage Linux, but he doesn't employ anyone. I also pointed out a number of ways in which Linus's style was harmful to productivity.
Ahem. I think you mean to say that you never touched the electric fence. Doesn't mean the fence is not there.
Imagine that someone at your workplace decided not to come to work for a week or so, 'cause he didn't feel like it. What would be the consequences? Are there any, err... "aversive stimuli" in play here?
No need for imagination. The empirical reality is that a lot of kernel hackers successfully work with Linus and have been doing this for years and years.
Which means that anyone who doesn't like his style is free to leave at any time without any consequences in the sense of salary, health insurance, etc. The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.
No, I mean that touching the electric fence did not make me a more productive worker.
I'm not saying that Linus's style will inevitably lead to instant doom. That would be silly. I'm saying that it's not optimal. Linux hasn't exactly taken over the world yet, so there's definitely room for improvement.
As of 2012-04-16, 75% of kernel development is paid. I would assume those developers would find their jobs in jeopardy if Linus removed them from development.
The claim, as I understand it, is that the culture trades off fun for productivity. A common example given is Apple, where Steve Jobs was a hawk that excoriated his underlings, and thus induced them to create beautiful, world-conquering products.
Also that the culture selects for the people who find being productive fun.
While the more socially enlightened attitudes lead to very effective and high signal-to-noise conflict handling, as can be observed on Tumblr and MetaFilter?
Eric Raymond isn't suggesting that. Why are you?
Straw man. The grandparent explicitly made the scorn conditional, not 'on everyone'.
Failure to steel man. Replacing "everyone" with "people" leaves the basic point unchanged.
ETA: ... or, I should say, leaves a point that (1) deserves reply and (2) was probably what the original hyperbolic version was getting at anyway.
I don't believe that it does, and here's why.
Heaping scorn on everyone and seeing who sticks around is a selection process; the condition for surviving is being able to accept scorn, whether or not such scorn is warranted by the value system of the society. This is somewhat similar to hazing.
Heaping scorn on a specific group of people for their unwillingness to adopt the values of the society (or, rather, some powerful subset of the society which has enough clout to control how things are run) is a selection process based on something of value to the society, and is more like punishment or selective admissions: people with the valued trait are encouraged, those without are allowed to leave.
It would appear that there are very different implications, as the former selects those who can take unjustified scorn (a quality of dubious value), and the latter selects for any demonstrable quality desired by the society (in this case, a specific attitude towards problem-solving).
This is a good argument for the claim that MixedNuts's hyperbolic version, read literally, misses something important. (Your argument convinces me, anyway.)
It is not clear to me that your argument addresses the "steel man" version in which "everyone" is replaced by "people who are unwilling to adopt that 'git 'er done' attitude".
Abuse of the 'steel man' concept and attempt to introduce a toxic social norm. I am strongly opposed to this influence.
MixedNuts attempts to refute a quote using a non-sequitur. Supporting a false refutation is not being generous, it is being biased. It is being unfair to the initial speaker.
So much so that it leaves the basic point a straw man.
Steel-manning a refutation does not equal supporting that refutation. In fact, steel-manning entails criticizing the original refutation, at least implicitly.
However, when a claim is plausibly intended to be a hyperbolic version of a reasonable claim, pointing out that the hyperbolic version is a straw man, without addressing the reasonable version, is mostly just poisoning the discourse.
(This charge doesn't apply to you if you sincerely believed that MixedNuts was non-hyperbolically claiming that literally everyone has scorn heaped on them in the community under discussion, or that MixedNuts would be read that way by many readers.)
-- Stanislaw Lem, White Death
(as far as I know, this sweet short story has never been translated into English; I translated this passage myself from my Russian copy, so I will be glad if someone corrects my mistakes)
Ernest Rutherford
Covril, The Wheel of Time
"[W]hen you have eliminated the impossible, whatever remains, however improbable, must be the truth." -- Sherlock Holmes
"When you have updated on the evidence, whatever is the most probable, however socially unacceptable, must be believed."
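The Bayesian rewrite above can be run as a toy calculation: update each hypothesis by Bayes' rule, then believe whichever is most probable. The hypotheses, priors, and likelihoods here are all invented for the example.

```python
# A toy illustration of "update on the evidence, then believe whatever
# is most probable". All numbers are invented.

def posterior(priors, likelihoods):
    """Bayes' rule: P(H|E) is proportional to P(E|H) * P(H), normalized."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

priors = {"butler": 0.6, "improbable_suspect": 0.4}
likelihoods = {"butler": 0.01, "improbable_suspect": 0.5}  # P(evidence | H)

post = posterior(priors, likelihoods)
best = max(post, key=post.get)
# The initially less favored hypothesis wins once the evidence is in.
```

Note the contrast with Holmes: nothing is "eliminated" outright; the improbable hypothesis simply ends up with the highest posterior.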
--Alfred Korzybski Science and Sanity Page 376 (1933)
Interesting, if indeed it is true. I'm not sure how this is supposed to be a rationality quote though.
Sarah Hoyt
James Wilson
Counter-quote.
-Steven Spielberg
The answer may very well be, "because I find this bookmark that I bought at a dollar store a lot more aesthetically pleasing than the raw dollar bill".
You may as well ask, "Why spend $20 on a book ? Why not just save the $20 ?"
I get all kinds of entertainment out of reading a $20 bill.
Arr.
It will fall out. Apart from that, money isn't particularly clean and (especially if considering US currency) not particularly pretty either. I expect people to find a bookmark far more aesthetically pleasing than a note.
How is this a rationality quote? It is rationality-neutral at best.
"Because the dollar is dirty" is one of those pained, stretched explanations people come up with to explain why they do what they do, not the actual reason (even in some small part) the bookmark was invented and became popular.
The question wasn't "Why was the bookmark invented?". If it was, I might have, for example, tried to determine the first time someone used a bookmark (or when it became popular). Then I could have told you precisely how many dollars in present value that dollar would have been worth. That is, moving the goalposts in this way has made your quote worse, not better.
Not even in some small part? That's absurd. Can you not empathise in even a small part with the aesthetic aversion many people have to contaminating things with used currency?
Are you sure you didn't just go ahead and basically make up these people who don't want money to touch their book because it's dirty?
No. I've seen such people. When I look in the mirror, for example. Notice that the standard was explicitly set to:
The observation that this kind of absurd claim is positively received and even supported by similarly ridiculous petty sniping is disheartening.
I've known at least a couple people who found it yucky to handle cash right before a meal for that same reason.
<raises hand>
I definitely wash my hands after handling money and before eating.
I do neither. I use any piece of sufficiently stiff paper I happen to have around (bookmarks purchased by someone else, playing cards, used train tickets, whatever).
Or just fold the corner of the page over.
I made one when I was bored, long ago when my grandmother still ran her store and my uncle still ran his immigration law firm on the third floor, and when I was obsessed with knot theory, out of computer paper, tape, and a lot of hard pencil. I still use it, and it cost me next to nothing.
EDIT: If requested (however unlikely) I will happily deliver a picture, and either a push or a bouillon cube (your choice). EDIT THE SECOND: it was requested! http://imgur.com/a/kxanI
That leaves a permanent crease, which I dislike. (Likewise, I prefer to use pencils -- preferably soft pencils -- rather than pens to take notes.)
While I respect your right to do so, I find such a concept aesthetically horrifying.
Why use a bookmark that's worth a whole dollar? I use scrap paper, or a sticky note if falling out is a risk (it almost always isn't.)
It would seem that most of the responders are hopelessly literal....
Your quote is both literally and connotatively poor. If Spielberg had asked "Why spend two dollars on a bookmark? ... Why not use a dollar as a bookmark?" then there would at least have been some moral along the lines of efficient practicality. Even then it would be borderline.
A dollar is much more fungible than a bookmark. After you're done reading your book, you can not only use the dollar to hold your place in other books, you can spend it on other things.
I find it hard to come up with a deeper meaning for the original statement, so yeah.
Besides, it's not hard to come up with a deeper meaning behind what the responders are saying; in pointing out that an object specifically designed as a bookmark makes a better bookmark than a dollar bill, they're making a statement about more than just dollar bills and bookmarks, but about specialization in general.
"We don't automatically reflect on most things we do, even when spending money. Even lifelong practices can be shown as absurd with a moment's consideration from the right angle. In fact, we're so irrational that we'll pay a dollar for a bookmark!"
That's clearly the intent - except maybe for that last bit - but it's kinda a poor example, I have to admit.
I don't see why everyone is disagreeing with you. I definitely notice that people have a tendency to buy things labeled for some sort of purpose, where if they thought for a few minutes they could find a way to fulfill that same purpose without spending money. Unfortunately, I can't think of any examples off the top of my head.
My bookmark is made of two pieces of fridge-magnet material. It can be closed around a few pages and the magnetism holds it in place, preventing it from falling out.
Plus dollars in my country are exclusively coins, the smallest note is $5.
Dollars are floppy. It's nice to have a relatively rigid bookmark. I've used tissues and such as bookmarks in the past but they're unsatisfactory. Of course, that was back when I still read books in dead tree format.
My bookmark is prettier than the dollar.
Hazrat Inayat Khan
ibid.
But Naaman was wroth, and went away...And his servants came near, and spake unto him, and said, My father, if the prophet had bid thee do some great thing, wouldest thou not have done it? how much rather then, when he saith to thee, Wash, and be clean?
2 Kings 5: 11-13
Micah 6: 7-8
-The Great Learning, one of the Four Books and Five Classics of Confucian thought.
I like it when I hear philosophy in rap songs (or any kind of music, really) that I can actually fully agree with:
-- Vince Staples, "Versace Rap"
It's quite sad that Tupac Shakur is the focus of so many conspiracy theories, because he was quite the sceptic about wasting your time on this stuff when there was real work to do making the world better.
I always thought it was interesting that Tupac got all the conspiracy theories while Biggie got none, despite the fact that Biggie released an album called Ready to Die, died, then two weeks later released an album called Life After Death. It's probably because Tupac's music appeals more to hippie types who are into this kind of stuff.
When a concept is inherently approximate, it is a waste of time to try to give it a precise definition.
-- John McCarthy
Thus, whenever you look in a computer science textbook for an algorithm which only gives approximate results, you will find that the algorithm itself is very vaguely specified, since the result is just an approximation anyway.
(I would have said: "When a concept is inherently fuzzy, it is a waste of time to give it a definition with a sharp membership boundary.")
Robert Wright, The Moral Animal
-- B. F. Skinner, Beyond Freedom and Dignity
Stephen Jay Gould
There was only one Ramanujan; and we are all well aware of Gould's views on intelligence here, I presume.
You presume too much; the only thing I remember about Gould's views is that they are controversial.
A proactive interest in the latter would seem to lead to extensive instrumental interest in the former. Finding things (such as convolutions in brains or genes) that are indicative of potentially valuable talent is the kind of thing that helps make efficient use of it.
That's a hard problem, with no reasonable way to measure it in a large population in sight, nor even the direction of the relationship established. Ideally you'd take a bunch of kids, look at their brains, then see how they grew up and whether you could find anything that altered the distribution in similar cases - but ....
Well, you see the problem? It's a twiddling-your-thumbs style of study, rather than addressing more immediate problems that might accomplish something at a reasonable price and timeline.
There are surprisingly few MRI machines or DNA sequencers in cotton fields and sweatshops. Paraphrasing the original quote from Stephen Jay Gould: The problem is not how good we are at detecting talent; it's where we even bother to look for it.
Jack Handey
To be fair, if you see a watering hole surrounded by skeletons, it probably means the water's toxic.
John C Wright
Is there a name for the fallacy of claiming to be an expert on the specific contents of other people's subconsciouses?
-- Daniel Dennett, Consciousness Explained
The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.
Fred P. Brooks, No Silver Bullet
I've always had misgivings about this quote. In my experience about 90% of the code on a large project is an artifact of a poor requirement analysis/architecture/design/implementation. (Sendmail comes to mind.) I have seen 10,000-line packages melting away when a feature is redesigned with more functionality and improved reliability and maintainability.
This is true, but the connotations need to be applied cautiously. Complexity is necessary, but it is still something to be minimised wherever practical. Things should be as simple as possible but not simpler.
More concretely, sometimes software can be simplified and improved at the same time.
"In theory, there is no difference between theory and practice. In practice, there is."
-Ledaal Kes (Exalted Aspect Book: Air)
Are they a villain who "solves" people by removing them from their way?
(Alternative response: Does "everything" include the puzzle of identifying something that can't be reduced to a puzzle?)
-- Norwegian folktale.
I don't understand this rationality quote. Is it about fighting akrasia? Self-hacking to effectively save money? It clearly describes a method that wouldn't actually work, and it could work as humour, but what does it mean as a rationality tale?
It could be used as an effective "How to create an Ugh Field and undermine all future self-discipline attempts" instruction manual. It isn't a rationality tale. It is confusing that 40 people evidently consider it to be one. (But only a little bit confusing. I usually expect non-rationalist quotes that would be accepted as jokes or inspirational quotes elsewhere to get around 10 upvotes in this thread regardless of merit. That means I'm surprised about the degree of positive reception.)
I don't think you are correct.
The miser knows each time he will not get the reward, and that he will save on food and drink. That is the real reward, and the rest is a kabuki play he puts on for less-important impulses, to temporarily allow him to restrain them in service of his larger goal. The end pleasure of savings will provide strong positive reinforcement.
This could probably be empirically tested, to see if it is true and would work as a technique. I can imagine a test where someone is promised candy, and anticipates it while acting to fulfill a task, and then is rewarded instead with a dollar. Do they learn disappointment, or does the greater pleasure of money outweigh the candy? This is predicated on the idea that they would prefer the money, of course - you would need to tinker with amounts before the experiment might give useful results.
I thought the way he deceived his conscious mind, and never learned, was interesting.
In the context of LW, I took it as an amusing critique of the whole idea of rewarding yourself for behaviours you want to do more.
Betcha it'd work. I'm going to set a piece of candy in front of me, work for half an hour, and then put it back, at least once a day for a week.
There are no happy endings.
Endings are the saddest part,
So just give me a happy middle
And a very happy start.
-Shel Silverstein
X will never reach [arbitrary standard], so let's not try to improve X.
But but peak/end rule!
-- Will Wildman, analysis of Ender's Game
-Robert Downey Jr.
Maybe I'm misunderstanding the quote, but this seems to wither if you have something to protect. If I'm having surgery, I don't really want the team of expert surgeons listening to my suggestions. I shouldn't be on my team because I'm not qualified. Highly qualified people should be so that my team will win (and I get to live).
Expert surgeons tend to think that more problems should be solved via surgery than doctors who aren't surgeons do. Before getting surgery you should always talk with a doctor who isn't a surgeon but who knows something about the kind of illness you have.
After the operation is done, doctors will ask you if everything is all right. If you try to understand what the operation involved, you will give your doctor answers that are likely to be more informative than if you just place all responsibility onto another person.
Especially if you feel something that's not normal for the type of operation you had, it's important to be confident that what you perceive is worth bringing to the attention of your doctor.
Having had big operations myself (one with 8 weeks of hospitalisation and one with 3 weeks), I think not advocating enough for myself in those contexts was one of the worst decisions I made in my life. But then I was young and stupid about how the world works at the time.
Well, I think the thrust of the quote had more to do with being confident in your own projects. But I'll try to do an answer to your point because I think it's important to recognise the limitations of domain specialists - some of whom just aren't very good at their jobs.
If you're not on your team of expert surgeons, you're going to be screwed if they're not actually as expert as you might think. There's a bit in What Do You Care What Other People Think? where Feynman is talking about his first wife's hospitalisation - how he had done some reading around the area and come up with the idea that it might be TB - and didn't push for the idea because he thought that the doctors knew what they were doing.
[Feynman moves onto less likely possibilities]
[Gets convinced to lie to her that it's Hodgkins - lie falls through]
=====================
Point being, removing yourself from decisions is not a no-risk choice, and specialists aren't necessarily wise just because they've sat through the classes and crammed some sort of knowledge into their heads to get a degree. Assigning trust is a difficult subject.
There's a book called The Speed of Trust - and that's pretty much what you give up in being involved in complex decisions where you're not a specialist and where the specialists are actually really good at their jobs - a bit of speed.
I think it's good to be well-calibrated.
-Unknown
How is that a rationality quote?
It's funny, but you really shouldn't be learning life lessons from Tetris.
If Tetris has taught me anything, it's the history of the Soviet Union.
-- Paraphrase of joke by Marcus Brigstocke
To be fair there are quite a few people who nowadays listen to electronic music, take drugs that are pills and who spend a lot of time in dark rooms.
I see small examples everywhere I look; they're just too specific to point the way to a general solution.
James Portnow/Daniel Floyd
Peter Greer
-Thomas Jefferson
-- GLaDOS from Portal 2
If you define best as easiest.
If best is defined as easiest, then the "usually" within the quote is entirely superfluous. "If" statements are logically exceptionless, and the Law of Conserved Conversation (that I've just made up) means that "usually" implies exceptions; otherwise it would be excluded from the quote. So I say, pedantically, "duh, but you're missing the point a bit, aren't you, mate?"
I like to think of the principle as a kind of Occam's for action. Don't take elaborate actions to produce some solution that is otherwise trivially easy to produce.
Reynolds' law
Josh Billings
(h/t Robin Hanson)
Le Bovier de Fontenelle
This explains all those urges I get to burn witches, my talent at farming, all my knowledge of hunting and tracking, and my outstanding knack for feudal political intrigue.
(Composition is not the relationship to previous minds that education entails. Can someone think of a better one?)
Peter Greer
Scott Adams
This is an incredibly important life skill.
--Delmore Schwartz, "Calmly We Walk Through This April's Day"; quoted by Mike Darwin on the GRG ML
'Then he posed a question that, obvious as it seems, had not really occurred to me: “What makes you think that UFOs are a scientific problem?”
I replied with something to the effect that a problem was only scientific in the way it was approached, but he would have none of that, and he began lecturing me. First, he said, science had certain rules. For example, it has to assume that the phenomena it is observing are natural in origin rather than artificial and possibly biased. Now the UFO phenomenon could be controlled by alien beings. "If it is," added the Major, "then the study of it doesn't belong to science. It belongs to Intelligence." Meaning counterespionage. And that, he pointed out, was his domain.
“Now, in the field of counterespionage, the rules are completely different.” He drew a simple diagram in my notebook. “You are a scientist. In science there is no concept of the ‘price’ of information. Suppose I gave you 95 per cent of the data concerning a phenomenon. You’re happy because you know 95 per cent of the phenomenon. Not so in intelligence. If I get 95 per cent of the data, I know that this is the ‘cheap’ part of the information. I still need the other 5 percent, but I will have to pay a much higher price to get it. You see, Hitler had 95 per cent of the information about the landing in Normandy. But he had the wrong 95 percent!”
"Are you saying that the UFO data we use to compile statistics and to find patterns with computers are useless?" I asked. "Might we be spinning our magnetic tapes endlessly discovering spurious laws?"
“It all depends on how the team on the other side thinks. If they know what they’re doing, there will be so many cutouts between you and them that you won’t have the slightest chance of tracing your way to the truth. Not by following up sightings and throwing them into a computer. They will keep feeding you the information they want you to process. What is the only source of data about the UFO phenomenon? It is the UFOs themselves!”
Some things were beginning to make a lot of sense. “If you’re right, what can I do? It seems that research on the phenomenon is hopeless, then. I might as well dump my computer into a river.”
“Not necessarily, but you should try a different approach. First you should work entirely outside of the organized UFO groups; they are infiltrated by the same official agencies they are trying to influence, and they propagate any rumour anyone wants to have circulated. In Intelligence circles, people like that are historical necessities. We call them ‘useful idiots’. When you’ve worked long enough for Uncle Sam, you know he is involved in a lot of strange things. The data these groups get is biased at the source, but they play a useful role.
“Second, you should look for the irrational, the bizarre, the elements that do not fit...Have you ever felt that you were getting close to something that didn’t seem to fit any rational pattern yet gave you a strong impression that it was significant?”'
If UFOs are controlled by a non-human intelligence, assuming they'll behave like human schemes is as pointless as assuming they'll behave like natural phenomena. But of course the premise is false and the Major's approach is correct.
misattributed often to Plato
George Bernard Shaw
A luxury, once sampled, becomes a necessity. Pace yourself.
Andrew Tobias, My Vast Fortune
Unknown
Or thinks he's got better leverage than you.
This could be studied empirically.
Anton Lavey, The Satanic Bible, The Book of Satan II
--Professor Farnsworth, Futurama.