Rationality Quotes April 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
- No more than 5 quotes per person per monthly thread, please.
-- Scott Aaronson
Holy Belldandy, it sounds like someone located the player character. Everyone get your quests ready!
Woah, I'd better implement Phase One of my evil plan if it's going to be ready in time for the hero to encounter it.
Omg, what do I do?! I can't find my random encounter table!
Just act life-like!
Don't worry, you are the random encounter.
My bet is that the student had many digits of pi memorised and just used their parity.
I would have easily won that game (and maybe made a quip about free will when asked how...). All you need is some memorized secret randomness. For example, a randomly generated password that you've memorized, but you'd have to figure out how to convert it to bits on the fly.
Personally I'd recommend going to random.org, generating a few hexadecimal bytes (which are pretty easy to convert to both bits and numbers in any desired range), memorizing them, and keeping them secret. Then you'll always be able to act unpredictably.
Well, unpredictably to a computer program. If you want to be able to be unpredictable to someone who's good at reading your next move from your face, you would need some way to not know your next move before making it. One way would be to run something like an algorithm that generates the binary expansion of pi in your head, and delaying calculating the next bit until the best moment. Of course, you wouldn't actually choose pi, but something less well-known and preferably easier to calculate. I don't know any such algorithms, and I guess if anyone knows a good one, they're not likely to share. But if it was something like a pseudorandom bitstream generator that takes a seed, it could be shared, as long as you didn't share your seed. If anyone's thought about this in more depth and is willing to share, I'm interested.
http://blog.yunwilliamyu.net/2011/08/14/mindhack-mental-math-pseudo-random-number-generators/
When I need this I just look at the nearest object. If the first letter is between a and m, that's a 0. If it's between n and z, that's a 1. For larger strings of random bits, take a piece of memorized text (like a song you like) and do this with the first letter of each word.
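The letter trick above is easy to sketch in code. This is just an illustration of the scheme as described (first letter a-m means 0, n-z means 1, taken over the first letter of each word); the example text is an arbitrary choice, not anything canonical:

```python
# Map each word's first letter to a bit: a-m -> 0, n-z -> 1,
# as described in the comment above.
def letter_bits(text, split_after="m"):
    bits = []
    for word in text.split():
        first = word.lstrip("\"'(").lower()[:1]  # skip leading punctuation
        if first.isalpha():
            bits.append(0 if first <= split_after else 1)
    return bits

print(letter_bits("never gonna give you up"))  # n,g,g,y,u -> [1, 0, 0, 1, 1]
```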
There's an easier way: look at the time.
Seconds are even? Type 'f'. Odd? Type 'd'. (Or vice-versa. Or use minutes, if you don't have to do this very often.)
A while ago there was an article (in NYTimes online, I think) about a program that could beat anyone in Rock-Paper-Scissors. That is, it would take a few iterations, and learn your pattern, and do better than chance against you.
It never got any better than chance against me, because I just used the current time as a PRNG.
Edit: Found it. http://www.nytimes.com/interactive/science/rock-paper-scissors.html?_r=0
Edit2: Over 25 rounds, 12-6-7 (win-loss-tie) vs. the "veteran" computer. Try it and post your results! :)
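The "current time as PRNG" tactic amounts to something like the following sketch (the mapping from seconds to throws is an arbitrary illustrative choice). The point is that the opponent's pattern-matcher can only see your past throws, not your clock:

```python
import time

# Pick a rock-paper-scissors throw from the seconds field of the clock,
# which a pattern-learning opponent cannot observe or predict.
def throw_from_clock(now=None):
    seconds = int(now if now is not None else time.time()) % 60
    return ("rock", "paper", "scissors")[seconds % 3]

print(throw_from_clock(125))  # 125 % 60 = 5, 5 % 3 = 2 -> "scissors"
```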
Over 12 rounds against the veteran computer, I managed 5-4-3, just trying to play "counterintuitively" and play differently from how I expected the players whose information it aggregated would play.
Not enough repetitions to be highly confident that I could beat the computer in the long term, but I stopped because trying to be that counterintuitive is a pain.
Got 7-6-7 with the same tactic. Apparently the computer only looks at the last 4 throws, so as long as you're playing against Veteran (where your own rounds will be lost in the noise), it should be possible for a human to learn "anti-anti-patterns" and do better than chance.
19-18-13 over 50 rounds against the veteran, without using any external RNG, by looking away and thinking of something else so that I couldn't remember the results of previous rounds. (My after-lunch drowsiness probably helped.)
That'll be almost independent but not unbiased: I think that a-m will be more frequent than n-z. However, you could do the von Neumann trick: if you have an unfair coin and want a fair sequence of bits, take the first and second flips. HT is 0, TH is 1, and if you get HH or TT, check the third and fourth flips. Etc.
I just looked up the letter frequencies and it's 52% for a-m and 48% for n-z (for the initial letters of English words). Using 'l' instead of 'm' gives a 47/53 split, so 'm' is at least the best letter to use.
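Von Neumann's trick is simple enough to verify directly. A minimal sketch, using a simulated 75% biased coin to make the debiasing visible (the bias value and seed are arbitrary choices for the demonstration):

```python
import random

# Von Neumann debiasing: pair up biased coin flips, emit 0 for HT and
# 1 for TH, discard HH and TT. Output bits are unbiased as long as the
# flips are independent with a fixed bias.
def von_neumann(flips):
    out = []
    for a, b in zip(flips[::2], flips[1::2]):
        if a != b:
            out.append(0 if (a, b) == ("H", "T") else 1)
    return out

rng = random.Random(0)
biased = ["H" if rng.random() < 0.75 else "T" for _ in range(10000)]
fair = von_neumann(biased)
print(sum(fair) / len(fair))  # close to 0.5 despite the 75% bias
```

The price of fairness is throughput: the more biased the source, the more pairs get discarded.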
[Aside] When do you need to generate random numbers in your head? I can think of literally no time when I've needed to.
If you have to make a close decision and don't have a coin to flip. Or at a poker tournament if you don't trust your own ability to be unpredictable.
Daniel Kahneman,Thinking, Fast and Slow
As far as I can tell this doesn't agree with my experience; a good chunk of every day is spent in groping uncertainty and confusion.
Unless you took John Leslie's advice and Ankified the multiplication table up to 25.
I've read your link to John Leslie with both curiosity and bafflement.
17 x 24 is not perhaps the best example of a question for which no answer comes immediately to mind. Seventeen has the curious property that 17 x 6 = 102. (The recurring decimal 1/6 = 0.166666... hints to us that 17 x 6 = 102 is just the first of a series of near misses on a round number, 167 x 6 = 1002, 1667 x 6 = 10002, etc). So multiplying 17 by any small multiple of 6 is no harder than the two times table. In particular 17 x 24 = 17 x (6 x 4) = (17 x 6) x 4 = 102 x 4 = 408.
17 x 23 might have served better, were it not for the curious symmetry around the number 20, with 17 = 20 - 3 while 23 = 20 + 3. One is reminded of the identity (x + y)(x - y) = x^2 - y^2 which is often useful in arithmetic and tells us at once that 17 x 23 = 20 x 20 - 3 x 3 = 400 - 9 = 391.
17 x 25 has a different defect as an example, because one can hardly avoid apprehending 25 as one quarter of 100, which stimulates the observation that 17 = 16 + 1 and 16 is full of yummy fourness. 17 x 25 = (16 + 1) x 25 = (4 x 4 + 1) x 25 = 4 x 4 x 25 + 1 x 25 = 4 x 100 + 25 = 425.
17 x 26 is a better example. Nature has its little jokes. 7 x 3 = 21, therefore 17 x 13 = (1 + 7) x (1 + 3) = (1 + 1) + 7 x 3 = 2 + 21 = 221. We get the correct answer by outrageously bogus reasoning. And we are surely puzzled. Why does 21 show up in 17 x 13? Aren't larger products always messed up and nasty? (This is connected to 7 + 3 = 10). Anyone who is in on the joke will immediately say 17 x 26 = 17 x (13 x 2) = (17 x 13) x 2 = 221 x 2 = 442. But few people are.
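All of the shortcuts above are mechanical enough to check in a couple of lines, which is itself a decent habit when learning them:

```python
# Sanity checks for the mental-arithmetic shortcuts discussed above.
assert 17 * 6 == 102                       # the "near miss on a round number"
assert 17 * 24 == 102 * 4 == 408           # 24 = 6 x 4
assert 17 * 23 == 20 * 20 - 3 * 3 == 391   # (x+y)(x-y) = x^2 - y^2
assert 17 * 25 == 4 * 100 + 25 == 425      # 17 = 16 + 1, 25 = 100 / 4
assert 17 * 13 == 221 and 17 * 26 == 221 * 2 == 442
print("all shortcuts check out")
```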
Some people advocate cultivating a friendship with the integers. Learning the multiplication table, up to 25 times 25, by the means exemplified above, is part of what they mean by this.
Others, full of sullen resentment at the practical usefulness of arithmetic, advocate memorizing one's times tables by the grimly efficient deployment of general-purpose techniques of rote memorization such as the Anki deck. But who in this second camp sees any need to go beyond ten times ten?
Does John Leslie have a foot in both camps? Does he set the twenty-five times table as the goal and also indicate rote memorization as the means?
I'm not sure exactly what he had in mind, but learning the multiplication tables using Anki isn't exactly rote.
Now, this may not be the case for others, but when I see a new problem like 17 x 24, I don't just keep reading off the answer until I remember it when the note comes back around. Instead, I try to answer it using mental arithmetic, no matter how long it takes. I do this by breaking the problem into easier problems (perhaps by multiplying 17 x 20 and then adding that to 17 x 4). Sooner or later my brain will simply present the answers to the intermediate steps for me to add together and only much later do those steps fade away completely and the final answer is immediately retrievable.
Doing things this way, simply as a matter of course, you develop somewhat of a feel for how certain numbers multiply and develop a kind of "friendship with the integers." Er, at least, that's what it feels like from the inside.
That's not the important point. Even if you have, you will still face the same problem when facing a question like, for example, say 34 × 57 = ?. The quote was using that particular problem as an example. If that example does not apply to you because you Ankified the multiplication table up to 25 or for any other reason, it is trivial to find another problem that gives the desired mental response. (As I just did with the 34 × 57 problem.)
Agreed. I'm not so much disagreeing with the thrust of the quote as nitpicking in order to engage in propaganda for my favorite SRS.
Of course, even if I have no complete answer to 34 × 57, I still have "intuitive feelings and opinions" about it, and so do you. For example, I know it's between 100 and 10000 just by counting the digits, and although I've just now gone and formalized this intuition, it was there before the math: if I claimed that 34 × 57 = 218508 then I'm sure most people here would call me out long before doing the calculation.
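The digit-counting intuition above generalizes: an m-digit number times an n-digit number always lies in [10^(m+n-2), 10^(m+n)). A small sketch of that bound (the function name is just illustrative):

```python
# Bound a product by counting digits: an m-digit times an n-digit number
# is at least 10^(m+n-2) and strictly less than 10^(m+n).
def product_bounds(a, b):
    m, n = len(str(a)), len(str(b))
    return 10 ** (m + n - 2), 10 ** (m + n)

lo, hi = product_bounds(34, 57)
print(lo, hi)                 # 100 10000
assert lo <= 34 * 57 < hi     # 1938 is in range
assert not (lo <= 218508 < hi)  # 6 digits: implausible on sight
```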
What has this got to do with the original quote? The quote was claiming, truthfully or not, that when one is first presented with a certain type of problem, one is dumbfounded for a period of time. And of course the problem is solvable, and of course even without calculating it you can get a rough picture of the range the answer is in, and with a certain amount of practice one can avoid the dumbfoundedness altogether and move on to solving the problem, and that is a fine response to give to the original quote, but it has no relevance to what I was saying.
All I was saying is that it is invalid to object to the quote on the grounds that a certain technique avoids the specific example the quote gives, since that example could easily have been replaced by a similar example which that technique does not solve. I was talking about that specific objection; I was not saying the quote is perfect, or even that it is entirely right. You may raise these other objections to it. But the specific objection that Jayson_Virissimo raised happens to be entirely invalid.
-Allen Knutson on collaborating with Terence Tao
(meta)
Saith the linked site: “You must sign in to read answers past the first one.”
Well, that's obnoxious.
If it's any consolation, none of the answers past the first one on this question are very good.
At that point I'd start wondering why there doesn't appear to be a simple proof. For example, maybe some kind of generalization of the result is false and you need the complexity to "break the correspondence" with the generalization.
-Same place
"The peril of arguing with you is forgetting to argue with myself. Don’t make me convince you: I don’t want to believe that much."
The others are quite nice too: http://www.theliteraryreview.org/WordPress/tlr-poetry/
Something a Chess Master told me as a child has stuck with me:
-- Robert Tanner
-- Jake the Dog (Adventure Time)
For reference purposes: video clip; episode transcript.
Which is of course a different question to "What should I do to get good at Chess?" which is all about deliberate practice with a small proportion of time devoted to playing actual games.
Right, I often play blitz games for an hour a day weeks on end and don't improve at all. Interestingly, looking at professional games, even if I don't bother to calculate many lines, seems to make me slightly better; so there are ways to improve without deliberate practice, but playing blitz doesn't happen to be one of them. Playing standard time controls does work decently well though, at least once you can recognize all the dozen or so main tactics.
Playing a lot isn't as good as deliberate practice, but it's better than having done neither.
This seems incontrovertible.
- K'ung Fu-tzu
The 'imitation' part is appropriately meta for a quote page.
I'd like to imagine that it's the blurb he put on the back of his own book: "I've done the reflection (noble!); buy now and you can get the benefit -- it's easy! -- or you can go stumbling off without the benefit of my wisdom like a sucker."
-- Albert Einstein
At least sometimes the formulation is far easier than the solution.
This is definitely true. General class of examples: almost any combinatorial problem ever. Concrete example: the Four Colour Theorem
In my experience it can often turn out that the formulation is more difficult than the solution (particularly for an interesting/novel problem). Many times I have found that it takes a good deal of effort to accurately define the problem and clearly identify the parameters, but once that has been accomplished the solution turns out to be comparatively simple.
Hmm. Einstein is perhaps most famous for "discovering" special relativity. But he neither formulated the problem, nor found the solution (I think the Lorentz transformation was already known to be the solution), but reinterpreted the solution as being real.
His "greatest error" was introducing the cosmological constant into general relativity--curiously, making a similar error to what everyone else had made when confronted with the constancy of the speed of light, which was refusing to accept that the mathematical result described reality.
Do you have an original source for that? All I can find is various quotation sites, which contain so many other things that Einstein allegedly said that I feel sceptical.
Nope, and I don't recall where I saw it attributed to him originally. (I did check by Googling it, but you're right that that only confirms that it's often attributed to him.)
Following the chain, I came across:
Source, with the addition later of 'expect to read a lot of sentences like this in coming years.'
─James C. Scott, Seeing Like a State
-Kafka, A Little Fable
Moral: Just because the superior agent knows what is best for you and could give you flawless advice, doesn't mean it will not prefer to consume you for your component atoms!
My problem with this is, that like a number of Kafka's parables, the more I think about it, the less I understand it.
There is a mouse, and a mouse-trap, and a cat. The mouse is running towards the trap, he says, and the cat says that to avoid it, all he must do is change his direction and eats the mouse. What? Where did this cat come from? Is this cat chasing the mouse down the hallway? Well, if he is, then that's pretty darn awful advice, because if the cat is right behind the mouse, then turning to avoid the trap just means he's eaten by the cat, so either way he is doomed.
Actually, given Kafka's novels, so often characterized by double-binds and false dilemmas, maybe that's the point: that all choices lead to one's doom, and the cat's true observation hides the more important observation that the entire system is rigged.
('"Alas", said the voter, "at first in the primaries the options seemed so wide and so much change possible that I was glad there was an establishment candidate to turn to to moderate the others, but as time passed the Overton Window closed in and now there is the final voting booth into which I must walk and vote for the lesser of two evils." "You need only not vote", the system told the voter, and took his silence for consent.')
On the other hand, it's a perfectly optimistic little fable if you simply replace the one word "trap" with the word "cat".
This is much better than my moral.
I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at first, there comes a day of reckoning. When the false model finally collides with reality, reality wins.
The truth is that reality contains many horrors. And they are much harder to escape from a narrow corridor that cuts off most possible avenues for retreat.
—Richard Hamming
(I recommend the whole talk, which contains some great examples and many other excellent points.)
I think the thing that strikes me most about this talk is how different science was then versus now. For one small example he was asked to comment on the relative effectiveness of giving talks, writing papers and writing books. In today's world its not a question anyone would ask, and the answer would be "write at least a few papers a year or you won't keep your job."
--Chris Brogan on the Sunk Cost Fallacy
If there is another one next door, maybe. If it is much farther than that the menu would have to be fairly bad.
... if there is a sufficiently convenient alternative and the difference is significant.
I think you are using settle in its more precise meaning (i.e. release a legal claim), which is not consistent with the colloquial usage. Colloquially, "settle" is often used as the antonym of "take reasonable risks."
Similarly, I think the difference between "don't like the menu" and "fairly bad" is hairsplitting for someone who would find this level and type of advice useful. In just about any city, the BATNA is "travel to another place to eat, getting no further from your home than you were at the first place." And that's a pretty good alternative. I think the quote correctly asserts that the alternative is underrated.
Whereas I assert that the quote advocates premature optimization: it distracts from actual cases of the sunk cost fallacy by warning against things that often just are not worth fixing.
But regardless of whether we believe our own positions are inviolable, it behooves us to know and understand the arguments of those who disagree. We should do this for two reasons. First, our inviolable position may be anything but. What we assume is true could be false. The only way we’ll discover this is to face up to evidence and arguments against our position. Because, as much as we may not enjoy it, discovering we’ve believed a falsehood means we’re now closer to believing the truth than we were before. And that’s something we should only ever feel gratitude for.
Aaron Ross Powell, Free Thoughts
This is why steelmanning is a really good community norm. Social incentives for understanding the other's position are usually bad, but if people give credit for steelmanning, these incentives are better.
"Steelmanning" and "understanding the other's position" aren't really related (to my knowledge).
It's difficult to steelman someone's position if I don't understand it.
― David Lamb & Susan M. Easton, Multiple Discovery: The pattern of scientific progress, pp. 100-101
Columbus's "genius" was using the largest estimate for the size of Eurasia and the smallest estimate for the size of the world to make the numbers say what he wanted them to. As normally happens with that sort of thing, he was dead wrong. But he got lucky and it turned out there was another continent there.
Wait... he did that on purpose?
Yes, actually. He believed the true dimensions of the Earth would conform to his interpretation of a particular Bible verse (two-thirds of the earth should be land, and one-third water, so the Ocean had to be smaller than believed) and fudged the numbers to fit.
Ah, OK. I had taken DanielLC to be implying that he had fudged the numbers in order to convince the Spanish queen to fund him.
Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2.
The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.
Perhaps Columbus's "genius" was simply to take action. I've noticed this in executives and higher-ranking military officers I've met-- they get a quick view of the possibilities, then they make a decision and execute it. Sometimes it works and sometimes it doesn't, but the success rate is a lot better than for people who never take action at all.
Executives and higher ranking military officers also happen to have the power to enforce their decisions. Making decisions and acting on them can be possible without that power but the political skill required is far greater, the rewards lower, the risks of failure greater and the risks of success non-negligible.
-Paul Graham
--Paul Graham, same essay
Source
--Pirates of the Caribbean
The pirate-specific stuff is a bit extraneous, but I've always thought this scene neatly captured the virtue of cold, calculating practicality. Not that "fairness" is never important to worry about, but when you're faced with a problem, do you care more about solving it, or arguing that your situation isn't fair? What can you do, and what can't you do? Reminds me of What do I want? What do I have? How can I best use the latter to get the former?
That said, if I recognize that I'm in a group that values "fairness" as an abstract virtue, then arguing that my situation isn't fair is often a useful way of solving my problem by recruiting alliances.
If you're in a group where "that's not fair" is frequently a winning argument, you may already be in trouble.
I am in many groups where, when choosing between two strategies A and B, fairness is one of the things we take into account. I'm not sure that's a problem.
If it's a frequently-occurring observation within the group then yes, there seems to be something wrong. Possibly because things are regularly proposed and acted on without considering fairness until someone has to point it out.
If it hardly ever has to be said, but when pointed out, it is often persuasive, you're probably OK.
Frankly this is precisely the kind of ruthless pragmatism that gives utilitarians such a horrible reputation.
Well, it certainly didn't stop Jack Sparrow from being a beloved character.
You can be ruthless and popular, if you're sufficiently charismatic about it.
It also helps to be fictional, or at least sufficiently removed from the target audience that they perceive you in far mode.
I'd say that it's possible to be ruthless and popular even among people who're familiar with you, as long as you keep your ruthlessness in far mode for the people you're attempting to cultivate popularity amongst. Business executives come to mind, and the more cutthroat strains of social maneuverers.
From Boswell's Life of Johnson. HT to a commenter on the West Hunter blog.
If each person counts as one for each time he dines, Alexander can only claim to have personally hosted the guests at his most recent meal; the others were guests of someone else.
I think the idea is that all of the people are him.
quick math
I used to dine with 1460 people a year in my home, reckoning each person as one each time I dined there.
Families of four are mighty terrifying, aren't they?
-- Scott Aaronson on areas of expertise
If the atheists want to win me over, then the way for them to do so is straightforward: they should ignore me, and try instead to win over the theology community, bishops, the Pope, pastors, denominational and non-denominational bodies, etc., with superior research and arguments.
Not that I don't think this is a fair counterpoint to make, but in my own experience trying to find the best arguments for religion, I learned a lot more and got better reasoning talking to random laypeople than by asking priests and theologians.
Of course, the fact that I talked to a lot more laypeople than priests and theologians is most likely the determining factor here, but my experiences discussing the nature and details of climate change have not followed a similar pattern at all.
-- Scott Aaronson in the next paragraph
I can't think of a reply to this that won't start a game of reference class tennis; but I think there's a possibility that Aaronson's list is a more complete set of the relevant experts on the climate than your list is of the relevant experts on the existence of deities. If we grant the existence of deities, and merely wish to learn about their behavior; your list would be analogous to Aaronson's.
Both lists end with “etc.”, so I have trouble calling either of them incomplete.
I think "etc." is a request to the reader to be a good classifier--simply truncating the list at "etc." is overfitting, and defeats the purpose of the "etc." Contrariwise, construing "etc." to mean "everything else, everywhere" is trying to make do with fewer parameters than you actually need. The proper use of "etc." is to use the training examples to construct a good classifier, and flesh out members of the category by lazy evaluation as needed.
It's not a reasonable presumption that "etc." will cover "any arbitrary thing that happens to make trouble for your counterargument".
Just so I'm clear: do you believe the theology community ("bishops, the Pope, pastors, denominational and non-denominational bodies, etc.") is as reliable an authority on the nature and existence of the thing atheists don't believe in as the academic climatology community is on the nature and existence of the thing climate skeptics don't believe in?
If so, then this makes perfect sense.
That said, my experience with both groups doesn't justify such a belief.
Well, no. You're an atheist. I'm sure a Christian climate skeptic would agree with you, with the terms reversed.
That is, a Christian climate skeptic would claim that their experience with both groups doesn't justify the belief that the academic climatology community is as reliable an authority as the theology community?
In a trivial sense I agree with you, in that there's all sorts of tribal signaling effects going on, but not if I assume honest discussion. In my experience, strongly identified Christians believe that most theologians are unreliable authorities on the nature of God.
Indeed, it would be hard for them to believe otherwise, since most theologians don't consider Jesus Christ to have been uniquely divine.
Of course, if we implicitly restrict "the theology community" to "the Christian theology community," as many Americans seem to, then you're probably right for sufficiently narrow definitions of "Christian".
If atheists really thought that theists believed just because the pastors did, then targeting the pastors would seem to be the best way to go about it, yes. Either by attacking their credibility or attempting to convince them otherwise/attack the emotional basis of their faith. Even if the playing field was uneven and the pastors were actually crooked, there just wouldn't be any gain in going after the believers as individuals.
-- Isaac Asimov
For some interesting exceptions to this quote, see Bostrom on Information Hazards.
This may not be strictly true. Consider the basilisk.
I have, and I've come to the conclusion that Eliezer's solution, i.e., suppress all knowledge of it, won't actually work.
Agreed, but I think the exceptions are very few.
Winston Rowntree, Non-Bullshit Fables
On the meta-level, I'm not sure "quickness beats persistence" is a helpful lesson to teach. At the scale of things many LessWrongers would hope to help accomplish, both qualities are prerequisites, and it would be a mistake to believe that you don't have to worry about the latter just because you're one of the millions of people who are 99.9th percentile at the former.
On the base level, a non-bullshit version of this fable would look more like "There once was a hare being passed by a tortoise. Neither of them could talk. The end."
Now that you mention it, a fable, by definition, requires bullshit.
I've always thought there should be a version where the hare gets eaten by a fox halfway through the race, while the tortoise plods along safely inside its armored mobile home.
http://abstrusegoose.com/494
Link.
On a similar note, there's http://www.thisamericanlife.org/radio-archives/episode/463/transcript - search for "Act Two".
"Moral: life is inarguably a depressingly unfair endeavor."
FTFY:
-- Steve Jobs
Longer version from here
—Steve Jobs, interviewed in Fortune, March 7, 2008
Focusing is about saying no long enough to get into flow, or at least some kind of mental state where your short-term memory doesn't constantly evaporate. If you have to say no all the time, you'll wind up twenty hours later having written six lines and with a head full of jelly.
Without context I'm tempted to say focusing is about a whole bunch of things and that telling people to say no is just another way of saying, 'Use your willpower.' Which is another way of saying 'Focus by focusing!' Which... seems rather recursive at least.
One of the things that focusing is about is giving up pursuing good things.
Which means that if I want to focus, I need to decide which good things I'm going to say "no" to.
This may seem obvious, but after seeing many not-otherwise-stupid management structures create lists of "priorities" that encompass everything good (and consequently aren't priorities at all), I'm inclined to say that it isn't as obvious as it may seem.
Let us say you have a paper to write but you also want to go to a party. While trying to write the paper, you could keep wondering whether you should stop writing the paper and just go to the party, but keep writing anyway, i.e. try to use willpower. Or you could decide, once and for all that you are not going to go to the party, which is saying no. I think the second approach will be more effective in getting the paper done. So, I think there is actually a difference.
Now, of course the insight isn't profound, and both folk and professional psychology have known it for some time (I can't find a good link off-hand). But when a successful, high-status person who has achieved a lot says it, that lends it a whole lot more credibility.
I feel like it's more about saying "yes" with enthusiasm.
--Daniel Steinberg, The Cholesterol Wars, 2007, Elsevier Press, p. 89
--Matt Dillahunty
I wonder if somebody, looking at (a) his stated goal and (b) his behaviour, would consider his statement borne out. (Same goes for me, no offense to Dillahunty specifically).
Charles P Kindleberger, in Manias, Panics and Crashes; a History of Financial Crisis
I imagine that thanks to Bitcoin, a few of us can feel this quote acutely, in our guts.
--Charlie Munger
<not serious>
“You can catch more flies with honey than vinegar.” “You can catch even more with manure; what's your point?”
--Sheldon Cooper from The Big Bang Theory
</not serious>
That's actually an insightful analogy regarding human social politics.
The Stockholm syndrome says otherwise.
That link isn't clear to me. Could you please elaborate?
-- Thomas Szasz
-- Garret Jones
Could you give an example of goal-based advice that's always right?
Sure. From the same post:
-- John Stuart Mill, autobiography
For what it's worth, personal experience tells me otherwise.
I've found that thinking about something outside yourself (and thus not your own happiness) makes lots of people less depressed, and somewhat happy. However, the last sentence is clearly false, as many anecdotal reports of "I'm so happy!" show. Maybe it works that way for some people?
Parfit, On What Matters, Vol. 2 (pp. 616-620).
Parfit, quoted in ”How To Be Good” by Larissa MacFarquhar. PDF
--Charlie Munger
Joe Pyne was a confrontational talk show host and amputee, which I say for reasons that will become clear. For reasons that will never become clear, he actually thought it was a good idea to get into a zing-fight with Frank Zappa, his guest of the day. As soon as Zappa had been seated, the following exchange took place:
Of course this would imply that Pyne is not a featherless biped.
Source: Robert Cialdini's Influence: The Psychology of Persuasion
Ken Jennings
Punster: go on a hunting trip with Mick Jagger.
Double punster: it's hunting season for Jimmy Page's former band.
Nice, but how is this a rationality quote? Is there some allegory that I'm missing?
A funny example of seeing with fresh eyes, I guess?
Um, be creative? 11 upvotes.
-- Warren Buffett
I have no idea whether this is true of Darwin, but it still might be good advice.
See here.
-To The Stars
Source: http://bladekindeyewear.tumblr.com/post/47462509182/but-where-exactally-will-this-backdoor-out-the-felt
-Paul Graham
I like the sentiment, but Paul Graham seems to be claiming that information hazards don't exist, and that doesn't appear to be true.
Despite agreeing with the rest of the essay (which is very good), this is not true. Tiresomely standard counter-example: "Heil Hitler! No, there are no Jews in my attic."
I would say this is not ALWAYS true. But for the purpose of civilized discussion between human beings, it does seem like a very useful rule of thumb.
Substitute "statement" with "belief".
Sorry, I don't understand. I believe there are Jews in my attic, but this belief should be suppressed, rather than spread.
This seems like fallacy of the excluded middle. Suppressed and spread are not the only two options.
If the Nazi starts to believe it, you should suppress that belief (probably by acting innocuously, but if suppressing it violently would work better, you should do that instead).
I like the sentiment. I disagree that it is (always) the worst you can say about it. And there are also true things that are actively constructed to be misleading; I certainly go about suppressing those where possible and plan to continue.
Wouldn't explaining why the statement is misleading be more productive than suppressing the misleading statement?
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 1921
What does this mean?
In context, this is said right before the battle of Agincourt and Henry V is reminding his troops that the only thing left for them to do is to prepare their minds for the coming battle (where they are horribly outnumbered). I guess the rationality part is to remember that sometimes we must make sure to be in the right mindset to succeed.
I've always seen that whole speech as a pretty good example of reasoning from the wrong premises: Henry V argues that God will decide the outcome of the battle, so if given the opportunity to have more Englishmen fighting alongside them, he would choose to fight without them, since then he gets more glory for winning a harder fight, and if they lose, fewer will have died. Of course he doesn't take this to its logical conclusion and go out to fight alone, but I guess Shakespeare couldn't have pushed history quite that far.
A good 'dark arts' quote from that speech might be when he offers to pay anyone's fare back to England if they leave then. After that, anyone thinking of deserting will be trapped by their sunk costs into staying - but maybe that's not what Shakespeare had in mind...
The quote struck me as a poetic way of affirming the general importance of metacognition - a reminder that we are at the center of everything we do, and therefore investing in self improvement is an investment with a multiplier effect. I admit though this may be adding my own meaning that doesn't exist in the quote's context.
Rewatching Branagh's version recently, I keyed in on a different aspect. In his speech, Henry describes in detail all the glory and status the survivors of the battle will enjoy for the rest of their lives, while (of course) totally downplaying the fact that few of them can expect to collect on that reward. He's making a cost/benefit calculation for them and leaning heavily on the scale in the process.
Contrast with similar inspiring military speeches:
William Wallace says, "Fight and you may die. Run and you may live...for a while. And dying in your beds, many years from now, would you be willin' to trade ALL the days, from this day to that, for one chance, just one chance, to come back here and tell our enemies that they may take our lives, but they'll never take our freedom!" He's saying essentially the same thing as Henry, but framing it as a loss instead of a gain. Where Henry tells his soldiers what they'll gain from fighting, Wallace tells them what they'll lose if they don't. Perhaps it's telling that, unlike Henry, he doesn't get very specific. It might have been an opportunity for someone in the ranks to run a thought experiment: "What specific aspects of my life will be measurably different if we have 'freedom' versus if we don't have 'freedom'? What exactly AM I trading ALL the days for? And if I magically had that thing without the cost of potentially dying, what would my preferences be then?" Or to just notice their confusion and recognize that they were being loss-averse without being able to define exactly what they were averse to losing.
Meanwhile, Maximus tells his troops, "What you do in life echoes in eternity." He's more honest and direct about the probability that you're going to die, but also reminds you that the cost/benefit analysis extends beyond your own life, the implication being that your 'honor' (reputation) affects your placement in the afterlife and (probably of more consequence) the well-being of your family after your death. Life is an iterated game, and sometimes you have to defect (or cooperate?) so that your children get to play at all.
And lastly, Patton says, "No bastard ever won a war by dying for his country. He won it by making the other poor, dumb bastard die for his." He explicitly rejects the entire 'die for your country' framing and foists it wholly onto the enemy. It's his version of "The enemy's gate is down." He's not telling you you're not going to die, but at least he's not trying to convince you that your death is somehow a good or necessary thing.
When taken in this company, Henry actually comes across more like a villain. Of all of them, he's appealing to their desire to achieve rational interests in an irrational way without being at all upfront about their odds of actually getting what he's promising them.
― Halldór Laxness, Under the Glacier.
Before remembering the older definition of "incredible" that is presumably meant, I parsed this as "Like all great rationalists you believed in things that were twice as awesome as theology"; and thought "Only twice?".
What does this mean?
That on probabilistic or rational reflection one can come to believe intuitively implausible things that are as or more extraordinary than their theological counterparts. Or to mutilate Hamlet, that there are more things on earth than are dreamt of in heaven.
Most of quantum physics and relativity are certainly intuitively weirder than Jesus turning water into wine, self-replicating bread or a body of water splitting itself to create a passage.
I mean, our physics say it's technically possible to make machines that do all of this. Without magic. Using energy collected in space and sent to Earth using beams of light. Although we probably wouldn't use beams of light because that's inefficient.
...and then adjusted our senses of the 'incredible' accordingly, so that Special Relativity seemed less incredible, and God more so.
A sense of incredulity is not a belief, so it's not covered by those injunctions. A sense of wonder is both pleasant and good for mental health, and diverging too much from the average in deep emotional reactions carries a real cost in less accurate empathic modelling.
Well, I dunno: if you describe physics as a Turing machine program, à la Solomonoff induction, special relativity may well be more incredible than god(s), chiefly because Turing machines may well be unable to do exact Lorentz invariance but can do some kind of god(s), i.e. superintelligences. (Approximate relativity is doable, though.)
Even after looking the book up on Google, without context, I can't tell whether the rationalist being spoken of has gone astray through his reason, or has succeeded in finding the truth of something. But I am now interested in reading Laxness.
I am confused--upvoting this comment is a rejection of this website.
I doubt that Laxness means "rationalist" in the LW community sense. In philosophy, a rationalist is defined as distinct from an empiricist, as one who believes knowledge to be arrived at from a priori cogitation, as opposed to experience.
Scott Adams on evolution toward... what?
--Francis Bacon
Neither is necessarily or even usually true though, is it?
Necessarily, of course not. Usually, well, this is Francis Bacon, and so the intended meaning of the quote is more like "We can be more certain in the outputs of empiricism than we can be in the outputs of deductive argument beginning with intuitions or other a priori knowledge."
Howard Taylor - Schlock Mercenary
--Daniel Kahneman on the dichotomy between the self that experiences things from moment to moment and the self that remembers and evaluates experiences as a whole. (from Thinking, Fast and Slow )
Edgar Lawrence Smith, Common Stocks as Long Term Investments
Boswell's Life of Johnson (quoted in "Applied Scientific Inference", Sturrock 1994)
-Iain M. Banks, Look to Windward
Incidentally, Mr. Banks has been diagnosed with terminal cancer, and estimated to have a few months to live as of this post. Comments may be made on his website: http://friends.banksophilia.com/
The significant problems we face cannot be solved at the same level of thinking we were at when we created them.
-- Albert Einstein
Source? Wikiquote seems to think it's a misquote.
Isn't there a law or something stating that Einstein never said 99% of what's attributed to him? Or maybe that the accuracy of a quote's attribution is inversely proportional to the person's fame?
Well, it's unsurprising that misattributed quotes are more often attributed to famous people than to unknown people.
Thanks, FiftyTwo -- I just looked up the article you refer to, and it indicates that this may be a paraphrase of a longer quote. I heard this from Anthony Robbins; the quote is attributed to Einstein in some of his literature. It seems that the sentiment, if not the exact quote, is attributable to Einstein.
Whoops, forgot to promote this.