Rationality Quotes June 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
On lost purposes:
-- Tanya Khovanova
-- Terry Pratchett (on Nation)
--Razib Khan, The Erasmus Path in Science
-- James L. Sutter
-- Will Wilkinson
(I read it for the worldbuilding...)
Cicero, De Natura Deorum
Speaking the Truth in times of universal deceit is a revolutionary act. - George Orwell
--Marie Curie
Given that she died from overexposure to radiation, I'm not sure how seriously I can take this.
So the science gets done, and you make a neat quote, for the people who are still alive.
Well, now people who are in the know can avoid fear by knowing to avoid doing the stuff that she did. It's mostly the people who believe that radiation is dangerously little understood to whom it seems scary.
Of course, I'd have to say the quote is still incorrect. If I understand that I'm a prisoner of war who's going to be tortured to make my superiors want to ransom me more, I'm damn well going to be afraid.
But I still find "Now is the time to understand more, so that we may fear less" awfully uplifting.
--Nicolás Gómez Dávila, source
I liked the quote, once I figured out how all the negatives interacted with each other.
Tycho
Slava Akhmechet see also Enso and the rest
Deadpool
--Friedrich Nietzsche, The Gay Science #51
It seems to make the same point as the Parable of the Dagger.
(I.e.: logic games are fun and all, but don't expect things to work that way in the real world. Or: it's valuable to know the difference between intelligent thinking and smart-assery.)
--Eugene Gendlin
--Heijtz et al.
What's the significance of this?
Intestinal bacteria have an effect on the nervous system: they affect how we think and how we feel and how our mind develops. This is pretty recent science written by scientists about the function of our mind (or murine minds, at least). That makes it an interesting rationality quote, in my opinion.
It's interesting, all right, but I think it would likely be better received as a standalone Discussion post (ideally with some more context and expansion). The rationality quotes threads tend to be more for quotes directly about rationality or bias than quotes indirectly contributing to our potential understanding of the same.
I think it could make a pretty interesting Discussion post, and would pair well with some discussion of how becoming a cyborg supposedly makes you less empathic.
Serious question: is the cyborg part a joke? I can't tell around here.
Fair question! I phrased it a little flippantly, but it was a sincere sentiment - I've heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy, something to do with becoming detached from the physical world, and this ties in intriguingly with the scifi trope about cyborging being dehumanizing.
Really? If true, then that is fascinating... Can you link to any of the recent research, though?
EDIT: by popular demand. I'll be moving this to a discussion instead.
EDIT: the discussion thread is here
As in the attribution, I'm quoting from: Heijtz et al., Normal gut microbiota modulates brain development and behavior, 2011.
Here is a review paper.
See also the current special section of Science magazine, or Google Scholar.
Here's the abstract from The Relationship Between Intestinal Microbiota and the Central Nervous System in Normal Gastrointestinal Function and Disease:
Here are results from an RCT on humans with chronic fatigue syndrome
Philosophy Bro
Upvoted for introducing me to one of the funniest blogs I've ever seen. The ironic writing style is brilliant:
-The Catholic Encyclopedia
What makes that one most interesting is its source.
I suspect that if the source was a less unexpected one, say Albert Einstein or Carl Sagan, the quote would seem obvious and uninteresting to LWers and its karma score would be less than half what it is.
This makes perfect sense in terms of Bayesian reasoning. Unexpected evidence is much more powerful evidence that your model is defective.
If your model of the world predicted that the Catholic Church would never say this, well... your model is wrong in at least that respect.
Well, I would have upvoted such a quote no matter who it was by.
Yes, an interesting question is how many readers will update their opinion of the Catholic Church based on this.
I was not surprised by this, because I know many Catholics honestly try to be rational... of course only within the limits given by the Church.
They would have absolutely no problem with Bayesian updating; the only problem would be the Solomonoff prior. If you replace it by "the Catholic Church is always right" prior, you are free to update rationally on everything else and remain a good Catholic.
This is why Catholics don't have a problem to accept e.g. evolution, as long as someone can provide an explanation how evolution can be compatible with "the Catholic Church is always right". (A possible explanation could be e.g. that God created the first life forms; that evolution is a consequence of physical laws created by God, therefore any result of evolution is still indirectly created by God; and that humans are somehow an exception to this process, because even if their bodies are a result of evolution, they also have an immaterial soul created directly by God.)
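The point about priors can be made concrete with a toy Bayes update (a minimal sketch; the hypothesis names and numbers are invented for illustration): a hypothesis assigned prior probability 1 can never be dislodged no matter what evidence arrives, while everything else still updates normally.

```python
def bayes_update(prior, likelihoods):
    """Update a dict of hypothesis -> probability on one piece of evidence.

    `likelihoods` maps each hypothesis to P(evidence | hypothesis).
    """
    posterior = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# An open-minded prior over two hypotheses...
open_prior = {"doctrine_right": 0.5, "doctrine_wrong": 0.5}
# ...versus a dogmatic prior that assigns the doctrine probability 1.
dogmatic_prior = {"doctrine_right": 1.0, "doctrine_wrong": 0.0}

# Evidence that is 9x more likely if the doctrine is wrong.
evidence = {"doctrine_right": 0.1, "doctrine_wrong": 0.9}

print(bayes_update(open_prior, evidence))      # doctrine_right drops to 0.1
print(bayes_update(dogmatic_prior, evidence))  # doctrine_right stays at 1.0
```

The update rule itself is perfectly rational in both cases; the dogmatic prior simply leaves nothing for the evidence to move.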
-William M. Briggs
Brett Evill
The best similar cultural-relativity-based deduction I've read, as introduced by Wikipedia:
You can refrain from passing judgment yourself, but allow others to pass judgment.
For example, rocks are not judgmental.
It doesn't follow, from the fact that passing judgment on someone else's act of passing judgment on people is itself an act of passing judgment on people, that it is impossible not to pass judgment on people.
I'm also not quite clear on whether "passing judgment on" is denotatively the same or different from "judging." (I understand the connotative differences.)
All that said, for my own part, I want to be judged. I want to be judged in certain ways and not in others, certainly, and the possibility of being judged in ways I reject can cause me unhappiness, and I might even say "don't judge me!" as shorthand for "don't apply the particular decision procedure you're applying to judgments of me!" or as a non-truth-preserving way of expressing "your judgment of me upsets me!", but if everyone I knew were to give up having judgments of me at all, or to give up expressing them, that would be a net loss for me.
Thomas Hardy
Pretty sure most people would pick hallucinations over blindness. Easier to correct for.
Hallucinations are easier to correct for?
Hm.
So, I start out with an input channel whose average throughput rate is T1, and whose reliability is R1.
Case 1, I reduce that throughput to T2.
Case 2, I reduce the reliability to R2.
A lot seems to depend on T2/T1 and R2/R1.
From what I've gathered from talking to blind people, I'd estimate that T2/T1 in this case is ~.1. That is, sighted people have approximately an order of magnitude more input available to them than blind people. (This varies based on context, of course, but people have some control over their context in practice.)
Hallucinations vary. If I take as my example the week I was in the ICU after my stroke, I'd estimate that R2/R1 is ~.1. That is, any given input was about ten times more likely to not actually correlate to what another observer would see than it usually is.
Both of these estimates are, of course, pulled out of my ass. I mention them only to get some precision around the hypothetical, not as an assertion about what blindness and hallucination are like in the real world. If you prefer other estimates, that's fine.
Given those estimates... hm.
Both of them suck.
I think I would probably choose hallucination, in practice.
I think I would probably be better off choosing blindness.
False information is definitely more damaging than non-information, because in the best case scenario you ignore the false information. In less-than-best-case scenarios, you fail to ignore the false information and are actively misled.
Suppose there are 10 boxes, one of which contains cash.
If you could open the boxes and see which one had cash, you'd be in great shape. But if you can't, you obviously should prefer leaving all the boxes closed (blindness), rather than somehow seeing cash in box #7 even when it isn't there.
I think the only reason people would be tempted to choose hallucination is that hallucinations in real life are usually relatively mild and often correctible, whereas blindness can be total and intractable with present technology. So given the choice between schizophrenia and blindness, I probably would choose schizophrenia, because schizophrenia is treatable.
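The box example can be checked with a quick simulation (a minimal sketch; it assumes the worst case described above, where the hallucination always points at an empty box and is believed):

```python
import random

def pick_blind(n_boxes, cash_box):
    # No information: pick a box uniformly at random.
    return random.randrange(n_boxes)

def pick_hallucinating(n_boxes, cash_box):
    # Worst case: the hallucination always shows cash in an empty box,
    # and the chooser believes it.
    empty = [b for b in range(n_boxes) if b != cash_box]
    return random.choice(empty)

def win_rate(picker, trials=100_000, n_boxes=10):
    wins = 0
    for _ in range(trials):
        cash_box = random.randrange(n_boxes)
        if picker(n_boxes, cash_box) == cash_box:
            wins += 1
    return wins / trials

print(f"blind:         {win_rate(pick_blind):.3f}")         # about 0.10
print(f"hallucinating: {win_rate(pick_hallucinating):.3f}")
```

Blindness leaves you with the base rate of one in ten; a believed false signal can push you below it, all the way to zero in this worst case.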
One reason I would be tempted to choose hallucination over blindness is that hallucinations feel like knowledge, and blindness feels like lack of knowledge, and I'm more comfortable with the feeling of knowledge than I am with the feeling of the lack of knowledge.
--Alan Alda, in an interview at The Colbert Report, telling the story that gave rise to The Flame Challenge. It has been mentioned on LW before, but I thought it was worth posting it here as a perfect illustration of a Teacher's Password.
Patrick McKenzie, the guy who gets instrumental rationality on the gut level.
More from the same source:
FFS, how can people misremember who they voted for in an election with only two plausible candidates?
Wrong question. I'd say people who voted for the other guy remember, but aren't so eager to respond to surveys.
A large number of them may have not voted at all, but remember themselves doing so.
I suspect, with no data to back me up, that it is those who were ambivalent when they stepped into the polling booth who genuinely misremember. Others know they voted for the other guy, but want to be seen as one of the 'winners'.
Or the survey he's referring to is biased. Seems hard for it not to be... did they knock on doors all across the country? If it's based on mail or telephone responses, are people who voted for Obama more likely to respond to those?
Or, he's misquoting the survey. If you were testing the hypothesis that people misremember voting for the winner, wouldn't you sample a smaller area than the whole country, and then compare your results with the vote count from that area? Why would an experiment like that ever get a number meant to be compared with the whole country's votes?
There are many U.S. elections I have voted in where there were two candidates for an office and I couldn't tell you which one I voted for. Admittedly, no cases involving Presidential candidates; I'm usually pretty sure who I'm voting for in those cases.
I suspect, with no data to back me up, that the latter class contains many more people than the former. (If I were that ambivalent, I wouldn't vote for either major candidate at random; I would either vote for a minor candidate, or not vote at all. But I guess not everybody is like me.)
Obviously I need to figure out how to start charging for my website!
I will sing the praises of git and vim, but I didn't pay any money for them. He says extract a commitment, not necessarily a monetary commitment; I read half a book before I started using git, and vim took a lot of practice. So you could use more specialized terminology or something like that. git and vim are both very well-spoken of, and I probably wouldn't have bothered to learn them if they weren't. But I also don't bother to spend money on things that don't have a good reputation, if I haven't had experience with them already. So, either way, requiring a commitment from the user turns away a lot of them.
(I've never read your website)
I wonder if a donate button at the end of each article, tied with a question along the lines of "How valuable was the article you just read?", would be effective. (You could even set it up so that you can track the amount donated by article, and use that to guide future research- I'm not sure how effective that would be, since that depends on how many alternatives you have to pick from in considering new research topics.)
Well, I do have donation stuff set up; last week I moved the PayPal button from the very bottom, below the footnotes (where the Bitcoin address remains), to the left sidebar, to see if that would help. (So far it hasn't.)
A rating widget is a good idea; I'm messing around with some but I'm not seeing any really good ones hosted by third-parties (static site, remember).
I am completely undisciplined and I do this stuff as the whim takes me. A month ago I didn't expect to learn how to do meta-analyses and run a DNB meta-analysis and 2 weeks ago I wasn't expecting to do an iodine meta-analysis either; the day before Kiba hired me to write a Silk Road article, I wasn't expecting that either...
I do value your research and writings. I was thinking about offering to buy you a laptop because it sounded like you had an old POS that was hampering said research and writings, but then I decided that would be too weird.
I did have a POS, but in July 2010 I finally bit the bullet and bought a new Dell Studio 17 laptop that has since worked well for me. (The hard drive died a few months ago and I had to replace it, almost simultaneously with my external backup drive dying, which was very stressful, but Dell doesn't make the hard drives, so I write that off as an isolated incident.)
Ah, then I only need to buy you a 2-year Backblaze subscription; that's far cheaper.
I've had the impression that you've been selling yourself short for quite some time.
Maybe you can start by following Patrick's example and offering some of the choice data you collect and analyze to the people subscribing to your mailing list. You can also figure out who might be interested in the information you collect (a cool project in itself), and how much it would be worth to them.
Seek not to follow in the footsteps of men of old; seek what they sought. -Matsuo Basho, poet (1644-1694)
Seems like a good way to think of the "seek to succeed, not to be rational" idea.
Margaret Fuller, intoxicated by Transcendentalism, said, "I accept the universe," and Thomas Carlyle, told of the remark, supposedly said, "Gad, she’d better."
This depends on what is meant by "accept the universe". Does this mean that you're ready to deal with reality, or that you accept the way the universe currently is and aren't going to try to make it better?
Given Carlyle's general attitude towards Fuller, I suspect what he meant was that it's a good thing for the universe that Fuller accepts it, for otherwise the results might be bad for the universe.
found here
-- Carl Sagan, 1987 CSICOP Keynote Address
I don't think that the idea that politicians don't change their position has much basis in reality. There are a lot of people who complain about politicians flip-flopping.
When a politician speaks publicly, he usually isn't stating his personal decision but a position that's the consensus of the group for which he speaks. He might personally disagree with the position and try to change the consensus internally. It's still his role to be responsible for the position of the group to which he belongs. In the end, the voter cares about what the group of politicians does. What laws do they enact? Those laws are compromises, and the politicians stand for the compromise even when they personally disagree with parts of it.
A scientist isn't supposed to be responsible for the way his experiments turn out.
And if you take something like the Second Vatican Council there's even change of positions in religion.
Yes, politicians flip-flop, and they take heat for it. And religious organizations do revise their doctrines from time to time.
But they don't like to admit it. This shows itself most clearly in schisms, where it's obvious at least one party has changed its stance, yet both present the other side as the schismatic one (splitters).
Thus even though they have changed, they do not "update"--or they do, but then they retcon it to make it look like they've always done things this way. (Call it "backdating," not updating.) This is what the superstates do in 1984.
Coming up with real examples is trivial. Just find a group that has ever had a schism. That's basically every group you've heard of. Ones that come to mind: Marxists, libertarians, Christians, the Chinese Communist Party. Triggering issues for the above groups include the nature of revolution, the relationship between rights and welfare, the Trinity, the role of the state in the economy...
How many scientific papers contain the lines: "In the past the authors of this papers were wrong about X, but they changed their opinion because of Y"?
None, because journals are really careful about proof-reading.
Do you mean:
1) Because journals are really careful about proof-reading and there are no errors in journal articles?
2) Because journals are really careful about proof-reading, they delete every sentence where a scientist says that "I've been wrong in the past"?
3) Some other way in which careful proof-reading removes the possibility that "I've been wrong in the past" appears in a journal article?
It was grammar nitpicking. "The authors where wrong".
Inspiring, but not true.
In what respect is it not true? I've certainly observed it. I haven't observed it every day, but most scientists in the world are not under my observation.
If Sagan had actually looked for it happening in politics and religion, he'd have found plenty of examples. Especially in the latter.
If it really does happen in politics and religion at a comparable rate, then the quote is certainly misleading, but I rather doubt that that is the case. Sagan did not say that it never happens in politics or religion, only that he could not recall an instance.
Robert Anton Wilson, from an interview
I agree with Wilson's conclusions, though the quote is too short to tell if I reached this conclusion in the same way as he did.
Using several maps at once teaches you that your map can be wrong, and how to compare maps and find the best one. The more you use a map, the more you become attached to it, and the less inclined you are to experiment with other maps, or even to question whether your map is correct. This is all fine if your map is perfectly accurate, but in our flawed reality there is no such thing. And while there are no maps which state "This map is incorrect in all circumstances", there are many which state "This map is correct in all circumstances"; you risk the Happy Death Spiral if you use one of the latter. (I should hope most of your maps state "This map is probably correct in these specific areas, and it may make predictions in other areas but those are less likely to be correct".) Having several contradictory maps can be useful; it teaches you that no map is perfect.
This seems unfair. I have a map; it represents what I think the universe is like. Certainly it is not perfect, but if I thought a different one was better I would adopt it. There is a distinction between "this is correct" and "I don't know how to pick something more correct".
It depends what kind of maps. Multiple consistent maps are clearly a good thing (like switching from geometry to coordinates and back). Multiple inconsistent ad-hoc maps can be good if you have a way to choose which one to use when.
Wilson doesn't say which he means, I think he's guilty of imprecision.
I think he means that people choose not to think about any map but their favorite one ("their way of looking at reality is the only sane way of viewing the world"), to the point where they can't estimate the conditional probability P(E|¬A) of the evidence given not-A.
The link with Aristotle seems weak. But the problem obviously makes it harder to use "the logic of probability," as Korzybski called it, and Wilson well knew that Korzybski contrasted probability with classical "Aristotelian" logic. (Note that K wrote before the Bayesian school of thought really took off, so we should expect some imprecision and even wrong turns from him.)
Or you could always just average your inconsistent maps together, or choose the median value. Should work better than choosing a map at random.
Or accept that each map is relevant to a different area, and don't try to apply a map to a part of the territory that it wasn't designed for.
And if you frequently need to use areas of the territory which are covered by no maps or where several maps give contradictory results, get better maps.
Basically, keep around a meta-map that keeps track of which maps are good models of which parts of the territory.
Yeah, that should work.
"Most people have a wrong map, therefore we should use multiple maps" doesn't follow. Reversed stupidity isn't intelligence, and in this case Aristotle appears to have been right all along.
If I'm out charting the oceans, I'd probably need to use multiple maps because the curvature of the Earth makes it difficult to accurately project it onto a single 2D surface, but I do that purely for the convenience of not having to navigate with a spherical map. I don't mistake my hodge-podge of inaccurate 2D maps for the reality of the 3D globe.
Isn't “convenience” also the reason not to use the territory itself as a map in the first place? You know, knowing quantum field theory and general relativity isn't going to give you many insights about (say) English grammar or evolutionary psychology.
No, but your "hodge-podge of inaccurate 2D maps", while still imperfect, is more accurate than relying on a single 2-D map - which is the point I took from the original quote.
If you're favoring hedgehogs over foxes, you're disagreeing with luminaries like Robin Hanson and billionaire investors like Charlie Munger. There is, in fact, far more than one globe--the one my parents had marked out the USSR, whereas ones sold today do not; and on the territory itself you won't see those lines and colorings at all.
Some recent quotes post here had something along the lines of "the only perfect map is a 1 to 1 correspondence with everything in the territory, and it's perfectly useless."
Note that Google Maps can be described as "a hodge-podge of different maps"; a satellite map and a street map (and sometimes a 3D map if you use Google Earth), and using that hodge-podge is indeed more convenient than using one representation that tries to combine them all.
I know that you didn't mean hodge-podge in the same sense (you were talking of 3D-> 2D), but I think that Google Maps is a good illustration of how having different views of the same reality is useful.
"Rich people plan for three generations. Poor people plan for Saturday night." -- Gloria Steinem
The rest of her quotes are pretty good, too.
Here's what I don't like about that quote: It doesn't tell me which way the causation goes (or if it's feedback, or a lurking variable, or a coincidence). Does being rich make you plan better? Or does planning better make you rich?
-Eric Hoffer
-Charles Babbage
Only if you're using a consistent estimator. (Yes, that's a frequentist concept, but the same sorts of problems show up in a Bayesian context once you try to learn nonparametric models...)
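The point about consistency can be seen in a toy comparison (invented numbers; a minimal sketch): the sample mean is a consistent estimator, while "just take the first observation" is unbiased but inconsistent, so piling on data only helps the former.

```python
import random

def sample_mean(xs):
    # Consistent: the error shrinks as the sample grows.
    return sum(xs) / len(xs)

def first_observation(xs):
    # Unbiased but inconsistent: extra data is simply ignored.
    return xs[0]

random.seed(0)
true_mean = 3.0
for n in (10, 1_000, 100_000):
    xs = [random.gauss(true_mean, 1.0) for _ in range(n)]
    print(n,
          round(abs(sample_mean(xs) - true_mean), 4),
          round(abs(first_observation(xs) - true_mean), 4))
```

With a consistent estimator, "inadequate data" really does beat no data, and more data keeps helping; with an inconsistent one, collecting more observations buys you nothing.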
On the other hand:
A little learning is a dangerous thing; drink deep, or taste not the Pierian spring: there shallow draughts intoxicate the brain, and drinking largely sobers us again.
Alexander Pope
I'd heard that quote before, but this was the first time I recognized the referent for Mount Stupid.
A more general but less witty form of
–Fred Mosteller
You are a little late to that party.
How's about this?
— Andrew Gelman
-Roger Bacon
Andrew Vickers, What Is A P-Value, Anyway?
When I read the opening line I guessed he was going to go in the opposite direction - as Paul Graham probably would have.
I can see uses to both ways of simplifying one's relationship with the rest of the universe.
Aren't Graham and Dennett talking about different things entirely? Dennett is trying to help us understand better how materialism is compatible with having free will and a conscious self; his prescription here is to avoid a common pitfall, that of dismissing all "upwards" processing of perception and all "downwards" action-starting signals as "mechanical computing, not part of the self" and locating the Cartesian self at the zero-extension intersection of these two processes. It is better to think of the self as extended in both directions. When Graham says "keep your identity small", he is talking about a different sense of "identity" and "small", roughly "do not describe yourself with labels because you might become overly invested in them and lose objectivity and perspective".
I now want to make up bumper stickers that read "What Would Paul Graham Do?"
Granted, I want to do other things that preclude doing so even more.
Wanting to associate your identity with a person, in part because they have a very good argument for why you shouldn't associate your identity with things, and then doing something more important instead... there's something almost poetic or ironic about it.
Poetic? Nice call.
On the plus side, at least it indicates that they aren't so caught up in affiliation that they're unable to ignore his dogmas when following them isn't useful.
This is only tangentially related, but:
It's probably really important to notice when you feel a desire to signal affiliation with someone or something by purchasing paraphernalia or, e.g., getting a bumper sticker. Wanting to signal that you like something generally means that your identity has expanded to include that thing. This, of course, can be both a symptom and a cause of bias (although it isn't necessarily so). See also all this stuff. Or, more concisely: "I want to buy a bumper sticker/t-shirt/pinup calendar/whatever" should sound an alarm and prompt some introspection.
(I'm not trying to imply that you have a bias towards Paul Graham, just making a general statement.)
Looking briefly at a few sites specializing in custom bumper stickers, I estimate you could probably make and pay for some in half an hour to an hour. Do you want to do those other things that badly?
You know, it's actually a really good question.
I think what's true here, now that I'm considering it for more than five seconds, is that I don't actually want to do this at all, I just think it's a funny idea and wanted to share it, and I chose "I want to X" as a conventional way of framing the idea... a habit I should perhaps replace with "It would be funny to X" in the spirit of not misrepresenting my state to no purpose.
Yes, I figured as much. :)
Phil Plait, Don't Be A Dick (around 23:30)
The former is the most powerful method I know of for the latter. As elspood mentioned, it obviously isn't the victims in particular that will be persuaded.
Wouldn't killing be better described in this context as coercion? Which feels distinct from persuasion, to me.
On humans it does both. Humans are persuaded by power, not merely coerced. (Being persuaded like that is a handy 'hypocrisy' skill given bounded cognition.)
Voted up for the link to the video, which is a good explanation for why dumping hostility on people is not an effective method of convincing them.
FWIW, those that are 'hostile' don't generally believe they're going to convince the people they're being hostile to. They're after the peanut gallery; the undecided.
The effect on the peanut gallery is hard to track.
It's at least as likely that dumping hostility on outsiders is a way of maintaining group cohesion among those who have already identified themselves with the issue.
As you say, in-group signalling is a more likely explanation - hostility is widely unpersuasive to those who are actually undecided.
I don't think you can properly isolate these two strategies; there is a reason peace so frequently evolves into war: intelligent, rational people living in a peaceful time can frequently reach their goals most easily by creating a violent environment. Diplomacy is safer, easier, and generally something I prefer, but violence can influence many more people much faster.
-- Albert Einstein
Any fool can also make a simple theory to describe anything, provided he is willing to sweep disconfirming evidence under the rug.
Pearl S. Buck
Related.
Upvoted for the "related".
http://www.youtube.com/watch?v=I12H7khht7o&feature=player_embedded
Video by Fallon, a scientist who found out that he was a sociopath-- he says it doesn't bother him that everyone he knew said he was bad at connecting emotionally, but he does seem motivated to work on changing.
I really wish we had brain scans of this guy at 19 and at 25. I want to see which areas were developed!
Yes I can. Speak for yourself (Buck).
I read it more charitably, as being isomorphic to Schopenhauer's "A man can do as he wills, but not will as he wills." The idea is that you are feeling something and not something else, and regardless of what you are feeling you can and should do right.
The distinction may be between setting up the preconditions for a feeling (which has some chance of working) and trying to make a feeling happen directly (which I think doesn't work).
Making feelings happen directly isn't easy; it's a skill. Given the demographics of this website, there's a good chance that a lot of the readers can't control their feelings. Most of the people here are skilled at rationality, but not that skilled at emotional matters.
It's a bad idea to generalise your own inability to control your feelings to other people.
Can you describe the process of making feelings happen directly?
Directly is a tricky word. In some sense you aren't doing things directly when you follow a step by step process.
If you however want a step by step process I can give it to you (but please don't complain that it's not direct enough):
1) You decide which emotion you want to feel.
2) You search in your mind for an experience when you felt the emotion in the past.
3) You visualize the experience.
4) In case that you see yourself inside your mental image, see the image as if you are seeing it through your own eyes.
5) If the image is black and white, make it colored.
6) Make the image bigger.
7) Locate the emotion inside your body.
8) Increase the size of the emotion.
9) Get it moving.
10) Give it a color.
11) Increase movement and size as long as you want.
That's the way of doing it I learned at day two of an NLP seminar.
I'm not actually sure of what you mean by 'directly' here. Which of the following does 'setting up the preconditions' include:
a) changing breathing patterns, etc.
b) focusing thought on particular events, etc.
c) rationalising consciously about your emotional state
d) thinking something like 'calm down, DavidAgain, calm down, calm down'
I doubt many people can simply turn a powerful emotion on or off, although I wouldn't rule it out. I read (can't find link now...) about a game where the interface was based on stuff like level of 'arousal' (in the general sense of excitement), which you had to fine tune to get a ball to levitate to a certain level or whatever. I'd be surprised if someone played that a lot with high motivation and didn't start to be able to jump directly to the desired emotional state without intermediary positions. And being able to do so obviously has major advantages in some more common situations (e.g. being genuinely remorseful or angry when those responses will get the best response from someone else and they're good at reading faked emotion, or controlling panic when the panic-response will get you killed)
This game sounds awesome, I am going to try and search for it so I can test this.
A while (i.e. about a decade) ago, I read about a variant of Tetris with a heart rate monitor in which the faster your heart rate was the faster the pieces would fall.
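The heart-rate-to-difficulty mapping such a game needs is easy to sketch. Here's a guess at the shape of it in Python; the function name, parameters, and ranges are all illustrative, not taken from any actual game:

```python
def drop_interval(bpm, resting=60.0, max_bpm=180.0,
                  slowest=1.0, fastest=0.1):
    """Seconds between Tetris piece drops: the calmer the player
    (lower heart rate), the slower the pieces fall."""
    bpm = max(resting, min(max_bpm, bpm))       # clamp to expected range
    frac = (bpm - resting) / (max_bpm - resting)
    return slowest - frac * (slowest - fastest)
```

A real version would presumably smooth the heart-rate signal over a few seconds, so that one noisy sensor reading doesn't spike the difficulty.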
Looks like there are a few PC input devices on the market that read brain activity in some way. The example game above sounds like this Star Wars toy.
Well, what works for someone may not work for someone else. (Heck, what works for me at certain times doesn't work for me at other times.)
-- Philippe Petit, on the idea of walking a tightrope between the World Trade Center towers.
It's impossible for sure now, of course. But if he thought it was impossible when the towers were actually in existence, then he doesn't remotely understand the word. That is beyond even a "Shut up and do the impossible!" misuse.
I don't understand. Are you saying it wasn't impossible enough?
He actually did it, in 1974. It took nearly six years of planning. To practice for the walk between the World Trade Center towers, he first did tightrope walks between the towers of Notre Dame and then on the Sydney Harbour Bridge. All of these were of course illegal. In the WTC case, he had to sneak in, tie the ropes between the towers without anyone knowing, and then walk between them without any harness for nearly 45 minutes, at that height, in the wind. For the complete details, watch the documentary 'Man on Wire'. I think it was as impossible as it got in his line of work.
Paul Graham, What You'll Wish You'd Known
This speech was really something special. Thanks for posting it. My favorite sections:
And:
Great stuff.
Bit of a tangent, but something from that essay always bothered me.
Paul Graham
Robert Cialdini, Influence
It doesn't seem to me that Vincent-as-described-by-Cialdini is someone with a passion for waiting at tables; especially not the sort that could also be described as a "passion for service". If anything, he has a passion for exploiting customers, or something of the kind. I would expect someone with a genuine passion for table-waiting -- should such a person exist -- to be as reluctant to mislead customers as, say, someone with a passion for science would be to spend their life working for a partisan think tank putting out deliberately misleading white papers on controversial topics.
(To forestall political arguments: I am not implying that all think tanks are partisan, nor that all white papers put out by partisan think tanks are deliberately misleading.)
...and "Influence" goes onto my "to read" list.
Also true of, say, OCD.
Thomas Harris, The Silence of the Lambs
The other day a client sent me a new sighting of a bug I'd been stalking for a while. The new info allowed me to trap it between two repository revisions, flush it out of the diffs and stomp on the sucker. It did briefly feel kind of primal.
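Trapping a bug between two repository revisions is a binary search over history (this is what `git bisect` automates). A minimal sketch of the idea, with `revisions` and `is_bad` standing in for a real commit list and a real repro test:

```python
def first_bad(revisions, is_bad):
    """Find the first revision where a bug reproduces, assuming
    revisions[0] is good, revisions[-1] is bad, and the bug stays
    present in every revision after it is introduced."""
    lo, hi = 0, len(revisions) - 1   # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid                 # bug already present at mid
        else:
            lo = mid                 # bug not yet introduced at mid
    return revisions[hi]
```

The payoff is that stomping the sucker takes about log2(n) repro attempts instead of n.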
Razib Khan
-- John Fowles, The French Lieutenant's Woman
E. T. Jaynes, Probability Theory: The Logic of Science
This is also why I don't trust poets who claim that their works spring to them automatically from the Muse. Yes, it would be very impressive if that were so; but how do I know you didn't actually slave over revisions of that poem for weeks?
An anecdote concerning von Neumann, here told by Halmos.
I recall a math teacher in high school explaining that often, in the course of doing a proof, one simply gets stuck and doesn't know where to go next, and a good thing to do at that point is to switch to working backwards from the conclusion in the general direction of the premise; sometimes the two paths can be made to meet in the middle. Usually this results in a step the two paths join involving doing something completely mystifying, like dividing both sides of an equation by the square root of .78pi.
"Of course, someone is bound to ask why you did that," he continued. "So you look at them completely deadpan and reply 'Isn't it obvious?'"
I have forgotten everything I learned in that class. I remember that anecdote, though.
The standard proof of the Product Rule in calculus has this form. You add and subtract the same quantity, and then this allows you to regroup some things. But who would have thought to do that?
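Concretely, the mystifying move is adding and subtracting f(x+h)g(x) inside the difference quotient:

```latex
\begin{aligned}
(fg)'(x) &= \lim_{h\to 0}\frac{f(x+h)\,g(x+h)-f(x)\,g(x)}{h}\\
&= \lim_{h\to 0}\frac{f(x+h)\,g(x+h)-f(x+h)\,g(x)+f(x+h)\,g(x)-f(x)\,g(x)}{h}\\
&= \lim_{h\to 0} f(x+h)\,\frac{g(x+h)-g(x)}{h}
 \;+\; \lim_{h\to 0} g(x)\,\frac{f(x+h)-f(x)}{h}\\
&= f(x)\,g'(x) + g(x)\,f'(x).
\end{aligned}
```

Working backwards from the conclusion, though, the step is obvious: you need a g(x+h)-g(x) and an f(x+h)-f(x) to appear, and that middle term is the only thing that produces both.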
--Richard Hamming
IIRC there was an xkcd about that, but I don't remember enough of it to search for it.
EDIT: It was the alt text of 759.
Note that xkcd 759 is about something subtly different: you work from both ends and then, when they don't meet in the middle, try to write the "solution" in such a way that whoever's marking it won't notice the jump.
I know someone who did that in an International Mathematical Olympiad. (He used an advanced variant of the technique, where you arrange for the jump to occur between two pages of your solution.) He got 6/7 for that solution, and the mark he lost was for something else. (Which was in fact correct, but you will appreciate that no one was inclined to complain about it.)
Is 759 the one you are thinking of? The alt-text seems to be relevant.
It's "Jaynes."
Fixed. Thanks.
George Pólya
Duplicate of this. (Well, close enough that the moniker should apply.)
Charles Dickens, David Copperfield (HT Cafe Hayek.)
A reasonable start, but quite insufficient for the long run. Sixpence savings on twenty pounds income is not going to insulate you from disaster, not even with nineteenth-century money.
A disaster is an abrupt fall in income or abrupt increase in expenditures, so it falls under the general claim.
-Vincent Baker
No. If I want something to exist, I'll offer a reward or simply pay someone to build it.
I read the quote as "make it (exist)!", instead of "create it". But whether that's what was meant or not, I think that to the basic idea, it doesn't matter all that much whether you cause it to exist directly or via someone else.
As an addition: when I come up with something cool that I wish existed, my first step is to google around to see whether someone else has already invented it and sells it. : ) Twice so far the answer has been yes.
Nowadays I actually get annoyed when I think up something that's an obvious combination of existing components and I can't immediately find it online. It doesn't happen very often.
If what you want is difficult to explain, it might be as easy to do it yourself.
Dean Ing, The Ransom of Black Stealth One
Exactly. Buying things is far more practical, harnessing the power of specialization and comparative advantage. Building the thing yourself is almost always the incorrect decision. Build it yourself if you are good at building that kind of thing and, more importantly, suck at doing other things that provide more (fungible) value.
Or if you enjoy the process of building it. Or if the process of building it will help you relax or something so that you'll be able to do more things-that-provide-more-value later. Or if you're trying to impress someone. Or any other of the reason people have hobbies. (Also, “suck” suggests a much lower threshold than there actually is, especially in times of unemployment and recession. Telling people who have to cook because they can't afford eating at restaurants twice a day that they “suck” at making money sounds bad to me.)
Those are all reasons to build things. But not the subject of the context.
Closely related principle: Purchase Fuzzies and Utilons Separately.
Perhaps by "it", he meant money.
Doubtful. Money already exists, but it doesn't exist in my pocket.
-Aaron Haspel
There's no context in the source, so: WTF?
He is using "mind" in a broader sense than people usually do with the phrase "change your mind".
A reasonable interpretation could be "changing one of your beliefs doesn't automatically change your other related beliefs, your aliefs, your habits and your behavioral triggers". But "changing your mind" could also mean "changing anything about your mind, such as a personality trait or even a mood".
Another example: Learning that an opinion of yours was wrong does not destroy all the broken cognitive processes that generated the wrong opinion in the first place.
I think people are seriously underestimating the value of this quote, but then again of course I do; I'm the one who posted it.
For instance, becoming intellectually convinced that sexual jealousy is a bad idea does not purge you of experiencing any.
Eric Barker
How does this account for the use of humor in mocking outgroup members?
It doesn't.
I love truth. It's such a wonderful thing. It makes you sane, helps you make better, more effective decisions and it irks all the right people. -Aaron Clarey aka "Captain Capitalism"
Jack Parsons