Rationality Quotes January 2013
Happy New Year! Here's the latest and greatest installment of rationality quotes. Remember:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LessWrong or Overcoming Bias
- No more than 5 quotes per person per monthly thread, please
Comments (604)
--Rory Miller
Nice juxtaposition.
-- Eric Hoffer, The True Believer
-- Tim Kreider, The Quiet Ones
Since this has got 22 upvotes I must ask: What makes this a rationality quote?
I'd say it comes under the 'instrumental rationality' heading. The chatter was clearly bothering the writer, but - irrationally - neither he nor the others (bar one) actually got up and said anything.
Every actual criticism of an idea/behaviour is likely to imply a much larger quantity of silent doubt/disapproval.
Sometimes, but you need to take into account what P(voices criticism | has criticism) is. Otherwise you'll constantly cave to vocal minorities (situations where the above probability is relatively large).
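The arithmetic behind this is simple enough to sketch. As an illustration (the voicing probabilities below are made-up numbers, not anything from the comment): if each person holding a criticism voices it independently with probability p, then k voiced complaints suggest roughly k / p total critics.

```python
# Rough point estimate of total (voiced + silent) critics, assuming each
# critic speaks up independently with probability p_voice.
# The p_voice values below are illustrative assumptions, not measured data.

def estimated_critics(voiced_complaints: int, p_voice: float) -> float:
    """Estimate total critics from the number of voiced complaints."""
    return voiced_complaints / p_voice

# A vocal minority: 3 complaints from a group where nearly everyone speaks up.
vocal = estimated_critics(voiced_complaints=3, p_voice=0.9)   # ~3.3 people
# A quiet majority: the same 3 complaints from a group that rarely speaks.
quiet = estimated_critics(voiced_complaints=3, p_voice=0.05)  # ~60 people

assert vocal < 4 < quiet
```

The same observed evidence implies very different amounts of silent disapproval depending on that conditional probability, which is the commenter's point.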
You could argue that the silence of the author and the woman behind the couple is an example of the bystander effect.
"This is how it sometimes works", I would have said. Anything more starts to sound uncomfortably close to "the lurkers support me in email."
...but why wait until they'd almost gotten to Boston?
Perhaps because at that point, one is not faced with the prospect of spending several hours in close proximity to people with whom one has had an unpleasant social interaction.
The passage states that he'd already spoken to them twice.
No one wants to appear rude, of course. As this was almost the end of the ride, the person who rebuked them minimized the time he'd have to endure in the company of people who might consider him rude because of his admonishment, whether or not they agreed with him. I wonder if this is partly a cultural thing.
--Scott Derrickson
-- Steve Smith, American Dad!, season 1, episode 7 "Deacon Stan, Jesus Man", on the applicability of this axiom.
-- Closing lines of Crimes and Misdemeanors, script by Woody Allen.
I agree with the sentiment expressed in this quote, and I don't see it as opposed to the one expressed in mine, but judging from the pattern of upvotes and downvotes, people do not agree.
I guess the quote I posted is ambiguous. You could read it as a kind of bad theistic argument ("since there is meaning in my life, there must be Ultimate Meaning in the universe"). Or you could read it as an anti-nihilistic quote ("even if there is no Ultimate Meaning, the fact that there is meaning in my life is enough to make it false that the universe is meaningless"). I was assuming the second reading, but I guess the people who voted assumed the first one. Or perhaps they saw the second one and just judged it a poor way of stating this idea.
Scott Derrickson may be a part of the universe, but he is not the universe.
Scott Derrickson is indifferent. How do I know this? I know because Scott Derrickson's skin cells are part of Scott Derrickson, and Scott Derrickson's skin cells are indifferent.
If you interpret “X is indifferent” as “no part of X cares”, the original quote is valid and yours isn't.
While affirming the fallacy-of-composition concerns, I think we can take this charitably to mean "The universe is not totally saturated with only indifference throughout, for behold, this part of the universe called Scott Derrickson does indeed care about things."
That's the way I interpreted it, too. There's a speech in HP:MOR where Harry makes pretty much the same point.
“There is light in the world, and it is us!”
Love that moment.
That's exactly the sentiment I was aiming for with the quote.
-- John Rawls, Justice as Fairness: A Restatement.
--Freefall
I was rereading HP Lovecraft's The Call of Cthulhu lately, and the quote from the Necronomicon jumped out at me as a very good explanation of exactly why cryonics is such a good idea.
(Full disclosure: I myself have not signed up for cryonics. But I intend to sign up as soon as I can arrange to move to a place where it is available.)
The quote is simply this:
So strange that this quote hasn't already been memed to death in support of cryonics.
RationalWiki is extremely sceptical of cryonics and still it has quoted that.
It featured prominently in last year's Solstice.
In retrospect, I don't think I have a good reason for not coming to that.
Er... logical fallacy of fictional evidence, maybe? I wince every time somebody cites Terminator in a discussion of AI. It doesn't matter if the conclusion is right or wrong, I still wince because it's not a valid argument.
The original quote has nothing to do with life extension/immortality for humans. It just happens to be an argument for cryonics, and it seems to be a valid one: death as failure to preserve rather than cessation of activity, mortality as a problem rather than a fixed rule.
Yeah, I wasn't saying that it should be used because it's "so true." Just that it's easy to appropriate.
http://lesswrong.com/lw/1pq/rationality_quotes_february_2010/1js5
(Full disclosure: I myself don't intend to sign up for cryonics.)
Huh... Before posting the quote I did try searching to see if it had already been posted before, but that didn't show up. Oh well.
Keith E. Stanovich, What Intelligence Tests Miss: The Psychology of Rational Thought
For instance, you need analytical thinking to design your heuristics. Let your heuristics build new heuristics and a 2% failure rate compounded will give you a 50% failure rate in a few tens of generations.
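The compounding arithmetic above checks out: a 98% per-generation success rate decays below 50% after about 35 generations. A minimal sketch of the claim:

```python
# Each generation of heuristic-built heuristics keeps only 98% of the
# reliability of the previous one, so compounded reliability is 0.98 ** n.
per_generation_success = 0.98

n = 0
reliability = 1.0
while reliability > 0.5:
    reliability *= per_generation_success
    n += 1

print(n)  # 35: after 35 generations, more than half of outputs are failures
```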
Possibly, but Stanovich thinks that most heuristics were basically given to us by evolution and rather than choose among heuristics what we do is decide whether to (use them and spend little energy on thinking) or (not use them and spend a lot of energy on thinking).
What is analytical thinking, but a sequence of steps of heuristics well vetted not to lead to contradictions?
A heuristic is a "rule of thumb," used because it is computationally cheap for a human brain and returns the right answer most of the time.
Analytical thinking uses heuristics, but is distinctive in ALSO using propositional logic, probabilistic reasoning, and mathematics - in other words, exceptionless, normatively correct modes of reasoning (insofar as they are done well) that explicitly state their assumptions and "show the work." So there is a real qualitative difference.
Propositional logic is made of many very simple steps, though.
Sure. The point is that "A->B; A, therefore B" is necessarily valid.
Unlike, say, "the risk of something happening is proportional to the number of times I've heard it mentioned."
Calling logic a set of heuristics dissolves a useful semantic distinction between normatively correct reasoning and mere rules of thumb, even if you can put the two on a spectrum.
Ohh, I agree. I just don't think that there is a corresponding neurological distinction. (Original quote was about evolution).
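The qualitative difference claimed above can be made concrete: modus ponens can be verified exhaustively over every truth assignment, which no mere rule of thumb admits. A minimal sketch, checking modus ponens against a superficially similar but invalid inference:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q."""
    return (not p) or q

# Modus ponens: ((A -> B) and A) -> B holds under every assignment,
# so it is necessarily valid.
assert all(
    implies(implies(a, b) and a, b)
    for a, b in product([True, False], repeat=2)
)

# Affirming the consequent: ((A -> B) and B) -> A fails for A=False, B=True,
# so it is not a valid inference -- at best a heuristic that often works.
assert not all(
    implies(implies(a, b) and b, a)
    for a, b in product([True, False], repeat=2)
)
```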
David Wong, 6 Harsh Truths That Will Make You a Better Person. Published in Cracked.com
For Instance It Makes You Write With Odd Capitalization.
It's probably a section title.
It's a copywriting technique. It makes it easier for people to read (note how the subject lines of newsletter campaigns usually look like that). Don't ask me for the research; I have no idea where I read it.
In this case, I guess he just pasted the title here.
So, is there any research done about this kind of stuff? All the discussions of this kind of things I've seen on Wikipedia talk:Manual of Style and places like that appear to be based on people Generalizing From One Example.
I wish my 17-year-old self had read that article.
It's a misleading claim. Studies of how parents influence their kids generally conclude that the parent's "being" is more important than what they specifically do with the kids.
From the article:
The author of the article doesn't seem to understand that there is such a thing as good listening. If a girl tells you about some problem in her life, it can be more effective to empathize with her than to go and solve the problem.
If someone says "It's what's on the inside that matters!", a much better response would be to ask: what makes you think that your inside is so much better than the inside of other people?
Could you explain this? Or link to info about such studies? (Or both?)
Yep.
I'm rather curious how parents can "be" something to children without doing, since presumably children don't know their parents before their first contact (after birth, I mean).
I think I have heard of such studies, but the conclusion is different.
Who the parents are matters more than things like which school the kids go to, or which neighborhood they live in, etc.
But in my view, that's only because being something (let's say, a sportsman) will make you do things that influence your kids to pursue a similar path.
I didn't say that they aren't doing anything. I said that identifying specific behaviors doesn't make a good predictor. Characteristics like high emotional intelligence are better predictors.
Working on increased emotional intelligence and higher self esteem would be work that changes "who you are".
Taking steps to raise their own emotional intelligence might have a much higher effect than taking children to the museum to teach them about
If a parent has low self-esteem, their child is also likely to have low self-esteem. The low-self-esteem parent might try to do a lot for his child to prove to himself that he's worthy.
There's a drastic difference between a child observing "Mommy hugs me because she read in a book that good mothers hug their children and she wants to prove to herself that she's a good mother" and "Mommy hugs me because she loves me".
On paper, the woman who spends a lot of energy doing the stuff that good mothers are supposed to do is doing more for her child than a mother who doesn't invest that much energy because she's more secure in herself. But being secure in herself increases the chance that she will do the right things at the right time and signal her self-confidence to the child. A child who sees that her mother is self-confident then also has a reason to believe that everything is all right.
As far as studies go, unfortunately I don't keep good records of what information I read in what sources :( (I would add that hugging is an example I use here to illustrate the point, not a reference to a specific study about hugging.)
Yes... and studies show that this is largely due to genetic similarity, much less so to parenting style.
Which still means that it boils down to what the mother does.
The thing is, no one can see what you "are" except by what you do. Your argument seems to be "doing things for the right reason will lead to doing the actual right thing, instead of implementing some standard recommendation of what the right thing is". Granted. But the thing that matters is still the doing, not the being. "Being" is relevant only to the extent that it makes you do.
Oh, and as for this:
There's a third possibility: "Mommy doesn't hug me, but I know she loves me anyway". Sometimes that's worse than either of the other two.
If there is no other method, then advising people to ignore changing what they are in favor of what they do is bad advice.
I am having trouble parsing your comment. Could you elaborate? "no other method" of what?
Also, who is advising people to ignore changing what they are...? And why is advising people to change what they do bad advice?
Please do clarify, as at this point I am not sure whether, and on what, we are disagreeing.
If "what you are" is the only/most effective way to change "what you do" (eg unconscious signalling) then the advice of the original article to focus on "what you do" is poor advice, even if it is technically correct that only what you do matters.
"We are what we repeatedly do. Excellence, then, is not an act, but a habit." — Aristotle
It goes both ways. And it's meaningless to speak of changing "what you are" if you do not, as a result, do anything different.
I don't think the Cracked article, or I, ever said that the only way to change your actions is by changing some mysterious essence of your being. That's actually a rather silly notion, when it's stated explicitly, because it's self-defeating unless you ignore the observable evidence. That is, we can see that changing your actions by choosing to change your actions IS possible; people do it all the time. The conclusion, then, is that by choosing to change your actions, you have thereby changed this ineffable essence of "what you are", which then proceeds to affect what you do. If that's how it works, then worrying about whether you're changing what you are or only changing what you do is pointless; the two cannot be decoupled.
And that's the point of the article, as I understand it. "What you are" may be a useful notion in your own internal narrative — it's "how the algorithm feels from the inside" (the algorithm in this case being your decisions to do what you do). But outside of your head, it's meaningless. Out in the world, there is no "what you are" beyond what you do.
As was pointed out elsewhere in these comments, there are situations where changing "what you are" - for example, increasing your confidence levels - is more effective than trying to change your actions directly.
What exactly do you mean by "matter"?
If you want to decide whether A matters for B, then it's central to look at whether changes in A cause changes in B.
When one speaks about doing, one frequently doesn't think about actions like raising one's heart rate by 5 bpm to signal that something made an emotional impact. If an attractive woman walks down the street and a guy sees her and gets attracted, you could say that the woman is "doing" something, because she's reflecting light in exactly the right way to get the guy attracted.
If you define "doing" that broadly, it's not a useful word anymore. The Cracked article from which I quoted doesn't seem to define "doing" that broadly. On the other hand, it's no problem to define "being" broadly enough to cover all "doing" as well.
This article greatly annoyed me because of how it tells people to do the correct practical things (Develop skills! Be persistent and grind! Help people!) yet gives atrocious and shallow reasons for it - and then Wong says how if people criticize him they haven't heard the message. No, David, you can give people correct directions and still be a huge jerk promoting an awful worldview!
He basically shows NO understanding of what makes one attractive to people (especially romantically) and what gives you a feeling of self-worth and self-respect. What you "are" does in fact matter - both to yourself and to others! - outside of your actions; they just reveal and signal your qualities. If you don't do anything good, it's a sign of something being broken about you, but just mechanically bartering some product of your labour for friendship, affection and status cannot work - if your life is in a rut, it's because of some deeper issues and you've got to resolve those first and foremost.
This masochistic imperative to "Work harder and quit whining" might sound all serious and mature, but it does not in fact have the power to make you a "better person"; rather, you'll know you've changed for the better when you can achieve more stuff and don't feel miserable.
I wanted to write a short comment illustrating how this article might be the mirror opposite of some unfortunate ideas in the "Seduction community" - it's "forget all else and GIVE to people, to obtain affection and self-worth" versus "forget all else and TAKE from people, to obtain affection and self-worth" - and how, for a self-actualized person, needs, one's own and others', should dictate the taking and giving, not some primitive framework of barter or conquest - but I predictably got too lazy to extend it :)
I've taken a crack at what's wrong with that article.
The problem is, there's so much wrong with it from so many different angles that it's rather a large topic.
Yep :). I was doing a more charitable reading than the article really deserves, to be honest. It carried over from the method of political debate I am attempting these days - accept the opponent's premises (e.g. far-right ideas that they proudly call "thoughtcrime"), then show how either a modus-tollens inference from them is instrumentally/ethically preferable, or how they just have nothing to do with the opponent being an insufferable jerk.
100% true. I often shudder when I think how miserable I could've got if I hadn't watched this at a low point in my life.
I think the only problem with the article is that it tries to other-optimize. It seems to address a problem that the author had, as some people do. He seems to overestimate the usefulness of his advice, though (he writes for anyone except those for whom "your career is going great, you're thrilled with your life and you're happy with your relationships"). As mentioned by NancyLebovitz, the article is not for the clinically depressed; in fact, it is only for a small (?) set of people who sit around all day whining, who think they deserve better for who they are, without actually trying to improve the situation.
That said, this overgeneralization is a problem that permeates most self-help, and the article is no more guilty than the average.
I think I'll just quote the entirety of an angry comment on Nancy's blog. I basically can't help agreeing with the below. Although I don't think the article is entirely bad and worthless - there are a few commonplace yet forcefully asserted life instructions there, if that's your cup of tea - its downsides do outweigh its utility.
What especially pisses me off is how Wong hijacks the ostensibly altruistic intent of it as an excuse to throw a load of aggression and condescending superiority in the intended audience's face, then offers an explanation of how feeling repulsed/hurt by that tone further confirms the reader's lower status. This is, like, a textbook example of self-gratification and cruel status play.
Conclusion: a truth that's told with bad intent beats all the lies you can invent. And when you mix in some outright lies...
One of the comments at dreamwidth is by a therapist who said that being extremely vulnerable to shame is a distinct problem-- not everyone who's depressed has it, and not everyone who's shame-prone is depressed.
Also, I didn't say clinically depressed. I'm in the mild-to-moderate category, and that sort of talk is bad for me.
Actually the article says enough different and somewhat contradictory things that it supports multiple readings, or to put it less charitably, it's contradictory in a way that leads people to pick the bits which are most emotionally salient to them and then get angry at each other for misreading the article.
The title is "6 Harsh Truths That Will Make You a Better Person" -- by implication, anyone's life. Then Wong says, "this will improve your life unless it's awesome in all respects". Then he pulls back to "this is directed at people with a particular false view of the universe".
Yep.
As with most self-help advice, it is like an eyeglass prescription - only good for one specific pathology. It may correct one person's vision, while making another's massively worse.
Also, I remember what it was like to be (mildly!) depressed, and my oh my would that article not have helped.
My complaint about the article is that it has the same problem as most self-help advice. When you read it, it sounds intelligent, you nod your head, it makes sense. You might even think to yourself "Yeah, I'm going to really change now!"
But as everyone who's tried to improve himself knows, it's difficult to change your behavior (and thoughts) on a basis consistent enough to really make a long-lasting difference.
Éowyn explaining to Aragorn why she was skilled with a blade. The Lord of the Rings: The Two Towers, the 2002 movie.
It's funny. I've seen that movie five times or so. But I watched it again a few days ago, and that line struck me, too. Never stood out before.
If you are an American perhaps it stood out this time because of all the recent discussion of gun control.
Silas Dogood
I'd have thought from observation that quite a lot of human club is just about discussing the rules of human club, excess meta and all. Philosophy in daily practice being best considered a cultural activity, something humans do to impress other humans.
I dunno; "philosophy", at least, doesn't seem to be about discussing the rules of the human club, or maybe it's discussing a very specific part of the rules (but then, so is Maths!). Family gossip and stand-up comedy seem much closer to "discussing the rules of the human club".
Going meta is the quick win (in the social competition) for cultural discourse, though, c.f. postmodernism.
Do you know what he means by this? We spend a lot of time here discussing the rules of human club, and so far seem glad we have.
Imagine the average high school clique. They would be very uncomfortable explicitly discussing the rules of the group - even as they enforced them ruthlessly. Further, the teachers, parents, and other adults who knew the students would be just as uncomfortable describing the rules of the clique.
In short, we are socially weird for being willing to discuss the social rules - that our discussion is an improvement doesn't mean it is statistically ordinary.
Ah, I see.
Human club has many rules. Some can be bent. Others can be broken.
Well...only insofar as we discuss the social rules on Lesswrong itself. No one, not even the high school clique, is uncomfortable with discussing social rules as generalities. I've seen school age kids discuss these things as generalities quite enthusiastically when a teacher instigates the discussion.
It's only when specific names and people are mentioned that the discussion can become dangerous for the speakers. But this is no more true for social rules than it is for other conversations of similar type - for instance, when pointing out flaws of people sitting around you. I don't think LW is immune to this (for example, the use of the word "phyg" is particularly salient, if usually tongue-in-cheek, example) although the anonymity and fact that very few of us know each other personally does provide some level of protection.
I suspect this population was not randomly selected. Otherwise, someone might have explained to me why nerds are unpopular at a time in my life when it might have actually been helpful.
I think the author is needlessly overcomplicating things.
1) People instinctively form tight-knit groups of friends with people they like. "People they like" usually means people who help them survive and raise offspring. This usually means the socially adept, athletic, and attractive.
2) Having friends brings diminishing returns. The more friends a person has, the less they feel the need to make new friends. That's why the first day of school is vital.
3) Ill feelings develop between Sally and Bob. Sally talks to Susanne, and now they both bear ill feelings towards Bob. Thus, Bob has descended a rung in the dominance hierarchy.
4) Bob's vulnerability is a function of how many people Sally can find who will agree with her about him. As an extension of this principle, those with the fewest friends will get picked on the most. The bullies can come from both the popular and the unpopular crowds.
5) Factors leading to few friends - lack of social or athletic ability, conspicuous non-conformity via eccentric behavior, dress, or speech, low attractiveness, or misguided use of physical or verbal aggression.
By the power law, approximately 20% of the kids will be friends with 80% of the network. These are the popular kids. As a result of their privileged position, they do not even notice popularity hierarchies...it's sort of like being white, male, upper middle class, etc. These kids will claim that there is no such thing as "popularity".
Any random person is likely to find themselves in the bottom 80%. They will find themselves excluded from the main network because the people in the main network already have enough friends. They will find themselves picked on because they are vulnerable, like Bob.
When we call someone a nerd, we refer to a constellation of traits which include intelligence, obscure interests (non-conformity), lack of social skills, lack of fashion sense, lack of athletic ability, and glasses-wearing. Obviously such folks are less likely to be in the top 20%...not because of the intelligence but because of all that other stuff.
But they aren't the only ones who find themselves unpopular. In fact, a vast segment of the population finds themselves in this position.
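The 80/20 claim above can be sanity-checked with a toy preferential-attachment network (a pure-Python sketch; the parameters and exact shares are illustrative assumptions, not a claim about real schools). Because new "students" are more likely to befriend already-popular ones, friendships concentrate in a small top tier:

```python
import random

def preferential_attachment(n: int, m: int, seed: int = 0) -> list:
    """Grow a toy friendship network: each new node befriends m existing
    nodes, chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    degrees = [m] * m                 # small seed group (degrees approximate)
    targets = list(range(m)) * m      # sampling pool weighted by degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:        # m distinct, degree-weighted picks
            chosen.add(rng.choice(targets))
        for t in chosen:
            degrees[t] += 1
            targets.extend([t, new])  # both endpoints gain pool weight
        degrees.append(m)
    return degrees

degrees = sorted(preferential_attachment(n=500, m=2), reverse=True)
share = sum(degrees[: len(degrees) // 5]) / sum(degrees)

# The most-connected 20% of nodes hold far more than the 20% of friendships
# a uniform network would give them.
assert share > 0.3
```

The exact figure won't be precisely 80%, but the concentration effect the commenter describes falls out of nothing more than "people prefer befriending the already-befriended."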
Something like this?
lanyard
Wasn't that poem sarcastic anyway? Until the last stanza, the poem says how the roads were really identical in all particulars -- and in the last stanza the narrator admits that he will be describing this choice falsely in the future.
That's not how I read it. There's no particular difference between the two roads, so far as Frost can tell at the point of divergence, but they're still different roads and lead by different routes to different places, and he expects that years from now he'll look back and see (or guess?) that it did indeed make a big difference which one he took.
He expects that years from now he'll look back and claim it made a big difference. He says he will also claim it was the road less traveled by, for all that the passing there had worn them the same and so forth. It may well make a big difference, but of what nature no-one knows.
Yes, I understand that what the poem says is that he'll say it made a big difference, rather than that it did. But there's a difference between not saying it's a real difference and saying it's not a real difference; it seems to me that the former is what Frost does and the latter is what ArisKatsaris says he does.
(My reading of the to-ing and fro-ing about whether the road is "less traveled by" is that the speaker's initial impression is that one path is somewhat untrodden, and he picks it on those grounds; on reflection he's not sure it's actually any less trodden than the other, but his intention was to take the less-travelled road; and later on he expects to adopt the shorthand of saying that he picked the less-travelled one. I don't see that any actual dishonesty is intended, though perhaps a certain lack of precision.)
"Two roads diverged in a wood. I took the one less traveled by, and had to eat bugs until the park rangers rescued me."
Duplicate.
-Paul Graham
Partial duplicate
— Umberto Eco, The Search for the Perfect Language
-- someone on Usenet replying to someone deriding Kurzweil
In general, though, that argument is the Galileo gambit and not a very good argument.
There's a more charitable reading of this comment, which is just "the absurdity heuristic is not all that reliable in some domains."
What makes this the Galileo Gambit is that the absurdity factor is being turned into alleged support (by affective association with the positive benefits of air travel and frequent flier miles) rather than just being neutralized. Contrast to http://lesswrong.com/lw/j1/stranger_than_history/ where absurdity is being pointed out as a fallible heuristic but not being associated with positives.
That's the way I interpreted it. (How come I tend to read pretty much anything¹ charitably?)
-- Penn Jillette
Disagree about the fashion.
-- Greg Egan, "Border Guards".
-- Kyubey (Puella Magi Madoka Magica)
For you, I'll walk this endless maze...
The only ones to love a martyr's actions are those who did not love them.
That isn't true. If I love someone and they martyr themselves (literally or figuratively) in a way that is the unambiguously and overwhelmingly optimal way to fulfill both their volition and my own then I will love the martyr's actions. If you say I do not love the martyr or do not love their actions due to some generalization then you are just wrong.
Agreed... but also, this gets complicated because of the role of external constraints.
I can love someone, "love" what they do in the context of the environment in which they did it (I put love here in scare quotes because I'm not sure I mean the same thing by it when applied to an action, but it's close enough for casual conversation), and hate the fact that they were in such an environment to begin with, and if so my feelings about it can easily get confused.
-- Penn Jillette.
It's hardly fair to describe this tiny modicum of doubt as atheism, even in the umbrella sense that covers agnosticism.
It is entirely possible for someone to believe in an evil god, and (quite reasonably) decline to do that god's alleged bidding.
-- Groucho Marx
I'm not sure that's great advice. It will result in you trying to try to live forever. The only way to live forever or die trying is to intend to live forever.
How to apply that, though? I could try not to try to try to live forever, but that sounds equivalent to trying to merely try to live forever. And now "try" has stopped sounding like a real word, which makes my misfortune even more trying.
Live forever. Details are in the linked post.
-- P. W. Bridgman, ‘‘The Struggle for Intellectual Integrity’’
-- Larry Wall
See the Mouse Universe.
(If you wonder where "two hundred and forty-two miles" shortening of the river came from, it was the straightening of its original meandering path to improve navigation)
http://xkcd.com/605/
-Bas van Fraassen, The Scientific Image
What does that mean?
Lambs are young sheep; they have less meat & less wool.
The punishment for livestock rustling being identical no matter what animal is stolen, you should prefer to steal a sheep rather than a lamb.
Believing large lies is worse than small lies; basically, it's arguing against the What-The-Hell Effect as applied to rationality. Or so I presume, did not read original.
I had noticed that effect myself, but I didn't know it had a name.
I had noticed it and mistakenly attributed it to the sunk cost fallacy but on reflection it's quite different from sunk costs. However, it was discovering and (as it turns out, incorrectly) generalising the sunk cost fallacy that alerted me to the effect and that genuinely helped me improve myself, so it's a happy mistake.
One thing that helped me was learning to fear the words 'might as well,' as in, 'I've already wasted most of the day so I might as well waste the rest of it,' or 'she'll never go out with me so I might as well not bother asking her,' and countless other examples. My way of dealing it is to mock my own thought processes ('Yeah, things are really bad so let's make them even worse. Nice plan, genius') and switch to a more utilitarian way of thinking ('A small chance of success is better than none,' 'Let's try and squeeze as much utility out of this as possible' etc.).
I hadn't fully grasped the extent to which I was sabotaging my own life with that one, pernicious little error.
John Locke, Essay Concerning Human Understanding
-Bryant Walker Smith
But we've had self-driving cars for multiple years now...
By principle of charity (everyone knows about google cars by now), I took the grandparent to mean "past performance is not a guarantee of future returns."
Obviously, Smith knows this, since he has published papers on the legality of self-driving cars as late as 2012. The purpose of the quote (for me) is to draw an analogy between Strong AI and self-driving cars, both of which have had people saying "it is just 20 years away" for many decades now (and one of which is now here, making the people who made such a prediction 20 years ago correct).
Considering there are working prototypes of such cars driving around right now ...
EDIT: Damn, ninja'd by Luke.
The Last Psychiatrist (http://thelastpsychiatrist.com/2010/10/how_not_to_prevent_military_su.html)
-- Eric Hoffer, The True Believer
A decent quote, except I am minded to nitpick that there is no such thing as unbelief as a separate category from belief. We just have credences.
Many futile conversations have I seen among the muggles, wherein disputants tried to make some Fully General point about unbelief vs belief, or doubt vs certainty.
-Pirkei Avot (5:15)
Deep wisdom indeed. Some people believe the wrong things, some believe the right things, some believe both, and some believe neither.
To me, it expresses the need to pay attention to what you are learning, and decide which things to retain and which to discard. E.g. one student takes a course in Scala and memorizes the code for generics, while the other writes the code but focuses on understanding the notion of polymorphism and what it is good for.
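To make the generics-vs-polymorphism distinction concrete, here is a minimal sketch (in Python rather than Scala, with illustrative names not taken from the original comment) of the idea the second student is grasping: one definition works for every element type because the body never depends on what the type parameter actually is.

```python
from typing import List, TypeVar

T = TypeVar("T")

def first(items: List[T]) -> T:
    """Parametric polymorphism: one definition serves every element
    type, because the body never inspects what T actually is."""
    return items[0]

# The same code handles ints and strings alike; the insight worth
# retaining is that T is a promise of uniform behavior, not syntax
# to be memorized.
print(first([1, 2, 3]))        # → 1
print(first(["a", "b", "c"]))  # → a
```

The student who memorizes the syntax can reproduce `TypeVar` boilerplate; the one who understands the concept can recognize the same pattern in any language.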
I genuinely don't understand this comment.
Sorry. Attempt #2:
If I had infinite storage space and computing power, I would store every single piece of information I encountered. I don't, so instead I have to efficiently process and store things that I learn. This generally requires that I throw information out the window. For example, if I take a walk, I barely even process most of the detail in my visual input, and I remember very little of it. I only want to keep track of a very few things, like where I am in relation to my house, where the sidewalk is, and any nearby hazards. When the walk is over, I discard even that information. On the other hand, I often have to take derivatives. Although understanding what a derivative means is very important, it would be silly of me to rederive e.g. the chain rule each time I wanted to use it. That would waste a lot of time, and it does not take a lot of space to store the procedure for applying the chain rule. So I store that logically superfluous information because it is important.
In other words, I have to be picky about what I remember. Some information is particularly useful or deep, some information isn't. Just because this is incredibly obvious doesn't mean we don't need to remind ourselves to consciously decide what to pay attention to.
I thought the quote expressed this idea nicely and compactly. Whoever wrote the quote probably did not mean it in quite the same way I understand it, but I still like it.
While this comment is true - you can't remember everything - I'm not sure how you could get that from the categorization in the quote. Still, if that's what you got out of it, I can see why you posted it here.
-- Henri Poincaré
-Buttercup Dew (@NationalistPony)
C. S. Lewis, Mere Christianity
Caveat: this is not at all how the majority of the religious people that I know would use the word "faith". In fact, this passage turned out to be one of the earliest helps in bringing me to think critically about and ultimately discard my religious worldview.
I am fascinated by all of the answers that are not "never," as this has never happened to me. If any of the answerers were atheists, could any of you briefly describe these experiences and what might have caused them? (I am expecting "psychedelic drugs," so I will be most surprised by experiences that are caused by anything else.)
Sometimes, I am extremely unconvinced of the utility of "knowing stuff" or "understanding stuff" when confronted with the inability to explain it to suffering people who seem like they want to stop suffering but refuse to consider the stuff that has the potential to help them stop suffering. =/
Interesting. My confidence in my beliefs has never been tied to my ability to explain them to anyone, but then again I'm a mathematician-(in-training), so...
Well, it's not that I'm not confident that they're useful to me. They are! They help me make choices that make me happy. I'm just not confident in how useful pursuing them is in comparison to various utilitarian considerations of helping other people be not miserable.
For example, suppose I could learn some more rationality tricks and start saving an extra $100 each month by some means, while in the meantime someone I know is depressed and miserable and seemingly asking for help. Instead of going to learn those rationality tricks to make an extra $100, I am tempted to sit with them and tell them all the ways I learned to manage my thoughts in order to not make myself miserable and depressed. And when this fails spectacularly, eating my time and energy, I am left inclined to do neither because that person is miserable and depressed and I'm powerless to help them so how useful is $100 really? Blah! So, to answer the question, this is the mood in which I question my belief in the usefulness of knowing and doing useful things.
I am also a computer science/math person! high five
Aren't useful things kind of useful to do kind of by definition? (I know this argument is often used to sneak in connotations, but I can't imagine that "is useful" is a sneaky connotation of "useful thing.")
What you describe sounds to me like a failure to model your friend correctly. Most people cannot fix themselves given only instructions on how to do so, and what worked for you may not work for your friend. Even if it might, it is hard to motivate yourself to do things when you are miserable and depressed, and when you are miserable and depressed, hearing someone else say "here are all the ways you currently suck, and you should stop sucking in those ways" is not necessarily encouraging.
In other words, "useful" is a two- or even three-place predicate.
Hasn't happened to me in years. Typically involved desperation about how some aspect of my life (only peripherally related to the beliefs in question, natch) was going very badly. Temptation to pray was involved. These urges really went away when I discovered that they were mainly caused by garden variety frustration + low blood sugar.
I think that in my folly-filled youth, my brain discovered that "conversion" experiences (religious/political) are fun and very energizing. When I am really dejected, a small part of me says "Let's convert to something! Clearly your current beliefs are not inspiring you enough!"
My own response was “rarely”; had I answered when I was a Christian ten years ago, I would probably have said “sometimes”; had I answered as a Christian five years ago I might have said “often” or “very often” (eventually I allowed some of these moments of extreme uncertainty to become actual crises of faith and I changed my mind, though it happened in a very sloppy and roundabout way and had I had LessWrong at the time things could’ve been a lot easier.)
And still, I can think of maybe two times in the past year when I suddenly got a terrifying sinking feeling that I have got everything horribly, totally wrong. Both instances were triggered whilst around family and friends who remain religious, and both had to do with being reminded of old arguments I used to use in defense of the Bible which I couldn’t remember, in the moment, having explicitly refuted.
Neither of these moods was very important and both were combated in a matter of minutes. In retrospect, I’d guess that my brain was conflating fear of rejection-from-the-tribe-for-what-I-believe with fear of actually-being-wrong.
Not psychedelic drugs, but apparently an adequate trigger nonetheless.
Erm...when I was a lot younger, whenever I considered doing something wrong or told a lie, I had the vague feeling that someone was keeping tabs. Basically, when weighing utilities I greatly upped the probability that someone would somehow come to know of my wrongdoings, even when it was totally implausible. That "someone" was certainly not God or a dead ancestor or anything supernatural...it wasn't even necessarily an authority figure.
Basically, the superstition was that someone who knew me well would eventually come to find out about my wrongdoing, and one day they would confront me about it. And they'd be greatly disappointed or angry.
I'm ashamed to say that in the past I might have actually done actions which I myself felt were immoral, if it were not for that superstitious feeling that my actions would be discovered by another individual. It's hard to say in retrospect whether the superstitious feeling was the factor that pushed me back over that edge.
Note that I never believed the superstition...it was more of a gut feeling.
I'm older now and am proud to say that I haven't given serious consideration to doing anything which I personally feel is immoral for a very, very long time. So I do not know whether I still carry this superstition. It's not really something I can test empirically.
I think part of it is that as I grew older my mind conceptually merged "selfish desire" and "morality" neatly into one single "what is the sum total of my goals" utility function construct (though I wasn't familiar with the term "utility function" at the time).
This shift occurred sometime in high school, and it happened around the same time that I overcame mind-body dualism at a gut level. Though I've always had generally atheist beliefs, it wasn't until this shift that I really understood the implications of a logical universe.
Once these dichotomies broke down, I no longer felt the temptation to "give in" to selfish desire, nor was I warded off by "guilt" or the superstitious fear. I follow morals because I want to follow them, since they are a huge part of my utility function. Once my brain understood at a gut level that going against my morality was intrinsically against my interests, I stopped feeling any temptation to do immoral actions for selfish reasons. On the flip side, the shift also allows me to be selfish without feeling guilty. It's not that I'm a "better person" thanks to the shift in gut instinct...it's more that my opposing instincts don't fight with each other by using temptation, fear, and guilt anymore.
I think there is something about that "shift" experience I described (anecdote indicates that a lot of smart people go through this at some point in life, but most describe it in less than articulate spiritual terms) which permanently alters your gut feelings about reality, morality, and similar topics in philosophy.
I'm guessing those who answered "never" either did not carry the illusions in question to begin with and therefore did not require a shift in thought, or they did not factor in how they felt pre-shift into their introspection.
I answered Sometimes. For me the 'foundational belief' in question is usually along the lines: "Goal (x) is worth the effort of subgoal/process (y)." These moods usually last less than 6 months, and I have a hunch that they're hormonal in nature. I've yet to systematically gather data on the factors that seem most likely to be causing them, mostly because it doesn't seem worth the effort right now. Hah. Seriously, though, I have in fact been convinced that I need to work out a consistent utility function, but when I think about the work involved, I just... blah.
I put sometimes.
I believe all kinds of crazy stuff and question everything when I'm lying in bed trying to fall asleep, most commonly that death will be an active and specific nothing that I will exist to experience and be bored, frightened, and upset by forever. Something deep in my brain believes a very specific horrible cosmology, as wacky and specific as any religion but not nearly as cheerful. When my faculties are weakened it feels as if I directly know it to be true, and any attempt to rehearse my reasons for materialism feels like rationalizing.
I'm neither very mentally healthy nor very neurotypical, which may be part of why this happens.
I have had "extreme temporary loss of foundational beliefs," where I briefly lost confidence in beliefs such as the nonexistence of fundamentally mental entities (I would describe this experience as "innate but long dormant animist intuitions suddenly start shouting"), but I've never had a mood where Christianity or any other religion looked probable, because even when I had such an experience, I was never enticed to privilege the hypothesis of any particular religion or superstition.
I answered "sometimes" thinking of this as just Christianity, but I would have answered "very often" if I had read your gloss more carefully.
I'm not quite sure how to explicate this, as it's something I've never really thought much about and had generalized from one example to be universal. But my intuitions about what is probably true are extremely mood- and even fancy-dependent, although my evaluation of particular arguments and such seems to be comparatively stable. I can see positive and negative aspects to this.
I thought the most truthful answer for me would be "Rarely", given all possible interpretations of the question. I think that it should have been qualified "within the past year", to eliminate the follies of truth-seeking in one's youth. Someone who answers "Never" cannot be considering when they were a five-year-old. I have believed or wanted to believe a lot of crazy things. Even right now, thinking as an atheist, I rarely have those moods, and then only due to my recognized (and combated) tendency toward magical thinking. However, right now, thinking as a Christian, I would have doubts constantly, because no matter how much I would like to believe, it is plain to see that most of what I am expected to have faith in as a Christian is complete crap. I am capable of adopting either mode of thinking, as is anyone else here. We're just better at one mode than the other.
Sounds like Lewis's confusion would have been substantially cleared up by distinguishing between belief and alief, and then he would not have had to perpetrate such abuses on commonly used words.
To be fair, the philosopher Tamar Gendler only coined the term in 2008.
-- Ricardo, publicly saying "oops" in his restrained Victorian fashion, in his essay "On Machinery".
I was actually just reading that yesterday because of Cowen linking it in http://marginalrevolution.com/marginalrevolution/2013/01/the-ricardo-effect-in-europe-germany-fact-of-the-day.html
I'm not entirely sure I understand Ricardo's chapter (Victorian economists being hard to read both because of the style and the distance), or why, if it's as clear as Ricardo seems to think, no one ever seems to mention the point in discussions of technological unemployment (instead constantly harping on comparative advantage, etc.). What did you make of it?
That's how I found it, too. But I need the LessWrong karma and you don't. :D
If I followed the discussion of circulating versus fixed capital, and gross versus net increase, Ricardo is showing that (in modern jargon as opposed to Victorian jargon) if you set the elasticities correctly, you can make a new machine decrease total wages in spite of substitution effects. He seems to think about this in terms of the "carrying capacity" of the economy, ie the total population size, presumably because Victorian economists worked much closer to true Malthusian conditions than ours do. In other words it's a bit of a model, not necessarily related to any particular economic change that has ever actually happened. Possibly you could get the same result re-published today if you put it in modern jargon with some nice equations, but it would be one of those papers that basically say "If we set variable X to extreme value Y, what happens?" So it's probably not that important when discussing actual machinery, as Ricardo acknowledges; he's exploring the edges of the parameter space.
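Ricardo's mechanism can be sketched with toy numbers (hypothetical figures chosen only to show the direction of the effect, not Ricardo's own): if a capitalist diverts part of the circulating capital (the wage fund) into building a fixed machine, the fund available to employ labour shrinks even though total capital is unchanged.

```python
# Toy illustration of the "On Machinery" argument (hypothetical numbers).
total_capital = 20_000

# Before: all non-profit capital circulates as a wage fund.
circulating_before = 13_000

# After: 7,500 of that capital is converted into a fixed machine.
fixed_machine = 7_500
circulating_after = circulating_before - fixed_machine

# Demand for labour is proportional to the circulating wage fund, so
# converting circulating into fixed capital can reduce total wages
# even though the capitalist's total capital is unchanged.
print(circulating_after)                         # → 5500
print(circulating_after < circulating_before)    # → True
```

Whether real mechanization ever looked like this is exactly the empirical question Ricardo sidesteps; the arithmetic only shows the effect is possible at some parameter values.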
(a few verbal tics were removed by me; the censorship was already present in the version I heard)
I'd vote this up, but I can't shake the feeling that the author is setting up a false dichotomy. Living forever would be great, but living forever without arthritis would be even better. There's no reason why we shouldn't solve the easier problem first.
Sure there is. If you have two problems, one of which is substantially easier than the other, then you still might solve the harder problem first if 1) solving the easier problem won't help you solve the harder problem and 2) the harder problem is substantially more pressing. In other words, you need to take into account the opportunity cost of diverting some of your resources to solving the easier problem.
In general this is true, but I believe that in this particular case the reasoning doesn't apply. Solving problems like arthritis and cancer is essential for prolonging productive biological life.
Granted, such solutions would cease to be useful once mind uploading is implemented. However, IMO mind uploading is so difficult -- and, therefore, so far in the future -- that, if we did choose to focus exclusively on it, we'd lose too many utilons to biological ailments. For the same reason, prolonging productive biological life now is still quite useful, because it would allow researchers to live longer, thus speeding up the pace of research that will eventually lead to uploading.
Sympathetic, but ultimately, we die OF diseases. And the years we do have are more or less valuable depending on their quality.
Physicians should maximize QALYs, and extending lifespan is only one way to do it.
Using punctuation that is normally intended to match ({[]}) confused me. Use the !%#$ing other punctuation for that.
Jimmy the rational hypnotist on priming and implicit memory:
--George Eliot
Apologies to Jayson_Virissimo.
— Gregory Wheeler, "Formal Epistemology"
Is there a concrete example of a problem approached thus?
Viewing the interactions of photons as both a wave and a billiard ball. Both are wrong, but by seeing which traits remain constant in all models, we can project what traits the true model is likely to have.
Does that work? I don't know enough physics to tell if that makes sense.
-- TVTropes
Edit (1/7): I have no particular reason to believe that this is literally true, but either way I think it holds an interesting rationality lesson. Feel free to substitute 'Zorblaxia' for 'Japan' above.
Interesting; is this true?
Yes, my Japanese teacher was very insistent about it, and IIRC would even take points off for talking about someone's mental state without the proper qualifiers.
This is good to know, and makes me wonder whether there's a way to encourage this kind of thinking in other populations. My only thought so far has been "get yourself involved with the production of the most widely-used primary school language textbooks in your area."
Thoughts?
I think you're missing a word here :P
Fixed.
Specific source: Useful Notes: Japanese Language on TV Tropes
It's not easy to find rap lyrics that are appropriate to be posted here. Here's an attempt.
-Jobe Wilkins (Whateley Academy)
Even though this quote is focusing on religion, I think it applies to any beliefs people have that they think are "harmless" but greatly influence how they treat others. In short, since no person is an island, we have a duty to critically examine the beliefs we have that influence how we treat others.
-- Eric Hoffer, The True Believer
-- Eric Hoffer, The True Believer
--Brian Greene, The Elegant Universe
South Park, Se 16 ep 4, "Jewpacabra"
note: edited for concision. script
--Eric Hoffer, The True Believer
-- Sterren with a literal realization that the territory did not match his mental map in The Unwilling Warlord by Lawrence Watt-Evans
- Mark T. Conrad, "Thus Spake Bart: On Nietzsche and the Virtues of Being Bad", The Simpsons and Philosophy: The D'Oh of Homer