Rationality Quotes January 2013
Happy New Year! Here's the latest and greatest installment of rationality quotes. Remember:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LessWrong or Overcoming Bias
- No more than 5 quotes per person per monthly thread, please
Comments (604)
--Michael Huemer
-- thedaveoflife
The Third Doctor
Vannevar Bush
--Sir Francis Galton
-G. K. Chesterton
"I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive."
-- Randall Munroe, in http://what-if.xkcd.com/30/ (What-if xkcd, Interplanetary Cessna)
-- Steven Brust, spoken by Vlad, in Iorich
Seems to describe the founder of this forum well. I wonder if this quote resonates with a certain personal experience of yours.
Partial dupe.
SMBC comics: a metaphor for deathism.
While I am a fan of SMBC, in this case he's not doing existentialism justice (or not understanding existentialism). Existentialism is not the same thing as deathism. Existentialism is about finding meaning and responsibility in an absurd existence. While mortality is certainly absurd, biological immortality will not make existential issues go away. In fact, I suspect it will make them stronger.
edit: on the other hand, "existentialist hokey-pokey" is both funny and right on the mark!
Randall Munroe
This is a duplicate.
And to think, I was just getting on to post this quote myself!
I don't think change can be planned. It can only be recognized.
Jad Abumrad, from a video about the development of Radiolab and the amount of fear involved in doing original work
Dr. Seuss
-http://writebadlywell.blogspot.com/2010/05/write-yourself-into-corner.html
I would argue that the lesson is that when something valuable is at stake, we should focus on the simplest available solutions to the puzzles we face, rather than on ways to demonstrate our intelligence to ourselves or others.
Story ... too awesome ... not to upvote ...
Not sure why it's rational, though.
Speaking of writing yourself into a corner...
According to TV Tropes, there was one show, "Sledge Hammer", which ended its first season with the main character setting off a nuclear bomb while trying to defuse it. They didn't expect to be renewed for a second season, so when they were, they had a problem. This is what they did:
Previously on Sledge Hammer:
[scene of nuclear explosion]
Tonight's episode takes place five years before that fateful explosion.
I think this is an updating of the cliché from serial adventure stories for boys, where an instalment would end with a cliffhanger, the hero facing certain death. The following instalment would resolve the matter by saying "With one bound, Jack was free." Whether those exact words were ever written is unclear from Google, but it's a well-known form of lazy plotting. If it isn't already on TVTropes, now's your chance.
Did you just create that redlink? That's not the standard procedure for introducing new tropes, and if someone did do a writeup on it, it would probably end up getting deleted. New tropes are supposed to be introduced as proposals on the YKTTW (You Know That Thing Where) in order to build consensus that they're legitimate tropes that aren't already covered, and gather enough examples for a proper launch. You could add it as a proposal there, but the title is unlikely to fly under the current naming policy.
Pages launched from cold starts occasionally stick around (my first page contribution from back when I was a newcomer and hadn't learned the ropes is still around despite my own attempts to get it cutlisted), but bypassing the YKTTW is frowned upon if not actually forbidden.
I didn't make any edits to TVTropes -- the page that it looks like I'm linking to doesn't actually exist. But I wasn't aware of YKTTW.
ETA: Neither is their 404 handler, which turns URLs for nonexistent pages into invitations to create them. As a troper yourself, maybe you could suggest to TVTropes that they change it?
If you're referring to what I think you are, that's more of a feature than a bug, since works pages don't need to go through the YKTTW. We get a lot more new works pages than new trope pages, so as long as the mechanics for creating either are the same, it helps to keep the process streamlined to avoid too much inconvenience.
Wouldn't that fall under "Cliffhanger Copout"?
I believe that Cliffhanger Copout refers to the same thing. The Harlan Ellison example in particular is worth reading.
There are so many different ways that story couldn't possibly be true...
(EDIT: Ooh, turns out that the Superman Radio program was the one that pulled off the "Clan of the Fiery Cross" punch against the KKK.)
--John Derbyshire
While this is all very inspiring, is it true? Yes, truth in and of itself is something that many people value, but what this quote is claiming is that there is a class of people (that he calls "dissidents") who specifically value this above and beyond anything else. It seems a lot more likely to me that truth is something that all or most people value to one extent or another, and as such, sometimes if the conditions are right people will sacrifice stuff to achieve it, just like for any other thing they value.
Person 1: "I don't understand how my brain works. But my brain is what I rely on to understand how things work." Person 2: "Is that a problem?" Person 1: "I'm not sure how to tell."
-Today's xkcd
osewalrus
I try to get around this by assuming that self-interest and malice, outside of a few exceptional cases, are evenly distributed across tribes, organizations, and political entities, and that when I find a particularly self-interested or malicious person that's evidence about their own personality rather than about tribal characteristics. This is almost certainly false and indeed requires not only bad priors but bad Bayesian inference, but I haven't yet found a way to use all but the narrowest and most obvious negative-valence concepts to predict group behavior without inviting more bias than I'd be preventing.
-- Scenes From A Multiverse
This works equally well as an argument against utilitarianism, which I'm guessing may be your intent.
I have no idea what people mean when they say they are against utilitarianism. My current interpretation is that they don't think people should be VNM-rational, and I haven't seen a cogent argument supporting this. Why isn't this quote just establishing that the utility of babies is high?
I find these criticisms by Vladimir_M to be really superb.
I aspire to be VNM rational, but not a utilitarian.
It's all very confusing because they both use the word "utility" but they seem to be different concepts. "Utilitarianism" is a particular moral theory that (depending on the speaker) assumes consequentialism, linearish aggregation of "utility" between people, independence and linearity of utility function components, utility is proportional to "happiness" or "well-being" or preference fulfillment, etc. I'm sure any given utilitarian will disagree with something in that list, but I've seen all of them claimed.
VNM utility only assumes that you assign utilities to possibilities consistently, and that your utilities aggregate by expectation. It also assumes consequentialism in some sense, but it's not hard to make utility assignments that aren't really usefully described as consequentialist.
I reject "utilitarianism" because it is very vague, and because I disagree with many of its interpretations.
A bounded utility function that places a lot of value on signaling/being "a good person" and desirable associate, getting some "warm glow" and "mostly doing the (deontologically) right thing" seems like a pretty good approximation.
Well, Alicorn is a deontologist.
In any case, as an ultrafinitist you should know the problems with the VNM theorem.
I also have no idea what people mean when they say they are deontologists. I've read Alicorn's Deontology for Consequentialists and I still really have no idea. My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me. I think it's reasonable to argue that deontology and virtue ethics describe heuristics for carrying out moral decisions in practice, but heuristics are heuristics because they break down, and I don't see a reasonable way to judge which heuristics to use that isn't consequentialist / utilitarian.
Then again, it's quite likely that my understanding of these terms doesn't agree with their colloquial use, in which case I need to find a better word for what I mean by consequentialist / utilitarian. Maybe I should stick to "VNM-rational."
I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).
Taboo "make everything worse".
At the very least I find it interesting how rarely an analogous objection against VNM-utilitarians with different utility functions is raised. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it following the mathematically correct decision theory.
Well, the continuity axiom in the statement certainly seems dubious from an ultrafinitist point of view.
Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.
Rarely? Isn't this exactly what we're talking about when we talk about paperclip maximizers?
When I asked you to taboo "makes everything worse", I meant taboo "worse" not taboo "everything".
You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property." I didn't claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I'm just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I don't think it's accurate. Are you trying to correct my understanding of deontology or are you agreeing with it but disagreeing that it's a bad idea?
No, I'm going to respond by asking you "with respect to which utility function?" and "why should I care about that utility function?"
Huh? How so?
Replace the "corn god" in the quote with a sufficiently rational utilitarian agent.
I hadn't actually thought of that, but that could be part of why I liked the quote.
--Mencius Moldbug, here
I can't overemphasise how much I agree with this quote as a heuristic.
As I noted in my other comment, he redefined the terms underdog/overdog to be based on posteriors, not priors, effectively rendering them redundant (and useless as a heuristic).
Most of the time, priors and posteriors match. If you expect the posterior to differ from your prior in a specific direction, then change your prior.
And thus, you should expect 99% of underdogs to lose and 99% of overdogs to win. If all you know is that a dog won, you should be 99% confident the dog was an overdog. If the standard narrative reports the underdog winning, that doesn't make the narrative impossible, but puts a burden of implausibility on it.
Second statement assumes that the base rate of underdogs and overdogs is the same. In practice I would expect there to be far more underdogs than overdogs.
Good point. I was thinking of underdog and overdog as relative, binary terms -- in any contest, one of two dogs is the underdog, and the other is the overdog. If that's not the case, we can expect to see underdogs beating other underdogs, for instance, or an overdog being up against ten underdogs and losing to one of them.
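A quick Bayes check of how the base rate moves that 99% figure (a minimal Python sketch; all numbers are assumed purely for illustration):

```python
# Toy Bayes check (all numbers assumed): how confident should you be that a
# winner was an overdog, if overdogs win 99% of their contests?
def p_overdog_given_win(p_overdog: float) -> float:
    p_win_if_over, p_win_if_under = 0.99, 0.01
    joint_over = p_win_if_over * p_overdog
    joint_under = p_win_if_under * (1 - p_overdog)
    return joint_over / (joint_over + joint_under)

print(p_overdog_given_win(0.5))  # 0.99  -- equal base rates, as in the claim above
print(p_overdog_given_win(0.1))  # ~0.92 -- if underdogs outnumber overdogs 9 to 1
```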
How should I change my prior if I expect it to change in one of two specific directions - either up or down - but not stay the same?
Fat tailed distributions make the rockin' world go round.
They don't even have to be fat-tailed; in very simple examples you can know that on the next observation, your posterior will either be greater or lesser but not the same.
Here's an example: flipping a biased coin, with a uniform beta prior over its bias/frequency, and trying to infer that bias. Obviously, when I flip the coin, I will either get a heads or a tails, so I know after my first flip, my posterior will either favor heads or tails, but not remain unchanged! There is no landing-on-its-edge intermediate 0.5 coin. Indeed, I know in advance I will be able to rule out 1 of 2 hypotheses: 100% heads and 100% tails.
But this isn't just true of the first observation. Suppose I flip twice, and get heads then tails; so the single most likely frequency is 1/2 since that's what I have to date. But now we're back to the same situation as in the beginning: we've managed to accumulate evidence against the most extreme biases like 99% heads, so we have learned something from the 2 flips, but we're back in the same situation where we expect the posterior to differ from the prior in 2 specific directions but cannot update the prior: the next flip I will either get 2/3 or 1/3 heads. Hence, I can tell you - even before flipping - that 1/2 must be dethroned in favor of 1/3 or 2/3!
And yet if you add those two posterior distributions, weighted by your current probability of ending up with each, you get your prior back. Magic!
(Witch burners don't get their prior back when they do this because they expect to update in the direction of "she's a witch" in either case, so when they sum over probable posteriors, they get back their real prior which says "I already know that she's a witch", the implication being "the trial has low value of information, let's just burn her now".)
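A minimal numerical sketch of that "magic" for the coin example above (assumes SciPy is available):

```python
# Conservation of expected evidence for the coin example above. Prior: uniform
# Beta(1, 1) over the heads-frequency. Posterior after heads: Beta(2, 1);
# after tails: Beta(1, 2). Each outcome has prior predictive probability 1/2,
# and weighting the two posteriors by those probabilities recovers the prior.
from scipy.stats import beta

for x in [0.1, 0.25, 0.5, 0.75, 0.9]:
    prior = beta.pdf(x, 1, 1)
    mixture = 0.5 * beta.pdf(x, 2, 1) + 0.5 * beta.pdf(x, 1, 2)
    print(f"x={x:.2f}  prior={prior:.3f}  weighted posteriors={mixture:.3f}")
# The two columns agree everywhere: the expected posterior equals the prior.
```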
Yup, sure does. Which is a step toward the right idea Kindly was gesturing at.
I consider this an uncharitable reading; I've read the article twice and I still understood him much as Konkvistador and Athrelon did.
I suppose this is a hilariously obvious thing to say, but I wonder how much leftism Marcion Mugwump has actually read. We're completely honest about the whole power-seizing thing. It's not some secret truth.
(Okay, some non-Marxist traditions like anarchism have that whole "people vs. power" thing. But they're confused.)
Ehm... what?
Yes but as a friend reminded me recently, saying obvious things can be necessary.
The heuristic is great, but that article is horrible, even for Moldbug.
I agree. For example:
This statement is obviously true. But it sure would be useful to have a theory that predicted (or even explained) when a putative act of civil disobedience would and wouldn't work that way.
Obviously, willingness to use overwhelming violence usually defeats civil disobedience. But not every protest wins, and it is worth trying to figure out why - if for no other reason than figuring out if we could win if we protested something.
Cheney Bros. v. Doris Silk Corp., U.S. Court of Appeals for the Second Circuit
In reference to Occam's razor:
--from Machine Learning by Tom M. Mitchell
Interesting how a concept seems more believable if it has a name...
(Source: Dennettations)
Is there a reason you're quoting this, or are you just being humeorous?
http://www.exmormon.org/whylft18.htm
This is part of why it's important to fight against all bad arguments everywhere, not just bad arguments on the other side.
Another interpretation: Try to figure out which side has more intelligent defenders and control for that when evaluating arguments. (On the other hand, the fact that all the smart people seem to believe X should probably be seen as evidence too...)
Yes, argument screens off authority, but that assumes that you're in a universe where it's possible to know everything and think of everything, I suspect. If one side is much more creative about coming up with clever arguments in support of itself (much better than you), who should you believe if the clever side also has all the best arguments?
Isn't the real problem here that the author of the quote was asking the wrong question, namely "Mormonism or non-Mormon Christianity?" when he should have been asking "Theism or atheism?" I don't see how controlling for which side had the more intelligent defenders in the former debate would have helped him better get to the truth. (I mean that may well be the right thing to do in general, but this doesn't seem to be a very good example for illustrating it.)
"Just because you no longer believe a lie, does not mean you now know the truth."
Mark Atwood
--Thomas Sowell
That's hopelessly vague. Advice is hard enough to absorb even if you understand it.
-Woody Allen EDIT: Fixed formatting.
FWIW, it seems like whatever is parsing the markdown in these comments, whenever it sees a ">" for a quote at the beginning of a paragraph it'll keep reading until the next paragraph break, i.e. double-whitespace at the end of a line or two linebreaks.
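A small demonstration of that behavior (a sketch assuming the python-markdown package; the site's actual parser may differ):

```python
# "Lazy continuation" in Markdown blockquotes: a single leading ">" keeps
# absorbing following lines until a blank line ends the paragraph. Assumes
# the python-markdown package; other parsers may behave differently.
import markdown

text = "> quoted line\nstill quoted, despite no '>'\n\nnot quoted"
print(markdown.markdown(text))
# The first two lines land inside one <blockquote>; only the text after the
# blank line comes out as a separate, unquoted <p>.
```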
"De notre naissance à notre mort, nous sommes un cortège d’autres qui sont reliés par un fil ténu."
Jean Cocteau
("From our birth to our death, we are a procession of others whom a fine thread connects.")
...and I'm not sure about the fine thread.
"We are living on borrowed time and abiding by the law of probability, which is the only law we carefully observe. Had we done otherwise, we would now be dead heroes instead of surviving experts." –Devil's Guard
Beatrice the Biologist
-Alfie Kohn, "Punished By Rewards"
--Wendy Cope, He Tells Her from the series ‘Differences of Opinion’
--Benjamin Franklin
--"Sid" a commenter from HalfSigma's blog
"The generation of random numbers is too important to be left to chance."
--Robert R. Coveyou, Oak Ridge National Laboratory
--Daniel Dennett, Breaking the Spell (discussing the differences between the "intentional object" of a belief and the thing-in-the-world inspiring that belief)
"Study everything, join nothing"
Attribution?
-- Jonathan Haidt
-- Yvain, on why brinkmanship is not stupid
Belief propagation fail on my part. I had already read that Yvain post when watching that Haidt talk, but I still interpreted the behaviour he described in terms of ape social dynamics, forgetting that the average politician is probably more cold-blooded (i.e. more resembling the idealized agents of game theory) than the average ape is.
OTOH, the problems Haidt describes (global warming, rising public debt, rising income inequality, rising prevalence of out-of-wedlock births) don't have a hard deadline, they just gradually get worse and worse over the decades; so the dynamics of brinkmanship is probably not quite the same.
Orson Scott Card, The Lost Gate
As my math teacher always said,
Why is it repulsive...? I guess I don't get it. I mean sure, it's not a subspace... is that what they mean?
David Brin
If the memories of my youth serve me, anger 'leads to the dark side of the force' via the intermediary 'hate'. That is, it leads you to go around frying things with lightning and choking people with a force grip. This is only 'evil' when you do the killing in cases where killing is not an entirely appropriate response. Unfortunately humans (and furry green muppet 'Lannik') are notoriously bad at judging when drastic violation of inhibitions is appropriate. Power---likely including the power to kill people with your brain---will almost always corrupt.
Not nearly as much as David Brin perverts Lucas's message. I do in fact reject the instructions of Yoda, but what I reject is what he actually says. I don't need to reject a straw caricature thereof.
Automatically. Immediately. Where did this come from? Yoda is 900 years old, wizened and gives clear indications that he thinks of long term consequences rather than being caught up in the moment. We also know he's seen at least one such Jedi to Sith transition with his own eyes (after first predicting it). Anakin took years to grow from a whiny little brat into an awesome badass (I mean... "turn evil"). That is the kind of change that Yoda (and Lucas) clearly have in mind.
That seems unlikely. It also wasn't claimed by the Furry Master. Instead what can be expected is that opinions and political beliefs will change in predictable ways---most notably in the direction of endorsing the acquisition and use of power in ways that happen to benefit the self. Maybe the corrupted will change from a Blue to a Green but more likely they'll change into a NavyBlue and consider it Right to kill Greens with their brain, take all their stuff and ravage their womenfolk (or menfolk, or asexual alien humanoids, depending on generalized sexual orientation).
Except that Lucas in the very same movie has Darth Vader turn back to the Light and throw Palpatine down some shaft due to loyalty to his son. Perhaps Lucas isn't presenting the moral lesson that Brin believes he is presenting.
Agreed generally, but will quibble about your last par. Vader's redemption is being presented as a Heroic Feat, it is no more representative of normal moral or psychological processes in this universe than blowing up the Death Star with a single shot is representative of normal tactics.
Drawing from Attack of the Clones:
The proximate emotion that leads to Anakin's fall is love. Even if we ignore the love-of-mother --> Tusken raiders massacre, the romance between Anakin and Padme is expressly forbidden because of the risk of Anakin turning evil.
If any strong emotion has such a strong risk of turning evil that the emotion must be forbidden, we aren't really talking about a moral philosophy that bears any resemblance to one worth trying to implement in real humans.
I'm not saying that strong emotions don't have a risk of going overboard - they obviously do. But the risk is maybe in the 10% range. It certainly isn't in the >90% range.
That's probably an overstatement by Brin. But evil (Sith-ness) is highly likely from feeling strong emotions (in-universe), and that's not representative of the way things work in the real world. It roughly parallels the false idea that we rationalists want to remove emotions from human experience.
Lots of people in Weimar Germany got angry at the emerging fascists - and went out and joined the Communist Party. It was tough to be merely a liberal democrat.
I suspect you have your causation backwards. People created / joined the Freikorps and other quasi-fascist institutions to fight the threat of Communism. Viable international Communism (~1917) predates the fall of the Kaiser - and the Freikorps had no reason to exist when the existing authorities were already willing and capable of fighting Communism.
More generally, the natural reading of the Jedi moral rules is that the risk of evil from strong emotions was so great that liberal democrats should be prohibited from feeling any (neither anger-at-injustice nor love)
I don't know why you would think the causation would be only in one direction.
Now I'm confused. What is the topic of discussion? Clarification of Weimar Republic politics is not responsive to the Jedi-moral-philosophy point. Anger causing political action, including extreme political action, is a reasonable point, but I don't actually think anger-at-opponent-unjust-acts was the cause of much Communist or Fascist membership.
You might think anger-at-social-situation vs. anger-at-unjust-acts is excessive hair-splitting. But I interpreted your response as essentially saying "Anger-at-injustice really does lead to fairly directly evil." Your example does not support that assertion. If I've misinterpreted you, please clarify. I often seem to make these interpretative mistakes, and I'd like to do better at avoiding these types of misunderstandings in the future.
It certainly does. In reaction to one evil, Nazism, Germans could go and support a second evil, Communism, which, to judge by its global body counts, was many times worse than Nazism, which is exactly the sort of reaction Brin is ridiculing: "oh, how ridiculous, how could getting angry at evil make you evil too?" Well, it could make you support another evil, perhaps even aware of the evil on the theory of 'the enemy of my enemy is my friend'...
I don't know how you could get a better example of 'fighting fire with fire' than that or 'when fighting monsters, beware lest you become one'.
Anger can lead to evil vs. Anger must lead to evil.
And ignoring anger for the moment, Jedi moral philosophy says love leads to evil (that's the Anakin-Padme plot of Attack of the Clones - the romance was explicitly forbidden by Jedi rules).
-- Sterren with a literal realization that the territory did not match his mental map in The Unwilling Warlord by Lawrence Watt-Evans
- Mark T. Conrad, "Thus Spake Bart: On Nietzsche and the Virtues of Being Bad", The Simpsons and Philosophy: The D'Oh of Homer
That's not a bad essay (BTW, essays should be in quote marks, and the book itself, The Simpsons and Philosophy, in italics), but I don't think the quote is very interesting in isolation without any of the examples or comparisons.
Edited, thanks for the style correction.
I suspect you're probably right that more examples would make this more interesting, given the lack of upvotes. In fact, I probably found the quote relevant mostly because it more or less summed up the experience of my OWN life at the time I read it years ago.
I spent much of my youth being contrarian for contradiction's sake, and thinking myself to be revolutionary or somehow different from those who just joined the cliques and conformed, or blindly followed their parents, or any other authority.
When I realized that defining myself against social norms, or my parents, or society was really fundamentally no different from blind conformity, only then was I free to figure out who I really was and wanted to be. Probably related: this quote.
South Park, Se 16 ep 4, "Jewpacabra"
note: edited for concision. script
This is a duplicate. You probably checked and didn't find it because for some reason Google doesn't know about it.
-- Eric Hoffer, The True Believer
-- Eric Hoffer, The True Believer
-Jobe Wilkins (Whateley Academy)
Even though this quote is focusing on religion, I think it applies to any beliefs people have that they think are "harmless" but greatly influence how they treat others. In short, since no person is an island, we have a duty to critically examine the beliefs we have that influence how we treat others.
-- TVTropes
Edit (1/7): I have no particular reason to believe that this is literally true, but either way I think it holds an interesting rationality lesson. Feel free to substitute 'Zorblaxia' for 'Japan' above.
I have to say that's fairly stupid (I'm talking about the claim which the quote is making and generalizing over a whole population; I am not doing argumentum ad hominem here).
I've seen many sorts of (fascinated) mythical claims on how the Japanese think/communicate/have sex/you name it differently and they're all ... well, purely mythical. Even if I, for the purposes of this argument, assume that beoShaffer is right about his/her Japanese teacher (and not just imagining or bending traits into supporting his/her pre-defined belief), it's meaningless and does not validate the above claim. Just for the sake of illustration, the simplest explanation for such usages is some linguistic convention (which actually makes sense, since the page from which the quote is sourced is substantially talking about the Japanese Language).
Unless someone has some solid proof that it's actually related to thinking rather than some other social/linguistic convention, this is meaningless (and stupid).
Agreed. Pop-whorfianism is usually silly.
I'm not familiar with this term and your link did not clarify as much as I had hoped. Could you give a clearer definition?
Just-so stories about the relationships between language and culture. (The worst thing is that, while just-so stories about evolutionary psychology are generally immediately identified as sexist/classist/*ist drivel, just-so stories about language tend to be taken seriously no matter how ludicrous they are.)
Well, the Sapir-Whorf hypothesis is the idea that language shapes thought and/or culture, and Whorfianism is any school of thought based on this hypothesis. I assume pop-Whorfianism is just Whorfian speculation by people who aren't qualified in the field (and who tend to assume that the language/culture relationship is far more deterministic than it actually is).
Specific source: Useful Notes: Japanese Language on TV Tropes
TV Tropes is unreliable on Japanese culture. While it's fond of Japanese media, connection demographics show that Japanese editors are disproportionately rare (even after taking the language barrier into account); almost all the contributors to a page like that are likely to be language students or English-speaking Japanophiles, few of whom have any substantial experience with the language or culture in the wild. This introduces quite a bit of noise; for example, the site's had problems in the past with people reading meanings into Japanese words that don't exist or that are much more specific than they are in the wild.
I don't know myself whether the ancestor is accurate, but it'd be wise to take it with a grain of salt.
Interesting; is this true?
Yes, my Japanese teacher was very insistent about it, and IIRC would even take points off for talking about someone's mental state without the proper qualifiers.
I think you're missing a word here :P
Fixed.
This is good to know, and makes me wonder whether there's a way to encourage this kind of thinking in other populations. My only thought so far has been "get yourself involved with the production of the most widely-used primary school language textbooks in your area."
Thoughts?
It's not necessarily an advantageous habit. If a person tells you they like ice cream, and you've seen them eating ice cream regularly with every sign of enjoyment, you have as much evidence that they like ice cream as you have about countless other things that nobody bothers hanging qualifiers on even in Japanese. The sciences are full of things we can't experience directly but can still establish with high confidence.
Rather than teaching people to privilege other people's mental states as an unknowable quality, I think it makes more sense to encourage people to be aware of their degrees of certainty.
Increased awareness of degrees of certainty is more or less what I was thinking of encouraging. It hadn't occurred to me to look for a deeper motive and try to address it directly. This was helpful, thank you.
You can look at this way of thinking as a social convention. Japanese people often care about signaling respect with language. Someone who speaks directly about the mental state of another can be seen as presumptuous.
High status people in any social circle can influence its social customs. If people get put down for guessing others' mental states wrong without using qualifiers, they are likely to use qualifiers the next time.
If you actually want to do this, E-Prime is an interesting option. E-Prime calls for tabooing "to be."
I've met a few people in NLP circles who valued communicating in E-Prime.
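For anyone who wants to try tabooing "to be" mechanically, here is a toy checker (a sketch only; the verb list is deliberately incomplete):

```python
# Toy E-Prime checker: flag conjugations of "to be" so a writer can taboo them.
# A sketch only -- the form list omits contractions like "it's" and "I'm".
import re

BE_FORMS = re.compile(r"\b(am|is|are|was|were|be|been|being)\b", re.IGNORECASE)

def eprime_violations(text: str) -> list[str]:
    return BE_FORMS.findall(text)

print(eprime_violations("The sky is blue."))     # ['is']  -- not E-Prime
print(eprime_violations("The sky looks blue."))  # []      -- acceptable E-Prime
```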
— Gregory Wheeler, "Formal Epistemology"
Is there a concrete example of a problem approached thus?
Viewing the interactions of photons as both a wave and a billiard ball. Both are wrong, but by seeing which traits remain constant in all models, we can project what traits the true model is likely to have.
Does that work? I don't know enough physics to tell if that makes sense.
It doesn't give you all the information you need, but that's how the problem was originally tackled. Scientists noticed that they had two contradictory models for light, which had a few overlapping characteristics. Those overlapping areas allowed them to start formulating new theories. Of course it took ridiculous amounts of work after that to figure out a reasonable approximation of reality, but one has to start somewhere.
It's not easy to find rap lyrics that are appropriate to be posted here. Here's an attempt.
(a few verbal tics were removed by me; the censorship was already present in the version I heard)
Using punctuation that is normally intended to match ({[]}) confused me. Use the !%#$ing other punctuation for that.
I'd vote this up, but I can't shake the feeling that the author is setting up a false dichotomy. Living forever would be great, but living forever without arthritis would be even better. There's no reason why we shouldn't solve the easier problem first.
Sure there is. If you have two problems, one of which is substantially easier than the other, then you still might solve the harder problem first if 1) solving the easier problem won't help you solve the harder problem and 2) the harder problem is substantially more pressing. In other words, you need to take into account the opportunity cost of diverting some of your resources to solving the easier problem.
In general this is true, but I believe that in this particular case the reasoning doesn't apply. Solving problems like arthritis and cancer is essential for prolonging productive biological life.
Granted, such solutions would cease to be useful once mind uploading is implemented. However, IMO mind uploading is so difficult -- and, therefore, so far in the future -- that, if we did choose to focus exclusively on it, we'd lose too many utilons to biological ailments. For the same reason, prolonging productive biological life now is still quite useful, because it would allow researchers to live longer, thus speeding up the pace of research that will eventually lead to uploading.
Sympathetic, but ultimately, we die OF diseases. And the years we do have are more or less valuable depending on their quality.
Physicians should maximize QALYs, and extending lifespan is only one way to do it.
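To make that concrete, a toy comparison with made-up numbers (not from this thread):

```python
# Toy QALY comparison with assumed numbers: a treatment that extends life less
# can still win on quality-adjusted life years if it preserves quality better.
years_a, quality_a = 10, 0.60   # longer life, e.g. with chronic arthritis pain
years_b, quality_b = 7, 0.95    # shorter life at near-full quality

print(years_a * quality_a)  # 6.0 QALYs
print(years_b * quality_b)  # ~6.65 QALYs: B wins despite the shorter lifespan
```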
The question is whether that's a useful paradigm. Aubrey de Grey argues that it isn't.
C. S. Lewis, Mere Christianity
Caveat: this is not at all how the majority of the religious people that I know would use the word "faith". In fact, this passage turned out to be one of the earliest helps in bringing me to think critically about and ultimately discard my religious worldview.
Upvoted. I actually had a remarkably similar experience reading Lewis. Throughout college I had been undergoing a gradual transformation from "real" Christian to liberal Protestant to deist, and I ended up reading Lewis because he seemed to be the only person I could find who was firmly committed to Christianity and yet seemed willing to discuss the kind of questions I was having. Reading Mere Christianity was basically the event that let me give Christianity/theism one last look over and say "well said, but that is enough for me to know it is time to move on."
I answered "rarely," but I should probably qualify that. I've been an atheist for about 5 years, and in the last 2 or 3, I don't recall ever seriously thinking that the basic, factual premises of Christianity were any more likely than Greek myths. But I have had several moments -- usually following some major personal failing of mine, or maybe in others close to me -- where the Christian idea of man-as-fallen living in a fallen world made sense to me, and where I found myself unconsciously groping for something like the Christian concept of grace.
As I recall, in the first few years after my deconversion, this feeling sometimes led me to think more seriously about Christianity, and I even prayed a few times, just in case. In the past couple years that hasn't happened; I understand more fully exactly why I'd have those feelings even without anything like the Christian God, and I've thought more seriously about how to address them without falling on old habits. But certainly that experience has helped me understand what would motivate someone to either seek or hold onto Christianity, especially if they didn't have any training in Bayescraft.
I put never, but "not anymore" would be more accurate
This. Took a while to build that foundation, and a lot of contemplation in deciding what needed to be there... but once built, it's solid, and not given to reorganization on whim. That's not because I'm closed-minded or anything, it's because stuff like a belief that the evidence provided by your own senses is valid really is kind of fundamental to believing anything else, at all. Not believing in that implies not believing in a whole host of other things, and develops into some really strange philosophies. As a philosophical position, this is called 'empiricism', and it's actually more fundamental than belief in only the physical world (ie: disbelief in spiritual phenomena, 'materialism'), because you need a thing that says what evidence is considered valid before you have a thing that says 'and based on this evidence, I conclude'.
I thought the most truthful answer for me would be "Rarely", given all possible interpretations of the question. I think that it should have been qualified "within the past year", to eliminate the follies of truth-seeking in one's youth. Someone who answers "Never" cannot be considering when they were a five-year-old. I have believed or wanted to believe a lot of crazy things. Even right now, thinking as an atheist, I rarely have those moods, and only rarely due to my recognized (and combated) tendency toward magical thinking. However, right now, thinking as a Christian, I would have doubts constantly, because no matter how much I would like to believe, it is plain to see that most of what I am expected to have faith in as a Christian is complete crap. I am capable of adopting either mode of thinking, as is anyone else here. We're just better at one mode than others.
I answered "sometimes" thinking of this as just Christianity, but I would have answered "very often" if I had read your gloss more carefully.
I'm not quite sure how to explicate this, as it's something I've never really thought much about and had generalized from one example to be universal. But my intuitions about what is probably true are extremely mood and even fancy-dependent, although my evaluation of particular arguments and such seems to be comparatively stable. I can see positive and negative aspects to this.
Oh whoops, I didn't read the parenthetical either. Not sure if it changes my answer.
I have had "extreme temporary loss of foundational beliefs," where I briefly lost confidence in beliefs such as the nonexistence of fundamentally mental entities (I would describe this experience as "innate but long dormant animist intuitions suddenly start shouting"), but I've never had a mood where Christianity or any other religion looked probable, because even when I had such an experience, I was never enticed to privilege the hypothesis of any particular religion or superstition.
I am fascinated by all of the answers that are not "never," as this has never happened to me. If any of the answerers were atheists, could any of you briefly describe these experiences and what might have caused them? (I am expecting "psychedelic drugs," so I will be most surprised by experiences that are caused by anything else.)
I am firmly atheist right now, lounging in my mom's warm living room in a comfy armchair, tipity-typing on my keyboard. But when I go out to sea, alone, and the weather turns, a storm picks up, and I'm caught out after dark, and thanks to a rusty socket only one bow light works... well, then, I pray to every god I know starting with Poseidon, and sell my soul to the devil while at it.
I'm not sure why I do it.
Maybe that's what my brain does to occupy the excess processing time? In high school, when I still remembered it, I used to recite the litany against fear. But that's not quite it. When waves toss my little boat around and I ask myself why I'm praying---the answer invariably comes out, "It's never made things worse. So the Professor God isn't punishing me for my weakness. Who knows... maybe it will work? Even if not, prayer beats panic as a system idle process."
I'm a bit late here, but my response seems different enough to the others posted here to warrant replying!
My brain is abysmally bad at storing trains of thought/deduction that lead to conclusions. It's very good at having exceptionally long trains of thoughts/deductions. It's quite good at storing the conclusions of my trains of thoughts, but only as cached thoughts and heuristics. It means that my brain is full of conclusions that I know I assign high probabilities to, but don't know why off the top of my head. My beliefs end up stored as a list of theorems in my head, with proofs left as an exercise to the reader. I occasionally double-check them, but it's a time-consuming process.
If I'm having a not very mentally agile day, I can't off the top of my head re-prove the results I think I know, and a different result seems tempting, I basically get confused for a while until I re-figure out how to prove the result I know I've proven before.
Basically on some days past-me seems like a sufficiently different person that I no longer completely trust her judgement.
Interesting. I've only had this experience in very restricted contexts, e.g. I noticed recently that I shouldn't trust my opinions on movies if the last time I saw them was more than several years ago because my taste in movies has changed substantially in those years.
I have been diagnosed with depression in the past, so it's not terribly surprising to me that, when "My life is worth living" is considered a foundational belief, its confidence fades in and out quite a lot. In this case, the drugs would actually restore me back to a more normal level.
Although, considering the frequency with which it is still happening, I may want to reconsult with my Doctor. Saying "I have been diagnosed with mental health problems, and I'm on pills, but really, I still have some pretty bad mental health problems." pattern matches rather well to "Perhaps I should ask my Doctor about updating those pills."
Yep. Medical professionals often err on the side of lesser dosage anyway, even for life-threatening stuff. After all, "we gave her medication but she died anyway, the disease was too strong" sounds like abstract, chance-and-force-of-nature-and-fate stuff, and like a statistic on a sheet of paper.
"Doctor overdoses patient", on the other hand, is such a tasty scoop I'd immediately expect my grandmother to be gossiping about it and the doctor in question to be banned from medical practice for life, probably with their diplomas revoked.
They also often take their guidelines from organizations like the FDA, which are renowned for explicitly delaying for five years medications that have a 1 in 10000 side-effect mortality rate versus an 80% cure-and-survival rate for diseases that kill 10k+ annually (bogus example, but I'm sure someone more conscientious than me can find real numbers).
Anyway, sorry for the possibly undesired tangent. It seems usually-optimal to keep returning to your doctor persistently as much as possible until medication really does take noticeable effect.
Occasionally the fundamental fact that all our inferences are provisional creeps me out. The realization that there's no way to actually ground my base belief that, say, I'm not a Boltzmann brain, combined with the fact that it's really quite absurd that anything exists rather than nothing at all given that any cause we find just moves the problem outwards is the closest thing I have to "doubting existence".
I put sometimes.
I believe all kinds of crazy stuff and question everything when I'm lying in bed trying to fall asleep, most commonly that death will be an active and specific nothing that I will exist to experience and be bored, frightened, and upset by forever. Something deep in my brain believes a very specific horrible cosmology as wacky and specific as any religion but not nearly as cheerful. When my faculties are weakened it feels as if I directly know it to be true and any attempt to rehearse my reasons for materialism feels like rationalizing.
I'm neither very mentally healthy nor very neurotypical, which may be part of why this happens.
I answered Sometimes. For me the 'foundational belief' in question is usually along the lines: "Goal (x) is worth the effort of subgoal/process (y)." These moods usually last less than 6 months, and I have a hunch that they're hormonal in nature. I've yet to systematically gather data on the factors that seem most likely to be causing them, mostly because it doesn't seem worth the effort right now. Hah. Seriously, though, I have in fact been convinced that I need to work out a consistent utility function, but when I think about the work involved, I just... blah.
Erm...when I was a lot younger, when I considered doing something wrong or told a lie I had the vague feeling that someone was keeping tabs. Basically, when weighing utilities I greatly upped the probability that someone would somehow come to know of my wrongdoings, even when it was totally implausible. That "someone" was certainly not God or a dead ancestor or anything supernatural...it wasn't even necessarily an authority figure.
Basically, the superstition was that someone who knew me well would eventually come to find out about my wrongdoing, and one day they would confront me about it. And they'd be greatly disappointed or angry.
I'm ashamed to say that in the past I might have actually done actions which I myself felt were immoral, if it were not for that superstitious feeling that my actions would be discovered by another individual. It's hard to say in retrospect whether the superstitious feeling was the factor that pushed me back over that edge.
Note that I never believed the superstition...it was more of a gut feeling.
I'm older now and am proud to say that I haven't given serious consideration to doing anything which I personally feel is immoral for a very, very long time. So I do not know whether I still carry this superstition. It's not really something I can test empirically.
I think part of it is that as I grew older my mind conceptually merged "selfish desire" and "morality" neatly into one single "what is the sum total of my goals" utility function construct (though I wasn't familiar with the term "utility function" at the time).
This shift occurred sometime in high school, and it happened around the same time that I overcame mind-body dualism at a gut level. Though I've always had generally atheist beliefs, it wasn't until this shift that I really understood the implications of a logical universe.
Once these dichotomies broke down, I no longer felt the temptation to "give in" to selfish desire, nor was I warded off by "guilt" or the superstitious fear. I follow morals because I want to follow them, since they are a huge part of my utility function. Once my brain understood at a gut level that going against my morality was intrinsically against my interests, I stopped feeling any temptation to do immoral actions for selfish reasons. On the flip side, the shift also allows me to be selfish without feeling guilty. It's not that I'm a "better person" thanks to the shift in gut instinct...it's more that my opposing instincts don't fight with each other by using temptation, fear, and guilt anymore.
I think there is something about that "shift" experience I described (anecdote indicates that a lot of smart people go through this at some point in life, but most describe it in less than articulate spiritual terms) which permanently alters your gut feelings about reality, morality, and similar topics in philosophy.
I'm guessing those who answered "never" either did not carry the illusions in question to begin with and therefore did not require a shift in thought, or they did not factor in how they felt pre-shift into their introspection.
My own response was “rarely”; had I answered when I was a Christian ten years ago, I would probably have said “sometimes”; had I answered as a Christian five years ago I might have said “often” or “very often” (eventually I allowed some of these moments of extreme uncertainty to become actual crises of faith and I changed my mind, though it happened in a very sloppy and roundabout way and had I had LessWrong at the time things could’ve been a lot easier.)
And still, I can think of maybe two times in the past year when I suddenly got a terrifying sinking feeling that I have got everything horribly, totally wrong. Both instances were triggered whilst around family and friends who remain religious, and both had to do with being reminded of old arguments I used to use in defense of the Bible which I couldn’t remember, in the moment, having explicitly refuted.
Neither of these moods was very important and both were combated in a matter of minutes. In retrospect, I’d guess that my brain was conflating fear of rejection-from-the-tribe-for-what-I-believe with fear of actually-being-wrong.
Not psychedelic drugs, but apparently an adequate trigger nonetheless.
Hasn't happened to me in years. Typically involved desperation about how some aspect of my life (only peripherally related to the beliefs in question, natch) was going very badly. Temptation to pray was involved. These urges really went away when I discovered that they were mainly caused by garden variety frustration + low blood sugar.
I think that in my folly-filled youth, my brain discovered that "conversion" experiences (religious/political) are fun and very energizing. When I am really dejected, a small part of me says "Let's convert to something! Clearly your current beliefs are not inspiring you enough!"
Sometimes, I am extremely unconvinced of the utility of "knowing stuff" or "understanding stuff" when confronted with the inability to explain it to suffering people who seem like they want to stop suffering but refuse to consider the stuff that has potential to help them stop suffering. =/
Interesting. My confidence in my beliefs has never been tied to my ability to explain them to anyone, but then again I'm a mathematician-(in-training), so...
Well, it's not that I'm not confident that they're useful to me. They are! They help me make choices that make me happy. I'm just not confident in how useful pursuing them is in comparison to various utilitarian considerations of helping other people be not miserable.
For example, suppose I could learn some more rationality tricks and start saving an extra $100 each month by some means, while in the meantime someone I know is depressed and miserable and seemingly asking for help. Instead of going to learn those rationality tricks to make an extra $100, I am tempted to sit with them and tell them all the ways I learned to manage my thoughts in order to not make myself miserable and depressed. And when this fails spectacularly, eating my time and energy, I am left inclined to do neither because that person is miserable and depressed and I'm powerless to help them so how useful is $100 really? Blah! So, to answer the question, this is the mood in which I question my belief in the usefulness of knowing and doing useful things.
I am also a computer science/math person! high five
Aren't useful things kind of useful to do kind of by definition? (I know this argument is often used to sneak in connotations, but I can't imagine that "is useful" is a sneaky connotation of "useful thing.")
What you describe sounds to me like a failure to model your friend correctly. Most people cannot fix themselves given only instructions on how to do so, and what worked for you may not work for your friend. Even if it might, it is hard to motivate yourself to do things when you are miserable and depressed, and when you are miserable and depressed, hearing someone else say "here are all the ways you currently suck, and you should stop sucking in those ways" is not necessarily encouraging.
In other words, "useful" is a two- or even three-place predicate.
Sounds like Lewis's confusion would have been substantially cleared up by distinguishing between belief and alief, and then he would not have had to perpetrate such abuses on commonly used words.
To be fair, the philosopher Tamar Gendler only coined the term in 2008.
-Buttercup Dew (@NationalistPony)
Never in my life did I expect to find myself upvoting a comment quoting My Nationalist Pony.
--George Eliot
Apologies to Jayson_Virissimo.
Not to get too nitpicky, but the mine example doesn't really work here. Working for 40 years in a mine without accident doesn't actually make disaster imminent; I would imagine that a mine disaster is a Poisson process, in which expected duration to the next accident is independent of any previous occurrences.
It seems like there might be some gambler's fallacy stuff happening here.
An actually good example of this would be a bridge whose foundations are slowly eroding, and is now in danger of collapse.
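A quick simulation of that memorylessness (a sketch; the one-disaster-per-50-years rate is assumed purely for illustration):

```python
# Memorylessness: if disasters arrive as a Poisson process, the expected wait
# to the next one does not depend on how long you've already gone without.
import random

random.seed(0)
RATE = 1 / 50  # assumed: one disaster per 50 years on average
waits = [random.expovariate(RATE) for _ in range(200_000)]

unconditional = sum(waits) / len(waits)
after_40_safe_years = [t - 40 for t in waits if t > 40]
conditional = sum(after_40_safe_years) / len(after_40_safe_years)

print(unconditional)  # ~50 years
print(conditional)    # also ~50 years: no disaster is "due" after a safe run
```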
Jimmy the rational hypnotist on priming and implicit memory:
The Last Psychiatrist (http://thelastpsychiatrist.com/2010/10/how_not_to_prevent_military_su.html)
-- Ricardo, publicly saying "oops" in his restrained Victorian fashion, in his essay "On Machinery".
I was actually just reading that yesterday because of Cowen linking it in http://marginalrevolution.com/marginalrevolution/2013/01/the-ricardo-effect-in-europe-germany-fact-of-the-day.html
I'm not entirely sure I understand Ricardo's chapter (Victorian economists being hard to read both because of the style and distance), or why, if it's as clear as Ricardo seems to think, no-one ever seems to mention the point in discussions of technological unemployment (and instead, constantly harping on comparative advantage etc). What did you make of it?
-- Eric Hoffer, The True Believer
A decent quote, except I am minded to nitpick that there is no such thing as unbelief as a separate category from belief. We just have credences.
Many futile conversations have I seen among the muggles, wherein disputants tried to make some Fully General point about unbelief vs belief, or doubt vs certainty.
-Pirkei Avot (5:15)
-- Henri Poincaré
Deep wisdom indeed. Some people believe the wrong things, and some believe the right things, some people believe both, some people believe neither.
To me, it expresses the need to pay attention to what you are learning, and decide which things to retain and which to discard. E.g. one student takes a course in Scala and memorizes the code for generics, while the other writes the code but focuses on understanding the notion of polymorphism and what it is good for.
I genuinely don't understand this comment.
Sorry. Attempt #2:
If I had infinite storage space and computing power, I would store every single piece of information I encountered. I don't, so instead I have to efficiently process and store things that I learn. This generally requires that I throw information out the window. For example, if I take a walk, I barely even process most of the detail in my visual input, and I remember very little of it. I only want to keep track of a very few things, like where I am in relation to my house, where the sidewalk is, and any nearby hazards. When the walk is over, I discard even that information. On the other hand, I often have to take derivatives. Although understanding what a derivative means is very important, it would be silly of me to rederive e.g. the chain rule each time I wanted to use it. That would waste a lot of time, and it does not take a lot of space to store the procedure for applying the chain rule. So I store that logically superfluous information because it is important.
In other words, I have to be picky about what I remember. Some information is particularly useful or deep, some information isn't. Just because this is incredibly obvious doesn't mean we don't need to remind ourselves to consciously decide what to pay attention to.
I thought the quote expressed this idea nicely and compactly. Whoever wrote the quote probably did not mean it in quite the same way I understand it, but I still like it.
-Bas van Fraassen, The Scientific Image
What does that mean?
Believing large lies is worse than small lies; basically, it's arguing against the What-The-Hell Effect as applied to rationality. Or so I presume, did not read original.
I had noticed that effect myself, but I didn't know it had a name.
I had noticed it and mistakenly attributed it to the sunk cost fallacy but on reflection it's quite different from sunk costs. However, it was discovering and (as it turns out, incorrectly) generalising the sunk cost fallacy that alerted me to the effect and that genuinely helped me improve myself, so it's a happy mistake.
One thing that helped me was learning to fear the words 'might as well,' as in, 'I've already wasted most of the day so I might as well waste the rest of it,' or 'she'll never go out with me so I might as well not bother asking her,' and countless other examples. My way of dealing with it is to mock my own thought processes ('Yeah, things are really bad so let's make them even worse. Nice plan, genius') and switch to a more utilitarian way of thinking ('A small chance of success is better than none,' 'Let's try and squeeze as much utility out of this as possible' etc.).
I hadn't fully grasped the extent to which I was sabotaging my own life with that one, pernicious little error.
Lambs are young sheep; they have less meat & less wool.
The punishment for livestock rustling being identical no matter what animal is stolen, you should prefer to steal a sheep rather than a lamb.
However, the parent says this is NOT an epistemological principle, that one should prefer to get the most benefit when choosing between equally-punished crimes.
So is it saying that epistemology should not allow for equal punishments for unequal crimes? That seems less like epistemology and more like ethics.
Should our epistemology simply not waste time judging which untrue things are more false than others because we shouldn't be believing false things anyway?
It would be great if Jason would give us more context about this one, since the meaning doesn't seem clear without it.
(If you wonder where "two hundred and forty-two miles" shortening of the river came from, it was the straightening of its original meandering path to improve navigation)
http://xkcd.com/605/
John Locke, Essay Concerning Human Understanding