Rationality Quotes September 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Oglaf (Original comic NSFW)
How have I been reading Oglaf for so long without knowing about the epilogues?
...oh crap, I'm going to have to reread the whole thing, aren't I.
Nah, the wiki makes it much easier.
bahahahaha
And the mouseovers. And the alt text, which is different again.
And the mock ads at the bottom.
ETA: Explanation: Sometimes the banner at the bottom will contain an actual (randomized) ad, but many of the comics have their own funny mock ad associated. (When I noticed this, I went through all the ones I had already read again, to not miss out on that content.)
(I thought I'd clarify this, because this comment got downvoted - possibly because the downvoter misunderstood it as sarcasm?)
For anyone unaware, SMBC has an additional joke panel when you mouse over the red button at the bottom
Actually, you have to click it now. Just a heads up to anyone reading this and trying to find them.
... the what.
Ahh I just finished that.
Ted Chiang, Tower of Babylon
Thomas Edison
Black Books, Elephants and Hens. H/t /u/mrjack2 on /r/hpmor.
-Leonard Susskind, Susskind's Rule of Thumb
Not necessarily a great metric; working on the second-most-probable theory can be the best rational decision if the expected value of working on the most probable theory is lower due to greater cost or lower reward.
Great quote.
Unfortunately, we find ourselves in a world where policy-makers don't just profess that AGI safety isn't a pressing issue; they also aren't taking any action on AGI safety. Even generally sharp people like Bryan Caplan give disappointingly lame reasons for not caring. :(
Why won't you update towards the possibility that they're right and you're wrong?
This model should win out much sooner than some complex, very-low-prior model in which you're a better truth-finder about this topic but not about any topic where truth-finding can be tested reliably*, while they're better truth-finders about topics where truth-finding can be tested (which is what happens when they do their work), but not about this particular topic.
(*because if you expect that, then you should end up actually trying to do at least something that can be checked because it's the only indicator that you might possibly be right about the matters that can't be checked in any way)
Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.
It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time. Also, I think the most productive way to resolve these debates is not to argue the meta-level issues about social epistemology, but to have the object-level debates about the facts at issue. So if Caplan replies to Carl's comment and my own, then we can continue the object-level debate, otherwise... the ball's in his court.
This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff. And when have I said that some public figure agreeing with me made me more sure I'm right? See also my comments here.
If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.
Yes, but why did Caplan not see fit to think about the issue for a significant time, while you did?
There's also the AI researchers who have had the privilege of thinking about relevant subjects for a very long time, education, and accomplishments which verify that their thinking adds up over time - and who are largely the actual source for the opinions held by the policy makers.
By the way, note that the usual method of rejecting wrong ideas is not even coming up with them in the first place, and generally not engaging with them. This is because the space of wrong ideas is much larger than the space of correct ideas.
What I expect to see in the counter-factual world where the AI risk is a big problem, is that the proponents of the AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.
The first problem with highly speculative topics is that a great many arguments exist in favour of either opinion on a speculative topic. The second problem is that each such argument relies on a huge number of implicit or explicit assumptions that are likely to be violated due to their origin as random guesses. The third problem is that there is no expectation that the available arguments would be a representative sample of the arguments in general.
Hmm, I was under the impression that you weren't a big supporter of the hard takeoff to begin with.
Well, your confidence should be increased by the agreement; there's nothing wrong with that. The problem is when it is not balanced by the expected decrease by disagreement.
There are a great many differences in our world model, and I can't talk through them all with you.
Maybe we could just make some predictions? E.g. do you expect Stephen Hawking to hook up with FHI/CSER, or not? I think... oops, we can't use that one: he just did. (Note that this has negligible impact on my own estimates, despite him being perhaps the most famous and prestigious scientist in the world.)
Okay, well... If somebody takes a decent survey of mainstream AI people (not AGI people) about AGI timelines, do you expect the median estimate to be earlier or later than 2100? (Just kidding; I have inside information about some forthcoming surveys of this type... the median is significantly sooner than 2100.)
Okay, so... do you expect more or fewer prestigious scientists to take AI risk seriously 10 years from now? Do you expect Scott Aaronson and Peter Norvig, within 25 years, to change their minds about AI timelines, and concede that AI is fairly likely within 100 years (from now) rather than thinking that it's probably centuries or millennia away? Or maybe you can think of other predictions to make. Though coming up with crisp predictions is time-consuming.
This is why many scientists are terrible philosophers of science. Not all of them, of course; Einstein was one remarkable exception. But it seems like many scientists have views of science (e.g. astonishingly naive versions of Popperianism) which completely fail to fit their own practice.
Yes. When chatting with scientists I have to intentionally remind myself that my prior should be on them being Popperian rather than Bayesian. When I forget to do this, I am momentarily surprised when I first hear them say something straightforwardly anti-Bayesian.
Examples?
Statements like "I reject the intelligence explosion hypothesis because it's not falsifiable."
I see. I doubt that it is as simple as naive Popperianism, however. Scientists routinely construct and screen hypotheses based on multiple factors, and they are quite good at it, compared to the general population. However, as you pointed out, many do not use or even have the language to express their rejection in a Bayesian way, as "I have estimated the probability of this hypothesis being true, and it is too low to care." I suspect that they instinctively map intelligence explosion into the Pascal mugging reference class, together with perpetual motion, cold fusion and religion, but verbalize it in the standard Popperian language instead. After all, that is how they would explain why they don't pay attention to (someone else's) religion: there is no way to falsify it. I suspect that any further discussion tends to reveal a more sensible approach.
Yeah. The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language, and they aren't necessarily taught probability theory very thoroughly, they're used to publishing papers that use p-value science even though they kinda know it's wrong, etc.
So maybe if we had an extended discussion about philosophy of science, they'd retract their Popperian statements and reformulate them to say something kinda related but less wrong. Maybe they're just sloppy with their philosophy of science when talking about subjects they don't put much credence in.
This does make it difficult to measure the degree to which, as Eliezer puts it, "the world is mad." Maybe the world looks mad when you take scientists' dinner party statements at face value, but looks less mad when you watch them try to solve problems they care about. On the other hand, even when looking at work they seem to care about, it often doesn't look like scientists know the basics of philosophy of science. Then again, maybe it's just an incentives problem. E.g. maybe the scientist's field basically requires you to publish with p-values, even if the scientists themselves are secretly Bayesians.
I'm willing to bet most scientists aren't taught these things formally at all. I never was. You pick it up out of the cultural zeitgeist, and you develop a cultural jargon. And then sometimes people who HAVE formally studied philosophy of science try to map that jargon back to formal concepts, and I'm not sure the mapping is that accurate.
I think 'wrong' is too strong here. It's good for some things, bad for others. Look at particle-accelerator experiments: frequentist statistics are the obvious choice because the collider essentially runs the same experiment 600 million times every second, and p-values work well to separate signal from a null hypothesis of 'just background'.
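As a rough illustration of that signal-vs-background separation (a minimal sketch with made-up numbers, not an actual collider analysis): treat the background as a Poisson process and ask how surprising the observed count would be under the "just background" null.

```python
import math

def poisson_tail(k: int, mu: float) -> float:
    """P(N >= k) for N ~ Poisson(mu): one minus the CDF up to k - 1."""
    term = math.exp(-mu)  # P(N = 0)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)  # P(N = i + 1) from P(N = i)
    return 1.0 - cdf

# Made-up numbers: 100 expected background events, 180 observed.
p_value = poisson_tail(180, 100.0)
print(p_value)  # far below any conventional threshold: reject "just background"
```

With the experiment repeated identically an enormous number of times, this frequentist tail probability has a direct operational meaning, which is the point being made above.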
If there was a genuine philosophy of science illumination it would be clear that, despite the shortcomings of the logical empiricist setting in which Popper found himself, there is much more of value in a sophisticated Popperian methodological falsificationism than in Bayesianism. If scientists were interested in the most probable hypotheses, they would stay as close to the data as possible. But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal. Moreover, you cannot falsify with Bayes' theorem, so you'd have to start out with an exhaustive set of hypotheses that could account for the data (already silly), and then you'd never get rid of them---they could only be probabilistically disconfirmed.
Strictly speaking, one can't falsify with any method outside of deductive logic -- even your own Severity Principle only claims to warrant hypotheses, not falsify their negations. Bayesian statistical analysis is just the same in this regard.
A Bayesian analysis doesn't need to start with an exhaustive set of hypotheses to justify discarding some of them. Suppose we have a set of mutually exclusive but not exhaustive hypotheses. The posterior probability of a hypothesis under the assumption that the set is exhaustive is an upper bound for its posterior probability in an analysis with an expanded set of hypotheses. A more complete set can only make a hypothesis less likely, so if its posterior probability is already so low that it would have a negligible effect on subsequent calculations, it can safely be discarded.
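The upper-bound argument above can be sketched numerically (all priors and likelihoods here are invented illustrative values):

```python
def posteriors(priors, likelihoods):
    """Posterior P(H_i | data) by Bayes' theorem over the listed hypotheses."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Three mutually exclusive hypotheses, treated as if exhaustive.
p_three = posteriors([0.5, 0.3, 0.2], [0.9, 0.05, 0.01])

# Expand the set with a fourth hypothesis; the original priors are
# rescaled by a common factor (0.8) so that everything sums to 1.
p_four = posteriors([0.4, 0.24, 0.16, 0.2], [0.9, 0.05, 0.01, 0.5])

# The posterior of each original hypothesis can only drop when the
# set grows, so the "exhaustive" posterior is an upper bound:
print(p_three[2], p_four[2])
```

Here the third hypothesis's posterior falls from roughly 0.004 to roughly 0.003 when the set is expanded; if 0.004 was already negligible for downstream calculations, the hypothesis could be discarded without enumerating every possible alternative.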
I'm a Bayesian probabilist, and it doesn't go against my ideal. I think you're attacking philosophical subjective Bayesianism, but I don't think that's the kind of Bayesianism to which lukeprog is referring.
For what it's worth, I understand well the arguments in favor of Bayes, yet I don't think that scientific results should be published in a Bayesian manner. This is not to say that I don't think that frequentist statistics is frequently and grossly mis-used by many scientists, but I don't think Bayes is the solution to this. In fact, many of the problems with how statistics is used, such as implicitly performing many multiple comparisons without controlling for this, would be just as large of problems with Bayesian statistics.
Either the evidence is strong enough to overwhelm any reasonable prior, in which case frequentist statistics will detect the result just fine; or else the evidence is not so strong, in which case you are reduced to arguing about priors, which seems bad if the goal is to create a societal construct that reliably uncovers useful new truths.
But why not share likelihood ratios instead of posteriors, and then choose whether or not you also want to argue very much (in your scientific paper) about the priors?
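A minimal sketch of that proposal (the numbers are invented for illustration): the paper publishes only the likelihood ratio, and each reader combines it with their own prior odds.

```python
# Made-up error rates for a diagnostic-style test reported in a paper.
p_data_given_h1 = 0.95   # P(observed result | hypothesis true)
p_data_given_h0 = 0.10   # P(observed result | hypothesis false)

# The paper publishes only this, staying silent on priors:
likelihood_ratio = p_data_given_h1 / p_data_given_h0

def posterior_odds(prior_odds: float, lr: float) -> float:
    """Each reader combines the published likelihood ratio with their own prior odds."""
    return prior_odds * lr

# A skeptical reader (prior odds 1:99) and a more credulous one (1:3)
# reach different posteriors from the same published evidence:
print(posterior_odds(1 / 99, likelihood_ratio))
print(posterior_odds(1 / 3, likelihood_ratio))
```

The design choice is that the likelihood ratio summarizes what the data say while leaving the argument about priors out of the paper itself, which is exactly the separation suggested above.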
No, the multiple comparisons problem, like optional stopping, and other selection effects that alter error probabilities are a much greater problem in Bayesian statistics because they regard error probabilities and the sampling distributions on which they are based as irrelevant to inference, once the data are in hand. That is a consequence of the likelihood principle (which follows from inference by Bayes theorem). I find it interesting that this blog takes a great interest in human biases, but guess what methodology is relied upon to provide evidence of those biases? Frequentist methods.
Deborah, what do you think of jsteinhardt's Beyond Bayesians and Frequentists?
Hm. A generalized phenomenon of overwhelming physicist underconfidence could account for a reasonable amount of the QM affair.
Anonymous, found written in the Temple at 2013 Burning Man
Part of that seems to be from HPMOR. I'm not sure where the rest comes from.
Yeah, almost certainly HPMOR inspired. Eliezer's work has spread far.
Nate Silver, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t, New York, 2012, p. 451
"Not being able to get the future exactly right doesn’t mean you don’t have to think about it."
--Peter Thiel
G K Chesterton
I don't think that's the case. There are plenty of shy intellectuals who don't push their ideas on other people. Darwin sat on his big idea for more than a decade.
There are ideas that are about qualia. It doesn't make much sense to try to explain to a blind person what red looks like, and the same goes for other ideas that rest on observed qualia instead of resting on theory. If I believe in a certain idea because I experienced certain qualia, and I have no way of giving you the experience of the same qualia, I can't explain the idea to you. In some instances I might still try to explain to a blind person what red looks like, but there are also instances where I see it as futile.
One way of teaching certain lessons in Buddhism is to give a student a koan that illustrates the lesson and let him meditate on the koan for hours. I don't see anything dishonest about teaching certain ideas that way.
If someone thinks about a topic in terms of black and white it just takes time to teach him to see various shades of grey.
Eugene McCarthy, Human Origins: Are We Hybrids?
As a non-biologist, I kind-of suspect that article is supposed to be some kind of elaborate joke. It sounds convincing to me, but then again, so did Sokal (1996) to non-physicists; my gut feeling's prior probability for that claim is tiny (possibly tinier than rationally warranted, because it kind-of sounds like a parody of ancient astronaut hypotheses); and I can't find any mention of any mammal inter-order hybrids on Wikipedia.
Sokal's paper brought up the possibility of a morphogenetic field affecting quantum mechanics, which sounds slightly less rigorous than a Discworld joke -- Sir Pratchett can at least get the general aspects of quantum physics correct. Likewise, Mrs. Jenna Moran's RPGs have more meaningful statements on set theory than Sokal's joking conflation of the axiom of equality and feminist/racial equality. I'd expect a lot of non-physicists would consider it unconvincing, especially if you allow them the answer "this paper makes no sense".
((I'd honestly expect false positives, more than false negatives, when asking average persons to /skeptically/ test papers on quantum mechanics for fraud. Thirty pages of math showing a subatomic particle to be charming has language barrier problems.))
The greater concern here is that the evidence Mr. McCarthy uses to support his assertions is incredibly weak. The vast majority of his list of interspecies hybrids, for example, are either within a single family or completely untrustworthy (some are simply appeals to legends or internet hoaxes, like the cabbit or dog-bear hybrids). The only example of remotely similar variation to a chimpanzee-pig hybrid while being remotely trustworthy is an alleged rabbit-rat cross, but chasing the citation shows that the claimed evidence likely had a different (and at the time of the original experiment, unknown) cause and that the fertilization never occurred. Other cases conflate mating behavior and fertility, by which definition humans should be capable of hybridizing with rubber and glass. The sheer number of untrustworthy citations -- and, more importantly, that they're mixed together with the verifiable and known good ones -- is a huge red flag.
The quote's interesting -- and correct! as anyone who's shown the double-slit experiment can show -- but there's probably better ways to say it and theories to associate it with.
The concept doesn't come from Sokal but from Rupert Sheldrake who used the term in his 1995 book (http://www.co-intelligence.org/P-morphogeneticfields.html).
There are plenty of New Age people who seriously believe that the world works that way.
Or find it a reasonable / plausible theory... I'm married to one who evolved into one who reads that pseudo-science, instead of the Stephen Hawking she used to read 20 years ago...
This is a blatant parody. Probability of pig+chimp hybrids involved in human origins are at pascal-low levels.
This is worthy of notice. It really shouldn't have been remotely convincing.
Can you identify the factors which caused you to give the statements in this article more credibility than you would have given to any random internet source of an unlikely-sounding claim? Information about what went wrong here might be useful from a rationality-increasing perspective.
Mostly, the fact that I don't know shit about biology, and the writer uses full, grammatical sentences, cites a few references, anticipates possible counterarguments and responds to them, and more generally doesn't show many of the obvious signs of crackpottery.
This is exactly why I (amongst many?) find it so hard to separate the good stuff from the bad stuff. It's the way the matter is brought to you, not the matter itself: a very thoughtful way of presenting it, as Army1987 says, with references, anticipation of counterarguments, etc.
I would also be very wary of McCarthy's argument. Having studied bioinformatics myself, I would say:
Show me the human genes that you think come from pigs. If you name specific genes, we can run our algorithms. Don't talk about things like the form of the vertebrae when we have sequenced the genomes.
Yeah, it's a good quote promoting open-mindedness, but of course that's because crackpots spend a lot of time trying to hide their theories from any criticism in the name of open-mindedness.
Scott Adams
-- TychoCelchuuu on Reddit
Fallacy names are useful for the same reason any term or technical vocab are useful.
'But notice how you could've just said you meant the quantity 1+1+1+1 without yelling "four" first! In fact, that's how all 'numbers' work. If someone is actually using a quantity, you can just give that quantity directly without being a mathematician and finding a pat little name for all of their quantities used.'
Fallacy names are great for chunking something already understood. The problem is that most people who appeal to them don't understand them, and therefore mis-use them. If they spoke in descriptive phrases rather than in jargon, there would be less of an illusion of transparency and people would be more likely to notice that there are discrepancies in usage.
For instance, most people don't understand that not all personal attacks are ad hominem fallacies. The quotation encourages that particular mistake, inadvertently. So it indirectly provides evidence for its own thesis.
Yeah, suppose someone argued instead that it should be OK to kill the other person and take their stuff. And were a convicted murderer.
If you're assuming that they won't be punished if they convinced the other person, then that's true. That would be a conflict of interest and hint at them starting with the bottom line.
If you don't assume that, then it sounds like ad hominem combined with circular logic. Them being a murderer doesn't mean their argument is wrong. In fact, since they're living the conclusion, it's evidence that they actually believe it, and thus that it's right. Furthermore, them being a murderer is only bad if you already accept the conclusion that it's not OK to kill the other person and take their stuff.
You can't say that whether or not they are a murderer has no relation to the argument they're making, though, while you can say that about the face being ugly.
I voted your comment up because I agree that the vocabulary is useful for both the person committing the fallacy and (I think this is overlooked) for the person recognizing the fallacy.
However, I think the point of the original quote is probably that when someone points out a fallacy they are probably feeling angry and want to insult their interlocutor.
-rekam
That's not even an example of the ad hominem fallacy.
"You have an ugly face, so you're wrong" is ad hominem. "You have an ugly face" is not. It's just a statement. Did the speaker imply the second part? Maybe... but probably not. It was probably just an insulting rejoinder.
Insults, i.e. "attacking you, not your argument", are not what ad hominem is. It's a fallacy, remember? It's no error in reasoning to call a person ugly. Only when you conclude from this that they are wrong do you commit the fallacy.
So:
A: It's wrong to stab your neighbor and take their stuff.
B: Your face is ugly.
A: The ugliness of my face has no bearing on moral...
B, interrupting: Didn't say it does! Your face is still ugly!
They did not logically entail it but they did conversationally implicate it (see CGEL, p. 33 and following, for the difference). As per Grice's maxim of relation, people don't normally bring up irrelevant information.
At which point A would be justified in asking, “Why did you bring it up then?” And even if B had (tried to) explicitly cancel the pragmatic implicature (“It's wrong to stab your neighbor and take their stuff” -- ”I won't comment on that; on a totally unrelated note, your face is ugly”), A would still be justified in asking “Why did you change the topic?”
B here is violating Grice's maxims. That's the point. He's not following the cooperative principle. He's trying to insult A (perhaps because he is frustrated with the conversation). So applying Gricean reasoning to deduce B's intended meaning is incorrect.
If A asks "why are you changing the subject?", B's answer would likely be something along the lines of "And your mother's face is ugly too!".
Then he doesn't get to complain when people mis-get his point.
I contest the empirical claim you are making about human behaviour. That reply in that context very nearly always constitutes arguing against the point the other is making. In particular, the example to which you are replying most definitely is an example of a fallacious ad hominem.
In common practice it does. The rules do change based on attractiveness. (Tangential.)
The effect of the fallacy can be implied, can't it?
Jeremy Silman
— Montaigne, Essays, M. Screech's 1971 translation
Richard Rhodes
That only tells you that if you just rely on the scientific method, it won't result in only benevolent knowledge. You could use another method to filter for benevolence.
The same techniques of starting fire can be used to keep your neighbor warm in the winter, or to burn your neighbor's house down.
The same techniques of chemistry can be used to create remedies for diseases, or to create poisons.
The same techniques of business can be used to create mutual benefit (positive-sum exchanges; beneficial trade) or parasitism (negative-sum exchanges; rent-seeking).
The same techniques of rhetorical appeal to fear of contamination can be used to teach personal hygiene and save lives, or to teach racial purity and end them.
It isn't the knowledge that is benevolent or malevolent.
Indeed, one fact I am rather fond of is that some deadly poisons are themselves antidotes to other deadly poisons, such as curare to strychnine, and atropine to nerve gas.
That is a completely different reason than presented in the quote.
Good luck finding one that doesn't also bias you into a corner.
That would be wonderful, world-changing, and unlikely. I hope but do not expect to see it happen.
Richard Mitchell - Less Than Words Can Say
It works similarly for psychology. People who study psychology learn a dozen different explanations of human thinking and behavior, so the smarter among them know these things are far from settled, and perhaps there is no simple answer that explains everything. On the other hand, some people just read a random book on psychology, and they believe they understand everything completely.
Or don't read any books and simply pick it up by osmosis.
The same is broadly true of e.g. pop music or politics: you can't really escape them. It's not necessarily a reason to study them, though.
This seems true. What I am curious about is whether it remains true if you substitute "don't" with "do". Those that do study philosophy have not on average impressed me with their ability to discriminate among the bullshit.
It seems to me that you are identifying 'study philosophy' with 'take philosophy courses/study academic philosophy/etc.', which may not have been the intent of the OP.
When you know a thing, to hold that you know it, and when you do not know a thing, to allow that you do not know it. This is knowledge.
Confucius, Analects
Plato
In a democratic republic of over 300 million people, whether or not you "participate in politics" has virtually no effect on whether your rulers are inferior or superior to yourself (unless "participate in politics" is a euphemism for coup d'état).
Another case of rationalists failing at collective action.
It's not a nation of 300 million rationalists, however.
Yet.
And you don't even need a majority of rationalists by headcount. You just need to find and hack the vulnerable parts of your culture and politics where you have a chance of raising people's expectations for rational decision making. Actual widespread ability in rationality skills comes later.
Whenever you feel pessimistic about moving the mean of the sanity distribution, try reading the Bible or the Iliad and see how far we've come already.
People don't expect rational decision making from politics, because that's not what politics is for. Politics exists for the sake of power, coordination and control, and tribalism, not for any sort of decision making. When politicians make decisions, they optimize for political purposes, not for anything external such as economic, scientific, or cultural outcomes.
When people try to make decisions to optimize something external like that, we don't call them politicians; we call them bureaucrats.
If you tried to do what you suggest, you would end up trying not to improve or reform politics, but to destroy it. Good luck with that.
Depends on who "we" are. A great many people still believe in the Bible and try to emulate it, or other comparable texts.
A little cynical maybe? Politicians don't spend 100% of the time making decisions for purely political reasons. Sometimes they are trying to achieve something, even if broadly speaking the purposes of politics are as you imply.
But of course, most of the people we would prefer to be more rational don't know that's what politics is for, so they aren't hampered by that particular excuse to give up on it. Anyway, they could quite reasonably expect more rational decision making from co-workers, doctors, teachers and others.
I don't think the people making decisions to optimise an outcome are well exemplified by bureaucrats. Try engineers.
Knowing that politics is part of what people do, and that destroying it is impossible, yes I would be trying to improve it, and hope for a more-rational population of participants to reform it. I would treat a claim that the way it is now is eternal and unchangeable as an extraordinary one that's never been true so far. So, good luck with that :)
You aren't seriously suggesting the mean of the sanity distribution hasn't moved a huge amount since the Bible was written? Or even in the last 100 years? I know I'm referring to a "sanity distribution" in an unquantifiable hand-wavy way, but do you doubt that those people who believe in a literalist interpretation of the Bible are now outliers, rather than the huge majority they used to be?
Certainly, they're often trying to achieve something outside of politics in order to gain something within politics. We should strive to give them good incentives so the things they do outside of politics are net benefits to non-politicians.
So teaching them to be more rational would cause them to be less interested in politics, instead of demanding that politicians be more rational-for-the-good-of-all. I'm not sure if that's a good or bad thing in itself, but at least they wouldn't waste so much time obsessing over politics. Being apolitical also enhances cooperation.
That's very true, it just has nothing to do with politics. I'm all for making people more rational in general.
Politicians can be rational. It's just that they would still be rational politicians - they would use their skills of rationality to do more of the same things we dislike them for doing today. The problem isn't irrationally practiced politics, it's politics itself.
It's changed a lot over the past, but not in this respect: I think no society on the scale millions of people has ever existed that wasn't dominated by one or another form of politics harmful to most of its residents.
Indeed, it depends on how you measure sanity. On the object level of the rules people follow, things have gotten much better. But on the more meta level of how people arrive at beliefs, judge them, and discard them, the vast majority of humanity is still firmly in the camp of "profess to believe whatever you're taught as a child, go with the majority, compartmentalize like hell, and be offended if anyone questions your premises".
A democratic republic is not necessary. In any kind of political regime encompassing 300 million people, your participation in politics has very small expected effect on whether your rulers are inferior to you.
This seems a bit mangled. The original in The Republic talks about refusing to rule, not refusing to go into politics. Makes it a bit less of a snappy exhortation for your fellow monkeys to gang up on the other monkeys for the price of actually making more sense.
"One of the penalties for not ruling the world is that it gets ruled by other people." - clearly superior quote
"To know thoroughly what has caused a man to say something is to understand the significance of what he has said in its very deepest sense." -Willard F. Day
Paul Graham
-- Tim Evans-Ariyeh
Ideally, it would be nice if the world could move towards caring about the full outcome, rather than factors like the satisfaction of baseline levels of effort, in more and more situations, not just exceptional ones.
--"Adventure Time" episode "The Businessmen": the zombie businessmen are explaining why they are imprisoning soft furry creatures in a glass bowl.
parody
Another good one from the same source:
A. P. Herbert, Uncommon Law.
Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."
I agree. It strengthens your point to note that, although the quote is normally used seriously, the author intended it mischievously. In context, the "thirteenth stroke" is a defendant, who has successfully rebutted all the charges against him, making the additional claim that "this [is] a free country and a man can do what he likes if he does nobody any harm."
This "crazy" claim convinces the judge to convict him anyway.
For most people, is it necessarily wrong to lose all respect for someone in response to a true statement? Most people are respecting things other than truth, and the point "anyone respectable would have known not to say that" can remain perfectly valid.
I don't lose all respect for X based on one thing they say, but I do increase my respect for them if the controversial or difficult things they say are correct and I conserve expected evidence.
A. P. Herbert, Uncommon Law.
His Master's Voice, Stanislaw Lem; p. 106 from the Northwestern University Press 3rd edition, 1999
I like the self-test idea, but this sort of defeatism is kind of, well, self-defeating.
I think it's true. Short of crude measures like stimulants, it does seem to ebb and flow for no obvious reasons. And it's useful to know if you're currently in a doldrum - you can give up forcing yourself to try to work on creative material, and turn to all the usual chores and small tasks that build up.
— Jack Vance, The Languages of Pao
Improbable would seem more appropriate.
However, to set yourself against all the stupidity in the world is an insurmountable task.
You know, that's really not so implausible...
Apparently, both particulate air pollution and streetlights are capable of this.
http://blogs.discovermagazine.com/crux/2012/08/23/why-is-the-night-sky-turning-red/
Professor Quirrell was not being ironic.
Rationality wakes up last:
Scott Adams on waking up with a numb arm.
I woke up one time with both arms completely numb. I tried to turn the light on and instead fell out of bed. I felt certain that I was going to die right then.
Never experienced this exact experience - I don't sleep on my arm - but waking up stupid? Definitely.
Odd, this has never happened to me. Not the experience of waking up with a numb arm, but the experience of being at all worried about it.
I was quite worried the first time I experienced a numb arm which was both completely dead to sensation and totally immobile for multiple minutes, but after that had happened before, successive occurrences were no longer particularly worrying.
I've experienced 'pins and needles' many times, but a totally 'dead' arm only once. I didn't have any control over it, and when I tried to move it I hit myself in the nose. Quite hard, too!
When I experienced a "totally dead arm," I didn't just not have control over it, I couldn't even wiggle my fingers. It was pretty frightening, since as far as I knew the arm might have experienced extensive cell death from blood deprivation; after all, I had no sign of it being operational at all. My circulation was poor enough that I couldn't even tell if it was still warm, beyond residual heat from my lying on it.
It's happened twice again since then though, and the successive occasions were not particularly distressing.
IIRC the numbness is caused by nerve compression, not blood-flow cutoff.
edit: Apparently it can be either way: http://www.wisegeek.org/what-are-the-most-common-causes-of-numbness-while-sleeping.htm
edit2: And another source claims it's due to nerves, so I dunno. I do find the nerve explanation more plausible than the blood-flow one.
Paul Graham
Yes, but it can be a bad sign either about what you're trying to talk yourself into, or about your state of mind. It simply means that your previous position was strongly held - not because of strong rational evidence alone, since sufficiently strong evidence would override that; genuinely assimilating the new information precludes having to talk yourself into anything. If you have to talk yourself into something, it probably means that there is an irrational aspect to your attachment to the alternative.
And that irrational, often emotional attachment can be either right or wrong; were this not true, gut feeling would answer every question truthfully, and the first plausible explanation one could think of would always be correct.
I interpreted the quote as saying that if you are not readily enthusiastic about something but have to beat yourself into doing it, then it is a sign that you should not direct (any more) resources to it.
As did I, but I disagreed with the point that enthusiasm is a necessary indicator of a good idea. Consider the act of eating one's vegetables (assuming that one is a small, stereotypical child) - intuitively repulsive, but ultimately beneficial, the sort of thing which one might have to talk oneself into.
I've had to talk myself into going on some crazy roller-coasters. After the experience though, I'm extremely glad that I did.
Y'know, there are all sorts of counterexamples to this ... but I think it's still a bad sign, if not a definitive one, on the basis that if I had been more suspicious of things I was talking myself into, I would have had a definite net benefit to my life. (Not counting times I was neurohacking myself, admittedly, but that's not really the same.)
Yes, there's an unfortunate tendency among some "rationalist" types to dismiss heuristics because they don't apply in every situation.
Mark Crislip - Science-Based Medicine
Reality cares about your beliefs.
People who don't believe in ego depletion don't get their ego depleted as much as people who do believe in it.
People who believe that stress is unhealthy have a higher mortality when they have high stress than people who don't hold that belief.
I would expect that if you have more ego depletion than other people it would result in you being more likely to believe in ego depletion. Similarly, if you're suffering health problems due to stress, it would make you think stress is unhealthy.
Your point still stands. Reality does care about your beliefs when the relevant part of reality is you.
I'd guess that there is causation in both directions to some extent, leading to a positive feedback loop.
How high is your confidence that the effect can be completely explained that way?
Not that high, but it does throw into question any studies showing a correlation, and it seems strange to cite an example there's no evidence for.
The placebo effect has little relevance here. People who believe they can fly don't fare better when pushed off cliffs. A world where you believe x is different from a world where you believe not-x, and that has slight physical effects given that we are embodied, but to say 'Reality cares about your beliefs' sounds far too much like a defence of idealism, or the idea that 'everyone has their own truths'.
I'm not sure whether that's true; the last time I investigated that claim, I didn't find the evidence compelling. Placebos are also a relatively clumsy way of changing beliefs intentionally.
How do you know? If you pick a height that kills 50% of the people who don't believe that they can fly, I'm not sure that the number of people killed is the same for those who hold that belief. The belief is likely to make people more relaxed when they are pushed over the cliff, which is helpful for surviving the experience.
I doubt that you'll find many people who hold that belief with the same certainty that they believe the sun will rise tomorrow. If you don't like idealism, argue based on the beliefs that people actually hold in reality instead of escaping into thought experiments.
I would call 20,000 dead Americans per year for the belief that stress is unhealthy more than a slight physical effect.
I don't think that the fact that you pattern match it that way speaks against the idea. I think the original quote comes from a place of Descartes inspired mind-body dualism. We are embodied and the content of our mind has effects.
The original quote is taken from an article about the vaccine controversy. People who don't vaccinate because they believe that God will protect them or whatever actually exist, and they may be slightly less likely to fall ill than people who don't vaccinate but don't hold that belief but a lot more likely to fall ill than people who do vaccinate.
-- Richard James, founding priest of a Toronto based Wicca church, quoted in a thegridto article
John LeCarre, explaining that he didn't have insider information about the intelligence community, and if he had, he would not have been allowed to publish The Spy Who Came in from the Cold, but that a great many people who thought James Bond was too implausible wanted to believe that LeCarre's book was the real deal.
Richard Mitchell - Less Than Words Can Say
Jon Stewart, talking to Richard Dawkins (S18, E156)
Let's get one thing straight: ignorance killed the cat.
Curiosity was framed.
Theophanis the Monk, "The Ladder of Divine Grace"
From Obvious Adam, a business book published in 1916.
Roger Ebert
Would be nice if this were true.
It's probably true for academic film theory. I mean how hard could it really be?
--- Sir Hubert Parry, speaking to The Royal College of Music about the purpose of music examinations
Initially I thought this a wonderful quote because, looking back at my life, I could see several defeats (not all in music) attributable to sipping and sampling. But Sir Hubert is speaking specifically about music. The context tells you Sir Hubert's proposed counter to sipping and sampling: individual tuition aiming towards an examination in the form of a viva.
The general message is "counter the tendency to sipping and sampling by finding something definite to work for, analogous to working one's way up the Royal College of Music grade system". But working out the analogy is left as an exercise for the reader, so the general message, if Sir Hubert intended it at all, is rather feeble.
Myers, D. G. (2012). Exploring social psychology (6th ed.). New York: McGraw-Hill, P.334.
So basically: be close to friends and family, save some money, find a job you're good at.
That's close to my understanding of the quote. I suppose, "autonomy" means not just financial independence, but the sense of inner self, something beyond social roles.
Nick Szabo
Is this a similar message to Penn Jillette saying:
"If you don’t pay your taxes and you don’t answer the warrant and you don’t go to court, eventually someone will pull a gun. Eventually someone with a gun will show up. "
or did I miss the boat?
Well, it's similar, but for two differences:
1) It uses a different and wider category of examples. Viz. "initiate force [...] to compel them to hand over goods, to let us search their property, or to testify."
2) It makes a consequentialist claim about forcing people to e.g. let us search their property for evidence: "we can't properly respond to a global initiation of force without local initiations of force."
The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences." The first difference is rhetorically important because it is a place where people's gut reaction is more likely to endorse the use of force, and people have been less exposed to memes about forcibly searching people's property (compared to the ubiquity of people disliking taxes) that would cause them to automatically respond rather than thinking.
Actually that isn't what Szabo is saying. His point is to contradict the claim of the anarcho-capitalists that "if we never force people to do things, that will lead to good consequences."
Q: Why are Unitarians lousy singers? A: They keep reading ahead in the hymnal to see if they agree with it.
Saturday Morning Breakfast Cereal
by Hannes Leitgeb, from his joint teaching course with Stephan Hartmann (author of Bayesian Epistemology) on Coursera entitled 'An Introduction to Mathematical Philosophy'.
The course topics are "Infinity, Truth, Rational Belief, If-Then, Confirmation, Decision, Voting, and Quantum Logic and Probability". In many ways, a very LW-friendly course, with many mentions and discussions of people like Tarski, Gödel etc.
Peter Shor replying in the comment section of Scott Aaronson's blog post Firewalls.
Breaking Bad, episode Rabid Dog.
(Although "won't" should be "can't".)
Depending on how the violence is applied, it can also make it better.
Slightly edited from Scott Adams' blog.
And a similar sentiment from SMBC comics.
I personally can't see how a monkey turns into a human. But that's irrelevant because that is not the claim of natural selection. This makes a strawman of most positions that endorse something approximately like free will. Also:
Just the legal system? Gah. Everybody on earth does this about 200 times a day.
But it's not who you are underneath, it's what you do that defines you.
-Rachel Dawes, Batman Begins
-- Gordon R. Dickson, "The Tactics of Mistake".
-Hermann Hesse, The Glass Bead Game
"[G]et wisdom: and with all thy getting, get understanding." -- Proverbs 4:7
Based on the Hebrew original, a more accurate translation would be: "The beginning of knowledge is to acquire knowledge, and in all of your acquisitions acquire understanding," pointing to two important principles: 1. first gain the relevant body of knowledge and only then begin theorizing; 2. focus our wealth and energy on knowledge.
It seems like Proverbs has a lot of important content for gaining rationality, perhaps it should be added to our reading lists
The wisdom books of the Bible are pretty unusual compared to the rest of the Bible, because they're an intrusion of some of the best surviving wisdom literature. As such, they're my favorite parts of the Bible, and I've found them well worth reading (in small doses, a little bit at a time, so I'm not overwhelmed).
I highly recommend Robert Alter's translation in "The Wisdom Books," if you're interested in reading it.
thanks but I prefer reading in the original Hebrew to reading in translation.
Ah, excellent. I've always wanted to ask someone who reads Hebrew - is the writing in the Bible of lesser or greater quality in the original (compared to the English - I know translations vary, but is there a distinct difference, or is the Hebrew within the range)?
The original is superior in a number of ways (to any translation I have seen, but I suspect that it is superior to all translations, since much is of necessity lost in translation generally). But is there a specific aspect you are wondering about, so that I could address your question more particularly?
Bizarro Blog
Sorry, this is nonsense. It's not hard to Google up a copy of the FCC rules. http://www.fcc.gov/guides/obscenity-indecency-and-profanity :
I am fairly sure that "I’m going to rape your 8-year-old daughter with a trained monkey" would count as describing sexual activities in patently offensive terms, and would not be allowed when direct use of swear words would not be allowed. Just because you don't use a list of words doesn't mean that what you say will be automatically allowed.
Furthermore, the Wikipedia page on the seven words ( http://en.wikipedia.org/wiki/Seven_dirty_words ) points out that " The FCC has never maintained a specific list of words prohibited from the airwaves during the time period from 6 a.m. to 10 p.m., but it has alleged that its own internal guidelines are sufficient to determine what it considers obscene." It points out cases where the words were used in context and permitted.
In other words, this quote is based on a sound-bite distortion of actual FCC behavior and, as inaccurate research, is automatically ineligible to be a good rationality quote.
What is the basis for you being sure?
Howard Stern, a well-known "shock jock" spent many years on airwaves regulated by the FCC. He more or less specialized in "describing sexual activities in patently offensive terms" and while he had periodic run-ins with the FCC, he, again, spent many years doing this.
The FCC rule is deliberately written in a vague manner to give the FCC discretionary power. As a practical matter, the seven dirty words are effectively prohibited by FCC and other offensive expressions may or may not be prohibited. Broadcasters occasionally test the boundaries and either get away with it or get slapped down.
Dan Ariely, Predictably Irrational: The Hidden Forces that Shape Our Decisions, New York, 2008, pp. 171-172
In my experience, who started the conflict, who is to blame, etc. is explicitly taught as fact to each side's children. Israelis and Palestinians don't agree on facts at all. A civilized discussion of politics generally requires agreeing not to discuss most past facts.
-- John Scalzi
So is the failure mode of many people who are not, and don't hold themselves to be, clever. I fail to see the correlation.
ETA: Scalzi addresses a very specific topic, and even then he really seems to address some specific anecdote that he doesn't share. I don't think it's a rationality quote.
-- Norman Page, Auden and Isherwood: The Berlin Years
Not quite seeing this as a rationality quote. What's your reasoning?
"The Great Phrase-book Fallacy" is both amusing and instructive. I laughed when I read it because I remembered I'd been a victim of it too once, in less seedy circumstances.
-- Albert Einstein
Checking Google failed to yield an original source cited for this quote.
I got it from the biography, "Einstein: His Life and Universe" by Walter Isaacson, page 393.
The Notes for "Chapter Seventeen: Einstein's God" on page 618 state that the quote comes from:
Great book, by the way.
These (nebulous) assertions seem unlikely on many levels. Psychopaths have few morals but continue to exist. I have no idea what "inner balance" even is.
He may be asserting that morals are necessary for the existence of humanity as a whole, in which case I'd point to many animals with few morals who continue to exist just fine.
I know of no animals other than humans who have nuclear weapons and the capacity to completely wipe themselves out on a whim.
True, but it's not clear morals have saved us from this. Many of our morals emphasize loyalty to our own groups (e.g. the USA) over our out-groups (e.g. the USSR), with less than ideal results. I think if I replaced "morality" with "benevolence" I'd find the quote more correct. I likely read it too literally.
Though the rest of it still doesn't make any sense to me.
Most don't even know why they believe what they believe, man
Never taking a second to look at life
Bad water in our seeds, y'all, still growing weeds, dawg
-- CunninLynguists featuring Immortal Technique, Never Know Why, A Piece of Strange (2006)