Rationality Quotes September 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Comments (456)
"[G]et wisdom: and with all thy getting, get understanding." -- Proverbs 4:7
Based on the Hebrew original, a more accurate translation would be: "The beginning of knowledge is to acquire knowledge, and in all of your acquisitions acquire understanding," pointing to two important principles: (1) first gain the relevant body of knowledge, and only then begin theorizing; (2) focus our wealth and energy on knowledge.
It seems like Proverbs has a lot of important content for gaining rationality; perhaps it should be added to our reading lists.
The wisdom books of the Bible are pretty unusual compared to the rest of the Bible, because they're an intrusion of some of the best surviving wisdom literature. As such, they're my favorite parts of the Bible, and I've found them well worth reading (in small doses, a little bit at a time, so I'm not overwhelmed).
I highly recommend Robert Alter's translation in "The Wisdom Books," if you're interested in reading it.
Thanks, but I prefer reading the original Hebrew to reading a translation.
Ah, excellent. I've always wanted to ask someone who reads Hebrew: is the writing in the Bible of lesser or greater quality in the original (compared to the English)? I know translations vary, but is there a distinct difference, or is the Hebrew within the range?
The original is superior in a number of ways (to any translation I have seen, but I suspect it is superior to all translations, since much is of necessity lost in translation generally). But is there a specific aspect you are wondering about, so that I could address your question more particularly?
"Not being able to get the future exactly right doesn’t mean you don’t have to think about it."
--Peter Thiel
— Montaigne, Essays, M. Screech's 1971 translation
-- TychoCelchuuu on Reddit
Fallacy names are useful for the same reason any term or piece of technical vocabulary is useful.
'But notice how you could've just said you meant the quantity 1+1+1+1 without yelling "four" first! In fact, that's how all 'numbers' work. If someone is actually using a quantity, you can just give that quantity directly without being a mathematician and finding a pat little name for all of their quantities used.'
I voted your comment up because I agree that the vocabulary is useful for both the person committing the fallacy and (I think this is overlooked) for the person recognizing the fallacy.
However, I think the point of the original quote is probably that when someone points out a fallacy they are probably feeling angry and want to insult their interlocutor.
Fallacy names are great for chunking something already understood. The problem is that most people who appeal to them don't understand them, and therefore mis-use them. If they spoke in descriptive phrases rather than in jargon, there would be less of an illusion of transparency and people would be more likely to notice that there are discrepancies in usage.
For instance, most people don't understand that not all personal attacks are ad hominem fallacies. The quotation encourages that particular mistake, inadvertently. So it indirectly provides evidence for its own thesis.
Yeah, suppose someone argued instead that it should be OK to kill the other person and take their stuff. And were a convicted murderer.
If you're assuming that they won't be punished if they convinced the other person, then that's true. That would be a conflict of interest and hint at them starting with the bottom line.
If you don't assume that, then it sounds like ad hominem combined with circular logic. Them being a murderer doesn't mean their argument is wrong. In fact, since they're living the conclusion, it's evidence that they actually believe it, and thus that it's right. Furthermore, them being a murderer is only bad if you already accept the conclusion that it's not OK to kill the other person and take their stuff.
You can't say that whether or not they are a murderer has no relation to the argument they're making, while you can say that for the face being ugly, though.
That's not even an example of the ad hominem fallacy.
"You have an ugly face, so you're wrong" is ad hominem. "You have an ugly face" is not. It's just a statement. Did the speaker imply the second part? Maybe... but probably not. It was probably just an insulting rejoinder.
Insults, i.e. "Attacking you, not your argument", is not what ad hominem is. It's a fallacy, remember? It's no error in reasoning to call a person ugly. Only when you conclude from this that they are wrong do you commit the fallacy.
So:
A: It's wrong to stab your neighbor and take their stuff.
B: Your face is ugly.
A: The ugliness of my face has no bearing on moral...
B, interrupting: Didn't say it does! Your face is still ugly!
The effect of the fallacy can be implied, can't it?
Can be and usually is (implied).
I contest the empirical claim you are making about human behaviour. That reply in that context very nearly always constitutes arguing against the point the other is making. In particular, the example to which you are replying most definitely is an example of a fallacious ad hominem.
In common practice it does. The rules do change based on attractiveness. (Tangential.)
But A hadn't specified who the stabber is or who the stabbee is.
They did not logically entail it but they did conversationally implicate it (see CGEL, p. 33 and following, for the difference). As per Grice's maxim of relation, people don't normally bring up irrelevant information.
At which point A would be justified in asking, “Why did you bring it up then?” And even if B had (tried to) explicitly cancel the pragmatic implicature (“It's wrong to stab your neighbor and take their stuff” -- ”I won't comment on that; on a totally unrelated note, your face is ugly”), A would still be justified in asking “Why did you change the topic?”
B here is violating Grice's maxims. That's the point. He's not following the cooperative principle. He's trying to insult A (perhaps because he is frustrated with the conversation). So applying Gricean reasoning to deduce B's intended meaning is incorrect.
If A asks "why are you changing the subject?", B's answer would likely be something along the lines of "And your mother's face is ugly too!".
-rekam
However, to set yourself against all the stupidity in the world is an insurmountable task.
You know, that's really not so implausible...
Professor Quirrell was not being ironic.
That's a surprisingly forgiving thing to say. She lives in a place where eating legs to prevent starvation is a venerable military tradition, and a non-zero number of people end up in the Girls' Working School.
Most don't even know why they believe what they believe, man
Never taking a second to look at life
Bad water in our seeds, y'all, still growing weeds, dawg
-- CunninLynguists featuring Immortal Technique, Never Know Why, A Piece of Strange (2006)
You won't be able to be as selective, maybe, although many here would argue that studying philosophy will decrease the quality of your bullshit meter rather than improve it.
I think that is most definitely false, because many of the ideas in philosophy contradict each other, and you get good exposure to contradictory good-looking arguments, which teaches you to question such arguments in general.
Popular science books, on the other hand, often tend to explain true conclusions using fallacious arguments.
To steel-man somervta's point, it might be that philosophy decreases the quality of your bullshit meter by making it overactive. I don't find it plausible that philosophy generally makes people hyper-credulous, but I could buy that it generally makes people hyperskeptical, quibbling, self-undermining, and/or directionless.
The same is broadly true of e.g. pop music or politics: you can't really escape them. It's not necessarily a reason to study them, though.
It works similarly for psychology. People who study psychology learn a dozen different explanations of human thinking and behavior, so the smarter among them know these things are far from settled, and perhaps there is no simple answer that explains everything. On the other hand, some people just read a random book on psychology, and they believe they understand everything completely.
Or don't read any books and simply pick it up by osmosis.
This seems true. What I am curious about is whether it remains true if you substitute "don't" with "do". Those that do study philosophy have not on average impressed me with their ability to discriminate among the bullshit.
it seems to me that you are identifying 'study philosophy' as 'take philosophy courses/study academic philosophy/etc', which may not have been the intent of the OP
Plato
In a democratic republic of over 300 million people, whether or not you "participate in politics" has virtually no effect on whether your rulers are inferior or superior to yourself (unless "participate in politics" is a euphemism for coup d'état).
A democratic republic is not necessary. In any kind of political regime encompassing 300 million people, your participation in politics has very small expected effect on whether your rulers are inferior to you.
Another case of rationalists failing at collective action.
It's not a nation of 300 million rationalists, however.
Yet.
And you don't even need a majority of rationalists by headcount. You just need to find and hack the vulnerable parts of your culture and politics where you have a chance of raising people's expectations for rational decision making. Actual widespread ability in rationality skills comes later.
Whenever you feel pessimistic about moving the mean of the sanity distribution, try reading the Bible or the Iliad and see how far we've come already.
People don't expect rational decision making from politics, because that's not what politics is for. Politics exists for the sake of power (politics), coordination and control, and of tribalism, not for any sort of decision making. When politicians make decisions, they optimize for political purposes, not for anything external such as economic, scientific, cultural, etc. outcomes.
When people try to make decisions to optimize something external like that, we don't call them politicians; we call them bureaucrats.
If you tried to do what you suggest, you would end up trying not to improve or reform politics, but to destroy it. Good luck with that.
Depends on who "we" are. A great many people still believe in the Bible and try to emulate it, or other comparable texts.
A little cynical maybe? Politicians don't spend 100% of the time making decisions for purely political reasons. Sometimes they are trying to achieve something, even if broadly speaking the purposes of politics are as you imply.
But of course, most of the people we would prefer to be more rational don't know that's what politics is for, so they aren't hampered by that particular excuse to give up on it. Anyway, they could quite reasonably expect more rational decision making from co-workers, doctors, teachers and others.
I don't think the people making decisions to optimise an outcome are well exemplified by bureaucrats. Try engineers.
Knowing that politics is part of what people do, and that destroying it is impossible, yes I would be trying to improve it, and hope for a more-rational population of participants to reform it. I would treat a claim that the way it is now is eternal and unchangeable as an extraordinary one that's never been true so far. So, good luck with that :)
You aren't seriously suggesting the mean of the sanity distribution hasn't moved a huge amount since the Bible was written? Or even in the last 100 years? I know I'm referring to a "sanity distribution" in an unquantifiable, hand-wavy way, but do you doubt that those people who believe in a literalist interpretation of the Bible are now outliers, rather than the huge majority they used to be?
Certainly, they're often trying to achieve something outside of politics in order to gain something within politics. We should strive to give them good incentives so the things they do outside of politics are net benefits to non-politicians.
So teaching them to be more rational would cause them to be less interested in politics, instead of demanding that politicians be more rational-for-the-good-of-all. I'm not sure if that's a good or bad thing in itself, but at least they wouldn't waste so much time obsessing over politics. Being apolitical also enhances cooperation.
That's very true, it just has nothing to do with politics. I'm all for making people more rational in general.
Politicians can be rational. It's just that they would still be rational politicians - they would use their skills of rationality to do more of the same things we dislike them for doing today. The problem isn't irrationally practiced politics, it's politics itself.
It's changed a lot over the past, but not in this respect: I think no society on the scale of millions of people has ever existed that wasn't dominated by one or another form of politics harmful to most of its residents.
Indeed, it depends on how you measure sanity. On the object level of the rules people follow, things have gotten much better. But on the more meta level of how people arrive at beliefs, judge them, and discard them, the vast majority of humanity is still firmly in the camp of "profess to believe whatever you're taught as a child, go with the majority, compartmentalize like hell, and be offended if anyone questions your premises".
This seems a bit mangled. The original in The Republic talks about refusing to rule, not refusing to go into politics. Makes it a bit less of a snappy exhortation for your fellow monkeys to gang up on the other monkeys, at the price of actually making more sense.
"One of the penalties for not ruling the world is that it gets ruled by other people." - clearly superior quote
Thomas Edison
Paul Graham
Yes, but it can be a bad sign either about what you're trying to talk yourself into, or about your state of mind. It simply means that your previous position was strongly held - not because of strong rational evidence alone, since stronger evidence could override that; assimilating such evidence would preclude any need to talk yourself into it. If you have to talk yourself into something, it probably means that there is an irrational aspect to your attachment to the alternative.
And that irrational, often emotional attachment can be either right or wrong; were this not true, gut feeling would answer every question truthfully, and the first plausible explanation one could think of would always be correct.
I interpreted the quote as saying that if you are not readily enthusiastic about something but have to beat yourself into doing it, then it is a sign that you should not direct (any more) resources to it.
As did I, but I disagreed with the point that enthusiasm is a necessary indicator of a good idea. Consider the act of eating one's vegetables (assuming that one is a small, stereotypical child) - intuitively repulsive, but ultimately beneficial, the sort of thing which one might have to talk oneself into.
I've had to talk myself into going on some crazy roller-coasters. After the experience though, I'm extremely glad that I did.
Y'know, there are all sorts of counterexamples to this ... but I think it's still a bad sign, if not a definitive one, on the basis that if I had been more suspicious of things I was talking myself into I would have had a definite net benefit to my life. (Not counting times I was neurohacking myself, admittedly, but that's not really the same.)
Yes, there's an unfortunate tendency among some "rationalist" types to dismiss heuristics because they don't apply in every situation.
Richard Rhodes
That only tells you that if you just rely on the scientific method, it won't result in only benevolent knowledge. You could use another method to filter for benevolence.
The same techniques of starting fire can be used to keep your neighbor warm in the winter, or to burn your neighbor's house down.
The same techniques of chemistry can be used to create remedies for diseases, or to create poisons.
The same techniques of business can be used to create mutual benefit (positive-sum exchanges; beneficial trade) or parasitism (negative-sum exchanges; rent-seeking).
The same techniques of rhetorical appeal to fear of contamination can be used to teach personal hygiene and save lives, or to teach racial purity and end them.
It isn't the knowledge that is benevolent or malevolent.
That is a completely different reason than presented in the quote.
Good luck finding one that doesn't also bias you into a corner.
That would be wonderful, world-changing, and unlikely. I hope but do not expect to see it happen.
--- Sir Hubert Parry, speaking to The Royal College of Music about the purpose of music examinations
Initially I thought this a wonderful quote because, looking back at my life, I could see several defeats (not all in music) attributable to sipping and sampling. But Sir Hubert is speaking specifically about music. The context tells you Sir Hubert's proposed counter to sipping and sampling: individual tuition aiming towards an examination in the form of a viva.
The general message is "counter the tendency to sipping and sampling by finding something definite to work for, analogous to working one's way up the Royal College of Music grade system". But working out the analogy is left as an exercise for the reader, so the general message, if Sir Hubert intended it at all, is rather feeble.
Nate Silver, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t, New York, 2012, p. 451
Dan Ariely, Predictably Irrational: The Hidden Forces that Shape Our Decisions, New York, 2008, pp. 171-172
In my experience, who started the conflict, who is to blame, etc. is explicitly taught as fact to each side's children. Israelis and Palestinians don't agree on facts at all. A civilized discussion of politics generally requires agreeing not to discuss most past facts.
--Mr. Gradgrind, from Hard Times by Charles Dickens.
The character is portrayed as a villain, but this quote struck me as fair (if you take a less confused view of "Facts" than Gradgrind).
Facts alone are fairly useless without processes for using them to gather more. A piece of paper can have facts inscribed upon it more durably than the human mind can, yet we rely on the latter rather than the former to guide us through life because it is capable of using those facts, not merely possessing them.
http://xkcd.com/863/
It looks to me like you're making the sophisticated point that some facts vary in usefulness. I agree.
The point being made by Gradgrind is much more basic: children should focus on Fact over Fancy. As an example, he refuses to teach his children fairy tales, deciding that they should learn science instead. (Unfortunately, Dickens presents science as dull collections in cabinets, and so the children are rather put out by this.)
"children should focus on Fact over Fancy"
The superiority of facts over fancy in [early] education is an empirical question though, right?
Yep, though I'll point out that the quote isn't limited to what we refer as 'early' education. I'm not an expert in education, so I won't pretend to know a solid answer to that empirical question, but anecdotal evidence from various famous, clever, and productive people suggests that a childhood focused on facts is beneficial.
I think we can assume that no one would suggest that an education omit facts entirely (hence, 'early'). I also agree that a fact-focused early education would be beneficial. The question raised by your quote is whether it would be beneficial to largely or entirely omit fancy. I do think that's a tough empirical question, though that's the kind of thing where empirical answers are not likely to be forthcoming.
Clearly, education in biology, mathematics, and the like should be factual. No one would argue with that. So what sort of thing are we talking about? What is the subject matter for which someone would even suggest fiction as a mode of education?
My guess is that we're talking about something like moral education. I can't think of any alternatives, anyway (other than education in the history of literature, but that suggestion would be question begging). Can we think of another way to provide a moral education that omits fiction?
Well we could certainly teach moral philosophy (though where that lies on the fact-fiction axis I don't know) rather than literature. There we have another empirical question, though my experience has been that moral philosophy doesn't go over very well with the very young. Tends to do more harm than good. Do you have a suggestion here?
One alternative (the alternative that Gradgrind had in mind, I think) is to omit moral education entirely. I take it Dickens' thought was that this is the sort of thing you wouldn't need if you were educating slaves in more sophisticated forms of labor, because their behavior is managed externally and they don't need to give any thought to how to live their own lives. That's my impression, anyway.
Dickens actually mocks Gradgrind for this:
I would suspect another major point of contention is how much weight mathematics and biology should get relative to other subjects. (Now, Gradgrind does have the confusion, more obvious elsewhere, that classifications are important facts rather than fuzzy collections, and this is a confusion worth criticizing.)
It's not clear to me what you mean by "moral education." Gradgrind puts a lot of effort into cultivating the "moral character" of his children (in fact, this seems to be the primary reason for his banishment of fancy). Very little effort is put into teaching them how to cultivate their own character, which is what I would take moral philosophy to mean (but even that may be too practical an interpretation of it!).
It is in fact, but not in fancy.
Witty, but completely unclear - I have no idea what your point is.
It's an empirical question if you deal in facts. But if you deal in fancies, everyone's got their own fancy and nobody's right or wrong, so there are no properly empirical questions.
ah, ok. I interpreted it as a preference for teaching Fact rather than Theory.
Breaking Bad, episode Rabid Dog.
(Although "won't" should be "can't".)
Depending on how the violence is applied, it can also make it better.
-- Albert Einstein
These (nebulous) assertions seem unlikely on many levels. Psychopaths have few morals but continue to exist. I have no idea what "inner balance" even is.
He may be asserting that morals are necessary for the existence of humanity as a whole, in which case I'd point to many animals with few morals who continue to exist just fine.
I know of no animals other than humans who have nuclear weapons and the capacity to completely wipe themselves out on a whim.
True, but it's not clear morals have saved us from this. Many of our morals emphasize loyalty to our in-groups (e.g. the USA) over our out-groups (e.g. the USSR), with less than ideal results. I think if I replaced "morality" with "benevolence" I'd find the quote more correct. I likely read it too literally.
Though the rest of it still doesn't make any sense to me.
The existence of nuclear weapons should be taken as evidence that humans are not very moral. (And yet survive so far.)
Einstein is not saying that humans are necessarily moral, but rather that they ought to be moral.
Furthermore, it is arguable that nuclear weapons are not necessarily immoral in and of themselves. Like any tool or weapon, they can be used for moral and immoral ends. For instance, nuclear weapons may well be one of the most effective means of destroying Earth-directed masses such as Existential Risk threatening asteroids. They may also be extremely effective for deterring conventional warfare between major powers.
The only previous actual use of nuclear weapons against human targets was for the purpose of ending a world war, and it did so rather successfully. That we have chosen not to use nuclear weapons irresponsibly may well suggest that those with the power to wield nuclear weapons have in fact been more morally responsible than we give them credit for.
Perhaps. Alternatively, when faced with a similarly-armed opponent, even our habitually bloody rulers can be deterred by the prospect of being personally burned to death with nuclear fire.
More like our supposedly bloody soldiers, at least in some of the more alarming close calls.
I was about to say your point stands, but actually, wouldn't at least some of them have been in bunkers? I'll have to check that, now...
Huh? Can you unpack this for me, I don't see how it can make sense.
Start from "The very existence of flame-throwers proves that some time, somewhere, someone said to themselves, You know, I want to set those people over there on fire, but I'm just not close enough to get the job done", I guess.
Doesn't help me much. The purpose of weapons -- all weapons -- is to kill. What exactly is the moral difference between a nuclear bomb and a conventional bomb?
Not true. The purpose of some weapons is to incapacitate or subdue. For example, stun guns, tear gas, truncheons, flashbangs, etc.
More exactly, the purpose of a weapon is to use pain to change behavior--which matches a general definition of "punishment." Sometimes the mere threat of pain suffices to change behavior. In cases of mutual deterrence (or less drastic, like everyday border patrols) that's the point: to make you behave differently from what you would otherwise, by appealing merely to your expectation of pain.
No, I don't think so. But to avoid the distraction of trying to define "weapons", let me assert that we are talking about military weapons -- instruments devised and used with the express purpose of killing other humans. The issue is whether nuclear weapons have any special moral status, so we're not really concerned with tear gas and tasers.
Why are nuclear weapons morally different from conventional bombs or machine guns or cannons?
Strategic nuclear weapons - the original and most widespread nuclear weapons - cannot be used with restraint. They have a huge blast radius and they kill everyone in it indiscriminately.
The one time they were used demonstrated this well. They are the most effective and efficient way, not merely to defeat an enemy army (which has bunkers, widely dispersed units, and retaliation capabilities), but to kill the entire civilian population of an enemy city.
To kill all the inhabitants of an enemy city, usually by one or another type of bombardment, was a goal pursued by all sides in both world wars. Nuclear weapons made it much easier, cheaper, and harder to defend against.
Tactical nuclear weapons are probably different; they haven't seen (much? any?) use in real wars to be certain.
What I think places the atom bomb in its own category is that its potential for destruction is completely out of proportion with whatever tactical reason you may have for using it. Here we're dealing with destruction on a civilization level. This is the first time in human history when the end of the world may come from our own hands. Nothing in our evolutionary past could have equipped us to deal with such a magnitude of danger.
In the Middle Ages, the Pope was shocked at the implications of archery--you could kill from a distance, almost as effectively as with a sword, but without exposing yourself too much. He thought it was a dishonorable way of killing. By the time cannons were invented, everyone was more or less used to seeing archers in battle, but this time it was the capacity for devastation brought by cannons that was beyond anything previously experienced. Ditto for every increasing level of destructive power: machine guns, bomber airplanes, all the way up to the atom bomb.
But the atom bomb is a gamechanger. No amount of animosity or vengefulness or spite can possibly justify vaporizing millions of human lives in an instant. Even if your target were a military citadel, the destruction will inevitably reach countless innocents that the post-WW2 international war protocols were designed to protect. Throwing the atom bomb is the Muggle equivalent of Avada Kedavra--there is no excuse that you can claim in your defense.
Consider what "the cold war" might have been like if we hadn't of had nuclear weapons. It probably would have been less cold. Come to think of it, cold wars are the best kind of wars. We could use more of them.
Yes, nukes have done terrible things, could have done far worse, and still might. However, since their invention, conventional weapons have still killed far, far more people. We've seen plenty of chances for countries to use nukes where they've not, so I think it's safe to say the existence of nukes isn't on average more dangerous than the existence of other weapons. The danger in them seems to come from the existential risk, which is not present when using conventional weapons.
Indeed, I'm pretty sure that if not for nuclear weapons, some right-thinking Russian would have declared war over the phrase "hadn't of had". And very rightly so. The slaughter inflicted by mere armies of millions, with a few tens of thousands of tanks, would have been a small price to pay to rid the world of abominations like that one.
Consider what the last big "hot war" would have been like if the atom bomb had been developed even a couple of years earlier, or by another side.
The war would have been over faster, with possibly lower total number of casualties?
The war might have been over faster, but I think with a much higher number of casualties.
That's not obvious to me. Consider empirical data: the casualties from conventional bombing raids. And more empirical data: the US did not drop a nuke on Tokyo. Neither did it drop a nuke on Kyoto or Osaka. The use of atomic bombs was not designed for maximum destruction/casualties.
The actual use of the atom bomb against Japan was against an already defeated enemy. The US had nothing to fear from Japan at that point, and so they didn't need to strike with maximum power.
On the other hand, imagine a scenario where use of the Bomb isn't guaranteed to end the war at one stroke, and you have to worry about an enemy plausibly building their own Bomb before being defeated. What would Stalin, or Hitler, or Churchill, do with an atom bomb in 1942? The same thing they tried to do with ordinary bombs, scaled up: build up an arsenal of at least a few dozen (time permitting), then try to drop one simultaneously on every major enemy city within a few days of one another.
WW2 casualties were bad enough, but they never approached the range of "kill 50% of the population in each of the 50 biggest enemy cities, in a week's bombing campaign, conditional only on getting a single bomber with a single bomb to the target".
Checking Google failed to yield an original source cited for this quote.
I got it from the biography, "Einstein: His Life and Universe" by Walter Isaacson, page 393.
The Notes for "Chapter Seventeen: Einstein's God" on page 618 state that the quote comes from:
Great book, by the way.
Paul Graham
-Hermann Hesse, The Glass Bead Game
-Leonard Susskind, Susskind's Rule of Thumb
This is why many scientists are terrible philosophers of science. Not all of them, of course; Einstein was one remarkable exception. But it seems like many scientists have views of science (e.g. astonishingly naive versions of Popperianism) which completely fail to fit their own practice.
Yes. When chatting with scientists I have to intentionally remind myself that my prior should be on them being Popperian rather than Bayesian. When I forget to do this, I am momentarily surprised when I first hear them say something straightforwardly anti-Bayesian.
Examples?
Statements like "I reject the intelligence explosion hypothesis because it's not falsifiable."
I see. I doubt that it is as simple as naive Popperianism, however. Scientists routinely construct and screen hypotheses based on multiple factors, and they are quite good at it, compared to the general population. However, as you pointed out, many do not use or even have the language to express their rejection in a Bayesian way, as "I have estimated the probability of this hypothesis being true, and it is too low to care." I suspect that they instinctively map intelligence explosion into the Pascal mugging reference class, together with perpetual motion, cold fusion and religion, but verbalize it in the standard Popperian language instead. After all, that is how they would explain why they don't pay attention to (someone else's) religion: there is no way to falsify it. I suspect that any further discussion tends to reveal a more sensible approach.
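The Bayesian phrasing in the comment above ("I have estimated the probability of this hypothesis being true, and it is too low to care") can be made concrete with Bayes' rule. A toy sketch, with made-up numbers purely for illustration:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H | E) from a prior P(H) and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# A hypothesis starting at P(H) = 0.01, with evidence three times
# likelier if H is true than if it is false:
p = posterior(0.01, 0.3, 0.1)
print(p)  # ~0.029: updated upward, but still "too low to care"
```

Nothing here requires the hypothesis to be falsifiable in the Popperian sense; it only requires that the evidence be more or less likely under the hypothesis than under its negation.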
Yeah. The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language, and they aren't necessarily taught probability theory very thoroughly, they're used to publishing papers that use p-value science even though they kinda know it's wrong, etc.
So maybe if we had an extended discussion about philosophy of science, they'd retract their Popperian statements and reformulate them to say something kinda related but less wrong. Maybe they're just sloppy with their philosophy of science when talking about subjects they don't put much credence in.
This does make it difficult to measure the degree to which, as Eliezer puts it, "the world is mad." Maybe the world looks mad when you take scientists' dinner party statements at face value, but looks less mad when you watch them try to solve problems they care about. On the other hand, even when looking at work they seem to care about, it often doesn't look like scientists know the basics of philosophy of science. Then again, maybe it's just an incentives problem. E.g. maybe the scientist's field basically requires you to publish with p-values, even if the scientists themselves are secretly Bayesians.
I'm willing to bet most scientists aren't taught these things formally at all. I never was. You pick it up out of the cultural zeitgeist, and you develop a cultural jargon. And then sometimes people who HAVE formally studied philosophy of science try to map that jargon back to formal concepts, and I'm not sure the mapping is that accurate.
I think 'wrong' is too strong here. It's good for some things, bad for others. Look at particle-accelerator experiments: frequentist statistics are the obvious choice because the collider essentially runs the same experiment 600 million times every second, and p-values work well to separate signal from a null hypothesis of 'just background'.
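The signal-versus-background test mentioned above can be sketched in a few lines. This is a toy illustration (not an actual collider analysis, and the function name and numbers are invented): a one-sided p-value for observing some event count when the background follows a Poisson distribution.

```python
import math

def poisson_p_value(observed, background_mean):
    """One-sided p-value: the probability of seeing `observed` or more
    events if only the background (Poisson with mean `background_mean`)
    is present."""
    # P(X >= observed) = 1 - P(X <= observed - 1)
    cdf = sum(
        math.exp(-background_mean) * background_mean ** k / math.factorial(k)
        for k in range(observed)
    )
    return 1.0 - cdf

# 12 events observed over an expected background of 4.0:
p = poisson_p_value(12, 4.0)
print(p)  # a small p-value: hard to explain as background alone
```

With hundreds of millions of effectively identical trials per second, this frequentist framing is natural, which is the commenter's point.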
Not necessarily a great metric; working on the second-most-probable theory can be the best rational decision if the expected value of working on the most probable theory is lower due to greater cost or lower reward.
Hm. A generalized phenomenon of overwhelming physicist underconfidence could account for a reasonable amount of the QM affair.
Great quote.
Unfortunately, we find ourselves in a world where the world's policy-makers don't just profess that AGI safety isn't a pressing issue, they also aren't taking any action on AGI safety. Even generally sharp people like Bryan Caplan give disappointingly lame reasons for not caring. :(
After reading Robin's exposition of Bryan's thesis, I would disagree that his reasons are disappointingly lame.
Peter Shor replying in the comment section of Scott Aaronson's blog post Firewalls.
"To know thoroughly what has caused a man to say something is to understand the significance of what he has said in its very deepest sense." -Willard F. Day
On the other hand, one should consider not only what was said, but also what should have been said.
Oglaf (Original comic NSFW)
How have I been reading Oglaf for so long without knowing about the epilogues?
...oh crap, I'm going to have to reread the whole thing, aren't I.
Nah, the wiki makes it much easier.
bahahahaha
For anyone unaware, SMBC has an additional joke panel when you mouse over the red button at the bottom
AAAARGH!!! Why do they keep it secret?
That's almost as annoying as having to know the name of Zach's wife to create an account and comment, when for a long time her name wasn't findable on the website or via Google.
(I don't remember her name.)
Thank you very much.
It's usually the funnest panel, too.
Kelly Weinersmith.
I ... was not aware it was even possible to comment on SMBC.
It's an example of failing to update traditions after their original purpose has eroded, for the record. It was originally a reward for voting, which is why SMBC fans still refer to it as a "votey". The voting atrophied, while creating the reward became part of his routine.
Actually, you have to click it now. Just a heads up to anyone reading this and trying to find them.
Jeremy Silman
A. P. Herbert, Uncommon Law.
A. P. Herbert, Uncommon Law.
Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."
For most people, is it necessarily wrong to lose all respect for someone in response to a true statement? Most people's respect tracks things other than truth, so the point "anyone respectable would have known not to say that" can remain perfectly valid.
I agree. It strengthens your point to note that, although the quote is normally used seriously, the author intended it mischievously. In context, the "thirteenth stroke" is a defendant, who has successfully rebutted all the charges against him, making the additional claim that "this [is] a free country and a man can do what he likes if he does nobody any harm."
This "crazy" claim convinces the judge to convict him anyway.
Ted Chiang, Tower of Babylon
Scott Adams
John le Carré, explaining that he had no insider information about the intelligence community (and that if he had, he would not have been allowed to publish The Spy Who Came in from the Cold), but that a great many people who thought James Bond too implausible wanted to believe that le Carré's book was the real deal.
Another good one from the same source:
Eugene McCarthy, Human Origins: Are We Hybrids?
Anonymous, found written in the Temple at 2013 Burning Man
Part of that seems to be from HPMOR. I'm not sure where the rest comes from.
Yeah, almost certainly HPMOR inspired. Eliezer's work has spread far.
-- Gordon R. Dickson, "The Tactics of Mistake".
Friedrich Nietzsche
— Jack Vance, The Languages of Pao
Q: Why are Unitarians lousy singers? A: They keep reading ahead in the hymnal to see if they agree with it.
But it's not who you are underneath, it's what you do that defines you.
-Rachel Dawes, Batman Begins
-- Norman Page, Auden and Isherwood: The Berlin Years
Not quite seeing this as a rationality quote. What's your reasoning?
"The Great Phrase-book Fallacy" is both amusing and instructive. I laughed when I read it because I remembered I'd been a victim of it too once, in less seedier circumstances.
Roger Ebert
Would be nice if this were true.
It's probably true for academic film theory. I mean how hard could it really be?
Slightly edited from Scott Adams' blog.
And a similar sentiment from SMBC comics.
I personally can't see how a monkey turns into a human. But that's irrelevant because that is not the claim of natural selection. This makes a strawman of most positions that endorse something approximately like free will. Also:
Just the legal system? Gah. Everybody on earth does this about 200 times a day.
From Obvious Adam, a business book published in 1916.
Nick Szabo
Is this a similar message to Penn Jillette saying:
"If you don’t pay your taxes and you don’t answer the warrant and you don’t go to court, eventually someone will pull a gun. Eventually someone with a gun will show up. "
or did I miss the boat?
Well, it's similar, but for two differences:
1) It uses a different and wider category of examples. Viz. "initiate force [...] to compel them to hand over goods, to let us search their property, or to testify."
2) It makes a consequentialist claim about forcing people to e.g. let us search their property for evidence: "we can't properly respond to a global initiation of force without local initiations of force."
The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences." The first difference is rhetorically important because it is a place where people's gut reaction is more likely to endorse the use of force, and people have been less exposed to memes about forcibly searching peoples' property (compared to the ubiquity of people disliking taxes) that would cause them to automatically respond rather than thinking.
Actually that isn't what Szabo is saying. His point is to contradict the claim of the anarcho-capitalists that "if we never force people to do things, that will lead to good consequences."
Theophanis the Monk, "The Ladder of Divine Grace"
The biggest problem in the world is too many words. We should be able to communicate distribution graphs of past experiences directly from one human brain to another. ~Aang Jie
by Hannes Leitgeb, from his joint teaching course with Stephan Hartmann (author of Bayesian Epistemology) on Coursera entitled 'An Introduction to Mathematical Philosophy'.
The course topics are "Infinity, Truth, Rational Belief, If-Then, Confirmation, Decision, Voting, and Quantum Logic and Probability". In many ways, a very LW-friendly course, with many mentions and discussions of people like Tarski, Gödel etc.
-- Richard James, founding priest of a Toronto-based Wiccan church, quoted in a thegridto article
Black Books, Elephants and Hens. H/t /u/mrjack2 on /r/hpmor.
When you know a thing, to hold that you know it, and when you do not know a thing, to allow that you do not know it. This is knowledge.
Confucius, Analects