Rationality Quotes February 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
- No more than 5 quotes per person per monthly thread, please.
Comments (563)
Syrio Forel, Game of Thrones based on A Song of Ice and Fire by George R R Martin
Scott Adams
Bryan Caplan
This sounds almost horrifically dystopian, in a sort of Friendship is Optimal way.
I wouldn't be surprised if this has come up before:
―Kurt Vonnegut (attributed to Kilgore Trout), in Breakfast of Champions
Yep.
-- David Brin
-- Doc Scratch, Homestuck
I'm not certain what lesson on rationality I'm expected to glean from this, unless it's "model your opponents as agents, not as executors of cached scripts" -- and that seems both strongly dependent on the opponents you're facing and a little on the trivial side.
(Sorry, I couldn't resist.)
Studies show that people who try to run behind a car frequently fail to keep up, while nobody who runs in front of a car fails more than once.
-- Time Braid
Eckhart Tolle, as quoted by Owen Cook in The Blueprint Decoded
--Lawrence Watt-Evans, The Spriggan Mirror
-- Seng-Ts'an
Does this mean something different than "Truth doesn't have a moral valence"?
Cause it seems like it is trying harder to sound deep than to sound insightful. Sigh - maybe I'm just jaded by various other trying-to-sound-deep-for-its-own-sake sayings. Aka seem deep vs. is deep issues.
My primary interpretation was "attaching yourself to arguments obstructs your ability to seek the truth." If you are interested in the truth, it does not matter if you or your interlocutor is wrong or right; it matters what the truth is.
Another interpretation is "is-thinking leads to accuracy, should-thinking leads to delusion."
A third interpretation is "moralistic thinking degrades morals." I don't consider that interpretation interesting enough to agree or disagree with it.
It doesn't seem to be clear whether Seng-Ts'an is talking about moral right and wrong, or the kind of "wrong" that is involved in "proving your opponent wrong" in debates. The first interpretation is just silly according to any philosophy that cares about ethics, but the second one does make a lot of sense.
I'm going to reply to the quote as if it means "Truth doesn't have a moral valence" and rebut that truth should be held more sacred than morals, rather than simply outside of them. For example, if there are two cases, and case 1 leads to a morally "better" (in quotes because the word "better" is really a black box) outcome than case 2, but case 1 leads to hiding the truth (including hiding it from yourself), then I would have to think very specifically about it. In short, I abide by the rule "That which can be destroyed by the truth should be", but am wary that this breaks down practically in many situations. So when presented with a scenario where I would be tempted to break this principle for the "greater good" or the "morally better case", I would think long and hard about whether it is a rationalization, or whether I simply did not expend the mental effort to come up with a better third alternative.
Satoshi Kanazawa
Is Newton's theory of gravity true or false? It's neither. For some problems the theory provides a good model that allows us to make good predictions about the world around us. For other problems it produces bad predictions.
The same is true for nearly every scientific model. There are problems where it's useful to use the model. There are problems where it isn't.
There are also factual statements in science. Claiming that true and false are the only possible adjectives to describe them is also highly problematic. Instead of true and false, likely and unlikely are much better words. In hard science most scientific conclusions come with p values. The author doesn't try to declare them true or false but declares them to be very likely.
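To make the "likely vs. unlikely" framing concrete, here is a toy permutation test in Python (my own sketch with invented data; none of the numbers come from the thread). It reports a p-value, a degree of unlikelihood, rather than a verdict of true or false:

```python
import random

def permutation_test(group_a, group_b, trials=10_000, seed=0):
    """Estimate a two-sided p-value for a difference in means by shuffling labels."""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= abs(observed):
            extreme += 1
    return extreme / trials  # fraction of label shuffles at least as extreme

# Invented data: the conclusion is "a real difference is likely", not "true".
p_value = permutation_test([5.1, 5.3, 5.0, 5.4], [4.2, 4.4, 4.1, 4.3])
```

A small p-value here says "this difference would be surprising under pure chance", which is a statement of likelihood, not a declaration of truth.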
It's also interesting that the person who made this claim isn't working in the hard sciences. He seems to be an evolutionary psychologist based at the London School of Economics. In the Wikipedia article that describes him, he's quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That's a non-scientific racist position. He also published some material in Psychology Today that's widely considered racist. I don't see why "racist" is not a valid word to describe his conclusions.
Huh, what definition of "racist" are you using here? Would you describe von Neumann's proposal for a pre-emptive nuclear strike on the USSR as "racist"?
I'm not sure what you mean by "racist", however is your claim supposed to be that this somehow implies that the conclusion is false/less likely? You may want to practice repeating the Litany of Tarski.
It's basically about putting a low value on the lives of non-white civilians. In addition, "I would do to foreigners what Ann Coulter would do to them" is also a pretty straightforward way to signal racism.
I haven't argued that fact. I'm advocating for having a broad number of words with multidimensional meanings.
I see no reason to treat someone who makes wrong claims about race, and whose personal beliefs cluster with racist beliefs in his nonscientific statements, the same way as someone who just makes wrong statements about the boiling point of some new synthetic chemical.
So would you call the bombings of civilians during WWII "racist"?
So you would agree that there are some statements that are both "racist" and true.
What do you mean by "wrong"? If you mean "wrong" in the sense of "false", you've yet to present any evidence that any of Satoshi Kanazawa's claims are wrong.
Rather than using the ambiguous word "racist", one could say specifically that Kanazawa is an advocate of genocide.
As I said above, did the bombings of civilians during WWII constitute "genocide"?
A scientist can have an inclination towards--for example--racist ideas. You can't just call this a kind of being wrong, because depending on the truth of what they're studying, this can make them right more often or less often.
So racist scientists are possible, and racist scientific practice is possible. I think 'racist' is an appropriate label for the conclusions drawn with that practice, correct or incorrect.
Though, I think being racist is a property of a whole group of conclusions drawn by scientists with a particular bias. It's not an inherent property of any of the conclusions; another researcher with completely different biases wouldn't be racist for independently rediscovering one of them.
It's a useful descriptor because a body of conclusions drawn by racist scientists, right or wrong, is going to be different in important ways from one drawn by non-racist scientists. It doesn't reduce to "larger fraction correct" or "larger fraction incorrect" because it depends on if they're working on a problem where racists are more or less likely to be correct.
I think it's pretty clear that scientific conclusions can be dangerous in the sense that telling everybody about them is dangerous. For example, the possibility of nuclear weapons. On the other hand, there should probably be an ethical injunction against deciding what kind of science other people get to do. (But in return maybe scientists themselves should think more carefully about whether what they're doing is going to kill the human race or not.)
That's the thing: the science wasn't good or bad; it was the decision to give the results to certain people that held that quality of good/bad. And it was very, very bad. But the process of looking at the world, wondering how it works, then figuring out how it works, and then making it work the way you desire, that process carries with it no intrinsic moral qualities.
I don't know what you mean by "intrinsic" moral qualities (is this to be contrasted with "extrinsic" moral qualities, and should I care less about the latter or what?). What I'm saying is just that the decision to pursue some scientific research has bad consequences (whether or not you intend to publicize it: doing it increases the probability that it will get publicized one way or another).
The majority of scientific discoveries (I'm tempted to say all, but I'm 90% certain that there exists at least one counterexample) have very good consequences as well as bad. I think the good and bad actually usually go hand in hand.
To take the obvious example, nuclear research led to the creation of nuclear weapons but also the creation of nuclear energy.
At what point could you label research into a scientific field as having too many negative consequences to pursue?
I agree that this is a hard question.
General complaint: sometimes when I say that people should be doing a certain thing, someone responds that doing that thing requires answering hard questions. I don't know what bringing this point up is supposed to accomplish. Yes, many things worth doing require answering hard questions. That is not a compelling reason not to do them.
This seems to imply that science is somehow free from motivated cognition — people looking for evidence to support their biases. Since other fields of human reason are not, it would be astonishing if science were.
(Bear in mind, I use "science" mostly as the name of a social institution — the scientific community, replete with journals, grants and funding sources, tenure, and all — and not as a name for an idealized form of pure knowledge-seeking.)
I'd take issue with "undesirable", the way I understand it. For example, the conclusion that traveling FTL is impossible without major scientific breakthroughs was quite undesirable to those who want to reach for the stars. Similarly with "dangerous": the discovery of nuclear energy was quite dangerous.
If travelling faster than light is possible,
I desire to believe that travelling faster than light is possible;
If travelling faster than light is impossible,
I desire to believe that travelling faster than light is impossible;
Let me not become attached to beliefs I may not want.
Something not (currently) possible can still be desirable.
Sun Tzu on establishing a causal chain from reality to your beliefs.
Dupe.
Klingon proverb.
So it's true what they say! The opposite of a Klingon proverb is also a Klingon proverb...
— Herbert Butterfield, The Whig Interpretation of History
Karl Popper
There's a failure mode associated with this attitude worth watching out for, which is assuming that people who disagree with you are being irrational and so not bothering to check whether you have arguments against what they say.
Joke: a tourist was driving around lost in the Irish countryside, among the one-lane roads and hill farms divided by ancient stone fences, and he asked a sheep farmer how to get to Dublin, to which the farmer replied:
"Well ... if I was going to Dublin, I wouldn't start from here."
Moral, as I see it anyway: While the heuristic "to get to Y, start from X instead of where you are" has some value (often cutting a hard problem into two simpler ones), ultimately we all must start from where we are.
-- From the final screen of Call of Cthulhu: The Wasted Land
...Hooray for the phygists?
Well, there are lots of cultists running around trying to summon an Elder God. This will almost certainly end in disaster. The options we have to fight this are: a) We can try to stop all Elder-God-summoning related program activities or b) We can try to get there first and summon a Friendly Elder God.
Both a) and b) are almost impossibly difficult and I find it hard to decide which is less impossible.
I think you have the lesson entirely backward.
How so? A person convinced that any nuclear power plant risks a multi-megaton explosion would have some very weird ideas of how nuclear power plants should be built; they would deem moderated reactors impractical, a negative thermal coefficient of reactivity infeasible, etc. (or simply be unaware of the mechanisms that allow stability to be achieved), and would build some fast-neutron reactor that relies on very rapid control rod movement for its stability. Meanwhile, normal engineering produced nuclear power plants that, imperfect as they might be, do not make a crater when they blow up.
To the extent that you already know that nuclear power plants are basically safe they clearly do not apply as an analogy here. Reasoning from them like this is an error.
Yes, but you can say that because you have the independent evidence that nuclear power plants are workable, beyond the mere say-so of a couple of scientists. You don't have that kind of evidence for AI safety.
Also, this:
... is not a given. What makes you think that the worst it would do is kill you, when killing is not the worst thing humans do to each other?
-- Martin Fowler
Introduction to Learn Python The Hard Way, by Zed A. Shaw
I'm not sure what this has to do with rationality quotes, but the extract basically convinces me to avoid the guy like the plague. The underlying premises seem to be something like:
- The remaining choice, when someone knows enough to feel a book is too simple for them, is that they know everything.
- They should discard all that they know - empty before you fill - so they can learn from someone with more knowledge than them.
Go learn lisp... -shrug-
It seems like incredibly bad advice, when someone thinks a lot of what's in a book is too simple for them, to essentially yell at them to shut up and knuckle down. As compared to, say, pointing them to a few things that are generally not covered that well in self-learning and directing them to a more advanced book.
To me, it seems like a horribly hostile approach to teaching people, which comes across as saying, "In order to learn anything from me, you must abase yourself before me." Which is to say, "I am incapable of conveying useful information to anyone who does not present abject submission to me."
But then, it's possible that I'm just hearing Severus Snape (or the class of lousy teachers he is an imitation of) in the "so you think you know everything?" bullshit.
I think the quote's main function is to warn those who don't know anything about programming of a kind of person they're likely to encounter on their journey (people who know everything and think their preferences are very right), and to give them some confidence to resist these people. It also drives home the point that people who know how to program already won't get much out of the book. I quoted it because it addresses a common failure mode of very intelligent and skilled people.
Agreed. I'm actually not sure if what I should take away from that introduction is "This material seems easy but isn't, so go through everything carefully even if you think you understand it" or the opposite: "If this book seems easy, it's not advanced enough for you and you already know everything; so read something else instead."
I took it as meaning the second. There's even a recommendation as to what else to read; a book on Lisp.
I strongly suspect that's just him being an ass. If you're finding the concepts in his book too simple, there are plenty of other concepts you could be learning about in computer science that would expand your ability as a programmer more quickly than just picking up another language.
If you want to become a better programmer after learning the basics of a language, I recommend you go and pick up some books on the puzzles and problems in computer science and look at how to solve them using a computer. Go and read up on different search functions and path-finding routines; go and read up on relational databases, and on types as an expressive system rather than just something that someone's making you do; go and read up on using a computer to solve tic-tac-toe... Things like that. You'll get better a lot faster and become a much better programmer than you will just from picking up another language, which, let's face it, you're still not going to have a deep understanding of how to use.
Which isn't to say that there's no learning in picking up another language. There is; I don't know any good programmers who only know one language. But it's not the fastest way to get the most gain in the beginning.
Once you have that extra knowledge about how to actually use the language you just learned, then by all means go and learn another language.
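As an illustration of the kind of exercise meant above, a breadth-first search, the simplest path-finding routine, fits in a few lines of Python (my own sketch; the maze data is invented):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Return a shortest path from start to goal in an unweighted graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # goal unreachable from start

# A small invented maze as an adjacency list.
maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
path = bfs_path(maze, "A", "E")  # -> ["A", "B", "D", "E"]
```

Exercises like this teach you what the computer is doing for you, which transfers to every language you pick up later.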
If you just know Python, then you know what we'd call a high-level imperative language. Imperative just means you're giving the computer a list of commands; high-level means that you're not really telling the computer how to execute them (i.e. the further away you get from telling it to do things with specific memory locations and which instructions to use from the processor's command set, the higher-level the language is).
C will give you the rest of the procedural/imperative side of things that you didn't really get in Python; you'll learn about memory allocation and talking to the operating system. It's a lower-level language but still works in more or less the same style of programming. Haskell and Lisp are both fairly high-level languages, like Python, but will give you functional abstraction, which is a different way of looking at problems than procedural programming.
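To make the imperative/functional contrast concrete without leaving Python, here is the same computation written both ways (a toy example of mine, not from the thread):

```python
# Imperative style: a sequence of commands mutating state step by step.
def squares_imperative(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

# Functional style: describe the result as a transformation, no mutation.
def squares_functional(numbers):
    return list(map(lambda n: n * n, numbers))

assert squares_imperative([1, 2, 3]) == squares_functional([1, 2, 3]) == [1, 4, 9]
```

Functional languages like Haskell or Lisp push the second style much further, but the basic shift, describing a result instead of issuing commands, already shows up here.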
But... even if you were going to recommend a language to learn after Python, and you knew the person already knew about stuff like relational databases and search functions and could use their skill to solve problems so that you weren't just playing a cruel joke on them, and even if you were going to recommend a functional language: deep breath ... it wouldn't be Lisp, I think.
Lisp has a horrible written style for a beginner. It does functional abstraction, it's true enough - and that is a different way of thinking about problems than the procedural programming that's used in Python - but so does Haskell, and Haskell programs don't look like someone threw up a load of brackets all over the screen; they're actually readable (which may explain why Haskell actually gets used in real life whereas I've never seen Lisp used for much outside of a university.) Haskell also has the awesomeness of monadic parser combinators which are really nice and don't show up in Lisp.
Lisp's big thing is its macros. I can't think of much other reason to learn the language and frankly I try to use them as little as possible anyway because it's so much easier to misuse them than it is with functions.
So, yeah. I can see where you're coming from but I don't think he's really on the level there.
Would you care to share your reason for the downvote? I promise not to dispute criticism so you don't have to worry about it escalating into a time-sink.
Of course, if your goal is to learn Python but you find Zed's book too easy, "Read a book on Lisp" is probably not suitable advice.
If anyone feels even remotely inspired to click through and actually learn Python, do it. It's been the most productive thing I've done on the internet.
-Luc de Clapiers
@slicknet
Or better yet, assume nothing, and reserve judgement until you have more information.
You always assume things, whether you are aware of it or not. At least by making your assumptions explicit and conscious, you have a better chance of noticing when they are wrong. And assuming "that people are dumb, ignorant, and making mistakes" is a common default subconscious failure mode.
In most situations there are multiple people other than yourself who each think the others are dumb, ignorant and making mistakes. Don't assume that the one you happen to be interacting with at the moment is right by default.
Also, consider the possibility that it is you who is dumb, ignorant, and making mistakes.
I don't consider it, I assume it.
But "dumb" and "ignorant" are not points on a line, they are relative positions.
To quote this bloke at a climbing gym I used to frequent "We all suck at our own level".
You may or may not have noticed, but most people are biased. Whether bias counts as "dumb", "ignorant" or "making mistakes" is left as an exercise for the reader.
If we are in the business of making assumptions, there is no dichotomy, you can as well consider both hypotheticals. (Actually believing that either of these holds in general, or in any given case where you don't have sufficient information, would probably be dumb, ignorant, a mistake.)
This misses the point a bit due to an equivocation on "assume". In ordinary discourse, it usually means "assume for the purpose of action until you encounter contrary evidence". That's very different from the scientist's hypothetical assumptions that are made in order to figure out what follows from a hypothesis.
It's epistemically incorrect to adopt a belief "for the purpose of action", and permitting "contrary evidence" to correct the error doesn't make it a non-error.
I think what Creutzer is trying to say is that in ordinary discourse, meaning everyday problems where you aren't always able to give a thought the time it deserves, when you don't even have five minutes by the clock to think about the problem rationally, it is better to rely on the heuristic "assume people are smart and some unknown context is causing problems" than on the heuristic "people who make mistakes are dumb". That said, heuristics are only good most of the time and may still lead you to errors.
In this case it is still technically an error, but you are merely attempting to be "less wrong" in a situation where you don't have time to be correct; assuming the heuristic until you encounter contrary evidence (or until you have the time to think of better answers) follows closely the point of this website.
Exactly, thanks for the clarification.
Using a heuristic doesn't require believing that it's flawless. You are in fact performing some action, but that is also possible in the absence of a careful understanding of its effect. There is no point in doing the additional damage of accepting a belief for reasons other than evidence of its correctness.
I believe that this statement, while correct, misses the point of preemptive debiasing. Yvain said it better.
The original quote draws attention to the mistake of not giving enough attention to the hypothetical where something appears to be wrong/stupid, but upon further investigation turns out to be correct/interesting. However, it confuses the importance of the hypothetical with its probability, and endorses increasing its level of certainty. I pointed out this error in the formulation, but didn't restate the lesson of the quote (i.e. my point didn't include the lesson, only the flaw in its presentation, so naturally it "misses" the point of the lesson by not containing it).
With apologies for double-commenting: "Don't assume others are ignorant" is likely to be read by a lot of people (including myself at first) as "Aim high and don't be easily convinced of an inferential gap". Posts on underconfidence may also be relevant.
I would somewhat agree with this if the phrase "making mistakes" was removed. People generally have poor reasoning skills and make non-optimal choices >99% of the time. (Yes, I am including myself and you, the reader, in this generalization.)
-- Randall Munroe
The Last Psychiatrist (http://thelastpsychiatrist.com/2009/06/delaying_gratification.html)
ShittingtonUK
— Gaston Leroux
Only with very low probability.
and the human mind loves to find patterns even when the probability that the pattern reflects a rule is low. Coincidences are correlation.
[Footnote to: "This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy". The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]
-David Griffiths, Introduction to Elementary Particles, 2008, p. 24
From this recent talk
I'm confused. I thought that deathpigeon's quote was downvoted because it was anti-deathism and not rationality, but this quote is similar in that way and it has lots of upvotes. Was deathpigeon's quote actually downvoted because it incorrectly attributed a line to ASoIaF instead of Game of Thrones? Seriously?
/clicks link, watches
... I can barely understand a single word this guy is saying. Is it just me or is the audio in that video really bad? I don't suppose it was transcribed anywhere?
(Joseph Heath, The Efficient Society)
Heath is an excellent writer on economics/philosophy.
-- Magnificent Sasquatch
-- Scott Sumner (talking about Italian politicians when the EU controls their monetary policy, but it generalizes)
This just prompted me to (hypothetically, for the sake of amusement) reinterpret many of Eliezer's actions as a psychological experiment wherein he has contrived exaggerated scenarios in order to test this empirically.
Francis Spufford, Red Plenty
-- Screwtape, The Screwtape Letters by C.S. Lewis
I kind of wish people did use the future more, sometimes. For example, in Australia at the moment, neither major political party supports gay marriage. And beyond all the direct arguments for/against the concept, I can't help but wonder if they really expect, in 50 years' time, that we will live in a world of strictly heterosexual marriages. What are they possibly hoping to achieve? Maybe that reasoning isn't the best way to decide to actively do a thing, but it surely counts towards the cessation of resistance to a thing.
FWIW, 20 years ago (when my now-husband and I first got together) I expected that I would live in a world of strictly heterosexual marriages all my life.
That didn't incline me to cease my opposition to that world.
So I can empathize with someone who expects to live in a world of increasing marriage equality but doesn't allow that expectation to alter their opposition to that world.
Being elected at some point in the next 3 years. They aren't trying to achieve anything related to homosexual marriages. They don't care.
Um, I know this is classic Hansonian "X is not about X" cynicism, but I doubt it's actually true of most politicians. Sure, the need to get elected skews their priorities, but they do have policy preferences, which they are willing to pursue at cost if necessary.
Here are a few things that have at one time or another been considered "obviously inevitable":
The spread of enlightened dictatorship on the Prussian model.
The spread of eugenics.
The control of the world economy by "rational" central planners.
My point is that you appear to be overestimating how well you can predict the future.
I don't think you really believe this argument. In particular if the success of something you opposed seemed inevitable, you'd still oppose it.
What I think is happening is that you support the "inevitable" outcome but are getting frustrated that the opposition just won't go away like they're "supposed" to.
-- Chad Fowler (from The 4-Hour Body)
"We're even wrong about which mistakes we're making."
-Carl Winfeld
W. H. Auden, "The More Loving One"
The only interpretation I've been able to read into this is that the speaker wants to become more emotionally accepting of death. Am I missing something?
That interpretation didn't even occur to me, possibly because I read the whole poem instead of the bit I quoted (and maybe I quoted the wrong bit). Here is the whole thing (it's short). I always feel a bit awkward arguing about how I interpreted a poem, so maybe this will resolve the issue?
(Incidentally, am I the only one mildly annoyed by how people seem to think of "rationality quotes" as "anti-deathism quotes"? The position may be rational, but it is not remotely related to rationality.)
I had a thought recently, what if the existence of a benevolent, omnipotent creator was proven? and my first thought was that I would learn to love the world as the creation of a higher power. And that disturbed me. It's too new a thought for me to have plumbed it properly. But this reminded me. In the absence of the stars, what becomes of their beauty?
When the world is bereft of tigers, glaciers, the Amazon, will we feel it to be sublime? Imma go read the poem now.
-Yevgeny Yevtushenko
I wonder if we'll ever learn to reconstruct people-shadows from other people's memories of them. Also, whether this is a worthwhile thing to be doing.
It's a little creepy the way Facebook keeps dead people's accounts around now.
Relevant: Greg Egan, "Steve Fever".
(Joseph Heath & Andrew Potter, The Rebel Sell)
--Sam Harris
Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they'll figure it out by themselves. No one really believes God will protect them from harm...
If you threaten someone's survival, they are likely to get emotional. That's not the best mental state in which to apply logic.
Suicide bombers don't suddenly start believing in reason just before they are sent out to kill themselves.
Soldiers in trenches who fear for their lives on the other hand do often start to pray. Maybe there are a few atheists in foxholes, but that state seems to promote religiousness.
Does it promote religiousness or attract the religious?
I think it just promotes grasping at straws.
Sadly, that only works on a natural-selection basis, so the ethics boards forbid us from doing this. If they never see anyone actually failing to survive, they won't change their behavior.
Can't make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.
Take all their stuff. Tell them that they have no evidence that it's theirs and no logical arguments that they should be allowed to keep it.
They beat you up. People who haven't specialized in logic and evidence have not therefore been idle.
Shoot them?
If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.
You put them into a social environment where the high-status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing the amount that they value logic and evidence.
--Tom Chivers
I agree, subject to the specification that each such observation must look substantially more like the absence of a duck than like a duck. There are many things we see which are not ducks in particular locations. My shoe doesn't look like a duck in my closet, but it also doesn't look like the absence of a duck in my closet. Or to put it another way, my sock looks exactly like it should look if there's no duck in my closet, but it also looks exactly like it should look if there is a duck in my closet.
If your sock does not have feathers or duck-shit on it, then it is somewhat more likely that it has not been sat on by a duck.
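The sock example is really a point about likelihood ratios: evidence that is equally probable whether or not there is a duck has a ratio of 1 and leaves your belief unchanged. A minimal Bayes update sketch in Python (the prior and likelihood numbers are made up for illustration):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from prior P(H) and the two likelihoods of E."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

prior_duck = 0.01  # made-up prior probability of a duck in the closet

# The sock looks the same with or without a duck: likelihood ratio 1.
same_either_way = update(prior_duck, 0.9, 0.9)   # posterior stays at 0.01

# No feathers or droppings: more probable if there is no duck.
no_feathers = update(prior_duck, 0.3, 0.9)       # posterior drops below 0.01
```

Only observations whose probability differs between the two hypotheses, like the feathers, move the posterior; the sock that looks identical either way is not evidence of absence.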
--Gabe Newell during a talk. The whole talk is worthwhile if you're interested in institutional design or Valve.
Ozy Frantz - Brain Chemicals are not Fucking Magic
Klingon proverb.
—Yagyū Munenori, The Life-Giving Sword
thefolksong
Because you're a human, not a butterfly. It seems like an animal that used a cognitive filter that defaulted to the latter case would take a pretty severe fitness hit.
-- Noah Brand
I'd prefer if this quote ended with " ... and then I got done weeping and started working on my shoe budget," but oh wells.
This. If only people realized that unpleasant facts do not cancel each other out, and pointing out one unpleasant fact in addition to another should never ever make us feel better, because it only leaves us in a worse world than we started out in. Compute the actual utilities. It's such a common and avoidable error.
But if you look at it the other way, then pointing out unpleasant facts about other people's condition (that don't apply to us) is equivalent to pointing out good facts about our condition, which should make us feel better, as it leaves us in a better world than we started out in.
That's exactly the kind of thinking the world needs less of, and the kind that I was trying to warn readers against in the parent comment. Why? Just why would a worse world for someone else make for a better world for you, if that someone is not your mortal enemy? It just makes for a worse world, period.
The point isn't that you're taking pleasure in their misfortune, it's that you're taking pleasure in your own fortune. "I'm so lucky for having X." If you don't do that, then any improvements in your standard of living or situation in general will end up having no impact on your happiness, since you just get used to them and take them for granted and don't even realize that you would have a million reasons to be happy. And then (in the most extreme case) you'll end up feeling miserable because of something completely trivial, because you're ignoring everything that could make you happy and the only things that can have any impact on your state of mind are negative ones.
Someone commented above about the instrumental value of crying and feeling bad, and you're actually pointing out the case where crying and feeling bad fail at being instrumental. Basically, I'm for whatever attitude that gets you to stop crying and start fixing some problem, and if resetting your baseline helps, it's fair game! It definitely works for me in some cases.
I think this quote is trying to argue against the attitude that problems that are minor compared to other problems don't deserve any attention at all. That everyone without shoes should just wrench themselves into happiness and go around being grateful, rather than acknowledging that they keep stepping on snails and pointy things, which sucks, and making productive steps toward acquiring shoes.
I remember reading something about plastic surgeons getting kind of looked down upon because they're not proper heroic doctors that handle real medical problems.
... I think I see where you're coming from -- by realizing we're not at the far end of the unhappiness scale (since we have a counterexample to that), we should calibrate our feelings about our situation accordingly, yes?
It's still not the way I view things; I'd like to say I prefer judging these things according to an absolute standard, but it's likely that that would be less true for me than I want it to be. To the extent that it doesn't hold true for me, I think it's better to take into consideration better states as well as worse ones. Saying, "at least I don't have it as bad as X" just doesn't feel enough; everybody who doesn't have it as bad as X could say it, and people in this category can vary widely in their levels of satisfaction, the more so the worse X has it. It's more complete to say "Yes, but I don't have it as good as Y either" or, better yet, "I have it better/worse than my need requires".
Yes, pretty much.
"...And then I remembered status is positional, felt superior to the footless man, and stopped weeping."
S. T. Rev
S. T. Rev
-- Geoff Anders (paraphrased)
-- John C Wright
-Alex Tabarrok
One amusing aspect is that assuming the person is justified in their belief that their church/country is ethical, the above is a valid inference.
Not necessarily. You don't punish people based on their likelihood of being guilty but based on severity of their crime.
If torture is used as a tool to gain information instead of being used to punish, it's even more questionable whether the likelihood of being guilty correlates with the severity of the torture. The fact that someone decides to torture to get more information suggests that they have an insufficient amount of information.
If there's a 50% chance that a person has information that can prevent a nuclear explosion, you can argue that it's ethical to torture to get that information.
After the bomb has exploded and you know for certain who did the crime, there's not much need to torture anyone.
An interrogator who tortures is more likely to get false confessions that implicate innocents. If he then goes and tortures those innocents, you see that people who torture are more likely to punish innocents than people who don't.
Been making a game of looking for rationality quotes in the Super Bowl
"It's only weird if it doesn't work" --Bud Light Commercial
Only a rationality quote out of context, though, since the ad is about superstitious rituals among sports fans. My automatic mental reply is "well that doesn't work"
Randall Munroe, on updating on other people's beliefs.
The "every single person I know, many of them levelheaded and afraid of heights, abruptly went crazy at exactly the same time" scenario should be given some credence in human society; there is such a thing as puberty. The definition of puberty being "every single person I know abruptly went crazy at exactly the same time, including me".
Let me just put the text string ‘xkcd’ in here, because I was going to add this if nobody else had, and it's lucky that I found it first.
Oh, and there's more text in the comic than what's quoted, and it's good too, so read the comic everybody!
Dilbert dunnit first!
(Seeing that strip again reminds me of an explanation for why teenagers in the US tend to take more risks than adults. It's not because the teenagers irrationally underestimate risks but because they see bigger benefits to taking risks.)
Devine and Cohen, Absolute Zero Gravity, p. 96.
So, uh, what's the explanation?
Perhaps because pressure is (approximately) constant, for every molecule going into the car, one must leave it (on average)?
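The constant-pressure guess above can be sketched with the ideal gas law; this is a back-of-the-envelope check, assuming the air inside the car behaves as an ideal gas at roughly constant volume and temperature:

```latex
% Ideal gas law for the air inside the car:
%   P V = N k_B T
% If pressure P, volume V, and temperature T are all (approximately)
% constant, then the number of molecules is pinned at N = PV / (k_B T),
% so, on average, one molecule must leave for every molecule that enters.
P V = N k_B T
\quad\Longrightarrow\quad
N = \frac{P V}{k_B T} = \text{const.}
```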
-- Milton Friedman
Couldn't I also set up the system to try to exclude the wrong people from ever getting power?
It seems to me that computers are getting better at detecting liars, we now have an ease of fact-checking we never used to have, conflicts of interest are generally relatively easy to see, and we've got all this research about how influence functions... In short, we've made a lot more progress on the judging-people front than on designing procedures and regulations that suit us and also serve as one-way functions.
No. No-one can set up the system. The most that anyone can do is introduce a new piece into the game, pieces like Google, or Wikipedia, or Wikileaks.
Not if having power over others turns the right people into the wrong people.
-- Bertolt Brecht
(I'm always amused when people of opposite political views express similar thoughts on society.)
Also:
Insultingly Stupid Movie Physics' review of The Core
32 people in the same ten block radius simultaneously dying of malfunctioning pacemakers seems so tremendously unlikely, I can't imagine how one could even locate that as an explanation in a matter of seconds.
If I recall correctly, he also pointed out that the fact they had invited two experts on magnetic fields was also a strong clue.
From a participant at the January CFAR workshop. I don't remember who. This struck me as an excellent description of what rationalists seek.
Faramir, from The Lord of the Rings, on lost purposes and the thing that he protects
another great quote for 2013
Except that a non-overwhelming love of a useful art may help you become better in the art, even though you would switch to another if it helped you optimize more.
-Joel Spolsky
I would have quoted more, because on reading that out of context I was like “YOU DON'T SAY?”
If your service is down, it has no features.
And no bugs.
-- Steve Jobs
(The Organization Formerly Known as SIAI had this problem until relatively recently. Eliezer worked, but he never published anything.)
And they ship the characters the fans want.
Men in Black on guessing the teacher's password:
Zed: You're all here because you are the best of the best. Marines, air force, navy SEALs, army rangers, NYPD. And we're looking for one of you. Just one.
[...]
Edwards: Maybe you already answered this, but, why exactly are we here?
Zed: [noticing a recruit raising his hand] Son?
Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We're here because you are looking for the best of the best of the best, sir! [throws Edwards a contemptible glance]
[Edwards laughs]
Zed: What's so funny, Edwards?
Edwards: Boy, Captain America over here! "The best of the best of the best, sir!" "With honors." Yeah, he's just really excited and he has no clue why we're here. That's just, that's very funny to me.
-- C. S. Lewis, Out of the Silent Planet
William Deresiewicz
The whole speech is worth reading as one giant rationality quote
Not bad, although it seems to equate originality with goodness a little too much.
Linus Pauling
It's necessary, but not sufficient.
The example in the comic is not a good one. Of the choices on the board, E being proportional to mc^2 is the only option where the units match. You only need to have that one idea to save yourself the trouble of having lots of other ideas.
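The units point can be spelled out with a quick dimensional check (a sketch in SI units, not taken from the comic itself):

```latex
% Units of m c^2: mass times velocity squared.
[\,m c^2\,] = \mathrm{kg}\cdot\left(\frac{\mathrm{m}}{\mathrm{s}}\right)^{2}
            = \frac{\mathrm{kg}\cdot\mathrm{m}^{2}}{\mathrm{s}^{2}}
            = \mathrm{J} = [\,E\,]
% An energy, as required; none of the other options on the board
% have units of energy.
```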
It's a joke, which I assume is intended for a mostly non-physicist audience.
We demand complete rigour from all forms of levity! The unexamined joke is not worth joking!
Mickey Mouse is dead
Got kicked in the head
Cause people got too serious
They planned out what they said
They couldn't take the fantasy
They tried to accept reality
Analyzed the laughs
Cause pleasure comes in halves
The purity of comedy
They had to take it seriously
Changed the words around
Tried to make it look profound
...
--Sub Hum Ans, "Mickey Mouse is Dead"
To prevent lines from being merged together, add two spaces at the end of each one.
That's so...typewriter.
Thanks.
Yes, but also being able to tell which of those ideas are good is even better.
From the alt-text in the above-linked comic:
--Jovah's Angel by Sharon Shinn
Maybe it's just my most recent physical chemistry lecture talking, but my instant response to that was 'truth is a state function'. Or perhaps 'perceived truth', and 'should be' (i.e., it shouldn't depend on the history preceding the current perceived truth).