Rationality Quotes September 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
-L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment
Kate Evans https://twitter.com/aristosophy/status/240589485098795008
... which one wish, carefully phrased, could also provide.
you can't wish for more wishes
Er... actually the genie is offering at most two rounds of feedback.
Sorry about the pedantry, it's just that as a professional specialist in genies I have a tendency to notice that sort of thing.
Why only 2 rounds of feedback if you have 3 wishes?
The third one's for keeps: you can't wish the consequences away.
Right, but the consequences still qualify as feedback, no?
I always imagine the genie just goes back into his lamp to sleep or whatever, so in the hypothetical as it exists in my head, no. But I guess there could be a highly ambitious Genie looking for feedback after your last wish, so maybe.
I think in this case, Eliezer is talking about a genie like in Failed Utopia 4-2 who grants his wish, and then keeps working, ignoring feedback, because he just doesn't care, because caring isn't part of the wish.
The genie doesn't care about consequences, he just cares about the wishes. The second wish and third wish are the feedback.
The feedback is for you, not what you happen to say to the genie.
Rather than a technical correction you seem just to be substituting a different meaning of 'feedback'. The author would certainly not agree that "You get 0 feedback from 1 wish".
Mind you, I am wary of the fundamental message of the quote. Feedback? One of the most obviously important purposes of getting feedback is to avoid catastrophic failure. Yet catastrophic failures are exactly the kind of thing that will prevent you from using the next wish. So this is "Just Feedback" that can Kill You Off For Real despite the miraculous intervention you have access to.
I'd say "What the genie is really offering is a wish and two chances to change your mind---assuming you happen to be still alive and capable of constructing corrective wishes".
One well-known folk tale is based on precisely this interpretation. Probably more than one.
Open question: Do you care about what (your current brain predicts) your transhuman self would want?
Yes, I think so. It surely depends on exactly how I extrapolate to my "transhuman self," but I suspect that its goals will be like my own goals, writ larger.
If you don't, you're really going to regret it in a million years.
The chance of human augmentation reaching that level within my lifespan (or even within my someone's-looking-after-my-frozen-brain-span) is, by my estimate, vanishingly low. But if you're so sure, could I borrow money from you and pay you back some ludicrously high amount in a million years' time?
More seriously: Seeing as my current brain finds regret unpleasant, that's something that reduces to my current terminal values anyway. I do consider transhuman-me close enough to current-me that I want it to be happy. But where their terminal values actually differ, I'm not so sure - even if I knew I were going to undergo augmentation.
I'm rather skeptical about that, even conditioning on Ezekiel being around to care. I expect that the difference between him having his current preferences and his current preferences+more caring about future preferences will not result in a significant difference in the outcome the future Ezekiel will experience.
— Michael Kirkbride / Vivec, "The Thirty-Six Lessons of Vivec", Morrowind.
Am I the only one who thinks we should stop using the word "simple" for Occam's Razor / Solomonoff's Whatever? In 99% of use-cases by actual humans, it doesn't mean Solomonoff induction, so it's confusing.
Well, his point only makes any sense when applied to the metaphor, since a better answer to the question is:
"Where would Sisyphus get a robot in the middle of Hades?"
Edit: come to think of it, this also works with the metaphor for human struggle.
Borrowing one of Hephaestus', perhaps?
Now someone just has to write a book entitled "The Rationality of Sisyphus", give it a really pretentious-sounding philosophical blurb, and then fill it with Grand Theft Robot.
Answer: Because the Greek gods are vindictive as fuck, and will fuck you over twice as hard when they find out that you wriggled out of it the first time.
Who was the guy who tried to bargain the gods into giving him immortality, only to get screwed because he hadn't thought to ask for youth and health as well? He ended up being a shriveled, crab-like thing in a jar.
My high school English teacher thought this fable showed that you should be careful what you wish for. I thought it showed that trying to compel those with great power through contract was a great way to get yourself fucked good and hard. Don't think you can fuck with people a lot more powerful than you are and get away with it.
EDIT: The myth was of Tithonus. The goddess Eos was keeping him as a lover, and tried to bargain with Zeus for his immortality, without asking for eternal youth too. Oops.
I'm no expert, but that seems to be the moral of a lot of Greek myths.
They Might Be Giants
-- Linus Pauling
Citation for this was hard; the closest I got was Etzioni's 1962 The Hard Way to Peace, pg 110. There's also a version in the 1998 Linus Pauling on peace: a scientist speaks out on humanism and world survival : writings and talks by Linus Pauling; this version goes
How about doing unto others what maximizes total happiness, regardless of what they'd do unto you?
By acting in a way that discourages them from hurting you, and encouraging them to help you, you are playing your part in maximizing total happiness.
Yeah, but it's not necessarily the ideal way to act. Perhaps you should act generally better than that, or perhaps you should try to amplify it more. Do what you can to find out the optimal way to act. At least pay attention if you find new information. Don't just make a guess and assume you're correct.
You don't think you should discourage others from hurting you? I think that seems sort of obvious. Now, if you could somehow give a person a strong incentive to help you / not hurt you, while simultaneously granting them a shitload of happiness, that seems ideal. This doesn't really exclude that, it's just on the positive side of doing / being done unto.
You should probably discourage others from hurting you. It's just not clear how much.
As much as possible for the least amount of harm possible and the least amount of wasted time and resources, obviously. Which varies on a case by case basis.
I mean if it was practical, you'd give your friends 2 billion units of happiness, and then after turning the cheek to your enemies, grant them 1.9 billion units of happiness, but living on planet earth, giving you 80% of the crap you gave me seems about right.
Not necessarily. If I horribly torture Jim because Jim stepped on my toes, then I am not maximizing total happiness; the unhappiness given to Jim by the torture outweighs the unhappiness in me that is prevented by having no-one step on my toes.
That's a lot of effort and pain to prevent someone stepping on your toes.
Also, I'm not sure that'd be a terribly effective way to prevent harm to yourself. I mean, to the extent possible, once everyone knows you tortured Jim, people will be scared shitless to step on your toes, but Jim and Jim's family are very likely to murder you, or at least sue you for all your money and put you in jail for a long time.
It's a nice sentiment, but the optimization problem you suggest is usually intractable.
It's better to at least attempt it than just find an easier problem and do that. You might have to rely on intuition and such to get any answer, but you're not going to do well if you just find something easier to optimize.
Yes, but there's no way a pithy quote is going to solve the problem for you. It might, however, contain a useful heuristic.
The former is computationally far more feasible.
It has a tendency to go horribly wrong.
It's impossible to find a strategy that produces happiness better than trying to produce happiness, since if you knew of one, you'd try to produce happiness by following that strategy. If this method is what works best, then in doing what works best, you'd follow this method.
Also, linking to TVTropes tends to fall under generalizing from fictional evidence.
Art imitates life. ;)
And it's not hard to think of real life examples of atrocities "justified" on utilitarian grounds that the rest of the world thinks are anything but justifiable. The Reign of Terror during the French Revolution, for example, is generally regarded as having gone too far.
You may do that if you must; I recommend against it.
Why do you recommend against it? Do you have a more complicated utility function?
Did you take "expect" to mean as in prediction, or as in what you would have them do, like the Jesus version?
Imām al-Ḥaddād (trans. Moṣṭafā al-Badawī), "The Sublime Treasures: Answers to Sufi Questions"
Reminds me of Moore's "here is a hand" paradox (or one man's modus tollens is another's modus ponens).
This also made me think of the aphorism "if water sticks in your throat, with what will you wash it down?"
Or "if salt loses its savor", although I wonder if they're really making the same philosophical point about relative weights of evidence on two sides of a contradiction/paradox.
Richard Carrier on solipsism, but not nearly as pithy:
I think that's actually a really terrible bit of arguing.
We can stop right there. If we're all the way back at solipsism, we haven't even gotten to defining concepts like 'random chance' or 'design', which presume an entire raft of external beliefs and assumptions, and we surely cannot immediately say there are only two categories unless, in response to any criticism, we're going to include a hell of a lot under one of those two rubrics. Which probability are we going to use, anyway? There are many more formalized versions than just Kolmogorov's axioms (which brings us to the analytic and synthetic problem).
And much of the rest goes on in a materialist vein which itself requires a lot of further justification (why can't minds be ontologically simple elements? Oh, your experience in the real world with various regularities has persuaded you that is inconsistent with the evidence? I see...) Even if we granted his claims about complexity, why do we care about complexity? And so on.
Yes, if you're going to buy into a (very large) number of materialist non-solipsist claims, then you're going to have trouble making a case in such terms for solipsism. But if you've bought all those materialist or externalist claims, you've already rejected solipsism and there's no tension in the first place. And he doesn't do a good job of explaining that at all.
Good points, but then likewise how do you define and import the designations of 'hand' or 'here' and justify intuitions or an axiomatic system of logic (and I understood Carrier to be referring to epistemic solipsism like Moore -- you seem to be going metaphysical)? (or were you not referring to Moore's argument in the context of skepticism?)
(Simon Blackburn, Truth)
The pithiest definition of Blackburn's minimalism I've read is in his review of Nagel's The Last Word:
It is followed by an even pithier response to how Nagel refutes relativism (pointing out that our first-order conviction that 2+2=4 or that murder is wrong is more certain than any relativist doubts) and thinks that this establishes a quasi-Platonic absolutism as the only alternative:
"What is truth" is a pretty good question, though a better one is "what do we do with truths?"
We do a lot of things with truths; they can serve a lot of different functions. The problem comes when people doing different things with their truths talk to each other.
John Mayer
He's just showing that those people don't give infinite value, not that it's nonsense. It's nonsense because, even if you consider life infinitely more intrinsically valuable than a green piece of paper, you'd still trade a life for green pieces of paper, so long as you could trade them back for more lives.
If life were of infinite value, trading a life for two new lives would be a meaningless operation - infinity times two is equal to infinity. Not unless by "life has infinite value" you actually mean "everything else is worthless".
Not quite so! We could presume that value isn't restricted to the reals + infinity, but say that something's value is a value among the ordinals. Then, you could totally say that life has infinite value, but two lives have twice that value.
But this gives non-commutativity of value. Saving a life and then getting $100 is better than getting $100 and saving a life, which I admit seems really screwy. This also violates the Von Neumann-Morgenstern axioms.
In fact, if we claim that a slice of bread is of finite value, and, say, a human life is of infinite value in any definition, then we violate the continuity axiom... which is probably a stronger counterargument, and tightly related to the point DanielLC makes above.
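To make the arithmetic concrete (my own illustration, assigning a life the ordinal value ω and $100 the value 100):

ω + 100 > ω, but 100 + ω = ω (ordinal addition is not commutative, which is exactly the order-dependence above)

ω + ω = ω·2 > ω (so "two lives are worth twice one" still has content)

As for continuity: with life ≻ bread ≻ nothing, any lottery that yields the life with probability p > 0 already has value beyond every finite amount, so no p can make it indifferent to a slice of bread worth 1 -- that is the violation being pointed at.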
You could use hyperreal numbers. They behave pretty similarly to reals, and have reals as a subset. Also, if you multiply any hyperreal number besides zero by a real number, you get something isomorphic to the reals, so you can multiply by infinity and it still will work the same.
I'm not a big fan of the continuity axiom. Also, if you allow for hyperreal probabilities, you can still get it to work.
At which point why not just re-normalize everything so that you're only dealing with reals?
You could have something have infinite value and something else have finite value. Since this has an infinitesimal chance of actually mattering, it's a silly thing to do. I was just pointing out that you could assign something infinite utility and have it make sense.
True
Only if you have a way to describe infinity in terms of a real number.
You just pick some infinite hyperreal number and multiply all the real numbers by that. What's the problem?
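If anyone wants the "pick an infinite unit and keep a real part alongside it" idea spelled out, here is a toy sketch. The names (TieredUtility, lives, dollars) are invented for illustration, and lexicographically ordered pairs are only a finite stand-in for the hyperreals, but they show how "a life outweighs any finite sum of money, yet two lives still beat one" can be made to behave:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TieredUtility:
    """Toy two-tier utility, compared lexicographically: 'lives' acts like a
    multiple of an infinite unit, 'dollars' like an ordinary real amount.
    (Hypothetical names; a stand-in for the hyperreal idea, not the real thing.)"""
    lives: float
    dollars: float

    def __add__(self, other):
        return TieredUtility(self.lives + other.lives, self.dollars + other.dollars)

    def __lt__(self, other):
        return (self.lives, self.dollars) < (other.lives, other.dollars)


life = TieredUtility(1, 0)
cash = TieredUtility(0, 100)

assert cash < life                # no finite pile of dollars outweighs a life
assert life < life + cash         # but dollars still break ties
assert life + cash < life + life  # and two lives beat one life plus the cash
```

Renormalizing, as suggested above, amounts to reading off only the lives component and discarding the dollars, which is why the finite part almost never ends up mattering.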
"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Sagan
EDIT: Quote above is from the movie.
Verbatim from the comic:
I personally think that Watchmen is a fantastic study* on all the different ways people react to that realisation.
("Study" in the artistic sense rather than the scientific.)
"In a society in which the narrow pursuit of material self-interest is the norm, the shift to an ethical stance is more radical than many people realize. In comparison with the needs of people starving in Somalia, the desire to sample the wines of the leading French vineyards pales into insignificance. Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal. An ethical approach to life does not forbid having fun or enjoying food and wine, but it changes our sense of priorities. The effort and expense put into buying fashionable clothes, the endless search for more and more refined gastronomic pleasures, the astonishing additional expense that marks out the prestige car market in cars from the market in cars for people who just want a reliable means to getting from A to B, all these become disproportionate to people who can shift perspective long enough to take themselves, at least for a time, out of the spotlight. If a higher ethical consciousness spreads, it will utterly change the society in which we live." -- Peter Singer
I'm not at all convinced of this. It seems to me that a genuinely ethical life requires extraordinary, desperate asceticism. Anything less is to place your own wellbeing above those of your fellow man. Not just above, but many orders of magnitude above, for even trivial luxuries.
Julia Wise would disagree, on the grounds that this is impossible to maintain and you do more good if you stay happy.
That sounds to me like exactly the sort of excuse a bad person would use to justify valuing their selfish whims over the lives of other people. If we're holding our ideas to scrutiny, I think the idea that the 'Sunday Catholic' school of ethics is consistent could do with a long, hard look.
Julia Wise holds the distinction of having actually tried it though. Few people are selfless enough to even make the attempt.
We're talking about a person who, along with her partner, gives to efficient charity twice as much money as she spends on herself. There's no way she doesn't actually believe what she says and still does that.
That she gives more than most others doesn't imply that her belief that giving even more is practically impossible isn't hypocritical. Yes, she very likely believes it, thus it is not a conscious lie, but only a small minority of falsities are conscious lies.
Yeah, but there's also a certain plausibility to the heuristic which says that you don't get to second-guess her knowledge of what works for charitable giving until you're - not giving more - but at least playing in the same order of magnitude as her. Maybe her pushing a little bit harder on that "hypocrisy" would cause her mind to collapse, and do you really want to second-guess her on that if she's already doing more than an order of magnitude better than what your own mental setup permits?
There's an Italian proverb “Everybody is a faggot with other people's asses”, meaning more-or-less ‘everyone is an idealist when talking about issues that don't directly affect them / situations they have never experienced personally’.
You're using hypocritical in a weird way -- I'd only normally use it to mean ‘lying’, not ‘mistaken’.
Is it justified? Pretend we care nothing for good and bad people. Do these "bad people" do more good than "good people"?
Do you live a life of extraordinary, desperate asceticism? If not, why not? If so, are you happy?
And the great philosopher Diogenes would disagree with her.
So, how many lives did he save again?
Clever guy, but I'm not sure if you want to follow his example.
As it is probably intended, the more reminders like this I read, the more ethical I should become. As it actually works, the more of this I read, the less I become interested in ethics. Maybe I am extraordinarily selfish and this effect doesn't happen to most, but it should be at least considered that constant preaching of moral duties can have counterproductive results.
xkcd reference.
Not to mention the remarks of Mark Twain on a fundraiser he attended once:
It might be worth taking a look at Karen Horney's work. She was an early psychoanalyst who wrote that if a child is abused, neglected, or has normal developmental stages overly interfered with, they are at risk of concluding that just being a human being isn't good enough, and will invent inhuman standards for themselves.
I'm working on understanding the implications (how do you get living as a human being right? :-/ ), but I think she was on to something.
I wasn't abused or neglected. Did she check experimentally that abuse or neglect is more prevalent among rationalists than in the general population?
Of course that's not something a human would ordinarily do to check a plausible-sounding hypothesis, so I guess she probably didn't, unless something went horribly wrong in her childhood.
I was thinking about prase in particular, who sounds as though he might have some problems with applying high standards in a way that's bad for him.
Horney died in 1952, so she might not have had access to rationalists in your sense of the word.
When I said it might be worth taking a look at Horney's work, I really did mean I thought it might be worth exploring, not that I'm very sure it applies. It seems to be of some use for me.
Second thought: Maybe I should not have mentioned her theory about why people adopt inhuman standards, and just focused on the idea that inhuman standards are likely to backfire, as Viliam_Bur did.
Also-- if I reread I'll check this-- I think Horney focused on inhuman standards of already having a quality, which is not quite the same thing as having inhuman standards about what one ought to achieve, though I think they're related.
I suspect it's because authors of "ethical reminders" are usually very bad at understanding human nature.
What they essentially do is associate "ethical" with "unpleasant", because as long as you have some pleasure, you are obviously not ethical enough; you could do better by giving up some more pleasure, and it's bad that you refuse to do so. The attention is drawn away from good things you are really doing, to the hypothetical good things you are not doing.
But humans are usually driven by small incentives, by short-term feelings. The best thing our rationality can do is better align these short-term feelings with our long-term goals, so we actually feel happy when contributing to our long-term goals. And how exactly are these "ethical reminders" contributing to the process? Mostly by undercutting your short-term ethical motivators, by always reminding you that what you did was not enough, and therefore you don't deserve the feelings of satisfaction. Gradually they turn these motivators off, and you no longer feel like doing anything ethical, because they convinced you (your "elephant") that you can't.
Ethics without understanding human nature is just a pile of horseshit. Of course that does not prevent other people from admiring those who speak it.
"Is this a victory or a defeat? Is this justice or injustice? Is it gallantry or a rout? Is it valor to kill innocent children and women? Do I do it to widen the empire and for prosperity or to destroy the other's kingdom and splendor? One has lost her husband, someone else a father, someone a child, someone an unborn infant... What's this debris of the corpses?" -- Ashoka
"He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his candle at mine, receives light without darkening me. No one possesses the less of an idea, because every other possesses the whole of it." - Jefferson
"Nontrivial measure or it didn't happen." -- Aristosophy
(Who's Kate Evans? Do we know her? Aristosophy seems to have rather a lot of good quotes.)
*cough*
"I made my walled garden safe against intruders and now it's just a walled wall." -- Aristosophy
Is that you? That's ingenious.
For more rational flavor:
This should be the summary for entangled truths:
how to seem and be deep:
Dark Arts:
More Dark arts:
Luminosity:
No, I'm not her. I don't know who she is, but her Twitter is indeed glorious. (And Google Reader won't let me subscribe to it the way I'm subscribed to other Twitters, rar.)
She's got to be from here, here's learning biases can hurt people:
Cryonics:
I'm starting to think this is someone I used to know from tvtropes.
Twitter RSS feed
I found that, but it won't let me subscribe to it with Google Reader, only with other things I don't use.
That's odd - I did subscribe to it with Google Reader, right before I posted the link.
My bookmarklet says "can't find a feed", and the dropdown menu doesn't offer Google Reader as far as I can tell. How did you do it?
"Google" was auto-selected in my dropdown menu, so it was straightforward for me, same as always. Two clicks, one on Subscribe, the second to indicate Google Reader rather than Google Homepage.
Not sure how much troubleshooting help I can give you. Does that page at least show the recent tweets? Are you logged into Google? Maybe try going to your Google Reader page and entering the url there in the subscribe-to-a-new-feed place?
Yes.
Yes.
Doesn't work, says it can't find it. I don't know why; this is how I've subscribed to Twitters in the past.
Stewart Brand
(from Bret Victor's excellent quotes page)
Subway ad: "146 people were hit by trains in 2011. 47 were killed."
Guy on Subway: "That tells me getting hit by a train ain't that dangerous."
Wait, 32% probability of dying “ain't that dangerous”? Are you f***ing kidding me?
If I expect to be hit by a train, I certainly don't expect a ~68% survival chance. Not intuitively, anyways.
I'm guessing that even if you survive, your quality of life is going to take a hit. Accounting for this will probably bring our intuitive expectation of harm closer to the actual harm.
Hmmm, I can't think of any way of figuring out what probability I would have guessed if I had to guess before reading that. Damn you, hindsight bias!
(Maybe you could spell out and rot-13 the second figure in the ad...)
I would expect something like that chance. Being hit by a train will be very similar to landing on your side or back after falling 3 to 10 meters (I'm guessing most people hit by trains are at or near a train station, so the impacts will be relatively slow). So the fatality rate should be similar.
Of course, that prediction gives a fatality rate of only 5-20%, so I'm probably missing something.
There's the whole crushing and high voltage shock thing, depending on how you land.
-- Aristosophy (again)
From "An Elementary Approach to Thinking Under Uncertainty," by Ruth Beyth-Marom, Shlomith Dekel, Ruth Gombo, & Moshe Shaked.
Or not.
Arguably, assigning a particular floating point number between 0.0 and 1.0 to represent subjective degrees of belief is a specialized skill and it could take years of practice in order to become fluent in numerical-probability-speak.* Another possibility is that it merely adds a kind of pseudo-precision without any benefit over natural language.
In any case, it seems to be an empirical question and so should be answered with empirical data. I guess we won't really know until we have a good-sized number of people using things such as PredictionBook for extended periods of time. I'll keep you posted.
*There do exist rigorously defined verbal probabilities, but as far as I know they haven't been used much since the Late Middle Ages/Early Modern Period.
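For what the empirical question might look like in practice, here is a minimal sketch of scoring a record of numeric predictions. The records and the plain-list format are invented for illustration; nothing here is PredictionBook's actual export format or API:

```python
from collections import defaultdict

# Invented records of (stated probability, whether the event happened).
records = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.3, False), (0.2, True), (0.1, False), (0.9, True),
]

# Brier score: mean squared gap between stated probability and outcome (lower is better).
brier = sum((p - int(happened)) ** 2 for p, happened in records) / len(records)
print(f"Brier score: {brier:.3f}")

# Crude calibration table: within each stated-probability bucket, compare the
# stated figure to the observed frequency of the event.
buckets = defaultdict(list)
for p, happened in records:
    buckets[round(p, 1)].append(happened)
for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"stated {p:.1f}: happened {sum(outcomes)}/{len(outcomes)} times")
```

Whether people's scores on something like this actually improve with practice, compared to sticking with verbal hedges, is the empirical question being raised.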
--Game of Thrones, Season 2.
Also effort, expertise, and insider information on one of the most powerful Houses around. And magic powers.
He has magic powers?
Rot13'd for minor spoiling potential: Ur'f n jnet / fxvapunatre.
Reminds me of Patton:
Guy Steele
Can you elaborate on what this is getting at?
You shouldn't be deceived by the use of the word "formal" as an applause light.
I think the message is pretty similar to this quote. Put another way: be careful to not favor the letter of the law over the spirit of the law. Which is hard because brains prize anything that spares them work, and the letter of the law is (I'm guessing) more compressible than its spirit.
Michael Welfare, quoted in The Autobiography of Benjamin Franklin
Anonymous
Ken Wilber
Unless you're a fictional character. Or possibly Mike "Bad Player" Flores:
Lol, my professor would give a 100% to anyone who answered every exam question wrong. There were a couple of people who pulled it off, but most scored between 0 and 10.
I'm assuming a multiple-choice exam, and invalid answers don't count as 'wrong' for that purpose?
Otherwise I can easily miss the entire exam with "Tau is exactly six." or "The battle of Thermopylae" repeated for every answer. Even if the valid answers are [A;B;C;D].
--G.K. Chesterton, "The Duel of Dr. Hirsch"
Reversed malevolence is intelligence?
Inverted information is not random noise.
-- Montaigne
--George R. R. Martin, A Game of Thrones
I think the quote could be trimmed to its last couple of sentences and still maintain the relevant point.
Oh, totally. But I prefer the full version; it's really a beautifully written passage.
I disagree, in fact. That books strengthen the mind is baldly asserted, not supported, by this quote - the rationality point I see in it is related to comparative advantage.
-Robin Hanson, Human Enhancement
I think he's mischaracterizing the issue.
Beliefs serve multiple functions. One is modeling accuracy, another is signaling. It's not whether the environment is harsh or easy, it's which function you need. There are many harsh environments where what you need is the signaling function, and not the modeling function.