Rationality Quotes May 2013
Here's another installment of rationality quotes. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
- No more than 5 quotes per person per monthly thread, please.
-Marcus Aurelius
Observing the sky is good and productive science. Perhaps he meant that as an emperor (or responsible senator, etc) he should not have been drawn into a serious scientific or philosophical career, but for those who can afford the time and effort, it's a fine pursuit.
I was told that that part was actually a reference to astrology.
Gasp, I definitely didn't read that way. Observing the sky sounded like science, and the logical puzzles sounded like math. Plus, it was already useful at the time: it helped keep track of time, predict seasons…
Quite so-- and less obvious applications are evidenced by the example of Thales.
There's a bit in CS Lewis about modern people thinking of astrology and alchemy as the same sort of thing, but when they were current, astrology was a way of asserting an orderly universe while alchemy was asserting human power to make things very different.
That's actually kind of sad. Hopefully times have changed since then.
It's my understanding that Marcus Aurelius no longer voices this opinion.
And the people who preserved his words to reach us were more like wise men who watched the skies and solved the puzzle of cheaply distributing text, than like emperors or philosophers.
This doesn't really make sense. Just because the mice can't be coached for success, aren't aware of corporate goals, etc., it does not follow that they are one's "real bosses". Can the mice fire you? Can they give you a raise? Can they write you up for violations of corporate protocol? If you are having trouble with a coworker, can you appeal to the mice to resolve the issue? Do the mice, finally, decide what you work on? Your actual boss can take the mice away from you! Can the mice reassign you to a different boss?
Yes, and yes. This is spelled out in the original post.
I read the original post. The mice are not giving anyone any raises. The mice are not capable of human-level cognition, and do not occupy positions of administrative power in the company. The mice are just mice.
Your actual, human boss decides whether to give you a raise, what you work on, etc. He or she might choose to implement a policy that ties your assignment and your compensation, in some indirect way, to the behavior of the mice (or, more accurately, to what you do with the mice), but to insist that the mice are therefore your bosses, making the decisions that control your career, is absurd.
There is, perhaps, a word missing from the English language. If Derek Lowe were speaking, instead of writing, he would put an exaggerated emphasis on the word real and native speakers of English would pick up on a special, metaphorical meaning for the word real in the phrase real boss. The idea is that there are hidden, behind the scenes connections more potent (more real?) than the overt connections.
There is a man in a suit, call him the actual boss, who issues orders. Perhaps one order is "run the toxicology tests". The actual boss is the same as the real boss so far. Perhaps another order is "and show that the compound is safe." Now power shifts to the mice. If the compound poisons the mice and they die, then the compound wasn't safe. The actual boss has no power here. It is the mice who are the real boss. They have final say on whether the compound is safe, regardless of the orders that the actual boss gave.
Derek Lowe is giving us an offshoot of an aphorism by Francis Bacon: "Nature, to be commanded, must be obeyed." Again the point is lost if one refuses to find a poetic reading. Nature accepts no commands; there are no Harry-Potter style spells. Nature issues no commands; we do not hear and obey, we just obey. (So why is Bacon advising us to obey?)
I'm afraid I just don't buy it. The distinguishing feature of one's boss is that this person has certain kinds of (formally recognized) power over you within your organization's hierarchy. No one thinks that their boss has the power to rearrange physical reality at a whim.
My objection to the quote as a rationality quote is that it reads like this: "Because my job performance may be affected by the laws of physical reality, which my boss is powerless to alter, he (the boss) in fact has no power over me!" Which is silly. It's a sort of sounds-like-wisdom that doesn't actually have any interesting insight. By this logic, no one has any legal/economic/social power over anyone else, and no one is anyone's boss, ever, because anything that anyone can do to anyone else is, in some way, limited by the laws of physics.
P.S. I think the Francis Bacon quote is either not relevant, or is equally vacuous (depending on how you interpret it). I don't think Bacon is "advising" us to obey nature. That would be meaningless, because we are, in fact, physically incapable of not obeying nature. We can't disobey nature — no matter how hard we try — so "advising" us to obey it is nonsense.
In a similar vein, saying that the mice have "the final say" on whether the compound is safe is nonsensical. The mice have no say whatsoever. The compound is either safe or not, regardless of the mice's wishes or decisions. To say that they have "the final say" implies that if they wished, they might say differently.
In short, I think a "poetic reading" just misleads us into seeing nonexistent wisdom in vacuous formulations.
It is a very common feature of bad bosses that they think they have the authority to order their underlings to rearrange physical reality. This seems to be exactly what's going on in the original post.
The fact that the speaker is addressing his boss directly changes the meaning a lot. I'd read it as "No matter what official authority you have, if you order me to violate the laws of physics then the laws of physics are going to win." Referring to the mice as his "real boss" is an attempt to explain why he's constrained by the nature of reality to someone who spends a lot more time thinking about org charts than about the nature of reality.
This makes sense.
You're considering just the word "boss". Consider the phrase "real boss". Regardless of the meanings of the constituent words, the phrase itself can often be replaced with "the one with the real power", or "the one who actually makes the decisions." For example, "The king may have nominal power, but he's really only a figurehead, his vizier is the real boss."
Now, we still find something lacking in that the mice don't actually make decisions, the people observing the mice do. However, if the people observing the mice care about doing good research, then decisions about what course of action to take in the future must take into account what happens with the mice. What happens with the mice provides evidence which forces the researchers to update their models, possibly changing the optimal course of action, or fail. The literal meaning "The mice provide evidence, forcing us to update our models, making us, in order to do our job correctly, change our decisions." may be expressed metaphorically as "The mice make decisions on how to do our job correctly" or "The mice are the real boss."
From the context of the article, in which he uses this as an argument for not coming up with certain specific goals before beginning research, this is likely what the author meant.
Well, except that the researchers could:
a) Ignore the evidence
b) Fudge or outright falsify the evidence (horribly unethical, but it happens)
c) Abandon the experiments and do something else
etc.
and deciding to do any of these things is influenced heavily by what your boss does (i.e. what rules and incentives exist in your organization).
I do get the point made by wylram in the other subthread (communicating to your boss that one cannot change reality by managerial fiat), and it's a good point, I just don't find that it's conveyed well by the original quote (or even the source article). The key issue here, for me, is that despite the fact that "the mice" (but really more like "the laws of reality") are what determine the outcome of the experiment, not your boss, that does not mean that said laws of reality, much less said mice, in any way supplant your boss as the agent who is in control of your career advancement, position in the company, etc. (Incidentally, that is why the vizier / figurehead analogy does not hold.)
The article is talking about a salary scheme in which a certain percentage of the salary was based on how performance matched against goals, so for a research guy such as Derek, his experimental results (his mice) were determining part of his salary. No poetry required.
The answer lies in his diary...
ETA: No it doesn't. DanArmak: scholarship fail.
That's the other Bacon.
Do you mean the artist one? Can't link but: http://en.wikipedia.org/wiki/Francis_Bacon_(artist)
But Wikiquote says it was the right one...
It's the diary of Roger Bacon.
(If I am totally confused and you were not making a Methods of Rationality reference like I thought you were, please ignore this entire subthread.)
Oh dear. It turns out I thought MoR referenced Francis Bacon, not Roger Bacon... Francis was just so much more available as a "great person from the history of rationality and science" that I must have kept misreading the name every time. I had to check Wikipedia just now to make sure I knew who Roger Bacon even was!
Thanks for correcting me, then.
It does seem a bit odd why Harry would attach such importance to Roger Bacon, compared with someone like Francis.
It might be the extra cool-factor of Roger being farther back in history (died about 700 years before Harry Potter went to Hogwarts) than Sir Francis (about 360 years).
Eliezer says (emphasis mine):
Yes, that makes sense. Although the Wikipedia article on Roger Bacon has a section called "Changing interpretations of Bacon" that says:
In other words, the things he did and wrote were always known correctly, but now it's known that others said similar things too. He wasn't a rationalist revolutionary or "ahead of his time", he was a point in an uninterrupted progression towards the modern idea of science.
-- Richard Feynman's Surely You're Joking, Mr Feynman!
Fortunately, things have since gotten better in that respect.
-- Teddy Atlas
Discussing the "Near-miss bias" which they define as a tendency to "take more risk after an event in which luck played a critical role in deciding the event's [favorable] outcome."
Top Dog: The Science of Winning and Losing by Po Bronson and Ashley Merryman, page 150.
Aristotle
Source: Nicomachean Ethics, book II
-- René Descartes
I was pleasantly surprised to see this elegant phrasing of a (Machian?) rationalist principle in popular culture:
-- Joe Adama in the TV series Caprica
This is at least as old as Leibniz.
-- Devo, on the value of confronting problems rather than letting them fester
-- Byron Katie, Loving What Is
I was originally confused when I read this quote, assuming that "should" was being used in the sense of "morally just". It makes a lot more sense with "should" meaning "according to my model of reality". I assume the latter is the intended meaning.
A little of both. Her point is that human brains have a tendency to confuse "is" and "ought", mixing the moral or preferential with the actual, thereby clouding the issue.
If you don't want it to be raining, then feeling or protesting that it shouldn't be happening is an error. But it's an error that human brains commonly make, because our genes wish us to signal our disapproval of things we find objectionable, so that others will be persuaded to behave differently.
The problem is that reality isn't going to behave differently because you think it should, and most of the time even people aren't going to behave differently just because you think they should. Protesting that something should or shouldn't be a particular way is generally a non-helpful response to things as they are: if you want to change how things are, that change can only be made to happen in the future. At the present moment, things simply are how they are, and there is nothing you can do about that without using a time machine. (Even then, the change will still have to happen in your subjective future!)
The reason this passage uses "raining" is that it's a relatively innocuous example to introduce the problems involved in "arguing with reality", in a non-controversial way. Most of the subjects touched on in the rest of the book are things that people usually feel much more strongly about... and therefore have even more reason to separate "is" and "ought" about. (Like, "my spouse should listen to me", to stick to a still relatively-innocuous example.)
I'm not sure I see the problem the quotation is attacking then. Allowing for the very real possibility that I'm oblivious or live in a bubble, my model of how people work has them understanding the difference between "ought" and "expected" most of the time.
I get the impression that there is a real insight here into how people think about the world, but there's a disconnect between the idea and the author's words that I'm not bridging.
Understanding it and applying it are two different things, in the same way that knowing about a bias doesn't stop you from exhibiting it.
People tend to obsess over things that "shouldn't have" happened -- a mistake they made, an embarrassing situation, something infuriating that somebody else did, or some impending but inevitable life change. They fret and scheme and worry and just can't seem to get it out of their mind, even if they want to.
This behavior is generally caused by the alief that the thing "should not" have happened that way, or that the upcoming thing should not happen, or that they "should have done better", or some other "should" belief. Byron Katie's book is about a method of surfacing and questioning these aliefs, so as to stop fretting over what can't be changed, thus to focus on what can. As Quirrelmort put it:
While Byron Katie and Quirrelmort would disagree on quite a few things, this is one thing they have in common.
(Interestingly, her book "I need your love; is that true?" is very Quirrelmortish in the sense of highlighting how much people's seeming goodness or altruism is driven by self-centeredness -- but it's a book about how to stop doing that yourself, not using other people's actions as a way to justify doing more of it. Indeed, it's about being able to have compassion for the misguided or self-centered actions of others, not contempt ala Quirrelmort. Hm. Actually, the more I think about it, the more she seems like a true opposite to Quirrelmort, in a way that neither Harry nor Dumbledore are. If she were in-world, she'd be sort of like a non-naive McGonagall crossed with a Dumbledore who could not be made to despair or blinded by grief or regret or vengeance.)
-- Byron Katie, Loving What Is
To recognize that some of the things our culture believes are not true imposes on us the duty of finding out which are true and which are not.
--Allan Bloom, Giants and Dwarfs, "Western Civ"
That clashes in an interesting way with the recent post on Privileging the Question. Let us draw up our own, independent list of things that matter. There will be some, high up our list, about which our culture has no particular belief. Our self-imposed duty is to find out whether they are true or not, leaving less important, culturally prominent beliefs alone.
Culture changes and many prominent beliefs of our culture will fade away, truth unchecked, before we are through with more urgent matters.
I'm not sure you have avoided the question completely. When culture tells you, "X is the most important thing on which I have no particular belief", do you believe it?
Umberto Eco, Foucault's Pendulum (1989)
In statistics this is known as "overfitting".
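For anyone unfamiliar with the term, here's a minimal toy sketch of the failure mode (the data and polynomial degrees below are made up for illustration): a model flexible enough to fit every observed point exactly ends up fitting the noise, and does worse than a simpler model on fresh data from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a simple linear trend.
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

# A degree-9 polynomial threads every training point exactly (it
# "explains" the noise), while a degree-1 fit captures only the trend.
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)
honest = np.polynomial.Polynomial.fit(x, y, deg=1)

# On fresh points from the same process, the "perfect" model does worse.
x_new = np.linspace(0.05, 0.95, 10)
y_new = 2.0 * x_new
err_overfit = float(np.mean((overfit(x_new) - y_new) ** 2))
err_honest = float(np.mean((honest(x_new) - y_new) ** 2))
```

The conspiracy-theorizing in the Eco quote is the same move: enough free parameters to account for every past datum, and no predictive power left over.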
The Tau idea from Eco also?
-Arabian proverb
...this seems exactly, diametrically wrong.
Why do you say that? Many times, you say something publicly, it then becomes part of your identity, and after that there is a subconscious force that tries to make sure that your future actions and words are in line with what you said earlier.
This is also what I take from the quote - before I state a belief out loud I have a much easier time adjusting and retracting it - once it's out there, I've got pride and status tied up with it being right. Once I realized this a few years ago I starting making a conscious effort to not say things out loud until I was extremely confident that I was right. I still make this mistake more often than I would like, but less frequently.
I would have said merely wrong, i.e. when reversed it would still be stupidity. There seem to be both advantages and disadvantages to public expression with respect to it influencing you. Something along the lines of identity commitments on one side and the potential for denial, hypocrisy, and lack of feedback on the other.
One of the things that I dislike about aphorisms is that they sometimes compress insight so much that it's not easy to see what they were actually saying. I intuitively think that this is sometimes done because sounding deeply wise is often high status.
I thought of verbal overshadowing when I read it.
I'm not sure what that means.
– James Alexander Lindsay
The Vulcan your Vulcan could sound like if he wasn't made of straw, I guess? Link
Thanks for the link. I really enjoyed reading the comic archives.
Well... not quite. The selection effect makes the survival number basically impossible to calculate, but regularly surviving risky scenarios seems like it would provide a bit better odds for the influence of moxie than 249:200.
Fun Bayes application: what's the likelihood ratio for the existence vs. nonexistence of moxie-based immunity to death during battle for military leaders, given the military history of Earth?
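Mechanically, that likelihood ratio is just a product of per-battle factors, and a log-space sketch shows why it comes out badly for moxie (all numbers below, the survival rates under each hypothesis and the counts, are invented for illustration):

```python
import math

# H1: "moxie" confers near-immunity -> P(survive a battle) = 0.99
# H0: no such effect                -> P(survive a battle) = 0.90
p_h1, p_h0 = 0.99, 0.90

# Invented historical record: 50 survived battles, 5 leader deaths.
survived, died = 50, 5

# Likelihood ratio P(data | H1) / P(data | H0), computed in logs
# to avoid underflow with larger counts.
log_lr = (survived * (math.log(p_h1) - math.log(p_h0))
          + died * (math.log(1 - p_h1) - math.log(1 - p_h0)))
lr = math.exp(log_lr)
```

Each survival favors H1 by a factor of only 1.1, but each death counts 10:1 against it, so a handful of deaths swamps the survivals. Hypotheses that predict near-certain outcomes get crushed by even rare counterexamples.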
I read that charitably as indicating occasional failures in non-deadly situations. Not even Captain Kirk wins 'em all.
At some point, if the Vulcan is smart enough, I suspect the calculation would begin to hinge more on plot twists and the odds that the story is nearing its end, as the hypothesis that they are wearing Plot Armor rises up to the forefront.
I'd also suspect that the Vulcan would realize quickly that as his prediction for the probability of success approaches 1, the odds of a sudden plot reversal that plunges them all in deep poo also approaches 1. And then the Vulcan would immediately adjust to always spouting off some random high-odds-against-us number all the time just to make sure they'd always succeed heroically.
Ow, this is starting to sound very newcomblike.
Holy crap, canon!Spock is a genius rationalist after all.
The C3PO of rationalists.
(At least when in a fight, the bridge crew always takes great care to ask for damage reports, and whether someone anywhere on the ship broke a finger, before, you know, firing back.)
Hey, the humans have to do something while the computer (which somehow hasn't obtained sentience) does all the real work.
The computer is secretly making paper clips in cargo bay 2, beaming them into space when no one is looking.
I want to believe.
The last line of reasoning doesn't quite work. Not every incident has an episode made out of it.
I don't see why not. Clearly, they're even more immune to death, dismemberment and other Bad Endings when they're not in a running episode. Or they just never run into the kind of exciting situations that happen during episodes.
I also suspect that distinguishing whether an episode is running would be even easier. One dead-obvious clue: The captain insists on going on an away mission, RedShirts are sent with him, all the RedShirts die unless they're part of the primary rotation bridge crew. Instant signal that an episode is running. AFAICT, very few redshirts ever die in this manner outside of episode incidents.
I was referring to the chances that something would go wrong when it looks nearly certain to succeed. Things can go blissfully smoothly when the camera isn't running.
This discussion seems like it needs a reference to Redshirts by John Scalzi.
Yes.
Hell yes it did.
*adds to want-to-read list*
You shouldn't trust people who claim to know 4 digits of accuracy for a forecast like this. The uncertainty involved in the calculation has to be greater.
You shouldn't trust a human person who makes that claim. But if we are using 'person' in a way that includes the steel-Vulcan from the quote then yes, you should.
It is all uncertainty. There is no particular reason to doubt the steel-Vulcan's ability to calibrate 'meta' uncertainties too.
In the face of all the other evidence about the relative capabilities of the species in question that the character in question is implied to have it would be an error to overvalue the heuristic "don't trust people who fail to signal humility via truncating calculations". The latter is, after all, merely a convention. Given the downsides of that convention (it inevitably makes predictions worse) it is relatively unlikely that the Vulcans would have the same traditions regarding significant figure expression.
There is inherent uncertainty in the input. The steel-Vulcan in question counted one specific case as being 24% relevant to the current question. That's two digits of accuracy.
If many of your input variables only have two digits of accuracy the end result shouldn't have four digits of accuracy.
That is indeed the (mere, human) convention as taught in high schools of our shared culture. See above regarding the absurdity of using that heuristic as a reason for rejecting the advice of what amounts to a superintelligence.
It's not about accuracy, it's about not privileging 3700 over 3745. Neither is a particularly round number in, say, binary, and omitting saying "forty five" after converting this number into decimal system for human consumption is not much of a time saver.
But re-mentioning the “forty five” after a human asks you “three thousand seven hundred?” is mostly pointless nitpicking, and demonstrates a lack of understanding of human (well, at least, of neurotypical human) psychology IMO.
Either that, or it reflects an accurate understanding of the things that humans (justifiably or otherwise) treat as signals of authoritative knowledge. I mean, there's a reason people who want to sound like experts quote statistics to absurd levels of precision; rounding off sounds less definitive to most people.
Almost-inaudibly, whispering in a small corner of the room while scribbling in a notebook that the teacher is totally stupid while said teacher says something similar to the quote above:
under the assumption that all variables have equivalent ratios of weight to the final result and that the probability distribution of the randomness is evenly distributed across sub-digits of inaccuracy, along with a few other invisible assumptions about the nature of the data and calculations
Yep, that's me in high school.
In your example, the cited specific case only means that the final accuracy to be calculated is +- 0.01 individual ship relevance, which means that at the worst this one instance, by the standard half-the-last-significant-digit rule of thumb (which is not by any means an inherent property of uncertainties) means that there's +- 0.5% * 1 ship variance over the 542 : 2 000 000 ratio for this particular error margin.
Note also that "24% weight of the relevance of 1 ship in the odds" translates very poorly in digit-accuracies to "3745 : 1", because 3745:1 is also 0.026695141484249866524292578750667% chance, which is a shitton of digits of accuracy, and is also 111010100001 : 1, which is 12 digits of accuracy, and is also (...) *
As you can see, the "digits of accuracy" heuristic fails extremely hard when you convert between different ways to represent data. Which is exactly what happened several times in the steel-vulcan's calculations.
Moral of the story: Don't work with "digits of accuracy", just memorize your probability distribution functions over uncertainty and maximal variances and integrate all your variables with uncertainty margins and weights during renormalization, like a real Vulcan would.
Edit: * (Oh, and it's also 320 in base-35, so that's exactly two significant digits. Problem solved, move along.)
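For what it's worth, the "integrate your variables with uncertainty margins" approach this comment ends on can be sketched as plain Monte Carlo propagation. The distributions and the +-10% weight uncertainty below are assumptions for illustration, not anything from the episode:

```python
import random

random.seed(0)

# Treat the 542-in-2,000,000 survival frequency as itself uncertain,
# because each case's relevance weight is uncertain (assumed +-10%).
def sample_survival_prob():
    weight = random.gauss(1.0, 0.1)  # assumed relevance-weight spread
    return max(1e-9, weight) * 542 / 2_000_000

# Push each input sample through the calculation and read the answer's
# spread off the output distribution, instead of counting digits.
odds = sorted(1 / sample_survival_prob() for _ in range(50_000))
mid = odds[len(odds) // 2]
lo = odds[int(0.16 * len(odds))]   # 16th percentile, roughly -1 sigma
hi = odds[int(0.84 * len(odds))]   # 84th percentile, roughly +1 sigma
```

The midpoint comes out near the unrounded 2,000,000/542, and the lo-to-hi spread, not a significant-digit count, is what actually carries the uncertainty.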
And lo, Wedrifid did invent the concept of Steel Vulcan and it was good.
Do we actually have enough fictional examples of this to form a trope? (At least 3, 5 would be better.)
Perhaps, but on the off chance that the captain doesn't listen, giving the exact probability increases the chances of success. The Vulcan mentioned that.
Unweighted, that's 3690:1 odds.
Since odds to three or more significant figures have been quoted, that gives us 2856:1 odds (still without weighting). From this, I conclude that the successful incidents usually involved ships that were either very differently designed to the ship in question, or were a long time ago (case in point - the 47-year-old success case). This implies that the current ship's design is actually somewhat more likely to fall afoul of the nebula than an average ship, or an older ship. Rather substantially, in fact; enough to almost exactly counter the determination/drive factor.
An investigation into the shipyards, and current design paradigms, may be in order once the trillions of lives have been saved. I suspect that too little emphasis is being placed on safety at some point in the design process.
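For the record, the unweighted figure quoted upthread follows directly from the 542 : 2,000,000 numbers (the relevance weighting that turns this into 3745:1 is never specified, so only the unweighted step is reproduced here):

```python
# 542 survivals in roughly 2,000,000 comparable situations, reading
# "X:1 odds" as comparable situations per survival.
survivals = 542
trials = 2_000_000
odds = trials / survivals
print(f"{odds:.0f}:1")  # → 3690:1
```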
...as I recommended strenuously before we left dock at the beginning of this mission, since a similar analysis performed then gave approximately 8000:1 odds that before this mission was complete you would do something deeply stupid that got us all killed, no matter how strenuously I tried to instruct you in basic risk factor analysis. That having failed, I gave serious consideration to simply taking over the ship myself, which I estimate will increase by a factor of approximately 3000 the utility created by our missions (even taking into account the reduced "moxie factor", which is primarily of use during crises a sensible Captain would avoid getting into in the first place). However, I observe that my superiors in the High Command have not taken over Starfleet and the Federation, despite the obvious benefits of such a strategy. At first this led me to 83% confidence that the High Command was in possession of extremely compelling unshared evidence of the value of humanity's leadership, which at that time led me to update significantly in favor of that view myself. I have since then reduced that confidence to 76%, with a 13% confidence that the High Command has instead been subverted by hostile powers partial to humanity.
If I was rich enough, I would pay you to write fanfic like this.
Given how well my time is recompensed these days, I suspect you could find many far-cheaper, equally good writers.
Hmm, good point.
The steel-Vulcan in the original quote admits that humans have an edge in the field of interpersonal relations. I imagine that's why the Vulcans let the humans lead; because the humans are capable of persuading all the other races in the Federation to go along with this whole 'federation' idea, and leave the Vulcans more-or-less alone as long as they share some of their research results.
Or, to put it another way; Vulcan High Command has managed to foist off the boring administration work onto the humans, in exchange for mere unimportant status, and is not eager to have it land back on their laps again.
Of course, some Vulcans do think that a Vulcan-led empire would be an improvement over a human-led one. The last batch to think that went off and formed the Romulan Empire. The Vulcans and the Romulans are currently running a long-term, large-scale experiment to see which paradigm creates a more lasting empire in practice. (They don't tell the other races that it's all a political experiment, of course. They might not be great at interpersonal relations, but they have found out in the past that that is a very bad idea).
It's not mere unimportant status, though. The Federation makes decisions that affect the state of the Galaxy, and they make different decisions than they would under Vulcan control, and those differences cash out in terms of significant differences in overall utility. For a culture that believes that "the fate of the many outweighs the fate of the few, or the one," the choice to allow that just so they can be left alone seems bizarre.
Of course, that assumes that they consider non-Vulcans to be part of "the many." Now that I think about it, there's no particular reason to believe that's a commonly held Vulcan value/belief.
It's Spock's belief... but Spock was half-human, and the other Vulcans mostly seemed to think he was perhaps a bit too attached to that side of his ancestry. I think that they definitely assigned a good deal less weight to non-Vulcans. (Not zero weight... they did help out Humanity a bit on first contact, after all... just less weight).
Besides, given that the Vulcan High Council is pretty influential in the Federation, they can steer things their way at least some of the time; they might not be able to persuade the Federation to follow the path of maximal utility, but they can signpost the path (and warn about any cliffs in the area); the other races might not listen to them all the time, but they're quite likely to listen at least some of the time, severely limiting the utility loss.
This is a conceptually simple trade-off, although the math would be difficult. Assume that a Federation under Vulcan control would make better decisions but would have more difficulty implementing them (either on a sufficient scale or as effectively) because the strengths that make them better analysts are not the same strengths that make humans charismatic leaders. The Federation might not have as many planets, those planets might not be as willing to implement Vulcan ideas when advocated by Vulcans, etc. Is overall utility higher if Vulcans take the optimal action A% of the time at X% effectiveness or if humans take the optimal action B% of the time at Y% effectiveness? (You would adjust "the optimal action" for the relative strengths of the two species.)
If you believed that AX > BY, you formed the Romulan Empire. If you believed that AX < BY, you joined the Federation. I don't know enough Star Trek lore to say what happens if you end up with different estimates than the rest of your faction (defection, agitation for political change, execution?).
This was what I'd meant to say, only much, much better phrased. Thank you.
Eh, questionable. I'm sure many of us have been in situations where we're advising more senior staff and the manager or whoever isn't really the one making the decision anymore - they're just the talking head we get to rubber stamp what those of us who actually deal with the problem have decided is going to happen.
In practice I tend to find that the people who control access to information, rather than the people who wield formal authority, tend to have the most power in an organisation.
One could object by pointing out that moxie, determination, drive, and the human spirit have the strongest effect in life-or-death situations: situations in which their rate of survival over the past three years is obviously 100%.
And with 542 survivals, assuming Poisson statistics, the one-sigma bounds are around +-4% of that. I'll believe Spock's most significant figure, but not the other three. :-)
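The arithmetic behind that +-4%: for a Poisson count N, the one-sigma fractional uncertainty is roughly 1/sqrt(N). Applying it to the quoted odds as a sketch:

```python
import math

survivals = 542
sigma = math.sqrt(survivals)       # Poisson one-sigma on the count itself
fractional = sigma / survivals     # about 0.043, i.e. ~4.3%

# Applied to the quoted 3745:1 odds, one sigma spans roughly:
odds = 3745
lo = odds * (1 - fractional)       # ~3584
hi = odds * (1 + fractional)       # ~3906
```

which is why only the leading figure of 3745 deserves much trust.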
To summarize the important bits of the "Do steel-Vulcans provide excessive significant digits?" discussion:
Suppose that the one-sigma range tells us that where the quote has 3745, some reasonable error analysis says 3745 plus or minus 173. Then the steel-Vulcan would still say 3745 and not, e.g., 3700 or 4000, for the following reasons:
3745 is still the midpoint of the range of reasonable values, and thus the closest single value to "the truth".
Taking meta-uncertainty into account, you still should assign some probability to how likely you are to survive, which is going to be some probably-not-round number like 1 in 3745.
This sort of accuracy is probably not very helpful to humans: I don't have a cognitive algorithm that lets me distinguish between 1 in 3745 odds and 1 in 3812 odds, so saying "about 1 in 4000" provides all the information I'll actually use. Presumably a species that can come up with this kind of answer in the first place feels differently about this; in fact, there's probably some strong cultural taboo against rounding.
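The +-4% figure from the comment above is easy to check, assuming the usual sqrt(N) counting error for a Poisson process:

```python
import math

# With N = 542 observed survivals, the Poisson (counting) one-sigma error
# is sqrt(N), so the relative uncertainty on the survival count is:
survivals = 542
relative_sigma = math.sqrt(survivals) / survivals

print(round(100 * relative_sigma, 1))  # ~4.3, i.e. roughly +-4% as claimed
```

With a ~4% relative error, only the leading digit of a four-digit figure like 3745 is actually pinned down, which is the point being made about Spock's significant figures.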
Johann Wolfgang von Goethe, Hermann und Dorothea, IX. 303.
Nietzsche's hilariously intense (albeit somewhat tempestuous) intellectual crush on Goethe makes a little more sense to me now.
Can someone please explain to me how this is a rationality quote? (not sarcastic)
Seems to be along the lines of encouraging proactive agency. (Actively taking actions to optimise the world according to his preferences.) An instrumental rationality lesson.
(There are also less positive messages embedded there, which are a mix of anti-epistemology and dark arts, but I assume Malik is intending the instrumental message.)
It seems to me personally like the much more rational quote would be "He who is firm in will molds himself to the world."
There is a sense in which that is true, but unfortunately it is very close in concept space to a less rational message. Once one has already internalised the notion and intent to optimise the world according to one's preferences by any means necessary then it is a critical additional insight that one must do so by adapting to the universe that is and choosing the most effective actions within that context. Without the proactive intent already firmly in place the advice to mold oneself to the world could be misleading.
My guess is that the vast majority of the people who are trying to change the world would be better off trying to adapt to the situation they're in, both in small areas (e.g. someone who hates the fact his friends smoke would be better off not hanging around them when they smoke or just accepting it, rather than berating them and trying to get them to quit), and in big areas (e.g. someone who is extremely upset about injustice in the world would be better off carving out their own small niche in which they can do a small amount of good, rather than trying to alter foreign policy to save the whole continent of Africa). And when there are times when one person can do a whole lot of good in the world, it probably looks much more like "having an idea no one has had before and causing a ripple effect" than "molding the world to your will".
George Bernard Shaw
-Congressman Frank Underwood in the TV series House of Cards
Dan Ariely, Predictably Irrational: The Hidden Forces that Shape Our Decisions, New York, 2008, pp. 138-139
-- Walter Russell Mead, describing someone else's failure to understand what a desperate effort actually looks like.
Oglaf webcomic, "Bilge"
(Oglaf is usually NSFW, so I'm not linking, even if this particular comic has nothing worse than coarse language.)
I'll do it!
Throw away months of hard work? Fuck that! Let's fight!
A great illustration of sunk cost bias.
Took me a second.
I find myself wondering whether that pun was the original impetus for the comic. (If so, I commend the artist's restraint, which isn't something one can often say about Oglaf.)
On the contrary, a sizable fraction of Oglaf's comics involve restraints.
- Gunbuster
Well, since Conscientiousness is heritable to a substantial degree, perhaps she inherited her knack for hard work.
-- Paul Crowley
How often? I can imagine this heuristic being better or worse depending on the details of which figures are chosen and how the are used.
If I had to guess, I'd say that it's often better because picking a few random numbers leads to actually thinking about the decision for at least half a minute.
On first pass, I read this as "which figures are chosen and how the arse is used". That seemed oddly appropriate.
I figure it works better about 80% of the time, so I'm going to go with it.
In practice, guessing at numbers and running a calculation actually serves as a quick second opinion on your original intuitive decision. If the numbers imply something far different from the decision that System 1 is offering, I don't immediately shrug and go with the numbers: I notice that I am confused, and flag this as something where I need to consider the reliability both of the calculation and of my basic intuition. If the calculation checks out with my original intuition, then I simply go for it.
Basically, a heuristic utility calculation is a cheap error flag which pops up more often when my intuitions are out of step with reality than when they're in step with reality. That makes it incredibly valuable.
There's some good discussion in Thinking, Fast and Slow about when intuition works well.
Does Paul Crowley fall under the recent clarification that the spirit of the quotes thread is against quoting LessWrong regulars?
Huh! I hadn't heard of that. Retracted. (Anyway, I propose to state that explicitly.)
But there's sometimes a thread for rationality quotes with the complementary rule :)
This is a claim about reality. Do we actually know that pulling numbers out of your arse actually does produce better results than pulling the decisions out directly? Or does it just feel better, because you have a theory now?
IME pulling decisions directly out of my arse usually produces results so bad that it'd be hard to do worse, except in certain situations in which it wouldn't even occur to me to use numbers anyway.
That's a good point.
Plugging gut assumptions into models to make sure that the assumptions line up with each other generally produces better results for me. Beyond it just feeling better, it gives me things I can go away and test that I'd never have got otherwise.
Like if I think something's 75% likely to happen in X period and I think that something else is more likely to happen than that - do I think that the second thing is 80% likely to happen? And does that line up with information that I already have? Numbers force you to think proportionally. They network your assumptions together until you can start picking out bits of data that you have that are testable.
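The "networking your assumptions together" idea amounts to a coherence check on gut probabilities. A minimal sketch of the comment's own example (the specific numbers are just placeholders):

```python
# Coherence check on gut probabilities: I claim event A is 75% likely,
# and I also claim B is "more likely than A". Writing the numbers down
# exposes the constraints those claims impose on each other.
p_a = 0.75
p_b = 0.80  # my gut number for B; must be >= p_a to stay consistent

assert p_b >= p_a, "claimed B is more likely than A, but the numbers disagree"

# Conjunctions add another testable constraint:
# P(A and B) can be at most min(P(A), P(B)).
p_a_and_b = 0.70
assert p_a_and_b <= min(p_a, p_b), "conjunction can't exceed its parts"
```

Each such constraint is a place where a vague intuition can be caught contradicting another one before any external evidence arrives.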
Intuitions aren't magic, of course, but they're rarely completely baseless.
Well at least if you pull numbers out of your arse and then make a decision based explicitly on the assumption that they are valid, the decision is open to rational challenge by showing that the numbers are wrong when more evidence comes in. And who knows, the real numbers may be close enough to vindicate the decision.
If you just pull decisions out of your arse without reference to how they relate to evidence (even hypothetically), you are denying any method of improvement other than random trial and error. And when the real numbers become available, you still don't know anything about how good the original decision was.
-Adam Stark
Upvoted initially because this seemed like a good example of what I've taken to calling a "leprechaun" - a fact that spreads in spite of limited empirical backing. However, a quick Google search (fact-checking the fact-check, as it were) leads to this article, which at the very least suggests that the second-hand story told above is somewhat exaggerated: the evidence for bleeding associated with Ginkgo biloba is rather more solid than "one case report - of a single person". Upvote retracted, I'm afraid...
(ETA: also, the other story at that link makes for... interesting reading for a rationalist.)
Thanks for the fact-check! In retrospect, it probably would have been a good idea for me to fact-check this before I posted it.
And yes, the other story is odd indeed. I actually hadn't read it before I posted the link.
... And I have now upvoted both of you for the irony of failing to fact-check an anecdote about the importance of proper fact-checking.
Eliezer Yudkowsky, ‘Cognitive Biases Potentially Affecting Judgement of Global Risks’, in Nick Bostrom and Milan M. Ćirković (eds.), Global Catastrophic Risks, Oxford, 2008, p. 114
Retracted because it violates the spirit of one of the section rules.
Thou shalt not quote Yudkowsky.
Is this a trivial extension of
to include SI/MIRI stuff or a new commandment?
I think that the purpose of the current instruction is to refrain from quoting ourselves and each other. So I'd see it as a trivial extension to understand that Eliezer and other well-known members of the community should not be used for a source for quotes.
Yep, that trivial extension one.
...how are we supposed to tell people about this rule?
Edit: Aw, I thought it was funny.
Ever played Mao?
Saying the name of the game ::gives card::
One of my fondest childhood memories.
"We don't put quotes from Eliezer in the Rationality Quotes thread" seems to work. Quoting the expression of an authority is a way to lend persuasiveness to your rule assertion but it is not intrinsic to the process of rule explaining.
I can tell people "Don't drive through intersections when the lights are red" and I'm telling someone about the rule without quoting anything.
Understood.
Can someone explain to me what is going on here? The comment is getting downvoted and Eliezer himself is telling me not to quote him (or so it appears--it's not clear whether he is being serious or not). Before deciding to post the comment, I read the instructions closely and it seemed clear that the quote--which comes from a published book, not from LW, OB, or HPMoR--didn't violate any of the rules. Maybe this is all obvious to those who post regularly on this section, but I am myself rather puzzled by the whole thing.
Just don't quote Eliezer and you should be safe. Better yet, don't quote any of the LW regulars, regardless of where you found the quote. If you want to share something they posted elsewhere, use the Open thread or create a Discussion post, if it's interesting enough and you have something to add to the quote.
The spirit of the no-LW, OB, HPMoR rule is that the community shouldn't be quoting itself in quotes threads. That has a dangerous echo chamber-y feel to it.
Thanks. I didn't perceive that this was the spirit of the rule precisely because it was explicitly restricted to apply to writings from certain websites and ebooks. If the purpose is to ban quotes by (past and present) members of LessWrong, why not simply write, "No quotes by past or present members of LessWrong"?
Dunno. Maybe that's what it should be.
There's a family resemblance effect going on here. Since Eliezer is the founder of the site, quoting him violates the spirit of the rule more strongly than quoting off-site writings of other Less Wrongers.
You have the honour to have provoked the introduction of a new guideline (or a more explicit and precise modified version of an existing one). The norms shall henceforth be clearer to everyone. Bravo!
http://www.youtube.com/watch?v=rMXs1C434B8
An example of possibly resolvable different directions--
-- Karen Pryor, Don't Shoot the Dog!: The New Art of Teaching and Training
Hunter Felt
Witty to be sure, but obviously false. The causal connection between baseball and the content (as opposed to the name) of the law is probably fairly tenuous. The number three is ubiquitous in all areas of human culture.
I think further investigation would reveal that this is at most a Western cultural thing, not a hardwired human universal. Elsewhere in time and place, 4 has been the important number -- e.g. recurrences of 4 and 40 in the Hebrew scriptures; the importance of (negatively) 4 and 8 in Chinese culture, etc. Possibly some other digits have performed similarly in other places as well.
Illustration of availability bias:
http://www.youtube.com/watch?v=LVM4jR3TZsU
Thus the availability bias defeats the Pascal mugging.
-- aristosophy
Arguable example: probability and uncertainty. (More or less identical in my theorizing, but some call the idea of their identity the ludic fallacy.)
There's still a couple related fallacies that Bayesians can commit.
Most related to the "ludic fallacy" as you've described it: if you treat both epistemic (lack of knowledge) and aleatory (lack of predetermination) uncertainty with the same general probability distribution function framework, it becomes tempting to try to collapse the two together. But a PDF-over-PDFs-over-outcomes still isn't the same thing as a PDF-over-outcomes, and if you try to compute with the latter you won't get the right results.
Most related to the "ludic fallacy" as I inferred it from Taleb: if you perform your calculations by assigning zero priors to various models, as everybody does to make the calculations tractable, then if evidence actually points towards one of those neglected priors and you don't recompute with it in mind, you'll find that your posterior estimates can be grossly mistaken.
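The PDF-over-PDFs point above can be made concrete with a coin whose bias is unknown. This is a minimal illustration, not from the thread: the two treatments agree on a single flip but diverge as soon as the same unknown bias governs repeated flips.

```python
# Epistemic uncertainty: the coin's bias p is unknown -- say it is either
# 0.1 or 0.9, each with probability 1/2. Collapsing that distribution
# over models into a single distribution over outcomes gives p = 0.5.
biases = [0.1, 0.9]

# For ONE flip, the two treatments agree:
one_head_mixture = sum(biases) / len(biases)  # 0.5
one_head_collapsed = 0.5

# For TWO flips of the same coin they disagree, because under the mixture
# a single (unknown) bias governs both flips, correlating the outcomes:
two_heads_mixture = sum(p * p for p in biases) / len(biases)  # (0.01+0.81)/2 = 0.41
two_heads_collapsed = 0.5 ** 2                                # 0.25

print(two_heads_mixture, two_heads_collapsed)
```

Computing with the collapsed PDF-over-outcomes (0.25) instead of the mixture (0.41) gives visibly wrong predictions, which is exactly the failure mode described.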
This is often a good idea in mathematics. Two concepts that are equivalent in some context may no longer be equivalent once you move to a more general context; for example, familiar equivalent definitions are often no longer equivalent if you start dropping axioms from set theory or logic (e.g. the axiom of choice or excluded middle).
Outside of mathematical logic, some familiar examples include:
Kevin Warwick
I don't suppose you've got a cite for the central claim here? It's a decent enough example of reasoning from the bottom line whether or not it turns out to be true, but I Googled a couple different sets of keywords, and the only thing that came up besides a whole mess of birth records and obstetricians' papers was Warwick's lecture notes.
Google turns up a source for the "women of genius" quote, a book "Sex Differences in Cognitive Abilities" by D. Halpern. The book's quote is from someone named Bayertahl, and it's an indirect quotation from a 1989 article, "Sexual Dimorphism in the Human Brain: Dispelling the Myths", supposedly by a J. Janowsky. I say supposedly because looking for a fulltext leads me to a version with a similar title ("Sexual Dimorphism of the Human Brain: Myths and Realities"), but by M. A. Hofman and D. F. Swaab; it contains the Bayertahl quote in the original German and says that the primary source is this 1932 article by a Louis Bolk, "Hersenen en Cultuur" (Brains and Culture). This is also a full text, in Dutch; Google's translation seems to roughly confirm the claim as reported by Warwick (though the "women of genius" quote does not seem to appear in Bolk's article, at a first cursory glance).
This cites "Bayerthall 1911".
This paper is my best lead so far, but it's behind a paywall at the moment. I think it's in "Bayerthal (1911)", whatever that turns out to be.
Bayerthal (1911) is unfortunately in German. Now I'm waiting for access to this paper.
"People don't pay much attention to anything unless you give them reason to"
--The Night Circus
People don't do anything unless they have a reason to - given a sufficiently broad definition of "reason".
-- Eric Hoffer
Perhaps, but absolute power tends to be the more relevant one, as it definitionally also includes the means to pursue the goals derived from absolute corruption.
I wonder where one could apply "Absolute" and not come up with a scary sounding conclusion. Absolute skepticism seems it would turn one into a gibbering madman. Absolute logic--well what is a dangerous AI but absolute logic plus power?
Absolute knowledge also seems like it'd leave you gibbering... Just think about it: knowledge of everything, that is to say every atom of every single object in the universe.
I can only say Ouch
Absolute goodness?
Anything else would be problematic. Making people smile is good. Tiling the universe with microscopic smiley faces is not.
Absolute goodness seems tautologically good. If you pick any one good trait or action and maximize it, it grows ominous again.
That's why I chose it.
Like the smiling example I gave.
-- Megan McArdle, trying to explain Bayesian updates and the importance of making predictions in advance, without referring to any mathematics.
The value of health insurance isn't that it keeps you from getting sick. It's that it keeps you from getting in debt when you do get sick.
It does help you to pay for (say) blood-pressure medication. This might be expected to result in more people with medical aid and blood-pressure problems taking their medication.
It also helps to pay for doctors. This leads to more people going to the doctor with minor complaints, and increased chances of catching something serious earlier.
This may be true, but McArdle's point is precisely that this was not said before the study came out. At that time, people confidently expected that health insurance would, in fact, improve health outcomes. Your argument is one that was only made after the result was known; this is a classic failure mode.
(nods) Yup. Of course, McArdle's claims about what people would have said before the study, if asked, are also only being made after the results are known, which as you say is a classic failure mode.
Of course, McArdle is neither passing laws nor doing research, just writing articles, so the cost of failure is low. And it's kind of nice to see someone in the mainstream (sorta) press making the point that surprising observations should change our confidence in our beliefs, which people surprisingly often overlook.
Anyway, the quality of McArdle's analysis notwithstanding, one place this sort of reasoning seems to lead us is to the idea that when passing a law, we ought to say something about what we anticipate the results of passing that law to be, and have a convention of repealing laws that don't actually accomplish the thing that we said we were passing the law in order to accomplish.
Which in principle I would be all in favor of, except for the obvious failure mode that if I personally don't want us to accomplish that, I am now given an incentive to manipulate the system in other ways to lower whatever metrics we said we were going to measure. (Note: I am not claiming here that any such thing happened in the Oregon study.)
That said, even taking that failure mode into account, it might still be preferable to passing laws with unarticulated expected benefits and keeping them on the books despite those benefits never materializing.
This annoys me because she doesn't talk at all about the power of the study. Usually, when you see statistically insignificant positive changes across the board in a study without much power, it's a suggestion you should hesitantly update a very tiny bit in the positive direction, AND you need another study, not a suggestion you should update downward.
When ethics prevent us from constructing high power statistical studies, we need to be a bit careful not to reify statistical significance.
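The underpowered-study point can be shown by simulation. The rates and sample sizes here are made up for illustration (they are not the Oregon study's numbers): a treatment that genuinely works is still usually "statistically insignificant" when the study is too small.

```python
import math
import random

random.seed(1)

def z_stat(c_events, t_events, n):
    """Two-proportion z statistic for equal arm sizes n."""
    p1, p2 = c_events / n, t_events / n
    pooled = (c_events + t_events) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    return (p2 - p1) / se if se > 0 else 0.0

# Simulate an underpowered study many times: the treatment REALLY works
# (event rate 10% -> 12%), but with only 200 subjects per arm most runs
# fail to reach the conventional |z| > 1.96 threshold.
n, runs, hits = 200, 2000, 0
for _ in range(runs):
    control = sum(random.random() < 0.10 for _ in range(n))
    treated = sum(random.random() < 0.12 for _ in range(n))
    if abs(z_stat(control, treated, n)) > 1.96:
        hits += 1

power = hits / runs
print(power)  # well below 0.5: the study usually misses a real effect
```

With these numbers the power is on the order of 10%, so "no significant effect" is exactly what you'd expect even if the intervention works, which is why reifying statistical significance is dangerous here.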
That is Kevin Drum's take. Post 1:
Post 2:
If the effect is so small that a sample of several thousand is not sufficient to reliably observe it, then it doesn't even matter that it is positive. An analogy: Suppose I tell you that eating garlic daily increases your IQ, and point to a study with three million participants and P < 1e-7. Vastly significant, no? Now it turns out that the actual size of the effect is 0.01 points of IQ. Are you going to start eating garlic? What if it weren't garlic, but a several-billion-dollar government health program? Statistical significance is indeed not everything, but there's such a thing as considering the size of an effect, especially if there's a cost involved.
Moreover, please consider that "consistent with zero" means exactly that. If you throw a die ten times and it comes up heads six, do you "hesitantly update a very tiny bit" in the direction of the coin being biased? Would you do so, if you did not have a prior reason to hope that the coin was biased?
I respectfully suggest that you are letting your already-written bottom line interfere with your math.
If I throw a die once and it comes up heads I'm going to be confused. Now, assuming you meant "toss a coin and it comes up heads six times out of ten".
What is your intended 'correct' answer to the question? I think I would indeed hesitantly update a very (very) tiny bit in the direction of the coin being biased but different priors regarding the possibility of the coin being biased in various ways and degrees could easily make the update be towards not-biased. I'd significantly lower p(the coin is biased by having two heads) but very slightly raise p(the coin is slightly heavier on the tails side), etc.
My intended correct answer is that, on this data, you technically can adjust your belief very slightly; but because the prior for a biased coin is so tiny, the update is not worth doing. The calculation cost way exceeds any benefit you can get from gruel this thin. I would say "Null hypothesis [ie unbiased coin] not disconfirmed; move along, nothing to see here". And if you had a political reason for wishing the coin to be biased towards heads, then you should definitely not make any such update; because you certainly wouldn't have done so, if tails had come up six times. In that case it would immediately have been "P-level is in the double digits" and "no statistical significance means exactly that" and "with those errors we're still consistent with a heads bias".
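The size of the "very slight" update being argued about is easy to compute as a likelihood ratio. The alternative hypothesis p = 0.6 is an arbitrary illustrative choice, not something either commenter specified:

```python
from math import comb

def binom_lik(p, heads, n):
    """Binomial likelihood of seeing `heads` heads in `n` tosses."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

# Likelihood ratio for "coin slightly biased toward heads, p = 0.6"
# versus "fair coin", after 6 heads in 10 tosses. The comb(n, heads)
# factor cancels in the ratio.
lr = binom_lik(0.6, 6, 10) / binom_lik(0.5, 6, 10)
print(round(lr, 3))  # ~1.22
```

A likelihood ratio of about 1.2 means the evidence shifts the odds by barely a fifth; multiplied by a tiny prior on the coin being biased, the posterior hardly moves, which supports the "gruel this thin" verdict.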
Daniel Waterhouse says to Hooke in Neal Stephenson's Quicksilver
Leibniz in Neal Stephenson's The Confusion
Citation? I've read the Tao Teh Ching in a few translations and I don't recognize that at all; a Google and Google Books makes it sound like the usual apocrypha.
This is technically true for inclusive definitions of 'want' but highly misleading. There is a world of difference between "I want X but the opportunity cost (Y) is too great" and "I actively prefer !X". X and Y may be the prevention of parasitic worm infections and combating malaria. Precisely which limited resource is being allocated (time or money) changes little.
If "I don't have time" is to be replaced with an expression which conveys more personal acceptance of responsibility then it would be reasonable to translate it to "I have other priorities" but verging on disingenuous to translate it into "I don't want to".