Rationality Quotes June 2014
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Three Bayesians walk into a bar: a) what's the probability that this is a joke? b) what's the probability that one of the three is a Rabbi? c) given that one of the three is a Rabbi, what's the probability that this is a joke? (c)
According to the base rate, there is evidence that this is a joke about the Russian national team or the Suárez bite.
Wait this is actually brilliant in a couple of ways, because to get the right (estimated) answer, the listener has to distinguish between probability that one of the three is a rabbi and this is a joke, and probability that this is a joke if we put the probability of the third being a rabbi at 100%.
It follows the setup of a rationality calibration question while subverting it and rendering "guessing the teacher's password" useless, since c) is (maybe) higher than a) or b)
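The distinction the parent comment draws can be checked with a toy calculation. All numbers below are invented for illustration, not real base rates; the point is just that the joint probability P(joke AND rabbi) can never exceed P(rabbi), while the conditional probability P(joke | rabbi) can exceed both a) and b):

```python
# Toy numbers (all invented) showing why (c) can exceed both (a) and (b).
p_joke = 0.30                # a) P(this is a joke)
p_rabbi = 0.05               # b) P(one of the three is a rabbi)
p_rabbi_given_joke = 0.10    # assumed: rabbis are a stock character in bar jokes

# Product rule: P(joke AND rabbi) -- never larger than P(rabbi)
p_joint = p_joke * p_rabbi_given_joke

# Bayes: c) P(joke | rabbi) = P(rabbi | joke) * P(joke) / P(rabbi)
p_joke_given_rabbi = p_rabbi_given_joke * p_joke / p_rabbi

print(p_joint, p_joke_given_rabbi)   # roughly 0.03 and 0.6
```

With these made-up inputs, c) comes out at 0.6, higher than either a) or b), which is exactly the "subverted calibration question" effect.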
Three Bayesians walk into a bar. The third one ducks
Now I'm trying to figure out if your missing period is part of the joke.
This seems to be the original source, as far as I can tell: https://twitter.com/rickasaurus/status/471930220782448641
And now this must become a canonical example used in logical probability papers.
"My ambition is to say in ten sentences what everyone else says in a book - what everyone else does not say in a book."
-- Nietzsche
Relevant to bounded cognition and consequentialism:
-- Loyal to the Group of Seventeen, The Citadel of the Autarch, Gene Wolfe
To give some context here: 'Loyal to the Group of Seventeen' is a captured POW who is telling a story to the main characters in the hospital he's recuperating in, as part of a storytelling competition. He is from the enemy country "Ascia", which is a parody/version of Maoist China (the name comes from a typically Wolfean etymological joke: New Sun is set in South America, the reader eventually realizes, and the Ascians or 'shadowless' live near the equator where the sun casts less of a shadow); in particular, Ascians speak only in quotations from official propaganda (Maoists were notorious for quotation). Sort of Wolfe's reply to Newspeak. So when Loyal tells his story, "Loyal to the Group of Seventeen's Story—The Just Man", he speaks only in quotations and someone interprets for him.
The story simply recounts a commune whose inequitable distribution of work & food prompts the Just Man to travel to the capital and petition the mandarins there for justice, in the time-honored Chinese fashion, but he is rejected and while trying to make his case, survives by begging:
The story itself is simple but it's still one of the most interesting of the stories told within Book of the New Sun and comes up occasionally on urth.net. It's also often compared to a Star Trek episode: "Shaka, When the Walls Fell: In one fascinating episode, Star Trek traced the limits of human communication as we know it - and suggested a new, truer way of talking about the universe".
While some parts of me agree with it, other parts set off alarms: judges will try to use this as a rationalization for what looks like kind behaviour (by habit, social proof) instead of trying to evaluate the justness, especially when the case is complex or likely to threaten one of their biased beliefs.
"Emotions are not tools of cognition"
Ayn Rand
This seems to be in tension with what she has stated elsewhere. For instance:
-- Ayn Rand, Philosophy: Who Needs It?
Wouldn't immediately available estimates be a good tool of cognition?
Very interesting... it would seem that Rand doesn't actually define emotion consistently, that was not the definition I was using. But the Ayn Rand Lexicon has 11 different passages related to emotions.
http://aynrandlexicon.com/lexicon/emotions.html
More charitably we could say her conception of emotions evolved over time. Thanks for the link, I actually found some of that insightful. Also, I had forgotten how blank slatey her theory of mind was.
I beg to differ. Or are you saying that, if Ayn Rand says it, it must be wrong? In which case, I still disagree.
How does the definition you link to contradict Rand's statement? You can acknowledge emotions as real while denying their usefulness in your cognitive process.
The article I linked to wasn't just saying that emotions exist. It was saying that they're part of rationality.
If emotions didn't make people behave rationally, then people wouldn't evolve to have emotions.
Rand doesn't deny that emotions are part of rationality, she denies that they are tools of rationality. It is rational to try to make yourself experience positive emotions, but to say "I have a good feeling about this" is not a rational statement, it's an emotional statement. It isn't something that should interfere with cognition.
As for emotions affecting human behavior: I think all mammals have emotions, so it's not easy for humans to discard them over a few generations of technological evolution. Emotions were useful in the ancestral environment; they are no longer as useful as they once were.
If your hunches have a bad track record, then you should learn to ignore them, but if they do work, then ignoring them is irrational.
Even if emotions are suboptimal tools in virtually all cases (which I find unlikely), that doesn't mean that ignoring them is a good idea. It's like how getting rid of overconfidence bias and risk aversion is good, but getting rid of overconfidence bias OR risk aversion is a terrible idea. Everything we've added since emotion was built around emotion. If emotion will give you an irrational bias, then you'll evolve a counter bias elsewhere.
If your hunches have a good track record, I think you should explore that and come up with a rational explanation, and make sure it's not just a coincidence. Additionally, while following your hunches isn't inherently bad, rational people shouldn't be convinced of an argument merely based on somebody else's hunch.
Nobody is suggesting we ignore emotions, merely that we don't let them interfere with rational thought (in practice this is very difficult).
I don't follow this argument. Your biases can be evaluated absolutely, or relative to the general population. If everybody is biased toward underconfidence, then being biased toward overconfidence can be an advantage. There's a similar argument for risk aversion.
I'm not sure I agree with this, do you think that The Big Bang Theory is based on emotion? You can draw a path from emotion to the people who came up with the Big Bang Theory, but you can do that with things other than emotion as well.
My issue with emotions is only partly that they cause biases, it's also that you can't rely on other people having the same emotions as you. So you can use emotions to better understand your own goals. But you won't be able to convince people who don't know your emotions that your goals are worth achieving.
My explanation is that hunches are based on aggregate data that you are not capable of tracking explicitly.
Hunches aren't scientific. They're not good for social things. Anyone can claim to have a hunch. That being said, if you trust someone to be honest, and you know the track record of their hunches, there's no less reason to trust their hunches than your own.
I mean ignore the emotion for the purposes of coming up with a solution.
Overconfidence bias causes you to take too many risks. Risk aversion causes you to take too few risks. I doubt they cancel each other out that well. It's probably for the best to get rid of both. But I'd bet that getting rid of just one of them, causing you to either consistently take too many risks or consistently take too few, would be worse than keeping both of them.
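The "remove both or neither" point can be sketched with a toy expected-value model. Every parameter here is invented: overconfidence is modeled as inflating your probability estimate, risk aversion as discounting payoffs, and we compare which bets each kind of agent takes:

```python
# Toy model (all parameters invented): overconfidence inflates probability
# estimates, risk aversion discounts payoffs. With these numbers the two
# biases roughly cancel, so removing only one makes decisions worse.
def takes_bet(p_true, payoff, stake, overconfidence=1.0, risk_discount=1.0):
    p_believed = min(1.0, p_true * overconfidence)
    perceived_ev = p_believed * payoff * risk_discount - (1 - p_believed) * stake
    return perceived_ev > 0

for p in (0.4, 0.5):  # a bad bet (true EV = -2) and a good bet (true EV = +15)
    print(p,
          takes_bet(p, 100, 70),                                         # unbiased
          takes_bet(p, 100, 70, overconfidence=1.3, risk_discount=0.6),  # both biases
          takes_bet(p, 100, 70, overconfidence=1.3),                     # overconfident only
          takes_bet(p, 100, 70, risk_discount=0.6))                      # risk-averse only
```

With these made-up numbers, the doubly biased agent agrees with the unbiased one on both bets, while the overconfident-only agent takes the bad bet and the risk-averse-only agent declines the good one, so each single bias errs on one of the two.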
Emotions are more about considering theories than finding them. That being said, you don't come up with theories all at once. Your emotions will be part of how you refine the theories, and they will be involved in training whatever heuristics you use.
I'm certainly not arguing that rationality is entirely about emotion. Anything with a significant effect on your cognition should be strongly considered for rationality before you reject it.
This looks like you're talking about terminal values. The utility function is not up for grabs. You can't convince a rational agent that your goals are worth achieving regardless of the method you use. Am I misunderstanding this comment?
The only part I object to in what you wrote is *emotions shouldn't interfere with cognition*. I think they already are a part of cognition, so it's a bit like saying "quantum physics is weird". Perhaps you meant "emotions shouldn't interfere with rationality", in which case I'll observe that this doesn't seem to be a popular view around lesswrong. I'll also observe that I used to believe emotions should be ignored, but later came to the conclusion that it's a way too heavy-handed strategy for the modern world of complex systems. To conjecture further: cognitive psychologists tend to classify emotions, moods, and affect differently. AFAIK, the classification is based on temporal duration, from short to long in the order emotion, mood, affect. My conjecture is that during rational decision-making, emotions can and should be ignored, moods can (but not necessarily should) be ignored, and affect should not be ignored.
This is an ideal which Objectivists believe in, but it is difficult/impossible to actually achieve. I've noticed that as I've gotten older, emotions interfere with my cognition less and less and I am happy about that. You can define cognition how you wish, but given the number of people who see it as separate from emotion it's probably worth having a backup definition in case you want to talk to those people.
RE: emotions, affect, moods. I do think that emotions should be considered when making rational decisions, but they are not the tools by which we come to decisions, here's an example.
If you want to build a house to shelter your family, your emotional connection to your family is not a tool you will use to build the house. It's important to have a strong motivation to do something, but that motivation is not a tool. You'll still need hammers, drills, etc to build the house.
I believe we can and should use drugs (I include naturally occurring hormones) to modify our emotions in order to better achieve our goals.
-- Warren Buffett, in some thoughts on investing.
-- Johan Georg Granström, Treatise on Intuitionistic Type Theory
The value of these definitions is completely opaque to me. Could you elaborate on why you believe this is a good rationality quote?
Because it emphasizes that logic is a machine with strict laws and moving parts, not a pool of water to be sloshed in any direction. When you lay down what counts as a (hypothetical) cause of a proposition, you define it clearly and subject it to proof or disproof. When you demonstrate that one proposition causes another, you send truth from effects into causes according to the laws of proof.
Implication, deducibility, and computation are thus the exact same thing.
But what does it mean for one proposition to cause another? For instance, here's a true proposition: "Either Hillary Clinton is the President of the United States or there exists a planet in the solar system that is smaller than Earth." What is the cause of this proposition?
Also, when Granstrom says a proposition is true if it has a cause, what does that mean? What is "having" a cause? Does it mean that in order for a proposition to be true, its hypothetical cause must also be true? That would be a circular definition, so I'm presuming that's not it. But what then?
In the sense of implication?
A well-formed OR proposition comes with the two alternatives and a cause for one of the alternatives. So in this case, a cause (or evidence, we could say) for "there exists a planet in the solar system smaller than Earth" is the cause for the larger OR proposition.
In this case, cause is identified with computation. When we have an effective procedure (ie: a computation) taking any cause of A into a cause of B, we say that A implies B.
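This "cause = computation" reading is the Curry-Howard correspondence. As a hedged sketch in Lean (my illustration, not from Granström's book): a cause of an OR proposition is evidence for one of the alternatives, and an implication is a procedure transforming causes into causes, so composing procedures composes implications:

```lean
-- A cause (proof) of A ∨ B picks one alternative and supplies evidence for it:
example (A B : Prop) (a : A) : A ∨ B := Or.inl a

-- "A implies B" is an effective procedure taking any cause of A to a cause of B;
-- composing two such procedures yields the composed implication:
example (A B C : Prop) (f : A → B) (g : B → C) : A → C := fun a => g (f a)
```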
This is true, but the recursion hits bottom when you start talking about propositions about mere data. Constructive type theory doesn't get you out of needing your recursive justifications to bottom-out in a base case somewhere.
2 is somewhere between wrong and not even wrong. Propositions are regarded, by those who believe in them, as abstracta, and as such, non-causal. Setting that aside, it's obvious that, say, a belief can have a cause but be wrong. For instance, someone can acquire a false belief as the causal consequence of being lied to.
I agree that this is how propositions are usually regarded. The impression I got from the quote, though, is that Granstrom is proposing a re-definition of "proposition", so saying it's wrong seems like a category error. It does seem like a fairly pointless re-definition, though, which is why I asked the question.
-- Lemony Snicket, All The Wrong Questions, Book 2, When Did You See Her Last?, Chapter Seven
Does that actually work?
Ask yourself whether you'd notice someone following you when you weren't looking out for it.
The part that impressed me and led me to post it was the whole "X applies to everyone else, hmm maybe it applies to me too" idea.
But how would I distinguish 'no one has been following me and so I did not notice no one has been following me' from 'someone has followed me at some point and I did not notice that they were following me'?
No.
-- Anne Morris
"The root of all superstition is that men observe when a thing hits, but not when it misses."
-- Francis Bacon
https://www.goodreads.com/quotes/5741-the-root-of-all-superstition-is-that-men-observe-when
The quote is true to Bacon's thought, and its expression much improved in the repetition. Here is the nearest to it I can find in Bacon's works on Gutenberg:
Francis Bacon, "The Advancement of Learning"
Chance is always powerful. Let your hook always be cast; in the pool where you least expect it, there will be fish.
-- Ovid
http://izquotes.com/quote/140231
Don't know the original. Anyone? Quidquid in latine dictum sit, altum videtur, and all that.
If my research is correct:
Ovid's Ars Amatoria, Book III, Lines 425-426.
I copied the text from Tuft's "Perseus" archive.
Prisca iuvent alios: ego me nunc denique natum gratulor
Let others praise ancient times; I am glad I was born in these.
-- Ovid
http://izquotes.com/quote/140267
What about future times?
Arthur Martine, quoted by Daniel Dennett
It says a lot about the way our brains are built that you have to consciously remind yourself of this in the course of an argument; it doesn't really come naturally.
Taleb, Aphorisms
I don't get it. (I know what random variables and covariance are)
That some people do in fact work towards the common good, or conversely, are outright malevolent rather than focused on personal gain? It's a standard warning against the typical mind fallacy and the spherical cow.
That is supposedly true, but what does it have to do with covariance?
I read it as saying that people have many interests in common, so pursuing "selfish" interests can also be altruistic to some extent.
If that is the intended reading, then it's an example of sounding wise while saying nothing.
Yeah, that's Taleb for you: sounding wise/cynical while adding very little novel content.
Taleb, Aphorisms.
Peter Thiel
As somewhat of a libertarian, I tend to fall into that last group. I have to keep reminding myself that if nobody could outguess the market, then there'd be no money in trying to outguess the market, so only fools would enter it, and it would be easy to outguess.
There is a great deal of money to be made by entering the market and not trying to outguess it. This is what index funds are about. Thus many non-fools enter the market.
By "enter", I meant try to outguess.
If everyone bought stocks randomly, it would be easy to outguess.
A lot of the thread descending from here is covered by the whole of the quoted essay. I'd quote more, but I don't want to make it easy for people to just read another quote.
But it's an equilibrium, right? Lumifer's joke may be funny, but as an empirical matter, you don't see a lot of $20 bills lying on the ground. There's no easy pickings to be had in that manner. So the only people who can "outguess" the market (and I think that framing is seriously misleading, but let's put that aside for now) are individuals and organizations with hard-to-reproduce advantages in doing so - in the same way that Microsoft is profitable, but it doesn't follow that just anyone can make a profit through an arbitrage of buying developer time and selling software.
On the contrary, there are very many profitable software companies of all sizes. Writing software is a huge market that has grown very quickly and still provides large profit margins to many companies.
You might make an argument that Microsoft's real advantage is the customer lock-in they achieve through control of a huge installed base of software and files. Even there there are many software companies in the same position. It's hard to reproduce the advantage of having a large share of a large market. But that doesn't necessarily make it unprofitable to acquire even a small share of the market.
I think you misunderstand my point. Of course there are many profitable software companies (I work for one of them!), in the same way that there are also many banks, hedge funds, etc. But all of these have hard-to-reproduce advantages ("moats" in the lingo). The reason Microsoft (or any other software company) is able to buy developer time and sell software at a profit is because they have social and organisational capital, because they have synergy between that capital and their intellectual property rights, because they have customer relationships, etc etc. It is not an arbitrage and it's not true that just anyone can do it. Microsoft themselves are in fact a fine example of this; throwing resources in the fight against Google has not proven successful.
Yeah, but it's an equilibrium that it's really hard to outguess the market, not impossible.
No, why would it be?
Equilibrium is a convenient mapping tool that lets you assume away a lot of difficult issues. Reality is not in equilibrium.
Because when it's easy to outguess the market, the people who are good at it get richer and invest more money in it until it gets hard again.
It's not in perfect equilibrium constantly. I've heard of someone working out some new method that made it easy which took off over the course of a few years until enough people used it that outguessing the market was hard again.
This is an extremely impoverished framework for thinking about financial markets.
Let's introduce uncertainty. Can Alice outguess the market? Um, I don't know. And you don't know. And Alice doesn't know. All people involved can have opinions and guesses, but no one knows.
Okay then, so let's move into the realm of random variables and probability distributions. Say, Alice has come up with strategy Z. What's the expected return of implementing strategy Z? Well, it's a probability distribution conditional on great many things. We have to make estimates, likely not very precise estimates.
Alice, of course, can empirically test her strategy Z. But there is a catch -- testing strategies can be costly. It can be costly in terms of real money, opportunity costs, time, etc.
Moreover, the world is not stationary so even if strategy Z made money this year whether it will make money next year is still a random variable, the distribution parameters of which you can estimate only so well.
It's good enough.
Knowing about things like risk will tell you about the costs and benefits with higher precision. It will explain somewhat why there's lots of people involved in a market, and not just a couple of people that control the entire thing and work out the prices using other methods.
All that uncertainty makes the market difficult to predict. But all you really need to know is that regardless of how easy or hard it is to guess how a business will do, the market will ensure that you're competing with other people who are really good at that sort of thing, and outguessing them is hard.
No, I don't think so.
This can be applied to anything from looking for a job to dating.
So, no, that's not all you really need to know.
You wouldn't expect to be able to do job X better than a professional if you don't have any training, would you?
Also, economists say the same about the job market. If you don't have any particular advantage for any given job, you can't easily beat the market and make more money by picking a high-paying job. If a job made more money without some kind of cost attached, people would keep going into it until it stops working.
I guess there is more to the market. It's something that scales well, so doing it on a small scale is especially bad. It takes exactly as much work to buy $100 in stocks as $10,000. If you're dealing with tiny companies, where someone trying to make trades on that scale would mess with the price of the stock, that won't apply, but in general trying to make money on small investments would be like playing poker against someone who normally plays high stakes. They're the ones good enough to make huge amounts of money. The market won't support many of them, so they must be good.
I have a weird feeling that a bunch of people on LW have decided that there's nothing to be done in financial markets (except invest in index funds), fully committed to this belief, and actively resist any attempts to think about it... :-/
Whether it's worth picking up a $20 bill depends on
The odds for #2 and #3 are pretty high compared to the odds of similar activities when playing the market. The odds of #1 vary depending on how well travelled the place is but are generally a lot higher than for whether you're the first person to notice an opportunity in the market.
Of course, #1 is also affected by how many people use this entire chain of reasoning and conclude it's not worth picking up the bill, but the other factors are so important that this hardly matters.
The way I see it, in practical terms, it's always worth picking up. I've picked up a number of fake bills. I keep them. It's better than leaving them to torment each successive person who picks it up until someone else does it instead.
There is the old joke about a student and a professor of economics walking on campus. The student notices a $20 bill lying on the sidewalk and starts to pick it up when the professor stops him. "Don't bother," the professor says, "it's fake. If it were real someone already would have picked it up".
That joke got less funny the first time I picked up a Christian tract disguised as a $20 bill. It got a lot less funny the second.
In other words, the Patrician thinks you should be a better person than Terry Pratchett.
Duplicate
Steve Sailer
It ain't what we don't know that causes trouble, it's what we know that just ain't so.
David Deutsch, claiming the authority of an "unknown sage" http://www.theguardian.com/science/2012/oct/03/philosophy-artificial-intelligence
Usually attributed to Mark Twain.
David Deutsch is right - it doesn't appear in Twain, and it's difficult to find any good citation for the true originator.
-- Peter Singer, Marx: A Very Short Introduction
Funny how the same meaning expressed by different people led to so much outrage...
source
I always thought that this quote was probably fabricated. When a Tribe B reporter encounters a "man on the street", "black friend", "highly placed source", or, in this case, "an aide" who is ostensibly a member of Tribe A, yet goes on to issue a quote that is more or less a call to arms for B, I'm immensely suspicious.
I could still buy it though, if the aide talked like the protagonist of his own story. But he's just an orc, snarling his hatred of applause lights to the innocent reporter. I don't buy it.
It's not too uncommon for reporters to massage quotes, or at the very least to quote selectively, in order to push an editorial agenda or tell a better story.
Because no one ever got outraged at fluffy old Karl Marx, dear me no.
I agree with Plasmon that there are important differences between the two quotations other than what political tribe they come from, and that the words attributed to the Bush aide suggest a contempt for "judicious study" and looking before one leaps, which Marx's aphorism doesn't. But even if we set that aside and stipulate that the two quotations convey the exact same meaning and connotations, the point you seem to be making -- that the Bush guy got pilloried for being from the wrong tribe, whereas everyone loves Karl Marx when he says the same thing because he's from the right tribe -- seems to me badly wrong.
First of all, if you think Marx is of the same political tribe as most people who take exception to the Bush aide's remarks, you might want to think again. That's a mistake of the same magnitude (and perhaps the same type?) as failing to distinguish Chinese people from Japanese because they all look "Oriental".
Secondly, while Marx's aphorism gets quoted a lot, I don't think that's because everyone (or everyone on the "left", or whatever group you might have in mind) agrees with it. It expresses an interesting idea pithily, and that suffices.
I wouldn't suggest that Marx is in the same tribe as people who don't like Bush. I would, however, suggest that Marx is within the Overton Window for such people and that Bush is not, and that has similar effects to actually being in the same tribe.
I don't think going around and making a violent revolution to get rid of capitalism is within the Overton window of most people on the left in the US.
One could be sympathetic to many of Marx's ideas while nevertheless holding that the violent revolution idea has been shown not to work.
Most people don't reject violent revolution for the practical reason that it's an unworkable strategy, but because they find the idea of going and lynching the capitalists morally wrong.
Marx's idea of putting philosophy into action brought along the politics of revolution. Bush's relationship with the "reality-based community" leads to misleading voters and ignoring scientific findings. In both cases the ideas get judged by their practical political consequences.
No need to lynch anyone: after all, Marx didn't feel that capitalists were evil, he felt that they were just doing what the prevailing economic system forced them to do: to squeeze profit out of workers to avoid being outcompeted and driven to bankruptcy by the other capitalists. But (most of them) are not actively evil and don't need to be punished. So you could just let them live but take their stuff, and there does exist wide support for the notion of forcibly taking at least some of people's stuff (via taxation).
Marx didn't think that you could simply get a democratic majority and tax rich people's wealth away. He considered that no viable political strategy and advocated revolution instead. You ignore the political actions that Marx advocated. In dialectics, a thesis needs a contrasting antithesis to allow for synthesis.
Stalin also didn't kill people because they were evil. That's beside the point. The actions that resulted in dead people were justified because they moved history along.
The earlier question was about whether Marx would be in people's Overton window. I think that if someone thinks "well, Marx had a pretty good analysis of the problems of capitalism, though he was mistaken about the best solutions", then that counts as Marx being within the window.
What evidence moves you to say that the primary reason for rejection of violent revolution is morality rather than practicality? (And why do you/the majority of people think that violent revolution has to end in lynchings? Is there another widely-held opinion that simply stripping the capitalists of their defining trait - wealth - would be insufficient?)
That's not true. The violent revolution idea worked very well. It's just that what happened after that revolution didn't quite match Marx's expectations.
Well if you ignore all the predictions for what should happen afterwards, the mere idea that it's possible to have a violent revolution that would topple an old authoritarian regime wasn't exactly original to Marx.
The thing that was original to Marx was that a revolution is the only way to create real political change and that it's impossible to create that change inside the system.
I find it hard to believe this was an original idea. In a classic autocracy with a small rich legally empowered class, how could you possibly expect to radically change things except through violence? What alternatives are there that were ignored by all the previous violent revolutions in history?
The parent comment and the replies to it are a case study in how different people intuitively draw strongly opposed conclusions from a piece of ambiguous, politically charged evidence.
These quotes don't seem similar to me at all.
The first quote talks of "changing" reality, the second talks of "creating" it, making the first seem like an encouragement to try and change reality, and the second like solipsism (specifically, "creating our own reality").
The second also seems very dismissive of the need to think before you act, the first much less so (if at all).
The second quote is clearly not solipsism; note that what is "created" will be solid enough to be judiciously studied by other actors, empiricism, etc. Note also that the second quote does not talk about creating "reality," it talks about creating "our own reality." In other words, remoulding the world to suit your own purposes. Any sensible reading of the quote leads to that interpretation.
Like Lumifer, the Singer/Feuerbach quote, made me think immediately of the famous "reality-based community" quote.
No, the second clearly implies that the speaker simply doesn't hold with Enlightenment principles, empiricism, and all that "judicious study of discernible reality" crap. That speaker clearly prefers to just act, not out of rational calculation towards a goal, but because acting is manly and awesome. This is why people have such vicious contempt for that speaker: not only is he not acting rationally on behalf of others, he doesn't even care about acting rationally on his own behalf, and he had the big guns.
I cannot read this anywhere in the text, not even between the lines.
What other option is there? Preferring to act out of rational calculation towards a goal would put the speaker among those who "believe that solutions emerge from judicious study of discernible reality", i.e. the very people he's arguing against. We are left to guess what alternative decision procedure the speaker is proposing. eli_sennesh's interpretation is one possibility, do you have another?
I read him as saying his empire was so powerful he didn't need to care about existing reality or to plan ahead; he could make it up as he went and still expected to succeed no matter what, so he didn't need to judiciously study the existing reality before overwriting it.
I read him as saying that the people he is talking to and about are out of the loop. They write about what the politicians are doing, but only after the fact. The politicians have their own sources of information and people to analyse them, and the public-facing writers have no role in that process.
Actually, no, I think that some people have such vicious contempt for that speaker because he is a prominent member of the enemy political tribe and so needs to have shit thrown at him given the slightest opportunity.
Denotationally, that seems like a reasonable interpretation. It sets off solipsism warnings in my head, possibly because I know some self-described solipsists who really are fond of using that kind of phrasing.
However, the speaker could have chosen to say this in a more straightforward way, as you do. Something like "We are an empire now, we have the power to remould parts of the world to better suit our purposes". And yet, he did not. Why not? This is not a rhetorical question, I'm open to other possible answers, but here's what I think:
I don't think it is very controversial that this quote is arguing against "the reality-based community". It is trying to give the impression that "acting ... to create our own reality" is somehow contradictory to "solutions emerging from your judicious study of discernible reality". In reality of course, most or all effective attempts at steering reality towards a desired goal are based on "judicious study of discernible reality". He is trying to give the impression that he ("we, an empire") can effectively act without consulting, or at least using the methods of, the "reality-based community". He doesn't say that denotationally, because it's false.
Seems to me you're overthinking the simple difference between being a passive observer and being an agenty mover and shaker.
I think you are underestimating the importance of being well-informed for being an "agenty mover and shaker". Look at this guy and these guys for example. Were they "agenty movers and shakers"? They certainly tried!
Even the famous Sun Tzu, hardly a passive observer himself, devotes an entire chapter to the importance of being well-informed.
Or in an old Arab saying, the dogs bark, but the caravan moves on.
I think this is exactly the same thing. When you change existing reality you create new reality.
I read it more as pointing out that what many accept as immutable is actually mutable and changeable. This also plays into the agent vs NPC distinction (see e.g. here).
"I just don't have enough data to make a decision."
"Yes, you do. What you don't have is enough data for you not to have to make one"
http://old.onefte.com/2011/03/08/you-have-a-decision-to-make/
These two merely disagree on the meaning of the word decision, not the nature of the situation; one should pick a different scenario to make the possible point about how choosing not to choose doesn't quite work.
You post a link to "Disputing Definitions" as if there is no such thing as a wrong definition. In this case, the first speaker's definition of "decision" is wrong - it does not accurately distinguish between vanadium and palladium - and the second speaker is pointing this out.
I think that they both agree that "decision" here means "choice to embark on a course of action other than the null action", where the null action may be simply waiting for more data. Where they disagree is the relative costs of the null action versus a member of a set of poorly known actions; it seems that the second speaker is trying to remind the first that the null action carries a cost, whether in opportunity or otherwise.
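The second speaker's point can be put in rough expected-value terms: once the null action (waiting) is charged its opportunity cost, acting on imperfect data can come out ahead. A toy sketch, with purely illustrative numbers:

```python
# Toy framing of the exchange: "wait for more data" is itself an
# action with a cost. All numbers are illustrative.
p_right_now = 0.6          # chance of picking well with current data
p_right_later = 0.9        # chance of picking well after more data
payoff_good, payoff_bad = 100.0, -40.0
cost_of_waiting = 50.0     # opportunity cost of the null action

ev_act_now = p_right_now * payoff_good + (1 - p_right_now) * payoff_bad
ev_wait = (p_right_later * payoff_good
           + (1 - p_right_later) * payoff_bad
           - cost_of_waiting)

print(round(ev_act_now, 1), round(ev_wait, 1))
```

Whether waiting wins depends entirely on the value assigned to `cost_of_waiting`; the point is simply that it belongs in the calculation at all.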
Nassim Taleb
I would be interested to know how well documented this "curse of success" is? Is it studied in the economic literature? When do corporations, nations, firms, individuals suffer from this curse, when do they not? When do entire industries--like universities-- suffer from the curse? When do they survive and recover? When do they go completely bust? It seems possible to find examples going both ways, so I'm guessing there's something more subtle going on.
I'd love to see Taleb actually prove his assertion here, rather than expecting his readers' cynicism and bitterness to do the work of evidence.
Do you really doubt that universities used to charge smaller fees and now sell degrees at a high cost?
I certainly doubt the latter portion. From my observations, whether the professoriat at any given university cares about teaching well or not has little to do with their funding sources.
The professoriat is not the university. That, actually, is one of the changes in academia that entangles with the quote above: universities are becoming money-making machines and the professoriat becomes the proletariat -- nothing more than salaried employees (notice what's happening to tenure).
Though it would be weird if that were what Taleb was talking about: he has nothing but contempt for the institution of tenure (I think another of Eugine's quotes makes that clear). For Taleb, the proletarianization of professors is a good thing, and presumably he doesn't think that this is the cause of the degeneration (if there is in fact any) of higher education.
True, it's more likely to be a consequence.
If you see yourself primarily as a business with the task of exchanging cheapest-to-deliver services for money, an ossified and unyielding labor force is something you very much do not want.
I suspect that the root of the problem goes to the fact that the universities are supposed to be both centers of research and teaching institutions. It worked well on small scale when the few students were, basically, professors' apprentices. But it doesn't work well for the delivery of education to the masses.
In my estimation (having worked at several universities of various size and prestige, and more recently having consulted at all sorts of businesses), the problem is a common one in a lot of American business/government since the 1970s/80s: the rise of professional management.
At large flagship U down the street from my house, professor labor costs have dropped markedly (the trend has been to replace tenure-track lines with adjuncts and grad students, as well as to increase grant overhead; in the science departments, many professors turn a net profit because grant overhead is larger than their salary costs). Enrollment is way up, tuition is way, way up. A drive to leverage university-held patents has created massive profits for the university (with some absurdity along the way: a professor tried to start a company only to get a cease-and-desist order from a semiconductor company. The university had sold the rights to his research to the semiconductor company.)
And yet the university finds itself on the verge of bankruptcy. Why? Because management has exploded. The university now has a fellowship office (staffed entirely by managers who add no direct value) and not one but two bureaucratic offices devoted to education quality (how many people does it take to administer teacher feedback forms? Apparently about 20, several of whom make more than 100k a year, roughly 5x an adjunct teaching a full load of 10 courses). Twenty years ago, all of the deans were tenured professors who rotated into the job for a few years; now all but one are outside hires who are deans full time. The last president they hired made an absurd amount of money, and brought with him several subordinates all making 150k+ a year. I often wonder how that negotiation went: "I need not only my salary, but I need these extra people to do the parts of the job I don't like."
The problem is insidious: you hire some managers to deal with work no one wants to do. But then they start hiring people to deal with work THEY don't want to do, and so on. Pretty soon all your recent hires have nothing to do with the core competency of your business and they are eating all your profit from within. It's also damn near impossible to get rid of them, because by this point all the hiring and firing that no one wanted to deal with has become their domain.
It's not just education; I've consulted with companies that have more IT project managers than developers, that spend more money on medical benefits management than they would have spent if they simply paid every claim that walked through the door, etc.
I would call it "being taken over by bureaucracy", but I basically agree.
At private companies the bureaucracy is constrained by market pressures (unless the company finds a particularly juicy spot at the government trough), but for colleges and universities these pressures have been largely absent. Until now.
I expect the next decade to be pretty painful for, um, institutions of higher learning.
I disagree: you'd be amazed how inefficient you can be and still be profitable. Lots of very large companies are being strangled by their bureaucracy even while remaining at least somewhat profitable (generally the existence of a huge company is in itself a barrier to entry for competitors). I've worked for a surprising number of companies that have the basic problem of "I used to be very profitable, but now I find I'm slightly less profitable despite selling more products at higher margins." Even worse, I've seen attempts to solve the problem derailed by the same management apparatus.
A former boss was fond of blaming MBAs. He had a saying something along the lines of: the core problem with MBAs is the idea that you can be good at "business" without being good at any particular business. MBAs march in, say "we need to quantify these decisions" and add a ton of process (which invites the managers in). A decade later, they notice that despite generally better conditions they aren't as profitable, so they hire some big-data consultants to come in, and we say things like "you are spending $x+100 dollars to better quantify decisions that are only worth $x, and that's not even counting all the time you waste on all the paperwork the process requires."
Maybe not, though I would like to see some statistics on that. My prior on this is that education has probably followed the pattern of pretty much every other good thing in 1st world society: it is decade by decade both better and more widely available than it ever has been before.
To clarify, I am not making claims here about how well higher education works. I am saying that the structure of US universities, where faculty are hired on the basis of their ability to do original research (well, kinda sorta, it's really the ability to publish) but are expected to teach, often pretty basic stuff to pretty stupid undergrads, is suboptimal.
And the changes are easy to see: tenure is becoming harder and harder to get, while adjuncts (who are generally expected to have a Ph.D. but are not expected to do research) are multiplying on all campuses.
Some problems with your perception of American academia:
Ability to publish gets you to the interview stage, the rest is good old-fashioned politics.
Adjuncts are still expected to publish, unless they have no interest at all in upward mobility.
Of course the structure is suboptimal, but no one's really come up with a better alternative.
I don't think that's the evidence-needing assertion in that quote.
C is quirky, flawed, and an enormous success
-- Dennis Ritchie, The Development of the C language
This is true, but what makes it a rationality quote?
He's saying that one does not need to do a perfect job to win. A common failure mode is to spend ages worrying about the details while someone else's good-enough quick hack takes over the world. It's quite a resonant quote for programmers.
C and Unix obliterated their technically superior rivals. There's a whole tradition of worrying about why this happened in the still extant LISP community which was one of the principal losers. Look up 'Worse is Better' if you're interested in the details.
I'm aware of all that. But the idea that perfection is not needed, and that successful things are almost always flawed in some way, seemed too obvious to merit a quote.
But that is just typical mind fallacy on my part: if others feel this is an insight people should be reminded of, I shouldn't argue.
My understanding -- and I wasn't there for that particular holy war, so I might have some of the details wrong -- is that while LISP is in many ways the better language, it didn't at the time have the practical implementation support that C did. Efficient LISP code at the time required specialized hardware; C was and is basically a set of macros to constructs common in assembly languages for most commodity architectures. It worked, in other words, without having to build an entire infrastructure and set of development practices around it.
Later implementations of LISP pretty much solved that problem, but by that time C and its derivatives had already taken over the world.
C was a major improvement on the languages of the day: COBOL, Fortran, and plain assembly. Unlike any of those, it was at the same time fully portable, supported structured programming, and allowed freeform text.
But I don't think programmers would have embraced LISP even if its performance was as good as the other languages. For the same reasons programmers don't embrace LISP-derived languages today. It is an empirical fact that the great majority of programmers, particularly the less-than-brilliant ones, dislike pure functional programming.
Note, though, that (a) "Lisp doesn't look like C" isn't as much of a problem in a world where C and C-like languages are not dominant, and (b) something like Common Lisp doesn't have to be particularly functional - that's a favored paradigm of the community, but it's a pretty acceptable imperative/OO language too.
"Doesn't run well on my computer" was probably a bigger problem. (Modern computers are much faster; modern Lisp implementations are much better.)
Edit: still, C is clearly superior to any other language. ;-)
I suspect the main reason Lisp failed is the syntax, because the first thing early computer users would try to do is get the computer to do arithmetic. In C/Fortran/etc. you can write arithmetic expressions that look more-or-less like arithmetic expressions, e.g. (a + b/2) ** 2 / c. In Lisp you can't; the same expression comes out as something like (/ (expt (+ a (/ b 2)) 2) c).
I dislike pure functional programming. I can't think of a pure functional LISP that isn't a toy. I'm sure there is one. I wouldn't use it.
And before we hijack this thread and turn it into a holy war, C is my other favourite language.
Nassim Taleb
Nassim Taleb
Michel de Montaigne, Essais, Book III.
--Sorry, no cite. I got this from someone who said they'd been seeing it on twitter.
And what is the probability that one of them is a Prior?
Maximally uninformative.
On Confidence levels inside and outside an argument:
Heimskringla - The Chronicle of the Kings of Norway
... Huh.
On a miscellaneous note, now I know one of Pratchett's inspirations...
Obligatory xkcd
-- Hávamál (not really)
I love the Hávamál. Not so great for rationality as such, but it's probably the single best source of concentrated bitter-old-man wisdom that I know about.
I don't get how the quote is related to the article.
If the model that the dice are perfectly fair and unbreakable is correct, then the Swedish king is justified in assigning very low probability to losing after rolling two sixes; but this model turns out to be incorrect in this case, and his confidence in winning should have been lower.
Of course it would be silly to apply this reasoning to dice in real life, but there are cases (like those discussed in the linked article) where the lesson applies.
If they were fair dice, there would still be a one in 72 chance of King Olaf getting the district. That's definitely worth rolling dice for.
Admittedly, the Swedish king knew his own dice were weighted, so if he thought Olaf's weren't he'd definitely win, but since he's not going to admit to cheating he's not going to tell Olaf that.
I think the idea is something like: the probability of rolling 12 on fair 2d6 is 1/36, but the probability of fair dice being used when kings gamble for territory is far lower.
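The arithmetic checks out. A quick enumeration (assuming fair dice and that a tied roll is settled by a fair re-roll, per the 1-in-72 comment above):

```python
from fractions import Fraction

# All 36 equally likely outcomes of fair 2d6
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
p_twelve = Fraction(sum(1 for a, b in outcomes if a + b == 12), len(outcomes))
assert p_twelve == Fraction(1, 36)

# The Swedish king has already rolled double sixes, so Olaf can only
# win by tying and then winning the re-roll, which by symmetry is 1/2.
p_olaf_wins = p_twelve * Fraction(1, 2)
print(p_olaf_wins)  # 1/72
```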
Yeah, that never happened.
Probably not, but why are you certain?
Because certainty is higher status than uncertainty.
More importantly, whether or not it happened is irrelevant to its use as a rationality quote...
Update not upon fictional evidence.
Most of these quotes are just something people said, not something that happened that we could gain a moral from. Even if they were, they're not a random sample. We're cherry picking.
Whatever it is you can learn from quotes and a selection of things someone has picked out you can learn from fiction.
One man's fictional evidence is another man's thought experiment, and another's illustrative story.
To me, the lesson is "square dice are physical objects which imperfectly embody the process 'choose a random whole number between one and six'".
If you make the map-territory error and assume that "whatever the dice roll, is what we accept" while simultaneously assuming that "the dice can only each roll whole numbers between one and six; other outcomes such as 'die breaks in half' or 'die rolls into crack in floor' or 'die bursts into flame' or 'die ends up in Eliezer Yudkowsky's pants and travels unexpectedly to Washington DC' are out-of-scope", you're gonna have a bad time when one of those out-of-scope outcomes occurs and someone else capitalizes on it to turn a pure game-of-chance into a game-of-rhetoric-and-symbolcrafting.
I shall cheerfully bet at very high odds against this happening the next time I roll a standard die.
If you are actually offered this bet, you probably should not take it.
I almost said "so shall I, but... " - but then caught myself, because I may very well NOT bet at very high odds against this happening the next time I roll what I perceive to be a standard die.
If I believe my opponent is motivated to cheat, and capable of cheating in a manner that turns "roll a standard die" into "listen to my narrative interpretation of why whatever-just-happened means I won", then I'm apparently willing to take some of the resources I would have otherwise put on that bet, and instead put them on "watch out for signs of cheating and/or malfunctioning dice".
What about fanfictional evidence?
More seriously, shouldn't it be "don't update on fictional evidence as if it were true"?
Certainly it's reasonable for a story to make us reconsider our beliefs.
It's reasonable to update as a result of the analysis of fiction (including fanfiction) for two reasons, neither of which is directly related to the events of the story in the same way that events in real life are related to updating. The first is: does this prompt me to think in a way I did not before? If so, it is not evidence, but it allows you to better weigh the evidence by providing you with more possibilities. The second is: why was this written? Even a truthless piece of propaganda can be interesting evidence in that it is entangled with human actions and motivations.
I think that this would only be true if it prompts you to think in a new and random way. Fiction which prompts you to think in a new but non-random way (that is, all fiction) could very well make it worse. It could very well be that the author selectively prompts you to think only in cases where you got it right without doing the thinking. If so, then this will reduce your chance of getting it right.
For a concrete example, consider a piece of homeopathic fiction which "prompts you to think" about how homeopathy could work. It provides a plausible-sounding explanation, which some people haven't heard of before. That plausible-sounding explanation either is rejected, in which case it has no effect on updating, or accepted, making the reader update in the direction of homeopathy. Since the fiction is written by a homeopath, it wouldn't contain an equally plausible sounding (and perhaps closer to reality) explanation of what's wrong with homeopathy, so it only leads people to update in the wrong direction.
Furthermore, homeopathy is probably more important to homeopaths than it is to non-homeopaths. So not only does reading homeopathic fiction lead you to update in the wrong direction, reading a random selection of fiction does too--the homeopath fiction writers put in stuff that selectively makes you think in the wrong direction, and the non-homeopaths, who don't think homeopathy is important, don't write about it at all and don't make you update in the right direction.
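The selection effect can be made concrete with a toy odds calculation: if a reader only ever encounters pro-homeopathy arguments (because the sceptics don't write about it), taking each at face value drifts the posterior in one direction regardless of the truth. All numbers here are purely illustrative:

```python
# One-sided exposure modeled as repeated Bayesian updates on the odds.
prior_odds = 1 / 99        # P(homeopathy works) = 0.01
lr_per_story = 2.0         # each plausible-sounding argument, taken at face value

odds = prior_odds
for _ in range(5):         # five such stories later...
    odds *= lr_per_story
posterior = odds / (1 + odds)
print(round(posterior, 3))
```

The reader never sees the counter-arguments, so every update points the same way; "thinking more" here just means walking further from the prior.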
Does anyone else find it ironic that we're using fictional evidence (a story about homeopathic writers who don't exist) to debate fictional evidence?
The scenario is not evidence at all, fictional or not. The reasoning involved might count as evidence depending on your definition, but giving a concrete example is not additional evidence, it only makes things easier to understand. Calling this fictional evidence is like saying that an example mentioning parties A, B, and C is "fictional evidence" on the grounds that A, B, and C don't really exist.
Interesting point. The sort of new ways of thinking I had imagined were more along the lines of "consider more possible scenarios" - for example, if you had never before considered the idea of a false flag operation (whether in war or in "civil" social interaction), reading a story involving a false flag operation might prompt you to reinterpret certain evidence in light of the fact that it is possible (a fact not derived directly from the story, but from your own thought process inspired by the story). While it is certainly possible to update in the wrong direction, the thought process I had in mind was thus:
I have possible explanations A, B, and C for this observed phenomenon Alpha.
I read a story in which event D* occurs, possibly entangled with Alpha*, a similar phenomenon to Alpha.
I consider the plausibility of an event of the type D* occurring, taking in not only fictional evidence but also real-world experience and knowledge, and come to the conclusion that while D* takes certain liberties with the laws of (psychology/physics/logic), the event D is entirely plausible, and may be entangled with a phenomenon such as Alpha*.
I now have possible explanations A, B, C, and D for the observed phenomenon Alpha.
It is important to note that fiction has no such use for a hypothetical perfect reasoner, who begins with priors assigned to each and every physically possible event. Further, it would be of no use to anyone incapable of making that second-to-last step correctly; if they simply import D* as a possible explanation for Alpha, or arrive at some hypothetical event D which is not, in fact, reasonable to assume possible or plausible, then they have in fact been hindered by fictional "evidence".
If considering a new hypothesis fundamentally changes the way you think about priors, and the arguments you used to justify ratios between hypotheses no longer hold, then, yes, you will have to look at the evidence again.
I feel a little odd about calling that process 'updating', since I think it's a little more involved than taking into account a single new piece of evidence.
Medivh
edit: If you decide to reply, please read the original comment on SSC for context.
Not all idiots are like that, and otherwise-non-idiotic people can also get caught in dollar-auction-like discussions if they're sufficiently mind-killed (I mean, I've seen it happen on a website where supposedly 75% of people have IQs over 130), but that's a good heuristic.
Having been on both sides... how do you know when you are the idiot?
I assume I'm the idiot based on probability.
By means of prolonged reflection on my writings.
Though I am glad not everyone followed this advice with regards to me, when I was (more of) an idiot. I owe those patient, sympathetic, tolerant people a great deal.
I would also like to note that I have learned a number of interesting things by (a) spending an hour researching idiotic claims and (b) reading carefully thought out refutations of idiocy - like how they're called "federal forts" because the statutes of the states in which they were built include explicitly ceding the land upon which they were built to the federal government.
I do like the old "Never argue with stupid people, they will drag you down to their level and then beat you with experience" maxim :-)
Maybe so, but this also assumes that you're good at determining who's an idiot. Many people are not, but think they are. So you need to consider that if you make a policy of "don't argue with idiots" widespread, it will be adopted by people with imperfect idiot-detectors. (And I'm pretty sure that many common LW positions would be considered idiocy in the larger world.)
Consider also that "don't argue with idiots" has much of the same superficial appeal as "allow the government to censor idiots". The ACLU defends Nazis for a reason, even though they're pretty obviously idiots: any measures taken against idiots will be taken against everyone else, too.
Having come from there: the general perception is not that LW-ers are idiots, but that our positions are the kind of deluded crackpot nonsense smart people make up to believe in. Of course, that's largely for the more abstruse stuff, as people in the outside world will either grudgingly admit the uses of Bayesian reasoning and debiasing or just fail to understand what they are.
A large part of the problem is that all the lessons of Traditional Rationality teach us to guard against actually arriving at conclusions before amassing what I think one Sequence post called "mountains of evidence". The strength and stridency with which LW believes and believes in certain things fail a "smell test" for overconfidence, even though the really smelly things (like, for example, cryonics) are usually actively debated on LW itself (I recall reading in this year's survey that the mean LW-er believes cryonics has a 14% chance of working, which is lower than people with less rationality training estimate).
So in contradistinction to Traditional Rationality (as practiced by almost everyone with a remotely scientific education), we are largely defined (as was noted in the survey) by our dedication to Bayesian reasoning, and our willingness to take ideas seriously, and thus come to probabilistic-but-confident conclusions while the rest of the world sits on its hands waiting for further information. Well, that and our rabid naturalism on philosophical topics.
There's also a tendency to be doctrinaire among LW-ers that people may be reacting to - an obvious manifestation of this is our use of local jargon and reverential capitalization of "the Sequences" as if these words and posts have significance beyond the way they illuminate some good ideas. Those are social markers of deluded crackpots, I think.
Yes, very definitely so. The other thing that makes LW seem... a little bit silly sometimes is the degree of bullet swallowing in the LW canon.
For instance, just today I spent a short while on the internet reading some good old-fashioned "mind porn" in the form of Yves Couder's experiments with hydrodynamics that replicate many aspects of quantum mechanics. This is really developing into quite a nice little subfield, direct physical experiments can be and are done, and it has everything you could want as a reductive explanation of quantum mechanics. Plus, it's actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones.
But if you swallowed your bullet, you'll never discover it yourself. In fact, if you swallow bullets in general, I find it kind of difficult to imagine how you could function as a researcher, given that a large component of research consists of inventing new models to absorb probability mass that currently has nowhere better to go than a known-wrong model.
How could you function? Well, a quote from last year put it nicely:
Yves Couder's experiments are neat, but the underlying 'quantum' interpretation is basically just Bohm's interpretation. The water acts as a pilot wave, and the silicone oil drops act as Bohmian particles. It's very cool that we can find a classical pilot-wave system, but it's not pointing in a new interpretational direction.
Personally, I would love Bohm, but for the problem that it generalizes so poorly to quantum field theories. It's a beautiful, real-feeling interpretation.
Edit: Also neat: the best physical analogue to a black hole that I know of is water emptying down a bathtub drain faster than the speed of sound in the fluid. Many years ago, Unruh was doing some neat experiments with some poor grad student, but I don't know if they ever published anything.