Rationality Quotes September 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
Comments (1088)
Linus Pauling
Edit: another one captured by an old thread!
From the alt-text in the above-linked comic:
Steven Johnson, Everything Bad is Good For You
(His book argues that pop culture is increasing intelligence, not dumbing it down. He argues that plot complexity has increased, that keeping track of large storylines is now much more commonplace, and that these skills manifest themselves in increased social intelligence (which in turn might manifest itself in overall intelligence, I'm not sure). Here, he's specifically discussing video games and the internet.)
I highly recommend the book; it's interesting in terms of cognitive science as well as cultural and social analysis. I thought it sounded only mildly interesting when I first picked it up, but now I think it's extremely interesting. At least give it a try, because it's difficult to describe what makes it so good.
Does he look at the possibility that people are getting more intelligent for some other reason, and popular art is the result of creators serving a more intelligent audience rather than more complex art making people smarter?
No. But your question seems odd. I didn't interpret the book as an attempt to start with the increase in intelligence and then to assume/explain why pop culture was the cause. Rather, I interpreted the book as an attempt to analyze pop culture, which then found that pop culture did things that seemed like they would have beneficial effects. His analysis of the things that pop culture does in our minds is what I found interesting, not so much the parts which talked about intelligence more generally.
Additionally, I'm not really sure what someone would do to identify pop culture as the cause of this increase as opposed to something else. I'm not sure what other factors could be responsible.
I was reacting to the title of the book.
I don't believe the title implies that his primary concern is explaining an intelligence increase.
There are two ways of looking at the interaction between pop culture and intelligence. You can start by analyzing intelligence and noticing that it seems to increase, and then trying to figure out why, and then figuring out that pop culture caused it. Or, you can start by analyzing pop culture, and then noticing that it seems to do things that would have cognitive benefits, and then attaching this to the increase in intelligence as a factor that helps explain it. The book does the latter, not the former.
I think any link between tv and intelligence is unproven, but at least the book does something to debunk the common idea that television is making people stupider.
Really? I thought it was very short and not in depth at all; yeah, his handful of graphs of episodes was interesting from the data visualization viewpoint, but most of his arguments, such as they were, were qualitative and hand-wavy. (What, there are no simplistic shows these days?)
It was rather broad and not very in depth, but it was largely conceptually oriented. He conceded that there were simplistic shows, but argued that the simplistic shows of today tend to be more complicated than the simplistic shows of yesterday. If you disagree...
I don't know how I'd refute him - there are so many TV shows, both now and then! One can cherrypick pretty much anything one likes, although I don't personally watch TV anymore and couldn't do it.
(I'm reminded how people online sometimes say 'anime really sucked in time period X', because they're only familiar with anime released in the '00s and '10s, while if you look at an actual full 30+ strong roster of one of their example 'sucking' years like eg. 1991, you'll often see a whole litany of great or influential series like Nadia, City Hunter, Ranma 1/2, Dragon Ball Z, and Gundam 0083: Stardust Memory. Well, yeah, if you forget entirely about them, I suppose 1991 seems like a really sucky year compared to 2010 or whatever.)
Those anime you cite all sucked though, they were considered "great" or "influential" at the time because people didn't know any better. Anime technology has advanced vastly in the past twenty years.
Anime technology has advanced, yes, but I don't know how you go from that to 'all my examples sucked'.
That was explanation or elaboration, not evidence. I was going to just leave "they sucked" as a bare assertion rather than get into an anime slapfight on LessWrong. If you link me to your anime blog I will be happy to take it up in the comments section there, though.
Alas, I have no anime blog!
Out of curiosity, what in the 90s compares to Hikaru no Go or Madoka Magica?
Seconding Serial Experiments Lain and Evangelion. Also Cowboy Bebop was in the 90s.
Irresponsible Captain Tylor, Berserk, Excel Saga and Trigun are uneven, but have their moments.
I also have a soft spot for the trashy ultraviolent OVA stuff from the early 90s, like Doomed Megalopolis and AD Police Files, but I'm not sure if it's good in any objective sense.
Heh... In my myanimelist profile I've only listed three anime series as favourites, and Hikaru no Go and Madoka Magica are two of them.
The third one is "Revolutionary Girl Utena", from the 1990s. I think it's the sort of series that one either loves or hates -- but I loved it.
Serial Experiments Lain.
Serial Experiments Lain severely disappointed me. It's nicely creepy and atmospheric but....
(rot13) vg'f ernyyl n fgnaqneq "punatryvat" fgbel -- ohg vafgrnq bs snvevrf naq zntvpny jbeyqf naq punatryvat puvyqera, jr unir cebtenzf naq gur Vagrearg naq cebtenzf orpbzvat syrfu.
Gur fpvrapr-svpgvba ryrzragf srry whfg pbfzrgvp punatrf jura gur pber bs gur fgbel vf cher snvel-gnyr... N tevz snvel-gnyr gb or fher, ohg n snvel-gnyr abarguryrff.
I don't consider Hikaru no Go to be anything more than a gimmick anime like Moyashimon, so I have no idea for it.
The most obvious counterpart to Madoka would be Evangelion (yeah I know Sailor Moon was airing in the '90s and was more popular and influential than Madoka will ever be, but I think Eva is a better comparison).
Exactly. It doesn't look like I'm going to finish Hikaru no Go by the end of the year, but I finished Serial Experiments Lain (a 90s anime) in less than 3 days.
I'd start by looking at the shows with the highest ratings.
You could analyze the way that people in the TV business think and talk about complexity, while assuming that they know what they're doing. He seemed to do a bit of this.
-- Nate Silver, The Signal and the Noise
-- Iain McKay et al., An Anarchist FAQ, Sec. F.2.1
-- W.H. Press et al., Numerical Recipes, Sec. 15.1
I don't think it's fair to blame the mathematical statisticians. Any mathematical statistician worth his / her salt knows that the Central Limit Theorem applies to the sample mean of a collection of independent and identically distributed random variables, not to the random variables themselves. This, and the fact that the t-statistic converges in distribution to the normal distribution as the sample size increases, is the reason we apply any of this normal theory at all.
Press's comment applies more to those who use the statistics blindly, without understanding the underlying theory. Which, admittedly, can be blamed on those same mathematical statisticians who are teaching this very deep theory to undergraduates in an intro statistics class with a lot of (necessary at that level) hand-waving. If the statistics user doesn't understand that a random variable is a measurable function from its sample space to the real line, then he/she is unlikely to appreciate the finer points of the Central Limit Theorem. But that's because mathematical statistics is hard (i.e. requires non-trivial amounts of work to really grasp), not because the mathematical statisticians have done a disservice to science.
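To make that point concrete, here's a quick simulation sketch (mine, not from any of the books quoted): it's the *sample mean* of iid non-normal draws, not the draws themselves, whose distribution tightens and goes approximately normal as the sample size grows.

```python
import math
import random
import statistics

random.seed(0)

def sample_mean(n):
    # Mean of n iid draws from a decidedly non-normal distribution
    # (an exponential with mean 1, via inverse-CDF sampling).
    return statistics.fmean(-math.log(1 - random.random()) for _ in range(n))

# The CLT applies to the sample mean: as n grows, its distribution
# concentrates around the true mean (here 1) and looks more normal.
means_small = [sample_mean(5) for _ in range(2000)]
means_large = [sample_mean(200) for _ in range(2000)]

print(statistics.fmean(means_large))   # close to 1
print(statistics.stdev(means_large) < statistics.stdev(means_small))
```

The individual draws stay as skewed as ever; only the averaging produces the normality that intro courses then apply everywhere.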
-- G.K. Chesterton
But so does lukewarm water (which is also cheaper, and doesn't steam up the mirror in the bathroom).
-- Bryan Caplan
Hitler was at least a hypocrite - he got his Jewish friends to safety, and accepted same-sex relationships in himself and people he didn't want to kill yet. The kind of corruption Caplan is pointing at is a willingness to compromise with anyone who makes offers, not any kind of ignoring your principles. And Nazis were definitely against that - see the Duke in Jud Süß.
?
Please provide evidence for this bizarre claim?
Spared Jews:
Whether Hitler batted for both teams is hotly debated. There are suspected relationships (August Kubizek, Emil Maurice) but any evidence could as well have been faked to smear him.
Hitler clearly knew that Ernst Röhm and Edmund Heines were gay and didn't care until it was Long Knives time. I'm less sure he knew about Karl Ernst's sexuality.
Wittgenstein paid a huge bribe to allow his family to leave Germany. Somewhere I read that this particular agreement was approved personally by Hitler (or someone very senior in the hierarchy).
That doesn't contradict the general point that Nazi Germany was generally willing to kill and steal from its victims (especially during the war) rather than accept bribes for escape.
This may have happened some of the time, but everything I read suggests it was the exception and not the rule.
The reason Jews did not emigrate out of Germany during the 30s was that Germany had a big foreign balance problem, and managed tight government control over allocation of foreign currency. Jews (and Germans) could not convert their Reichsmarks to any other currency, either in Germany or out of it, and so they were less willing to leave. And no other country was willing to take them in in large numbers (since they would be poor refugees). This continued during the war in the West European countries conquered by Germany. (Ref: Wages of Destruction, Adam Tooze)
Later, all Jewish property was expropriated and the Jews sent to camps, so there was no more room for bribes - the Jews had nothing to offer since the Nazis took what they wanted by force.
The last bit is most famously true of Rohm, though of course there's a dozen different things going on there.
That sums it up.
Bargains and bribes seem of questionable use when a power is willing and able to kill you and seize all your assets anyway. I suppose there's the odd Swiss bank account or successful smuggling case to deal with, or people willing to destroy their possessions rather than let them fall into the hands of a murderous authority, but I'd be surprised if any of these weren't fairly small minorities in the face of the total. We're certainly not talking millions.
Corruption at lower levels could have reduced the death toll of many famous genocides (in fact, I'd imagine it did), but at the level of Hitler or Pol Pot I can only see it helping if the bribes or bargains being offered are quite large and originate outside of the regions where the repression's taking place. Much like the present situation with North Korea, come to think of it.
Baudrillard, In the Shadow of Silent Majorities
I was curious why the Baudrillard comment was downvoted when it expresses the same idea as the Nietzsche comment, it just uses a different style and approaches the problem from a different direction. Ideas, anyone?
Well, I'm not even sure whether Baudrillard's quote is grammatically well-formed, so there's that. Then again, postmodernist texts tend to be imbued with near-poetical and mystical qualities. Much like Zen koans, they're more about exemplifying a particular mind-posture and way of thinking than they are about straightforward argumentation. I think it's unfair to expect LessWrongers to be familiar with such texts.
Oh. So this quote is difficult to read, then? More difficult than the Nietzsche one? I guess inferential gaps must be coming into play here. I'm having a difficult time trying to not-understand it, trying to empathize with your viewpoint. I'm having a difficult time believing that you couldn't understand the quote, honestly.
I feel like you're generalizing too much about postmodernism. I like lots of it, and don't think that it's mystically oriented. I would say rather that it packs a lot of information into a small number of words through the clever use of words and through recurring concepts and subtle variations on those concepts.
Postmodernism can be difficult to understand, but I don't think it is in this case, and I think that its complexity is justified. Scientists use obscure terminology, but generally for a good purpose. Some scientists use obscure terminology to hide the flaws in their ideas. I view postmodern criticisms in almost exactly the same way: their complexity can be for both good and bad.
Also, Baudrillard is French. It might not be his fault if there are problems with the translated text.
Nope, I'm a native French speaker and my reaction to Baudrillard is "WTF?" and building a Markov Baudrillard quote generator to see if I can tell the difference.
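For what it's worth, the joke generator is only a few lines of Python. This is a generic word-level Markov chain sketch; the corpus below is a made-up stand-in, not actual Baudrillard.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=12, seed=0):
    # Walk the chain: at each step, pick a random observed successor.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical stand-in corpus.
corpus = ("the masses absorb the spectacle and the spectacle absorbs "
          "the masses in the silence of the sign")
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Whether readers can reliably tell the output from the original is, of course, the empirical question.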
Jargon is good. Vaguely defined jargon isn't bad - sometimes all you can do is say "sweet refers to the taste of sugar, if you don't know what that is I can't help you".
But structure shouldn't be completely unclear. Baudrillard has a lot of "X is Y" statements and very few "therefore"s. I can't tell what is a conclusion, what is an argument, what is a definition, or even whether there are anything but conclusions.
I've found some Baudrillard texts that clearly mean things, but they're not very good.
Can you specify more about what parts of the quote are confusing?
This one isn't that bad. (For utter, words-don't-work-that-way confusion, see Debord. Or good ol' Hegel.)
That bit is straightforward.
"The masses" has a standard denotation but various connotations. Freddy Nietzsche talks about enthusiastic young people, which is more specific.
What's "to keep within reason"? What this evokes is talking someone down, preventing outbursts. Applied to the masses, does he mean control - propaganda, opiate of the masses? The context suggests the opposite: to present a logical argument and try to convince audiences with it as the core of communication, more important than ethos and pathos and Cheetos.
What?
Okay, "imperative" seems to mean what social justice types call "enforcement by shaming". If you don't talk like a Vulcan, whoever is producing those great media reform plans (pretentious elites?) will shame you.
Okay, so media becomes morally loaded: information good, fluff bad. Much like food is morally loaded: vegetables good, fat bad.
Examples! Hallelujah, hosanna in excelsis! So the media reformers want to make people better. If you say a thing and hearing it doesn't make listeners better, you're selling junk food.
That seems pretty clear too: logical arguments aren't what convinces people. Nietzsche says that too, but in a more specific context: recruiting for a cause.
I assume this means: "the masses decide what they want to take from what they hear, and it's not logical argument, it's"
I'll grant that "spectacle" is a totally precise and useful term of art that people clearly define whenever I'm out of earshot. But if he's saying what Fred says, he doesn't need the jargon; it's not a rare concept.
Freddypants is saying "If you want a young, energetic, status-seeking enthusiast to be enthusiastic about your cause, don't bother calmly explaining why your cause is good. Instead, make it look awesome and promise exciting heroics.". (Which is what he does in Zarathustra, and it worked on me, but I already agreed.) Baudrillard appears to be saying "If you want to convince people, calm explanations won't work.".
Okay, thank you.
I agree that Hegel is ridiculously opaque, too.
I'm using "mystical" in a rather specialized sense, actually. What I mean is that postmodernist texts seem to eschew straightforward arguments - instead they use rhetorical and poetical patterns in a functional way, to inspire a specific mental stance in the reader. This mental stance might be quite simply described as "emptying the teacup", i.e. questioning and letting go of the "cached thoughts" which comprise one's current understanding of reality and culture. This mental stance happens to be remarkably useful in textual criticism and social science, where one often has to come to terms with (and perhaps reconstruct, at least partially) cultures which are far apart from one's own, so that a "filled cup" would be a significant hindrance.
Oh, and yes, I had quite a bit of trouble trying to understand the Baudrillard quote, although I did grok the gist of it, and I also got the similarity wrt. the Nietzsche one. But I'd say the grammar is clearer in Nietzsche's quote, and even his rhetoric seems more direct and to the point here.
Okay, gotcha. Thanks.
Priming? Baudrillard is associated with the humanities, pomo and academic philosophy; Nietzsche is associated with atheism, contrarianism and the idea of the Übermensch. The comment doesn't seem to be very strongly downvoted; possibly you're just dealing with detractors here (I daresay LW has more fans of the latter than of the former).
This was roughly my thought as well. I thought there might also have been more substantive differences though and I was curious what those might be. The only thing I could see is that Baudrillard's quote had a tone that's more critical of the masses and the way they do politics, and that Baudrillard's quote could be misread as an injunction to stop trying to make people rational (which it's not).
Nietzsche, The Gay Science
Salman Rushdie, explaining identity politics
I think "identity politics" is a term of art which also covers things other than that, things which aren't bad, like minority struggles.
You've got a point, and it's one that gets into hard issues. It can be quite hard for some people to decide whether they're being unfairly mistreated and to act on it, and the people for whom the decision is easy aren't necessarily sensible. Emotions are not a reliable tool for telling whether acting on a feeling of being unfairly mistreated makes sense.
How do you tell to what extent a particular instance of people feeling outraged is them just getting worked up for the fun of it over something they should endure, and to what extent they are building up enough allies and emotional energy to deal with a problem which (by utilitarian standards?) needs to be dealt with?
I don't know how to tell legitimate movements from illegitimate ones, but the term of art "identity politics" refers to both. ID politics is a specific kind of political advocacy, and there are both good ID politics arguments and bad ones. You'd probably just have to investigate the claims they're making on a case by case basis.
But, I wasn't trying to interrogate whether defining yourself by outrage can be good in some instances, I was trying to point out that the term "ID politics" refers to things outside of defining yourself in relation to outrage. Maybe I just misinterpreted what you were saying, but I thought your comment unintentionally hinted that you were unaware the phrase is a specific term of art. There are many types of identity politics that aren't about outrage or opposition.
You're quite right, I didn't know about it as a term of art.
I suppose I've mostly heard about the outrage variety of identity politics-- it tends to be more conspicuous.
Bill Clinton
Marcus Aurelius
Meh, there are worse things to be than a mean man.
There are considerably more worse things to be than a noble one.
(Baruch Spinoza)
Pierre Proudhon, to Karl Marx
More from Scott Adams:
I cannot tell if this is rationality or anti-rationality:
Steve Ballmer
I'd say telling an interviewer you have sufficient confidence in your product not to need a backup plan is rational; actually not having one isn't.
See, if instead of "I'm not paid to have doubts." he said "I am paid to address all doubts before a product is released", that would have made more sense.
I'm reminded of a quote in Lords of Finance (which I finished yesterday) which went something like 'Only a fool asks a central banker about the currency and expects an honest answer'. Since confidence is what keeps banks and currencies going...
This comes across as inauthentic and slightly scared to me. At best, he's not great at PR. At worst, he doesn't have any backup plan. So that would support calling it irrationality.
Well. I was thinking about it, and it seems like not having a backup plan is the kind of thing that would send bad signals to investors and whatnot. It's not clear to me that he's better off doing this than explaining how Microsoft is a fantastically professional company that's innovating and reaching into new frontiers, etc.
I don't know specifically what alternate products would potentially be good ideas for them though. I agree that backup plans are good in general but I don't know if they're good for Microsoft specifically, based on the resources they have. Windows is kind of their thing, I don't know if they could execute on anything else.
-- xkcd 667
-William of Ockham
This is an interesting quote for historical reasons but it is not a rationality quote.
It makes a very important reply to anyone who claims that e.g. you should stick with Occam's original Razor and not try to rephrase it in terms of Solomonoff Induction because SI is more complicated.
Humans and their silly ideas of what's complicated or not.
What I find ironic is that SI can be converted into a similarly terse commandment. "Shorter computable theories have more weight when calculating the probability of the next observation, using all computable theories which perfectly describe previous observations" -- Wikipedia.
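Here's a toy illustration of that commandment (mine, and emphatically *not* real Solomonoff induction, which is uncomputable): take a tiny made-up hypothesis class where "programs" are repeating bit-patterns and a pattern's length stands in for program length, keep only the hypotheses consistent with the observations, and weight each by 2^-length.

```python
observations = "010101"

def predictions(pattern, n):
    # The infinite repetition of `pattern`, truncated to n bits.
    return (pattern * n)[:n]

# Tiny hypothesis class: repeating bit-patterns of various lengths.
hypotheses = ["0", "1", "01", "10", "0101011"]
consistent = [h for h in hypotheses
              if predictions(h, len(observations)) == observations]

# Shorter consistent theories get exponentially more prior weight.
weights = {h: 2 ** -len(h) for h in consistent}
total = sum(weights.values())
prob_next_is_0 = sum(w for h, w in weights.items()
                     if predictions(h, len(observations) + 1)[-1] == "0")
print(prob_next_is_0 / total)  # ≈ 0.97
```

Both "01" and "0101011" fit the data, but the shorter theory carries 2^5 times the weight, so the prediction leans heavily toward another 0.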
I read this as a reminder not to add anything to that map that won't help you navigate the territory. How is this not a rationality quote? Are you rejecting it merely because of the third disjunct?
The quote doesn't say that, this is (only) a fact about your reading.
I'm not especially impressed with the first two either, nor the claim to be exhaustive (thus excluding other valid evidence). It basically has very little going for it. It is bad epistemic advice. It is one of many quotes which require abandoning most of the content and imagining other content that would actually be valid. I reject it as I reject all such examples.
— Robert A. Heinlein
I think that quote is much too broad with the modifier "might." If you should procrastinate based on a possibility of improved odds, I doubt you would ever do anything. At least a reasonable degree of probability should be required.
Not to mention that the natural inclination of most people toward procrastination means that they should be distrustful of feelings that delaying will be beneficial; it's entirely likely that they are misjudging how likely the improvement really is.
That's not, of course, to say that we should always do everything as soon as possible, but I think that to the extent that we read the plain meaning from this quote, it's significantly over-broad and not particularly helpful.
There's also natural inclinations towards haste and impatience. (They probably mostly crop up around different things / in different people than procrastinatory urges, but the quote is not specific about what it is you could put off.)
I'm reminded of the saying, "A weed is just a plant in the wrong place." Different people require different improvements to their strategies.
That's certainly a fair point.
I suppose it's primarily important to know what your own inclinations are (and how they differ in different areas) and then try to adjust accordingly.
Do it today, and fix/retry tomorrow on failure?
Perhaps it's a one-time thing.
Edward Tufte, "Beautiful Evidence"
...what else?
Quantum physics
Harry Potter, in Harry Potter and the Methods of Rationality by Eliezer Yudkowsky
I understand people. Imperius! People, I command you to build me a moon rocket!
-- Draco
...That might actually work, as long as he understands which people to Imperius.
That arguably counts as LW/OB.
It is arguably a lot more affiliated to LW than OB is. (We successfully got OB removed from the no-quote list at some stage. Unfortunately someone reverted it.)
If HPMoR isn't allowed, that should be specified in the rules.
I mean that it's a nice quote, but I suspect that's the reason for the downvotes.
I'm not overly impressed with the quote either. Sometimes you can understand things and still not command them. Sometimes you just lose and all understanding you can get will just tell you to go do something else that you can control.
That would be much more convincing coming from literally anyone other than Kanazawa. It takes very little charity to interpret his critics as saying, not "Your theories are inherently racist" but "Your theories are only some of many compatible with your findings; you are privileging them because you are biased in favor of hypotheses that postulate certain races naturally do worse than others".
I don't know what to learn from the quote. It's literally true, but it's also clearly unhelpful, since Kanazawa writes this while following non-truth-seeking algorithms. Maybe the moral is "If someone calls you a mean name, address the content of the criticism and not whether the mean name applies", or maybe "Don't be a giant flaming hypocrite".
He isn't a great scientist in my mind since he seems to often just lazily reverse stupidity, but it was a good quote.
-- Jianzhi Sengcan
Edit: Since I'm not Will Newsome (yet!) I will clarify. There are several useful points in this, but I think the key one is the virtue of keeping one's identity small. Speaking it out loud as a sort of primer, meditation, or prayer before approaching difficult or emotional subjects has for me proven a useful ritual for avoiding motivated cognition.
For the curious, it's the opening of 信心铭 (Xinxin Ming), whose authorship is disputed (probably not the Zen patriarch Jianzhi Sengcan). In Chinese, that part goes:
(The Wikipedia article lists a few alternate translations of the first verses, with different meanings)
Do I understand you to be saying that you avoid "the struggle between 'for' and 'against'" to an unusual degree compared to the average person? Compared to the average LWer?
No. I'm claiming this helps me avoid it more than I otherwise could. Much for the same reason I try as hard as I can to maintain an apolitical identity. From my personal experience (mere anecdotal evidence) both improve my thinking.
Respectfully, your success at being apolitical is poor.
Further, I disagree with the quote to the extent that it implies that taking strong positions is never appropriate. So I'm not sure that your goal of being "apolitical" is a good goal.
Since we've already had exchanges on how I use "being apolitical", could you please clarify your feedback? Are you saying I display motivated cognition when it comes to politically charged subjects, or behave tribally in discussions? Or are you just saying I adopt stances that are associated with certain political clusters on the site?
Also, like I said, it is something I struggle with.
My impression is that you are unusually NOT-mindkilled compared to the average person with political positions/terminal values as far from the "mainstream" as yours are.
You seem extremely sensitive to the facts and the nuances of opposing positions.
Now I feel embarrassed by such flattery. But if you think this is an accurate description, then perhaps my trying to evict "the struggle between 'for' and 'against'" from my brain might have something to do with it?
I'm not sure I understand what you mean by this then. Let's taboo apolitical. To rephrase my original statement: "I try as hard as I can to maintain an identity, a self-conception that doesn't include political tribal affiliations."
You certainly seem to have succeeded in maintaining a self-identity that does not include a partisan political affiliation. I don't know whether you consider yourself Moldbuggian (a political identity) or simply think Moldbug's ideas are very interesting. (Someday, we should hash out better what interests you in Moldbug.)
My point when I've challenged your self-label "apolitical" is that you've sometimes used the label to suggest that you don't have preferences about how society should be changed to better reflect how you think it should be organized. At the very least, there's been some ambiguity in your usage.
There's nothing wrong with having opinions and advocating for particular social changes. But sometimes you act like you aren't doing that, which I think is empirically false.
I disagree with the quote too. On the other hand, the idea of keeping one's identity small is not the same as being apolitical. It means you have opinions on political issues, but you keep them out of your self-definition so that (a) changing those opinions is relatively painless, (b) their correlations with other opinions don't influence you as much.
(Caricatured example of the latter: "I think public health care is a good idea. That's a liberal position, so I must be a liberal. What do I think about building more nuclear plants, you ask? It appears liberals are against nuclear power, so since I am a liberal I guess I am also against nuclear power.")
I agree with everything you just said - keeping one's identity small does not imply that one cannot be extremely active trying to create some kind of social/political change.
I understand how a position can be correct or incorrect. I don't understand how a position can be strong or weak.
As I was using the term, "strong" is a measure of how far one's political positions/terminal values are from the "mainstream."
I'm very aware that distance from mainstream is not particularly good evidence of the correctness of one's political positions/terminal values.
In a world of uncertainty, numbers between 0 and 1 find quite a bit of use.
I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly.
A possible interpretation is that the "strength" of a belief reflects the importance one attaches to acting upon that belief. Two people might both believe with 99% confidence that a new nuclear power plant is a bad idea, yet one of the two might go to a protest about the power plant and the other might not, and you might try to express what is going on there by saying that one holds that belief strongly and the other weakly.
You could of course also try to express it in terms of the two people's confidence in related propositions like "protests are effective" or "I am the sort of person who goes to protests". In that case strength would be referring to the existence or nonexistence of related beliefs which together are likely to be action-driving.
They might also differ in just how bad an idea they think it is.
It means that many kinds of observation that you could make will tend to cause you to update that probability less.
E.T. Jaynes' Probability Theory goes into some detail about that in the chapter about what he calls the A_p distribution.
Concretely: Beta(1,2) and Beta(400,800) have the same mean.
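Spelling that out (a sketch of the standard Beta-Bernoulli conjugate update, where the posterior mean of Beta(a, b) is a/(a+b)): both priors assign the same probability, but they react very differently to the same new evidence.

```python
from fractions import Fraction

def beta_mean(a, b):
    # Mean of a Beta(a, b) distribution.
    return Fraction(a, a + b)

# Same point estimate...
assert beta_mean(1, 2) == beta_mean(400, 800) == Fraction(1, 3)

# ...but observe 10 successes (add them to `a`) and watch each
# posterior mean move.
print(float(beta_mean(1 + 10, 2)))      # moves a lot
print(float(beta_mean(400 + 10, 800)))  # barely moves
```

That difference in responsiveness to evidence is one reasonable cash-out of believing the same probability "strongly" versus "weakly".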
I don't understand K to be arguing in favor of high-entropy priors, or T to be arguing in favor of low-entropy priors. My guess is that TimS would call a position a "strong position" if it was accompanied by some kind of political activism.
The claim looks narrower: repeating the poem makes Konkvistador more likely to avoid the struggle.
I like his contributions, but Konkvistador is not avoiding the struggle, when compared to the average LWer.
Sick people, for obvious reasons, use up more medicine and may end up talking a lot about various kinds of treatments.
Case in point:
-- Ro-Man
I don't get it. Is this saying "Don't be prejudiced or push for any overarching principle; take each situation as new and unknown, and then you'll easily find the appropriate response to it", or is this the same old Stoic "Don't struggle trying to find food, choose to be indifferent to starvation" platitude?
Edited in a clarification. Though it will not help you: since I have shown you the path, you cannot find it yourself. Sorry, couldn't resist teasing. Or am I? :P
-- 21st c. AI Clippy
Albert Einstein (maybe)
Cf. this and this.
Warning: Your mileage may vary.
"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." -George Bernard Shaw
Sadly, duplicate.
-- Ludwig Wittgenstein, Philosophical Investigations
-- Dienekes Pontikos, Citizen Genetics
Jon Skeet
-- Kaiki Deishū, Episode 7 of Nisemonogatari.
I'll risk a bit of US politics, just because I like the quote:
Scott Adams on one of the two presidential candidates being skilled at the art of winning (with some liberal use of dark arts).
Ambrose Bierce
Ernest Hemingway
Excellent. A shortcut to nobility. One day of being as despicable as I can practically manage and I'm all set.
It does not state which (!) former self, so I would expect some sort of median, mean, or summary of your former self, not just the last day. So I'm sorry, but there is no shortcut ;-)
Indeed: if you were to be ignoble one day and normal the next, then your nobility would have gone up significantly.
Tenzin Gyatso, 14th Dalai Lama
That's intriguing, but it also sounds like a case of non-apples.
Well, it is a necessary step to find other fruits.
Buckaroo Banzai
...
...
dur....
....
What?
I'll take the new -5 karma hit to point out that this comment shouldn't be downvoted. It is an interesting critique of the post it replies to.
interesting?
Probably it would be even more interesting if I could understand it.
Eliezer posted a comment that's essentially devoid of content. This satirizes the original quote's claim that one should be of "no mind whatsoever" by illustrating that mindlessness isn't particularly useful-- a truly mindless individual (like that portrayed in the comment) would have no useful contributions to make.
"No mind" is ordinary mind.
That went completely over my head. (I guessed he was alluding to some concept whose name began with “dur”, but I couldn't think of any relevant one.)
I interpreted 'mind' as 'opinion', so I didn't get it either.
How is it a critique? The quote is an adequate expression of Eliezer's own third virtue of rationality, and I daresay if anyone had responded as uncharitably as that to his "Twelve Virtues", he would have considered 'dur' to be an adequate summary of that person's intellect.
How is it uncharitable? Eliezer is emptying his mind as recommended by Doctor Banzai. Not sure how it's a "critique" though.
See a priori, No Universally Compelling Arguments.
The critique is of the phrase "but to be of no mind whatsoever."
The uncharitable interpretation is that something without a mind is a rock; the charitable interpretation is to take "mind" as "opinion."
I ended up downvoting the criticism because it doesn't apply to the substance of the quote, but to its word choice, and is itself not as clear as it could be.
My interpretation was that it was advising System 1 rather than System 2 reasoning, with "no mind" meaning no explicit thoughts.
The criticism is that a martial artist or scientist is actually trying to attain a highly specific brain-state in which neurons have particular patterns in them; a feeling of emptiness, even if part of this brain state, is itself a neural pattern and certainly does not correspond to the absence of a mind.
The zeroth virtue or void - insofar as we believe in it - corresponds to a particular mode of thinking; it's certainly not an absence of mind. Emptiness, no-mind, the Void of Musashi: all these things are modes of thinking, not the absence of any sort of reified spiritual substance. See also the fallacy of the ideal ghost of perfect emptiness in philosophy.
Cf. Mushin
And this critique I upvoted, because it is both clear and a valuable point. I still think you're using an uncharitable definition of the word "mind," but as assuming charity could lead to illusions of transparency it's valuable to have high standards for quotes.
You've mentioned this before, and I don't really know where it comes from. Do you have any specific philosopher or text in mind, or is this just a habit you perceive in philosophical argument? If so, in whose argument? Professional or historical or amateur philosophers?
Aside from some early-modern empiricists, and maybe Stoicism, I can't think of anything.
I'm amazed how you guys manage to get all that from "dur". My communication skills must be worse than I thought.
Context helps.
I agree that the response was not particularly charitable, but it's nevertheless generally a type of post that I would like to see more of on LessWrong-- I think that style of reply can be desirable and funny. See also this comment.
Memories can be vile, repulsive little brutes. Like children, I suppose. haha.
But can we live without them? Memories are what our reason is based upon, if we can't face them, we deny reason itself! Although, why not? We aren't contractually tied down to rationality!
There is no sanity clause!
-- The Joker, Batman: The Killing Joke
Obsoleted by sticky notes.
"Junior", FIRE JOE MORGAN
This is my home, the country where my heart is;
Here are my hopes, my dreams, my sacred shrine.
But other hearts in other lands are beating,
With hopes and dreams as true and high as mine.
My country’s skies are bluer than the ocean,
And sunlight beams on cloverleaf and pine.
But other lands have sunlight too and clover,
And skies are everywhere as blue as mine.
-Lloyd Stone
Obviously he never visited the British Isles :D
Duplicate, please delete the other.
Michael Lewis, Moneyball, ch. 4 ("Field of Ignorance")
CS Lewis, The Screwtape Letters
"If your plan is for one year, plant rice. If your plan is for ten years, plant trees. If your plan is for a hundred years, educate children." - Confucius
...If your plan is for eternity, invent FAI?
Depends how you interpret the proverb. If you told me the Earth would last a hundred years, it would increase the immediate priority of CFAR and decrease that of SIAI. It's a moot point since the Earth won't last a hundred years.
Sorry, Earth won't last a hundred years?
The idea seems to be that even if there is a friendly singularity, Earth will be turned into computronium or otherwise transformed.
I guess he means “only last a hundred years”, not “last at least a hundred years”.
Just to make sure I understand: you interpret EY to be saying that the Earth will last more than a hundred years, not saying that the Earth will fail to last more than a hundred years. Yes?
If so, can you clarify how you arrive at that interpretation?
“If you told me the Earth would only last a hundred years (i.e. won't last longer than that) .... It's a moot point since the Earth won't only last a hundred years (i.e. it will last longer).” At least that's what I got on the first reading.
I think I could kind-of make sense “it would increase the immediate priority of CFAR and decrease that of SIAI” under either hypothesis about what he means, though one interpretation would need to be more strained than the other.
The idea is that if Earth lasts at least a hundred years, (if that's a given), then the possibility of a uFAI in that timespan severely decreases -- so SIAI (which seeks to prevent a uFAI by building a FAI) is less of an immediate priority and it becomes a higher priority to develop CFAR that will increase the public's rationality for the future generations, so that the future generations don't launch a uFAI.
(The other interpretation would be “If the Earth is going to only last a hundred years, then there's not much point in trying to make a FAI since in the long-term we're screwed anyway, and raising the sanity waterline will make us enjoy more what time there is left.)
EDIT: Also, if your interpretation is correct, by saying that the Earth won't last 100 years he's either admitting defeat (i.e. saying that an uFAI will be built) or saying that even a FAI would destroy the Earth within 100 years (which sounds unlikely to me -- even if the CEV of humanity would eventually want to do that, I guess it would take more than 100 years to terraform another place for us to live and for us all to move there).
So, we can construct an argument that CFAR would rise in relative importance over SIAI if we see strong evidence that the world as we know it will end within 100 years, and an argument with the same conclusion if we see strong evidence that it will last for at least 100 years.
There is something wrong.
I was just using "Earth" as a synonym for "the world as we know it".
EY does seem in a darker mood than usual lately, so it wouldn't surprise me to see him implying pessimism about our chances out loud, even if it doesn't go so far as "admitting defeat". I do hope it's just a mood, rather than that he has rationally updated his estimation of our chances of survival to be even lower than they already were. :-)
I am surprised that this claim surprises you. A big part of SI's claimed value proposition is the idea that humanity is on the cusp of developing technologies that will kill us all if not implemented in specific ways that non-SI folk don't take seriously enough.
Of course you're right. I guess I haven't noticed the topic come up here for a while, and haven't seen the apocalypse predicted so straightforwardly (and quantitatively) before so am surprised in spite of myself.
Although, in context, it sounds like EY is saying that the apocalypse is so inevitable that there's no need to make plans for the alternative. Is that really the consensus at EY's institute?
I have no idea what the consensus at SI is.
Nanotech and/or UFAI.
-- Tenzin Gyatso, 14th Dalai Lama
Not all that rational. Note that he requires scientific proof before he is willing to change his beliefs. The standards should be much lower than that.
No, the claim is that a scientific proof is sufficient from him to feel the need to change his beliefs. It isn't that it's necessary.
Technically true; but, nice though that is, saying that scientific proof would force you to change your beliefs still isn't a very impressive show of rationality. It would be better if he had said "Whenever science and Buddhism conflict, Buddhism should change".
I know, it is good to hear it from a religious figure, but if it were any other subject the same claim would leave you indifferent. "If it were scientifically proven that aliens don't exist I will have to change my belief in them." Sound impressive? No? Then the Dalai Lama shouldn't get any more praise just because it's about religion.
Has anyone taken the time to present to the Dalai Lama a list of things about Buddhism that science proves (or can convincingly demonstrate to be) wrong?
When would you say that science and X is in conflict when there isn't scientific proof that X is wrong?
Science is a method. In itself it's about doing experiments. It's not about the ideology of the scientist that might conflict with X even if there's no proof that X is wrong.
Science and X are in conflict when on the whole there is more scientific evidence that X is wrong than there is evidence that it is right. Saying "I'll change my belief if science proves me wrong" SOUNDS reasonable, but it is the kind of thing homeopaths say to pretend to be scientific while resting secure in the knowledge that they will never have to actually change their mind, because they can always say that it hasn't been "proven" yet.
There is no "scientific proof" that there are no aliens. There is no "scientific proof" that the Earth is 4.5 billion years old. Not in the sense that there is a "proof" of Bayes' theorem. And that's where the whole problem is. You can't limit yourself to changing your beliefs only when they are "proven" wrong. You should change your beliefs when they are at odds with evidence and Occam's Razor.
The Dalai Lama believes in reincarnation (or at least he officially says so; I don't know which are his true beliefs and which is political positioning, but let's assume he's honest). There is no "scientific proof" that reincarnation is impossible, so he can safely boast about how open he is to science. And yet, if you understand science, the evidence that there is no such thing as reincarnation is overwhelming.