calling a fact unlikely is an insult to your prior model, not the fact itself
Not necessarily. Your model could have been quite reasonable, and yet something weird happened in the world. Sometimes, people win the lottery twice on the same day.
I think EY is pointing to the case of somebody winning the lottery twice in a lifetime, which people would think is incredibly weird, despite it being very normal - see http://www.amazon.com/Understanding-Probability-Chance-Rules-Everyday/dp/0521833299. I suspect that the "looks weird" due to having the wrong model is more common than "looks weird" due to being an outlier.
One person who resisted Ronald was Ayn Rand. As one of the young libertarians (Ronald’s friend Murray Rothbard was another) who were invited to her apartment for intellectual discussions, he was cast into oblivion after a difference of opinion about . . . Rachmaninoff. Guests were asked to say who their favorite composers were, and when Rand’s turn came, she said “Rachmaninoff,” with specific reference to his second piano concerto. “Why?” Ronald asked. “Because he was the most rational,” Rand responded. At which Ronald laughed, thinking it must be a joke. He knew that the composer had dedicated that concerto to his psychiatrist — and anyway, rationality had nothing to do with its greatness. But Ronald’s laughter resulted in exile, and the loss of friends who were dear to him.
From an obituary for Ronald Hamowy.
(a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they're disagreeing with the general consensus.
Do you think Dmytry might be a good case study for this? I thought he had some interesting and novel ideas about processes/algorithms that at least didn't seem obviously wrong as well as some technical understanding of things like Solomonoff Induction, and also had strong disagreements with many of us regarding FAI and AI Risk. Should we have "extended our hands" to him more (at least before he became increasingly trollish), and if so how? (How would you taboo "extend hands" generally and in this specific instance?) If not, do you have someone else in mind who could serve as a concrete example?
If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting".
Isn't that as wrong and misleading as using 'Rational Dieting'? Wouldn't 'Optimal' imply that this is the very best way to diet, when the article is actually on 'Comparing Evidence for Four Diets'? Just as 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting (as opposed to four contributing things to keep in mind, which is why you should use 'Four Biases Screwing Up Your Diet' as a title), doesn't 'Optimal' imply the wrong thing too? It seems to me you are committing different fallacies (or errors) while trying to fix the previous fallacies (or errors) committed through misuse of the word 'rational'.
Meta: I suggest creating a sequence index, and putting a link to the next post in the sequence at the bottom of each post, like you already have for all your other sequences.
Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar.
I can't find mention of this on LW and the first few things Google turns up have to do with treating kidney stones, which doesn't seem relevant. What benefits do you get when this "works dramatically"?
The first time I took supplemental potassium (50% US RDA in a lot of water), it was like a brain fog lifted that I never knew I had, and I felt profoundly energized in a way that made me feel exercise was reasonable and prudent, which resulted in me and the roommate that had just supplemented potassium going for an hour long walk at 2AM.
Experiences since then have not been quite so profound (the first one was probably so stark because I was likely fixing an acute deficiency), but I can still count on a moderately large amount of potassium to give me a solid, nearly side-effect-free performance boost for a few hours.
I had a similar experience the first time I supplemented magnesium. Long lasting, non-jittery energy spike. I felt stronger (and empirically could in fact lift more weight), felt better, and was extremely happy. The effect decreased the next few times. After 4 doses (of 50% RDA, spread out over 2 weeks) I began to have adverse effects, including heart palpitation, weakness, and "sense of impending doom".
I wonder if there is a general physiological response to a sudden swing in electrolyte balance that causes the positive effect, rather than the removal of a deficiency.
Potassium is one of the major ions moved around during neuron action potentials, and the RDA is waaay above what almost everyone gets (you would need to eat about 12 bananas a day to meet it). The idea is something like: it helps neural transmission work.
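The "12 bananas" figure checks out as rough arithmetic. A minimal sketch, with the caveat that the RDA and per-banana values below are approximate assumptions on my part, not numbers from the comment:

```python
import math

# Approximate figures (assumptions): the US adult potassium
# recommendation is about 4700 mg/day, and a medium banana
# supplies roughly 420 mg.
RDA_MG = 4700
BANANA_MG = 420

# Whole bananas needed per day to reach the recommendation.
bananas_per_day = math.ceil(RDA_MG / BANANA_MG)
print(bananas_per_day)  # 12
```

With these inputs, 4700 / 420 ≈ 11.2, so a dozen bananas a day is about right.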
Good post.
What about using the word 'rational' for alliterative purposes? :)
Does anyone know any good dubstep mixes of classical music
Here are some, but they're not great. As I mentioned in an early draft of How to Fall in Love with Modern Classical Music, Nero's "Doomsday" samples from my favorite piece of contemporary classical music, John Adams' Harmonielehre. Also see Rudebrat's "Amadeus" and this Fur Elise dubstep remix. The best I could find in 5 minutes was Dubstep Beethoven.
As I mentioned in How to Fall in Love with Modern Classical Music,
Wow! Luke, I somehow totally missed that you had an interest in this subject. You even have Ferneyhough on that page! (And Murail -- who was actually a teacher of mine.)
I'm about ready to forgive you for every sin you've ever committed -- maybe even including the use of the word "classical". :-)
SI + $10.
[veering off topic]
So, since you guys know something about music...
I think I have fairly poor taste in music. Perhaps as a result of growing up listening to NES and SNES-era video game music all the time, I have an inordinate fondness for the sound of MIDI files, which are supposed to be one of those things everyone hates. As a matter of fact, I tend to feel that video game music has gotten notably worse as the technical capabilities of game consoles have gotten better. (I have three hypotheses that could explain this. One is that the music has improved but my taste in music sucks. The second is that voice acting competes with music for players' attention, and that it's no coincidence that the music stopped being as interesting at the same time voice acting became more common. The third is that improvements in technology "freed" composers from having to rely on melodic complexity alone to hold gamers' attention, so melodies have gotten less interesting.)
Anyway, what I'm really asking is, are those old game soundtracks actually any good, or do I just have no taste?
The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.
Thumbs up for this; I might even suggest making it a "tl;dr". In print, I think sometimes "very short abstraction - concrete examples - moderate-length abstraction" works well.
I feel like there's a small inferential gap between the Ayn Rand anecdote and the Objectivist failure mode as you've presented it: the anecdote establishes unjustified personal antipathy on the part of a group leader, but, absent context, that doesn't lead inevitably to its target getting voted off the island.
People who've read about the history of Objectivism would probably be able to fill this gap with their own knowledge. People who've internalized Objectivism's reputation as a cult would probably fill it with an assumption (a correct one, as it turns out). But I don't think those sets cover the space of possible readers all that well.
Does anyone know any good dubstep mixes of classical music, by the way?
Can't say I do. On the other hand, there's always Vocaloid (eg. The Symphonic Pilgrimage of Luka)...
This post should really (also) be part of the Craft and the Community sequence. The insight it conveys seems very relevant and very valuable, and I don't recall it being stated anywhere near as explicitly.
Coming from a hard-core Objectivist: the Objectivist community is unfortunately rife with all sorts of so-called "schisms". I think this is intrinsic to any community of thinkers focused on objectivity/optimality/rationality/etc. in general, because inevitably people will feel differently on a given issue, and then each group goes around accusing the other of not really being objective or rational or optimal.
This leads to me having to qualify a statement about some issue X with something like this:
...As a result of pretty much
Now it should be said of course that one group is actually right
I think this ignores the whole concept of probability.
If one group says it will rain tomorrow, and another group says it will not, then of course tomorrow one group will be right and one group will be wrong, but that would not be enough to mark one of those groups as irrational today. Even according to the best knowledge available, the probabilities of rain and no rain could be 50:50. Then if tomorrow one group is proved right and the other proved wrong, it would not mean one of them was more rational than the other.
Even if we are not talking about a future event, but about a present or past event, we still have imperfect information, so we are still within the realm of probability. It is still sometimes possible to rationally derive different conclusions.
The problem is that to form a perfect opinion about something, one would need not only perfect reasoning, but also perfect information about pretty much everything (or at least perfect knowledge that the parts of information you don't have are guaranteed to have no influence on the topic you are thinking about). Even if for the sake of discussion we assume that Ayn Rand (or anyone trying to model her) had perfect reasoning, she still could not have had perfect information, which is why all her conclusions were necessarily probabilistic. So unless the probability is something like over 99%, it is quite legitimate to disagree rationally.
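The rain example above can be made concrete with a proper scoring rule: if 50:50 really was the best forecast available, the outcome cannot distinguish the two groups. A minimal sketch using the Brier score (the numbers are illustrative, not from the comment):

```python
def brier_score(forecast_p, outcome):
    """Squared error between a probability forecast and a 0/1 outcome.

    Lower is better; an honest 0.5 forecast scores 0.25 either way.
    """
    return (forecast_p - outcome) ** 2

# Two groups who both assign 50% to rain are scored identically
# whether or not it rains, so the single outcome alone cannot show
# that one group was more rational than the other.
score_if_rain = brier_score(0.5, 1)     # 0.25
score_if_dry = brier_score(0.5, 0)      # 0.25
print(score_if_rain == score_if_dry)    # True
```

Only a track record over many forecasts, not one resolved event, separates better-calibrated reasoners from worse ones.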
That being said, identifying optimal, mainstream positions of a given philosophy is absolutely good for the philosophy per se.
Good grief, how can you do that when there is no agreement about what optimal means?
Unilaterally.
You say that paleo-inspired diets "have helped many other people in the community." What percent of people in this community have benefited from those diets how much, and how does this compare with other diets, e.g. DASH?
Moreover, if you want to prevent Less Wrong from becoming a cult like the Objectivists, it may be advisable to avoid performing rituals explicitly modeled after religious ceremonies, like this or this.
Mmm. I think it's fairly clear that modeling ceremonies on religious rites (or doing much ritual at all outside a certain narrow scope, for that matter) is more likely than the alternative to lead to undesirable perceptions of LW. And PR is important, yes. But I'm not convinced that they're actually epistemically dangerous to any significant...
One man's modus ponens is another man's modus tollens: I, for one, found the Pledge of Allegiance really frigging creepy as a kid, and I'm not sure how I feel about it even now.
I pledge allegiance to the prime number 2, the prime number 3, and the prime number 5. And to their product, 30, and their sum, 10...
Expecting small children to give, every weekday morning, a solemn vow filled with patriotic propaganda whose ramifications they can't even begin to understand, OR ELSE, sounds like something you'd find in a totalitarian state.
Maybe, but as one small data point, I was really surprised (and creeped out) to just now infer from MaoShen's comment, and confirm on Wikipedia, that the Pledge of Allegiance is recited at the beginning of every school day. In my country, the closest cultural equivalent is done once per year, on Flag Day, and I had previously assumed the American Pledge was like that, being said on July 4th or at similarly significant moments.
Pledge of Allegiance is recited at the beginning of every school day
[Googles for it and reads it] Whaaaaaat??? O.o
As a child I had to pledge that I would become a law-abiding citizen of my country, and a member of the Communist party.
I have failed to adhere to both parts. The first part, because "my beloved homeland" does not exist anymore. The second part, knowingly and willingly. (Although, as a 6-year-old child, I would probably also have guessed that I would agree with both parts when I grew up. Mostly because of: "if that weren't a good thing, they would not ask me to promise it".)
Or maybe it's just because I had to recite the pledge only once. ;-)
(OK, technically I had to practice it a few times first.)
The pledge is a reasonable way to get kids to understand that they're part of a country,
If the status quo didn't already include the daily recitation of such a pledge, do you think you would suggest it as a way to get kids to understand that?
Data point: My home country, Australia, does not have a pledge of allegiance. Overt demonstrations of patriotism were limited to being expected to sing the national anthem in school assembly once a week. I personally feel that there is still plenty of patriotism to go around. However, a common perception of the US is that you guys are over-patriotic.
Thinking a bit more on this, I can't help wondering how much of this can be traced to free voting versus mandatory voting. How much of encouraging patriotism is an attempt to make people care enough to vote?
I wonder whether the reason why a lot of people don't realise it might be because it's not actually true.
I mean, ESR's argument seems to me incoherent and mostly aimed at finding a way to identify Barack Obama as not only an America-hater but also a freedom-hater. (Step 1: True US patriotism is more about loving the ideal of liberty and less about tribal attachment to the US as such. Step 2: Because for a while Barack Obama chose not to wear a flag pin, he doesn't love his country. Step 3, unstated but I think clearly there: Since true US patriotism means loving liberty and Barack Obama is not a true US patriot, he is opposed not only to the US but to liberty.) It's hard to avoid the suspicion that his characterization of US patriotism may be as much a matter of political convenience as the (absurd) inference he draws from Obama's not wearing a flag pin. Certainly at least one of them must be wrong; it cannot be true both that patriotism for Americans means loving their country "not as a thing in itself, but insofar as it embodies core ideas" and that not wearing a US flag pin indicates "a lack of love for America as it actually is" and therefore a lack of patri...
Requiring someone to make a mandatory pledge to a flag instills the Love of Freedom how...?
By making it Capitalized. Actually having the people loving freedom sounds all sorts of dangerous---people may expect you to let them do stuff. If you make them Love Freedom instead you should be able to keep them in line.
First things first: do people have to be part of a country? If the division of humanity into mutually distrustful camps is ultimately a problem, not a solution (I think it is; my evidence is history, nothing specific, just open a page at random), then you might be casually defending something that is as bad as, or worse than, religious tribalism.
"It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".
I'm starting to get very confused about what Eliezer means by "deflates to". I thought he meant "has the same meaning as" or "conveys the same meaning as", but now I think maybe he means "most of the time when you want to use the former, you should use the latter instead". Sorry if I'm still stuck on the by-now-not-quite-central topic of semantics, but I don...
In comments on this thread, the issue of diet and "consensus" came up. Why I consider this topic important here, quite in line with what EY asserted in his post, is shown in this New York Times column by John Tierney.
The issue is not this or that alleged fact. ("Saturated Fat is Harmful," or "Saturated Fat is Good" or even "We don't know") The issue is how we know what we know, and what we don't know, and how individual and social fallacies lead to possible error.
Tierney writes about cascades, social phenomena that c...
Typo report: there appears to be an underscore rather than a space in the sentence "personal_identity follows individual particles."
Is it the intention here to exclude people from the community who have doubts as to the universal applicability of Rationalism, in the general sense? Or someone who argues (even from a non-Rational standpoint) that a non-Rational method is optimal in a specific case? Or even someone who believes that from a Rational standpoint, a non-Rational method is optimal in general?
Obviously someone who uses non-Rational methods to conclude that non-Rational methods are in general superior has nothing to contribute on that subject. How closely do decisions about Rationality have to conform to the local norm for someone to be a real community member?
Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. [...] Nonetheless, the second Go stone placed to block the Objectivist Failure Mode is trying to define ourselves as a community around the cognitive algorithms;
I don't think Go makes a good metaphor here. Go isn't much of a game about preventing certain outcomes; it's a game where you trade territory for influence, a game about leaving aji open.
One possible strategy for making this easier is explicitly having sub-communities for each optimal thing, each of which explicitly includes some non-rationalists and excludes some rationalists. This is just based on the naive model that people want to identify their behaviour with a community or it will feel odd, and that there is some pressure not to have overlapping signals of membership in different tribes, since that would be confusing.
"It should be another matter if someone seems interested in the process, better yet the math, and has some non-zero grasp of it, and are just coming to different conclusions than the local consensus."
I anticipate that helping such people gain a better grasp of the process may well be the best possible demonstration that you care about the process itself. At minimum, providing rationalist adjustments to people's conclusions helps ME feel as though I have regard for the process, even if I'm currently still struggling to implement the process rigorously when deriving conclusions of my own.
[R]ationalist opinion leaders are better able to . . . give up faster when things don't work.
Why is this a good thing? It seems to me that people give up too easily just as much as—if not more than—the opposite, especially when they're trying something that they don't expect to work. You have to stick with it long enough to collect a reasonable amount of data.
...The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and
May I ask how many people any of you have seen walking around entirely barefoot, as opposed to wearing minimalist footwear of any kind?
I heard an amazing classical performance of Amon Tobin by the cover group for the proper Amon Tobin recently.
Seth Roberts's "Shangri-La diet", which was propagating through econblogs, led me to lose twenty pounds that I've mostly kept off, and then it mysteriously stopped working...
What does stopped working mean? Your weight got stable? You regained the pounds you lost? You regained more than you lost?
[ Reposting censored comment without the offending material ]
When I saw that Patri Friedman was wearing Vibrams (five-toed shoes) and that William Eden (then Will Ryan) was also wearing Vibrams, I got a pair myself to see if they'd work.
Is Patri Friedman supposed to be a good example of a rational person? From the Wikipedia article on him he appears to be quite a crank.
...And yet nonetheless, I think it worth naming and resisting that dark temptation to think that somebody can't be a real community member if they aren't eating beef livers and supplement
I think preventing "poseurs" requires going to a level deeper in the following ways:
Even though we might think that rationalists should agree if they all have the same information, we need to go deeper and acknowledge that we have no proven method which accomplishes this, and no clue whether we ever will. That creates pressure to agree. We need to release that pressure and just start where we are, living with the fact that all we can do is keep learning, sharing, and improving our methods, and hope everyone gets on the same page ...
It would be very sad to get lynched by the Greens while discussing the likelihoods of Many Words vs Many Worlds :)
Not all Authority is bad - probability theory is also a kind of Authority
Authority seems like a bad word to use here. I don't understand what you're trying to say. This is partially because:
Followup to: Rationality: Appreciating Cognitive Algorithms (minor post)
There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:
Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."
Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word. As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences. Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".
If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting is one about how the sunk cost fallacy causes people to eat food they've already purchased even if they're not hungry, or about how the typical mind fallacy or the law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is 'Dieting and the Sunk Cost Fallacy', unless it's an overview of four different cognitive biases affecting dieting. In which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind.
By the same token, a post about Givewell's top charities and how they compare to existential-risk mitigation is a post about optimal philanthropy, while a post about scope insensitivity and hedonic returns vs. marginal returns is a post about rational philanthropy, because the first is discussing object-level outcomes while the second is discussing cognitive algorithms. And either way, if you can have a post title that doesn't include the word "rational", it's probably a good idea because the word gets a little less powerful every time it's used.
Of course, it's still a good idea to include concrete examples when talking about general cognitive algorithms. A good writer won't discuss rational philanthropy without including some discussion of particular charities to illustrate the point. In general, the concrete-abstract writing pattern says that your opening paragraph should be a concrete example of a nonoptimal charity, and only afterward should you generalize to make the abstract point. (That's why the main post opened with the Ayn Rand anecdote.)
And I'm not saying that we should never have posts about Optimal Dieting on LessWrong. What good is all that rationality if it never leads us to anything optimal?
Nonetheless, the second Go stone placed to block the Objectivist Failure Mode is trying to define ourselves as a community around the cognitive algorithms; and trying to avoid membership tests (especially implicit de facto tests) that aren't about rational process, but just about some particular thing that a lot of us think is optimal.
Like, say, paleo-inspired diets.
Or having to love particular classical music composers, or hate dubstep, or something. (Does anyone know any good dubstep mixes of classical music, by the way?)
Admittedly, a lot of the utility in practice from any community like this one, can and should come from sharing lifehacks. If you go around teaching people methods that they can allegedly use to distinguish good strange ideas from bad strange ideas, and there's some combination of successfully teaching Cognitive Art: Resist Conformity with the less lofty enhancer We Now Have Enough People Physically Present That You Don't Feel Nonconformist, that community will inevitably propagate what they believe to be good new ideas that haven't been mass-adopted by the general population.
When I saw that Patri Friedman was wearing Vibrams (five-toed shoes) and that William Eden (then Will Ryan) was also wearing Vibrams, I got a pair myself to see if they'd work. They didn't work for me, which thanks to Cognitive Art: Say Oops I was able to admit without much fuss; and so I put my athletic shoes back on again. Paleo-inspired diets haven't done anything discernible for me, but have helped many other people in the community. Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar. Seth Roberts's "Shangri-La diet", which was propagating through econblogs, led me to lose twenty pounds that I've mostly kept off, and then it mysteriously stopped working...
De facto, I have gotten a noticeable amount of mileage out of imitating things I've seen other rationalists do. In principle, this will work better than reading a lifehacking blog to whatever extent rationalist opinion leaders are better able to filter lifehacks - discern better and worse experimental evidence, avoid affective death spirals around things that sound cool, and give up faster when things don't work. In practice, I myself haven't gone particularly far into the mainstream lifehacking community, so I don't know how much of an advantage, if any, we've got (so far). My suspicion is that on average lifehackers should know more cool things than we do (by virtue of having invested more time and practice), and have more obviously bad things mixed in (due to only average levels of Cognitive Art: Resist Nonsense).
But strange-to-the-mainstream yet oddly-effective ideas propagating through the community is something that happens if everything goes right. The danger of these things looking weird... is one that I think we just have to bite the bullet on, though opinions on this subject vary between myself and other community leaders.
So a lot of real-world mileage in practice is likely to come out of us imitating each other...
And yet nonetheless, I think it worth naming and resisting that dark temptation to think that somebody can't be a real community member if they aren't eating beef livers and supplementing potassium, or if they believe in a collapse interpretation of QM, etcetera. If a newcomer also doesn't show any particular, noticeable interest in the algorithms and the process, then sure, don't feed the trolls. It should be another matter if someone seems interested in the process, better yet the math, and has some non-zero grasp of it, and is just coming to different conclusions than the local consensus.
Applied rationality counts for something, indeed; rationality that isn't applied might as well not exist. And if somebody believes in something really wacky, like Mormonism or that personal identity follows individual particles, you'd expect to eventually find some flaw in reasoning - a departure from the rules - if you trace back their reasoning far enough. But there's a genuine and open question as to how much you should really assume - how much would be actually true to assume - about the general reasoning deficits of somebody who says they're Mormon, but who can solve Bayesian problems on a blackboard and explain what Governor Earl Warren was doing wrong and analyzes the Amanda Knox case correctly. Robert Aumann (Nobel laureate Bayesian guy) is a believing Orthodox Jew, after all.
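"Solve Bayesian problems on a blackboard" refers to computations like the following. This is a standard textbook-style base-rate example with illustrative numbers of my own choosing, not something taken from the post:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' theorem for a binary hypothesis:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Illustrative blackboard problem: a condition with a 1% base rate,
# and a test with 80% sensitivity and a 9.6% false-positive rate.
# A positive result raises the probability only to about 7.8%,
# far below most people's intuition.
p = posterior(prior=0.01, p_evidence_given_h=0.80, p_evidence_given_not_h=0.096)
print(round(p, 3))  # 0.078
```

Being able to run this kind of update correctly is evidence about someone's grasp of the process, independent of whichever object-level conclusions they hold.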
But the deeper danger isn't that of mistakenly excluding someone who's fairly good at a bunch of cognitive algorithms and still has some blind spots.
The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.
And then a purely metaphorical Ayn Rand starts kicking people out because they like suboptimal music. A sense of you-must-do-X-to-belong is also a kind of Authority.
Not all Authority is bad - probability theory is also a kind of Authority and I try to be ruled by it as much as I can manage. But good Authority should generally be modular; having a sweeping cultural sense of lots and lots of mandatory things is also a failure mode. This is what I think of as the core Objectivist Failure Mode - why the heck is Ayn Rand talking about music?
So let's all please be conservative about invoking the word 'rational', and try not to use it except when we're talking about cognitive algorithms and thinking techniques. And in general and as a reminder, let's continue exerting some pressure to adjust our intuitions about belonging-to-LW-ness in the direction of (a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they're disagreeing with the general consensus.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "The Fabric of Real Things"
Previous post: "Rationality: Appreciating Cognitive Algorithms"