Firewalling the Optimal from the Rational
Followup to: Rationality: Appreciating Cognitive Algorithms (minor post)
There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:
Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."
Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word. As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences. Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".
If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting is if you're writing about how the sunk cost fallacy causes people to eat food they've already purchased even if they're not hungry, or if you're writing about how the typical mind fallacy or law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is 'Dieting and the Sunk Cost Fallacy', unless it's an overview of four different cognitive biases affecting dieting. In which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind.
By the same token, a post about Givewell's top charities and how they compare to existential-risk mitigation is a post about optimal philanthropy, while a post about scope insensitivity and hedonic returns vs. marginal returns is a post about rational philanthropy, because the first is discussing object-level outcomes while the second is discussing cognitive algorithms. And either way, if you can have a post title that doesn't include the word "rational", it's probably a good idea because the word gets a little less powerful every time it's used.
Of course, it's still a good idea to include concrete examples when talking about general cognitive algorithms. A good writer won't discuss rational philanthropy without including some discussion of particular charities to illustrate the point. In general, the concrete-abstract writing pattern says that your opening paragraph should be a concrete example of a nonoptimal charity, and only afterward should you generalize to make the abstract point. (That's why the main post opened with the Ayn Rand anecdote.)
And I'm not saying that we should never have posts about Optimal Dieting on LessWrong. What good is all that rationality if it never leads us to anything optimal?
Nonetheless, the second Go stone placed to block the Objectivist Failure Mode is trying to define ourselves as a community around the cognitive algorithms; and trying to avoid membership tests (especially implicit de facto tests) that aren't about rational process, but just about some particular thing that a lot of us think is optimal.
Like, say, paleo-inspired diets.
Or having to love particular classical music composers, or hate dubstep, or something. (Does anyone know any good dubstep mixes of classical music, by the way?)
Admittedly, a lot of the utility in practice from any community like this one can and should come from sharing lifehacks. If you go around teaching people methods that they can allegedly use to distinguish good strange ideas from bad strange ideas, and there's some combination of successfully teaching Cognitive Art: Resist Conformity with the less lofty enhancer We Now Have Enough People Physically Present That You Don't Feel Nonconformist, that community will inevitably propagate what they believe to be good new ideas that haven't been mass-adopted by the general population.
When I saw that Patri Friedman was wearing Vibrams (five-toed shoes) and that William Eden (then Will Ryan) was also wearing Vibrams, I got a pair myself to see if they'd work. They didn't work for me, which thanks to Cognitive Art: Say Oops I was able to admit without much fuss; and so I put my athletic shoes back on again. Paleo-inspired diets haven't done anything discernible for me, but have helped many other people in the community. Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar. Seth Roberts's "Shangri-La diet", which was propagating through econblogs, led me to lose twenty pounds that I've mostly kept off, and then it mysteriously stopped working...
De facto, I have gotten a noticeable amount of mileage out of imitating things I've seen other rationalists do. In principle, this will work better than reading a lifehacking blog to whatever extent rationalist opinion leaders are better able to filter lifehacks - discern better and worse experimental evidence, avoid affective death spirals around things that sound cool, and give up faster when things don't work. In practice, I myself haven't gone particularly far into the mainstream lifehacking community, so I don't know how much of an advantage, if any, we've got (so far). My suspicion is that on average lifehackers should know more cool things than we do (by virtue of having invested more time and practice), and have more obviously bad things mixed in (due to only average levels of Cognitive Art: Resist Nonsense).
But strange-to-the-mainstream yet oddly-effective ideas propagating through the community is something that happens if everything goes right. The danger of these things looking weird... is one that I think we just have to bite the bullet on, though opinions on this subject vary between myself and other community leaders.
So a lot of real-world mileage in practice is likely to come out of us imitating each other...
And yet nonetheless, I think it worth naming and resisting that dark temptation to think that somebody can't be a real community member if they aren't eating beef livers and supplementing potassium, or if they believe in a collapse interpretation of QM, etcetera. If a newcomer also doesn't show any particular, noticeable interest in the algorithms and the process, then sure, don't feed the trolls. It should be another matter if someone seems interested in the process, better yet the math, has some non-zero grasp of it, and is just coming to different conclusions than the local consensus.
Applied rationality counts for something, indeed; rationality that isn't applied might as well not exist. And if somebody believes in something really wacky, like Mormonism or that personal identity follows individual particles, you'd expect to eventually find some flaw in reasoning - a departure from the rules - if you trace back their reasoning far enough. But there's a genuine and open question as to how much you should really assume - how much would be actually true to assume - about the general reasoning deficits of somebody who says they're Mormon, but who can solve Bayesian problems on a blackboard, explain what Governor Earl Warren was doing wrong, and analyze the Amanda Knox case correctly. Robert Aumann (Nobel laureate Bayesian guy) is a believing Orthodox Jew, after all.
But the deeper danger isn't that of mistakenly excluding someone who's fairly good at a bunch of cognitive algorithms and still has some blind spots.
The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.
And then a purely metaphorical Ayn Rand starts kicking people out because they like suboptimal music. A sense of you-must-do-X-to-belong is also a kind of Authority.
Not all Authority is bad - probability theory is also a kind of Authority and I try to be ruled by it as much as I can manage. But good Authority should generally be modular; having a sweeping cultural sense of lots and lots of mandatory things is also a failure mode. This is what I think of as the core Objectivist Failure Mode - why the heck is Ayn Rand talking about music?
So let's all please be conservative about invoking the word 'rational', and try not to use it except when we're talking about cognitive algorithms and thinking techniques. And in general and as a reminder, let's continue exerting some pressure to adjust our intuitions about belonging-to-LW-ness in the direction of (a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they're disagreeing with the general consensus.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "The Fabric of Real Things"
Previous post: "Rationality: Appreciating Cognitive Algorithms"
Comments (339)
Mmm. I think it's fairly clear that modeling ceremonies on religious rites (or doing much ritual at all outside a certain narrow scope, for that matter) is more likely than the alternative to lead to undesirable perceptions of LW. And PR is important, yes. But I'm not convinced that they're actually epistemically dangerous to any significant degree.
There's a lot of possible reasons to do ritual. The two ceremonies you link seem to mainly fall under the general heading of "affirmation of shared values", which could be used as part of a more general Dark Artsy scheme but don't seem terribly dangerous in themselves; rituals with those aims show up in dozens of secular contexts, from the Boy Scouts to martial arts dojos to the Pledge of Allegiance recited by American schoolchildren. I might have been given pause if it snuck in some seriously controversial content, like a pledge to sign up for cryonics or that believers in the collapse postulate were stupid and also evil, but as long as it limits itself to cheering for a materialistic humanism I don't think there's much to object to. That's way too general to be epistemically risky, and attempts to cast it as such would probably be more funny than menacing.
Now, why pull from a theological source when there's all these other sources available? Well, religions have been doing it the longest, for one thing. In the absence of a deep understanding of the mechanics of something, a good heuristic for getting it done is to find someone that does it well and plagiarize.
Many secular institutions perform ceremonies of typically two kinds:
Celebrating members who enter, leave, or change rank within the organization (e.g. academic graduation, martial art belt change, ...). These rituals don't seem to pose a hazard to epistemic or instrumental rationality, and can actually be quite useful to ensure that members of the organization know who the other members are and what their role is.
Celebrating anniversaries, typically the founding date of the organization, or some other date related to prominent past members or relevant historical figures (e.g., a physics department celebrating 100 years since the Annus Mirabilis, or a computer science department celebrating the 100th birthday of Alan Turing). Again, these rituals don't seem harmful, and they might be useful to reaffirm the mission of the organization.
Making children recite the Pledge of Allegiance, on the other hand, is indeed a form of essentially religious indoctrination, even without the "under God" bit.
There is a very thin line between government and organized religion, and instances of crossing it are not unknown, in one direction (theocracies) or the other (political religions, such as North Korea's Juche or Bolshevism in the former Communist states). Even countries which are in general considered to do a good job at keeping the state separated from religion, like the US, occasionally resort to religious techniques of indoctrination.
Religious rituals are effective at indoctrinating people: make people recite your "Truths" until they memorize and chant them mechanically without paying attention to their implications, associate them with all kinds of positive rewards, from large carbohydrate-rich meals to the sense of belonging to a close-knit community, and the effect you get is that you lower people's critical thinking skills and make them more prone to accept your "Truths" without question, and become emotionally attached to them so that they will find rationalizations instead of throwing them away when presented with contrary evidence.
Is this the proper way of disseminating rationalist values? I don't think so.
Even if you were 100% sure that whatever you are endorsing is so indisputable that there will be never the need to change it, people should believe rational arguments because they critically analyzed them and found that they stand on their own merits, not because they have been psychologically conditioned to believe them.
Sure, people believing in truths because they've repeated them a lot rather than because they've digested them and updated all their beliefs correspondingly is a problem. But it's also a problem to see a new belief, agree with it, and then not repeat it enough to update all of your beliefs correspondingly. Getting a skill to the 5-second level takes practice, and often a lot of practice.
Sure, but the way to practice these skills is to apply them to actual problems, not to mindlessly recite their principles.
Recitation and worship can turn even good rational principles into articles of faith, disconnected from anything else, which you just "believe to believe" rather actually understand and apply.
V_V and Vaniver both make really good points, but the fact is that the U.S. was not built to be completely rationalist, and people in general are not rationalists.
It's a communal set of rules for a people and a place that's designed to give the members the most freedom while still ensuring stability and order. And it has a really good track record of success in doing that.
I agree that it's not an optimal solution in a future, ideally rationalist world. But it's not a tool for teaching children to think for themselves. It's a tool to get them to follow the social rules. And I'll tell you, children want their own way and DO NOT want to follow rules. And if you let them have their way all the time you WILL spoil them. There's a time to teach rules-following (especially rules that protect liberties and freedoms) and a time to teach mistrust of authority and rules-breaking.
What other device would you propose for a future, ideally rationalist world? I'm not being facetious here. I'm curious. Spawned by the Weirdtopia idea, can you think of a better solution?
I personally think of it as like teaching an apprentice. Apprentices weren't taught the why's. They were taught the how's. As a journeyman and a master you discovered the why's. Kids are apprentice citizens.
One man's modus ponens is another man's modus tollens: I, for one, found the Pledge of Allegiance really frigging creepy as a kid, and I'm not sure how I feel about it even now.
I would recite the Pledge of Allegiance ending, "With freedom, and justice, for all except the children."
I'm not sure it would be a bad thing if they had a ceremony where those students who wanted to recited the Twelve Virtues once a week, or Frankena's list of terminal values, or the rules of algebra. Repetition is a perfectly good way to install associations and thereby skill - you can use it to repeat good things or bad things. It's not much different from printing that way.
Well, sure, repeating the multiplication table or the digits of Pi every day can be quite useful. It helps you memorize products of single-digit numbers and also Pi.
But repeating a pledge to obey an institution is, IMO, irrational at best. Imagine that the United States did something insane (well, more insane than all the things it's doing right now), like starting World War III with no provocation whatsoever. Would it still be deserving of your allegiance? That is to say, would you still do your best to uphold its principles and further its goals, which now include "kill everyone on Earth with nukes"?
Now, if the Pledge said something like, "I pledge to support those institutions who have a reasonably good chance of improving the lives of all humans, and to reform or abolish those institutions that hinder progress toward this goal", then I could probably get behind it.
Similarly, I wouldn't pledge to drive my current car to work every day, be it rain, hail, or shine -- but I could reasonably promise to stick with my car while it functions, fix it when it breaks down if fixing it is financially feasible, or get a new car if I can afford it. Such an oath may not sound as solemn, but at least it's practical.
I agree the Pledge sounds a bit creepy in retrospect - I was only disagreeing with the idea that any possible thing you repeat at the start of class is creepy.
Ah, understood, that makes sense. Repeating the multiplication at the beginning of class would not be creepy at all (unless you also pledged allegiance to it, somehow, I suppose).
I pledge allegiance to the prime number 2, the prime number 3, and the prime number 5. And to their product, 30, and their sum, 10...
I don't think anyone has suggested that 'freedom and justice for all' should mean that children be treated as if they were morally responsible and independent agents. I think children probably aren't capable of freedom in any meaningfully political sense, and it seems like it would be enormously cruel to subject them to everything justice demands of an adult. At the very least, we'd be up to our ears in assault trials: kids are both violent and litigious.
If I believe that a subset of the population ought not receive justice, or is not capable of freedom, then it seems I ought not declare universal liberty and justice among the defining characteristics of the Republic I'm pledging allegiance to.
So, I expect we agree that 'freedom and justice for all' isn't only the proper motto of a republic which lets violent criminals do what they like, and prosecutes children for assault whenever they get into a schoolyard fight.
If so, then we agree that 'freedom and justice for all' can be the motto of a republic which makes exceptions of certain classes of people freedom-and-justice-wise. ETA: Clearly not every kind of exception is a fulfillment of the motto. So then the question is just whether or not children should be accorded whatever it is we promise by 'freedom and justice for all'. What do you think?
Well, there's two questions here.
The first is, if we don't believe that everyone is entitled to freedom and justice, is "freedom and justice for all" a proper motto for our republic? To which in principle my answer is "no," though in practice I would not, for example, endorse rewriting it to read "freedom and justice for those entitled to it," even if it turns out that's what we actually mean, because the political signalling effects of doing so would be too expensive.
The second is, to what degree are children entitled to freedom and justice? That's a much more complicated question, and I'm not sure I can answer it in any detailed way. At a very high level, my answer is "to the extent that granting them those things improves both their experience of life and that of the community, rounded to the nearest Schelling point."
That's also my answer for adults, incidentally.
Okay, I think we roughly agree then. Though I'd say that 'freedom and justice for all' can straightforwardly mean 'freedom and justice for everyone capable and worthy of being free and responding to the demands of justice' (where I take that to exclude children, the insane, criminals, etc. but include nearly everyone else).
As to the freedom and justice we accord children and adults, we agree at a general kind of level, though in particular cases I don't think the improvement of someone's experience should settle the question of whether or not (or the extent to which) they're entitled to freedom or justice.
or the prisoners, or the mentally retarded, or the civilians in the freedoms of war, or the soldiers in those of peace, or those merely bound by their necessities in a worse way than what we consider normal.
Actually, now that I think about it, exposing kids to more pragmatism is probably a good idea.
Or those who have too much love for paperclips.
The Pledge being creepy, sure, I can see that. (I wasn't entirely comfortable with reciting it either, after a certain age.) Culty? Not without throwing out any conventional definition of "cult".
I may have been a little hasty in implying that there's no epistemic danger in public avowal of shared values; I'd expect it to be a reinforcer of those values and to contribute to unanimity effects, although probably not very strongly. But I don't think it's anywhere near as much of a red flag as V_V seemed to be suggesting.
Expecting small children to give a solemn vow filled with patriotic propaganda every weekday morning that they can't even begin to know the ramifications of, OR ELSE, sounds like something you'd find in a totalitarian state.
It also sounds like something you would find in all sorts of other states that aren't totalitarian.
One interesting thing to note is that if you're accustomed to pledging your allegiance to something every day as a child, while you're still unable to enter into legal agreements and aren't thinking about them, it may not occur to you that when you go to school on your 18th birthday, you've just pledged your allegiance in a way that... might be legally binding?
Regardless of what sort of government expects its children to pledge allegiance every day, do you agree with the practice of making people pledge allegiance?
Allegiance is kind of vague. It could be interpreted to mean doing normal responsibilities (not being a criminal, paying your taxes) or it might be interpreted to mean total obedience. I'm not sure whether to agree or disagree with the pledge. Maybe I should disagree with it on the grounds that it is too vague and therefore doesn't protect reciters from feeling obligated to obey a tyrant, were one to end up in power.
Of course not. I was stabbing one of our soldiers in the back. Because, frankly, that metaphorical soldier had it coming.
This has actually been a problem with real-life examples. I've read that the oaths in Nazi Germany were specifically to Hitler himself, and that many members of the military felt bound by their oaths to obey orders, even when it was clear the orders shouldn't be obeyed. I think the critical danger is in giving oaths to an individual (who has a very real chance of being corrupted by power, unless they take action to prevent it).
I see the difference in that the U.S. pledge of allegiance is to the republic and its symbol, the flag. The saving factors to prevent abuses of power are:
The focus on allegiance to the nation as a whole, including all its members, its leaders, and its ideals.
The "with liberty and justice for all" line, which is the guarantee of what the State offers in return. The U.S. has to be worthy of the allegiance.
The other extreme wartime example is the U.S. Civil War, where many military officers left the army to join the Confederacy. They formed ranks and marched right out of West Point because they opposed the U.S. leadership. And the soldiers who stayed let them go, knowing they were going to help the seceding states fight. Even if they disagreed, it was felt the honorable thing to do was to let them go.
This idea shows up specifically in our military training and culture in the definition of lawful orders. The military culture and legal rules define your duty to obey all lawful orders from your chain of command, up to the President. So if you feel that an order is unlawful, it's actually your duty to disobey. Now, of course, that carries with it all the weight of being the first one to be the opposition, so it's no guarantee to prevent abuses of power, but it does exist.
I guess my point is that the danger is in making oaths to a person.
I agree that it's a form of indoctrination for children. But as long as the trade of allegiance and freedom it describes is a true and real one, I think it's a good thing to keep those principles in their minds.
I suppose it could, yet countries don't require you to do anything to place you in such legal binds. They have laws about "treason" that they can apply when people from their population don't act out allegiance, whether they have pledged it or not.
Sure, but the people have to enforce those laws (the government is something like 3% of the population, from what I understand, which means that the people could overwhelm them easily). So if the concept of allegiance is foreign to them, as opposed to being very familiar and feeling like an obligation, or if they haven't witnessed all the OTHER citizens pledging allegiance, it might feel like an empty word they can safely ignore.
If the concept of allegiance becomes completely foreign to the citizens of a country, then the country effectively ceases to exist.
Fair enough. A more accurate statement would be: "Expecting small children...of, OR ELSE, is a mind-control tactic that I feel is wrong to use on children without the capacity to counter it, which I would expect to find in blatantly controlling nations, and not in a supposedly free nation."
I just thought the first version flowed better.
It does flow well, in as much as the first thing (ridiculous pledge obligation) is already opposed by most of the audience and so they can be expected to applaud when the enemy is associated with the hated thing. Unfortunately it is a crude harnessing of a fallacy.
And I would have got away with it, too, if it weren't for your meddling rationality!
Are there countries generally regarded as non-totalitarian, other than the US, where people do anything like that?
If "anything like that" includes reciting prayers, practically all catholic private schools in Europe will count.
Well, that's what I thought too, but in those schools everyone is (supposed to be) a Catholic, and if not you (well, your parents) can choose a different school, whereas if I understand correctly children are asked to say the Pledge in all American schools, so (short of emigrating) you (and your parents) have no choice.
(Then again, some otherwise non-confessional schools in Italy keep a crucifix in each classroom -- I think it used to be mandated by law, but it no longer is and a few years ago a Muslim sued his son's school for that and managed to have it removed. But keeping around a sculpture that pupils might not even notice --I honestly can't even remember which of certain classrooms in my high school had one and which hadn't-- is a lot less scary than have everyone pledge allegiance every morning, IMO.)
Yes, FWIW catholic schools in the US do that too.
It is highly likely (that there is at least one). It is a kind of insane practice but it isn't quite that out of character for human social groups that I'd expect it to be a quirk unique to the USA.
Do all totalitarian states bother making all the children go to school and recite pledges?
Maybe, but as one small data point, I was really surprised (and creeped out) to just now infer from MaoShen's comment and check on Wikipedia that the Pledge of Allegiance is recited at the beginning of every school day. In my country, the closest cultural equivalent is done once per year, on "Flag Day", and I had previously assumed the American Pledge was like that, being said on July 4th or similar specially significant moments.
[Googles for it and reads it] Whaaaaaat??? O.o
Yep. I'm American. My school did it.
For what it's worth, I only remember doing so until fourth grade, or about nine years of age. I'm not sure if that makes it better or worse.
I now regret using it as an example, though. Evidently I grossly underestimated its potential sensitivity, and I really should have known better.
Likewise (except now I'm only creeped out, the surprise came a long time ago).
I don't recall whether we have one at all. I remember we have a national anthem that we sang occasionally. Something about "wealth for toil" is involved.
I have seen footage of a documentary about Cuba, that used kids reciting their allegiance to the State, Party etc as a way of showing what an evil place it was. To this Londoner, yes, the whole thing of kids reciting the Pledge is very creepy.
It's not actually required that children say it; it would, in fact, violate the Constitution to mandate political speech, even from students. But it's expected that students recite the Pledge, and most do.
Yeah. I can pledge allegiance, now, and when I do, I mean it - but coming out of the mouth of a child, it's as meaningless as they all know it is. When I was a kid, I knew it was all kinds of messed up. I suspected that I would agree with it when I was older, and I was right. That doesn't make it valid.
Yeah, but a lot of stuff is meaningless coming out of the mouth of a child. But you have to start teaching them about things like duty and loyalty at some point. The pledge is a reasonable way to get kids to understand that they're part of a country, and that there's a common moral and political activity that they'll one day be involved in.
If the status quo didn't already include the daily recitation of such a pledge, do you think you would suggest it as a way to get kids to understand that?
I think that's a practical question too complicated for me to answer. I would want some kind of voluntary activity like that in place. The pledge isn't great, but it's likely that something better would require more resources. And while I don't think (and would doubt) that the pledge has any kind of 'brainwashing' effect, it would be worth looking into whatever data we can gather about that.
Data point: My home country, Australia, does not have a pledge of allegiance. Overt demonstrations of patriotism were limited to being expected to sing the national anthem in school assembly once a week. I personally feel that there is still plenty of patriotism to go around. However, a common perception of the US is that you guys are over-patriotic.
Thinking a bit more on this, I can't help wondering how much of this can be traced to free voting versus mandatory voting. How much of encouraging patriotism is an attempt to make people care enough to vote?
Something that a lot of people, both inside and outside the US, don't realize is that what patriotism means in the US is not quite the same as what patriotism means in other countries.
I wonder whether the reason why a lot of people don't realise it might be because it's not actually true.
I mean, ESR's argument seems to me incoherent and mostly aimed at finding a way to identify Barack Obama as not only an America-hater but also a freedom-hater. (Step 1: True US patriotism is more about loving the ideal of liberty and less about tribal attachment to the US as such. Step 2: Because for a while Barack Obama chose not to wear a flag pin, he doesn't love his country. Step 3, unstated but I think clearly there: Since true US patriotism means loving liberty and Barack Obama is not a true US patriot, he is opposed not only to the US but to liberty.) It's hard to avoid the suspicion that his characterization of US patriotism may be as much a matter of political convenience as the (absurd) inference he draws from Obama's not wearing a flag pin. Certainly at least one of them must be wrong; it cannot be true both that patriotism for Americans means loving their country "not as a thing in itself, but insofar as it embodies core ideas" and that not wearing a US flag pin indicates "a lack of love for America as it actually is" and therefore a lack of patriotism.
And it's certainly not only in the US that patriotism tends to involve not only tribal loyalty to one's country but also love of what are taken to be its virtues. (Sometimes grand things like liberty and enterprise in the US, courage and fair play in the UK; sometimes little quirks like apple pie and baseball in the US, pubs and cricket in the UK.)
I'd be very interested in others' opinions: Is US patriotism really as much more "abstracted" than other nations' as ESR suggests? Is it true that "most Americans love their country ... not as a thing in itself, but insofar as it embodies core ideas about liberty"?
That doesn't seem true to me. Do you have anything more solid to back this up? Also, see Esar's comment below that the US is a mess but it's his mess. How is that not regular old tribal patriotism?
I would expect nearly all patriotic people to consider patriotism to their own country to be different in some fundamentally important way to patriotism to another. The other patriots don't care about Better Seating after all.
Requiring someone to make a mandatory pledge to a flag instills the Love of Freedom how...?
As a teaching tool it seems almost useless; the language is antiquated, way past age-appropriate for elementary school, and while the meaning of the Pledge might be the subject of a third-grade civics lesson I don't recall any substantial effort to break down its text in such a way as to integrate it into working knowledge.
Which now strikes me as a fairly clever bit of social engineering. At first I don't think you'd need meaningful content; if you and your classmates are facing the flag, right hand over heart, and quoting from the text in unison, you'd still get group cohesion effects if the text itself was a list of Vedic demons or multilingual translations of the word "pickle". But later, as children learn about concepts like duty by other means, they're supplied with associations left over from their childhood practice. At least in theory; in practice this might be ruined by other associations, as elementary school's usually not a terribly pleasant place for its inmates.
The "under God" bit can lead to some unpleasant cognitive dissonance as a secular child, too.
My thoughts exactly.
And I agree, 'under God' should be removed. But it's not really a big deal. A substantial part of the value of secularism is in the fact that you have to go against the grain a bit.
The whole thing, though, is a giant "under God" of patriotism. A small nod to religion isn't a big deal compared to that.
First things first: do people have to be part of a country? If the division of humanity into mutually distrustful camps is ultimately a problem rather than a solution (I think it is. My evidence is history; nothing specific, just open a page at random), you might be casually defending something that is as bad as, or worse than, religious tribalism.
You mean metaphysically? No. Practically? Yes. Having no citizenship is a pretty serious problem. And these mutually distrustful camps aren't very mutually distrustful at the moment. Most countries are actually very stable, peaceful, and trusting; more so now than at any other time in history. Other kinds of political organization may be feasible, and that's fine, but this one is working pretty well and changing things would probably result in trouble.
From a simple consequentialist perspective, I think it's hard to argue against the present system. Do you have an alternative suggestion?
At one time it was a practical necessity to belong to some religion or other.
Just because everyone believes you need one. But does that pass the PKD test?
Does that prove that nationalism is good...or that it's on its way out? Europe went through a period of religious bloodshed...followed by an era of religious tolerance...followed by a period of irreligion. SWIM?
Nations solve the problems created by nations. Up to a point. Does religion "work" when there is a respite in the slaughter?
Maybe things are changing anyway. I'm a citizen of England, and the UK, and the EU. If you are a USian, you are also in a federated superstate.
In any case you don't have to believe in (qua approve of) something just because you believe in (qua note the existence of) it.
Brainwashing success!
I don't agree with everything the country does, that's for sure. But on the broad strokes, I'm willing to stand for it.
As a child I had to pledge that I will become a law-abiding citizen of my country, and a member of the Communist party.
I have failed to adhere to both parts. The first part, because "my beloved homeland" does not exist anymore. The second part, knowingly and willingly. (Although, as a 6-year-old child, I would probably also have guessed that I would agree with both parts when I grew up. Mostly because of: "if that wouldn't be a good thing, they would not ask me to promise it".)
Or maybe it's just because I had to recite the pledge only once. ;-)
(OK, technically I had to practice it a few times first.)
Also because observations contradicted the belief that your country was good.
Wait, there are rituals? And someone deleted the links to them? Anyone want to tell me what they're talking about?
The rituals in question were my rationalist marriage ceremony and Raemon's solstice ritual. Obviously I have no objection to these being discussed, though I would strongly recommend doing so in a separate Discussion thread. As this thread is a part of a new sequence that may be used to introduce newcomers to LW, I am especially interested in keeping it clean.
Sure. I saved my thoughts for a different thread where they'll be more on-topic.
The question you asked me on this thread was deleted too. If you happen to ask it again I'll respond.
What does stopped working mean? Your weight got stable? You regained the pounds you lost? You regained more than you lost?
But of course this whole post is really about playing Go... ;-)
Not dubstep: 9th (The Man Who Swam Through A Speaker)
I think preventing "poseurs" requires going to a level deeper in the following ways:
Even though we might think that rationalists should agree if they all have the same information, we need to go deeper by acknowledging that we have no proven method which accomplishes this, and have no clue whether we ever will. It creates pressure to agree. We need to release that pressure and just start where we are, living with the fact that all we can do is keep learning and sharing and improving our methods for these and hope everyone gets on the same page eventually. It needs to be labeled as the far-off ideal that it is.
I agree with the problem you point out in undiscriminating skeptics, however I disagree with your solution there. Which was (paraphrasing) "The real rationalists are the ones who are able to disagree with the other skeptics and show that they've thought it out."
This encourages people to look for reasons to disagree in order to show off. It might, on the one hand, be a good thing, since it encourages people to think critically and they may find more mistakes that way, but going in the opposite direction of "Believe people are rationalists because they believe X, Y, and Z." is an over-correction. This is relevant because it is likely to promote schisms. A group of thinkers of any stripe are going to have an overwhelming number of disagreements as it is because groups of thinkers grow in their own directions and tend to refuse to conform to promote social cohesion.
When it comes down to it, promoting disagreement is no better aimed at the target of promoting truth-seeking than promoting agreement. If what you want is truth-seekers, promote truth-seekers.
There are nasty pitfalls to using labels at all but "rationalist" might be able to be a healthy and constructive label. Here's what I mean by pitfall and how I think a rationalist label might be made healthy and constructive:
When earning a label (whether that be a job title or a social status title), we're looking at it as a system to game (give the right answers on tests, do posturing) and it is those system-gaming efforts that result in the superficial and empty appearance of belonging (like undiscriminating skeptics who mock what you expect).
People desire to earn status, and that can be a really positive force, but only if it's directed at a goal and the right type of goal. "Disagree sometimes and support your point" will result in disagreements (not necessarily high quality ideas), whereas a goal like "make something that actually works" (a theory that's proven true, a social program that gets results, etc.) would be a wonderful "fire under the ass" to get us moving and figuring out how to do what works.
If status is going to be important to people, the way of determining whether they deserve it should not be quick and it should require that they game reality rather than gaming your model of what a rationalist is.
If you want to use quick mental models as shortcuts to save time while you're navigating the jungle of irrational people out there, this is understandable, everyone does it - but if you publish that here, it's going to become part of the culture, and applying these shortcuts when it comes to which people deserve respect is going to motivate people to value activity over results.
Speaking as a hard-core Objectivist: the Objectivist community is unfortunately rife with all sorts of so-called "schisms". I think this is intrinsic in any community of thinkers focused on objectivity/optimality/rationality/etc. in general, because inevitably people will feel differently on a given issue, and then each group goes around blaming the other for not really being objective or rational or optimal, etc.
This leads to me having to qualify a statement about some issue X with something like this:
Now it should be said of course that one group is actually right - but schisms are very unhealthy for any community, or any social group in general. The success of a social group per se is based very much on all-inclusiveness. That being said, identifying optimal, mainstream positions of a given philosophy is absolutely good for the philosophy per se.
So I would add something like: "firewall optimal philosophy from optimal community"
That is at least "inevitable" in groups that habitually mistake feelings for something objective.
Good grief, how can you do that when there is no agreement about what optimal means?
People inevitably feel differently on given issues in any group. Blaming the other side for not really being objective/rational/etc happens no more in Objectivism than any other group.
Let me add that there is no inherent propensity in Objectivism to substitute one's feelings for objective evaluations; if that's what you think, you're misunderstanding something. For example, Ayn Rand had an entire branch of her philosophy talking about art, music, and "aesthetics" in general. Her opinion on music wasn't purely based on her trying to pass off her personal feelings for an objective judgment, but rather was indeed a derivative position of her philosophical system. And there's nothing wrong with trying to identify objectively best or optimal music or other things, that's actually perfectly fine to do in philosophy - but if you're going to use differences as a basis for building a community, you're going to produce a horrible mess with schisms and splinter groups galore, which unfortunately hit the Objectivist community pretty badly. Hence: "firewall optimal philosophy from optimal community"
Well each person does it for themselves. Naturally the creators and leaders in the philosophy set the mainstream (er, sort of by definition)...
Unilaterally.
I think this ignores the whole concept of probability.
If one group says tomorrow it will rain, and another group says it will not, of course tomorrow one group will be right and one group will be wrong, but that would not be enough to mark one of those groups irrational today. Even according to the best knowledge available, the probabilities of raining and not raining could be 50:50. Then if tomorrow one group is proved right, and another is proved wrong, it would not mean one of them was more rational than the other.
Even if we are not talking about a future event, but about a present or past event, we still have imperfect information, so we are still within the realm of probability. It is still sometimes possible to rationally derive different conclusions.
The problem is that to get perfect opinion about something, one would need not only perfect reasoning, but also perfect information about pretty much everything (or at least a perfect knowledge that those parts of information you don't have are guaranteed to have no influence over the topic you are thinking about). Even if for the sake of discussion we assume that Ayn Rand (or anyone trying to model her) had perfect reasoning, she still could not have perfect information, which is why all her conclusions were necessarily probabilistic. So unless the probability is like over 99%, it is pretty legitimate to disagree rationally.
You entirely missed the point of my including that statement.
My intention was simply to stress that I'm not trying to say something like, "nobody can ever really know what the right answer is, so we should all just get along," or any related overly "open-minded" or "tolerationist" nonsense.
My point was that such differences are perfectly fine and meaningful to fight about philosophically, but that you shouldn't use one's position on derivative philosophical issues as the basis for community membership.
I thought it was ignoring the possibility that everyone involved could be wrong.
Worse, they could all be not even wrong.
Hm. There's an implicit "...iff the disagreeer has access to better information than she had" here, right?
If the disagreer has access to different information. Or just has different priors.
(I want to avoid the connotation "better information" = "strict superset of information".)
Point.
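The "different priors" point above can be made concrete with a toy Bayesian calculation (my own illustration, not from any commenter: the numbers are made up). Two agents see the same evidence and update by the same rule, yet end up with different posteriors purely because they started from different priors:

```python
# Hypothetical illustration: two agents with different priors update on the
# same evidence via Bayes' rule and still end with different posteriors.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) computed from P(H), P(E | H), and P(E | not-H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Shared evidence: E is twice as likely if the hypothesis is true.
lik_true, lik_false = 0.8, 0.4

agent_a = posterior(0.50, lik_true, lik_false)  # prior 0.50 -> posterior ~0.667
agent_b = posterior(0.10, lik_true, lik_false)  # prior 0.10 -> posterior ~0.182

print(round(agent_a, 3), round(agent_b, 3))
```

Neither agent has reasoned badly; the residual disagreement is entirely attributable to the priors, which is the sense in which disagreement here can be rational on both sides.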
Authority seems like a bad word to use here. I don't understand what you're trying to say. This is partially because:
The right kind of authority is based on logic and evidence. Most of most people's beliefs have not been personally verified by them. You probably haven't personally proven probability theory from scratch.
Fortunately, English allows us to qualify nouns with adjectives. Which allows us to distinguish between X-ish and Y-ish authorities.
Probability theory lets you do things with clearly defined ideas and evidence. Getting them clearly defined is the underwater part of the iceberg, which probability theory doesn't help with.
You say that paleo-inspired diets "have helped many other people in the community." What percent of people in this community have benefited from those diets how much, and how does this compare with other diets, e.g. DASH?
When I switched to a mostly paleo diet this spring, I stopped needing willpower to prevent weight gain. I suspect I experienced other advantages, but don't have good evidence for them.
It will be hard to tell what fraction have benefited, because people who found it hard or ineffective are less likely to write about it.
Maybe I should be more clear.
The anecdotes of a few people on this site mean very little to me in regards to the efficacy of a particular diet. There doesn't seem to be any experimental evidence with a reasonable sample size to suggest that Paleo diets actually lead to weight loss (there is evidence that DASH leads to weight loss). The paleo diet is relatively high in saturated fat (and there is a scientific consensus that high saturated fat intake causes heart disease) while DASH is not. Omitting grain and dairy eliminates the sources of some nutrients and I would hypothesize that a significant percent of people switching to a paleo diet don't actually compensate for that loss by getting those nutrients from other sources.
It just doesn't make sense to advocate for a paleo diet when there is no evidence of it performing better in the aggregate than diets which are supported by the scientific consensus. If I'm mistaken please link me to some good quality studies.
There is a lot of really bad "science" out there on diet; a political decision was made in the 1970s to promote low-fat diets, in spite of what most scientists thought. For a detailed story on this, and on what is known about fat and carbohydrates in diet, I suggest Gary Taubes' Good Calories, Bad Calories.
While little about diet is certain, the bulk of the scientific evidence is that "high saturated fat intake," in the context of a low-carbohydrate diet, does not increase real cardiac risk. On the contrary, high-fat low-carb diets, like the Atkins diet, lower cardiac risk factors.
The "scientific consensus" described above isn't.
This isn't about the "paleo diet," as such, except that paleo diets do tend to be high-fat and low-carb. We did not evolve eating grain, and grain is often highly processed to remove most fiber, creating rapid absorption of glucose into the bloodstream, which then requires fast insulin release to avoid toxic levels.
We can eat carbohydrates, but they were a small part of our diet, generally mixed with fiber, which slows digestion. Fat also does this. It's being claimed with substantial evidence that the "diseases of civilization," i.e., heart disease, diabetes, and cancer, are largely caused by diets with high carb content, especially highly processed carbs. High natural fat content does not seem to be a problem; quite the opposite.
I did the research and am betting my life on this. And I wish we knew more than we do. Taubes has started a Nutrition Science Initiative.
Would you mind linking to this research that shows low carb diets lower cardiac risk factors? All I really know about the matter is that in the aggregate people who actually study diet generally conclude that Atkins-like diets are not optimal for health. In particular, the US Department of Health and Human Services, the Centers for Disease Control, the American Heart Association, and the World Health Organization all seem to conclude that saturated fats directly increase cardiovascular risk.
You're also arguing against anything said by these organizations when discussing highly processed carbs. DASH specifically recommends making at least half of grains consumed whole, and the implication seems to be that the ideal would be eating no refined grains.
Okay, read Taubes' article in the New York Times, "What if it's all been a big fat lie?". That's ten years old, there has been research published since then, but nothing to change the basic conclusions.
I suggest reading it before the rest here!
The organizations are not "scientific." They are largely political creatures, and how they are funded can be an issue. If cholesterol is not the problem, what happens to the statin drug market? But I don't know that recommendations are driven by funding.
Taubes is a thorough science writer, a skeptic, and it is indeed science that he's interested in. He is not selling a diet.
Taubes covers the history of diet recommendations in the U.S. It's shocking.
Something brief: In 1957, the American Heart Association opposed Ancel Keys (the author of the epidemiological study that got the whole fat=bad thing going), with a 15-page report, saying there was no evidence for the fat/heart disease hypothesis. Less than four years later, a 2-page report from the AHA totally reversed that, and, according to Taubes, that report included a half-page of "recent scientific references on dietary fat and atherosclerosis," many of which contradicted the conclusions of the report, which recommended reducing the risk of heart disease by reducing dietary fat.
What happened? Did the science change that quickly? Read Taubes! (i.e., read the book, "Good Calories, Bad Calories." Taubes also has a recent book, less technical and more popular, I think, but I haven't read it.)
I could point to studies; the Atkins diet in particular has been studied independently, and it improves cardiac risk factors, it does not make them worse. Yet it's a high-fat diet. So what is the risk?
Yes. I'm arguing against a commonly-recommended diet. I'm suggesting that relying on these agencies and their recommendations, without understanding the science, is very dangerous.
Taubes had written a book about salt, and while doing the research he noticed nutritional "expert" after "expert" who had no clue how science works, whose reasoning was extremely poor and conclusion-driven. And he noticed the same when he started working on fat.
When I started reading in the field, out of personal necessity, I could see it myself, really poor "science" being commonly asserted as if it were simple fact.
Such as "a calorie is a calorie." I.e., it's said there is no difference between fat calories and carb calories, and the claim of Atkins that fat had a "metabolic advantage" was allegedly preposterous, this would supposedly violate the laws of thermodynamics.
However:
various foods take different amounts of energy to metabolize, and some calories are excreted.
food calories are not thermodynamic calories, and this is not merely the "kilocalorie" thing, they are modified according to metabolic factors estimated from studies that were done about a century ago, and that may not be accurate under various dietary conditions.
carb metabolism (burning glucose) runs the body in a different way, and has behavioral effects, compared to fat metabolism. Appetite shifts (fat suppresses appetite, generally).
There never was good evidence that saturated fats increased cardiovascular risk, that was speculation from the highly flawed Keys study. It was thought "well, to really know will take very expensive trials, we can't do that, so why not reduce fat? It can't hurt!"
But it could and probably did hurt. Lower fat in the diet, you almost certainly raise carbs, and quite possibly increase obesity, diabetes, heart disease, and there is an effect on cancer, apparently.
Bottom line, the officially-recommended diets have very little science behind them.
This really is not the place to debate the issue. Read the literature! Taubes is an excellent door into it, the book GCBC is about a fourth footnotes.
Or look at the Wikipedia article Saturated fat and cardiovascular disease controversy, (Do not trust Wikipedia articles to be neutral. They frequently are not. Use them to find other sources.)
It's tempting to sit back and trust the official organizations. It's a lot of work to actually read the evidence. However, is this important?
I thought it was, like, my life depends on it.
The AHA is a $600 million/year organization. If the fat/heart disease hypothesis is as wrong as it appears to be, they may have cost Americans, in damage to health, a great deal more than that. Now, consider what we know about human organizations. When they get it spectacularly wrong, but before there is absolute proof, do they back up easily?
No. Their business is to be the experts, remember that $600 million per year.
This pattern-matches exactly to everything else conspiracy theory related I have ever read, and by that I mean it misinterprets the relative incentives. You speak of organizations that apparently face financial loss if they turn out to be wrong, but you provide no convincing reason for why they would lose funding if they revised their positions due to new evidence. You also don't mention the huge profits an organization would surely make if it provided compelling evidence for how to actually lower the risk of the largest cause of death in the United States. In particular:
-I'm not going to read a book rather than reading the results of randomized, controlled trials or meta-analyses of many such studies.
-You say you "could point to studies." Then do it.
I pointed to sources that contain huge lists of sources, including such studies. Some of what I pointed to is free. There is no need to reproduce this here. The relevance here is to cascades, which occur without "conspiracies."
A common response to a cascade being pointed out is to call the observer a "conspiracy theorist," and that happens even if no conspiracy has been alleged. That people might be unconsciously motivated by issues of reputation and "face" is just what's so for human beings.
I mentioned funding and was explicit that I did not know if this had an actual effect on recommendations.
Taubes has laid out the history of the "official dietary recommendations," and he makes a persuasive case that some serious errors were made, and that some are persisting in beliefs that are not consistent with what is scientifically known.
Anyway, aceofspades asks for studies. He didn't specify the context, but it was that he had written
I linked to extensive coverage of that research, by science journalists. However, specifically, and just what I picked up quickly:
Weight loss with a low-carbohydrate, Mediterranean, or low-fat diet. (Blood lipids, i.e., cardiac risk factors, were studied.)
Comparison of the Atkins, Zone, Ornish, and LEARN diets for change in weight and related risk factors among overweight premenopausal women: the A TO Z Weight Loss Study: a randomized trial. (Lipid profile was studied.)
Systematic review of randomized controlled trials of low-carbohydrate vs. low-fat/low-calorie diets in the management of obesity and its comorbidities. (This is a "systematic review," very much on point as to cardiac risk.)
Part of my own experience:
I was under forty when my doctor, whom I trusted greatly, recommended that I go on a low-fat diet because I had mildly elevated cholesterol. Over 20 years later, the results: I'd gained about 30 lbs, my cholesterol levels were a lot higher. Sure, I wasn't terribly compliant, but I'd shifted the balance greatly toward low-fat. Turns out my experience was typical. Compliance with low-fat diets is commonly poor, and the effect of the recommendation is often weight gain and worsening lipids. So then statins are prescribed....
My new doctor suggested the South Beach diet (a kind of compromise lower-fat or lower-sat-fat Atkins diet, also by a famous cardiologist), but I did the research this time, and found that the science was stronger behind Atkins. I told him, and he led me into his office and handed me the standard textbook on diabetes, written in the 1920s, that described what was then the standard treatment for type II diabetes: a low-carb diet. Insulin had just been discovered, and insulin was considered a miracle drug for the rest, who didn't respond to low-carb diets. Fast forward, and the American Diabetes Association discourages low-carb diets. Why? It's really a good question!
Well, why hadn't this doctor told me straight out about low-carb, that my high cholesterol was not necessarily a problem? It's a little thing called "standard of practice." He could lose his job and/or his license. However, he could smile at me and tell me "whatever you are doing, keep it up." (Because my lipids and other indicators of heart health improved greatly.)
And then I found from a biopsy that I have prostate cancer. Taubes describes a plausible mechanism for how high-carb diets can increase the incidence of prostate cancer.
My story is anecdotal, and there is much we don't know about diet, but "experts" still confidently tell us what to eat and what not to eat, and it's entirely possible that the advice given to me, in full good faith, 30 years ago, led me into a potentially fatal disease. And similar may be true for many others. And it is still going on.
I was referred to a radiation oncologist who advised radiation treatment, if not surgery. So, again, I did the research, and found that the latest advice for someone exactly my age and situation was "watchful waiting." I'm still more likely to die from something else than prostate cancer.
So why the recommendation from the oncologist? Well, it's what he does. Go to a carpenter, you are likely to get some advice that involves a hammer. But is he aware of the latest research? Probably, though possibly not. But he's not about to recommend something based on that, because it is not yet the "standard of practice," and he can get his ass sued. Even if the advice was right as to risks.
Cascades are a real problem that dumb down social structures, and especially when they create a "scientific consensus" that isn't rooted in science and the scientific method. Cascades, however, occur in all kinds of social situations.
For some of the other side, see a review of Taubes's latest book, "Why We Get Fat".
The author is Harriet Hall, supposedly a skeptic, but what I can see in the review is a set of assumptions that are, for her, unchallenged. Small example: salt. A few people with high blood pressure may benefit from salt reduction. Most people don't. Some people may be harmed.
Taubes again in the New York Times, Salt, We Misjudged You.
The summary:
This attitude that studies that go against prevailing beliefs should be ignored on the basis that, well, they go against prevailing beliefs, has been the norm for the anti-salt campaign for decades. Maybe now the prevailing beliefs should be changed. The British scientist and educator Thomas Huxley, known as Darwin’s bulldog for his advocacy of evolution, may have put it best back in 1860. “My business,” he wrote, “is to teach my aspirations to conform themselves to fact, not to try and make facts harmonize with my aspirations.”
What Taubes encounters:
Gary Taubes is a Blowhard
Center for Science in the Public Interest
These critics have in common that they misrepresent Taubes. He's raising possibilities, not claiming proof.
However, what Taubes points to is the possibility that what they have been advocating for decades might be harming people. This is unthinkable.
He must be wrong, so they will find every flaw, real or imagined, ignoring the central problem, that sound research has never done more than imply possible harm, and that at best reduced salt, for normal people, may have a tiny effect on longevity, and, in the other direction, may have serious consequences, increasing mortality.
People whose entire livelihoods, long-term, depend on the "consensus" that they created and pushed, often against the evidence, often against strong scientific opposition, with retaliation against those with contrary opinions, then imply that Taubes is making it up to make money.
When an old pot calls the new kettle black, we may need to stand back and develop some perspective.
I can't find mention of this on LW and the first few things Google turns up have to do with treating kidney stones, which doesn't seem relevant. What benefits do you get when this "works dramatically"?
About the same as drinking a cup of coffee - i.e., it works as a perker-upper, somehow. I'm not sure, since it doesn't do anything for me except possibly mitigate foot cramps.
Might be just placebo effect.
I don't have a reference at hand, but I recall hearing of a study where they gave the same sport drink to two groups of athletes and then measured their performances. Researchers told one group that they were testing a new, high quality super-duper sport drink, while they told the other group that it was a somewhat subpar product. Guess what happened to the performances of the two groups?
Potassium is a major one of the ions moved around in neuron action potential activation, and the RDA is waaay above what almost everyone gets (you would need to eat 12 bananas/day to meet it). The idea is something like that it helps neuron transmission work.
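The "12 bananas" figure above is easy to sanity-check. As a back-of-envelope sketch (my own, using assumed round numbers: a commonly cited adequate-intake figure of about 4700 mg/day of potassium, and roughly 400 mg per medium banana):

```python
# Back-of-envelope check of the "12 bananas/day" claim.
# Both figures below are assumed round numbers, not from the original comment.
rda_mg = 4700        # ~adequate daily intake of potassium, mg
per_banana_mg = 400  # ~potassium in one medium banana, mg

bananas_per_day = rda_mg / per_banana_mg
print(bananas_per_day)  # 11.75, i.e. roughly 12 bananas per day
```

So the claim is at least arithmetically plausible under those assumptions.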
Is that like putting more petrol in your car to make it go faster?
Does "low sodium" table salt (mostly potassium chloride) give the same results (whatever they are) as potassium citrate?
Potassium chloride works, but it tastes worse and the chloride can do bad things if you take too much of it at once.
Whereas the extra citrate is mostly just good for you, though too much potassium is still harmful. It's also possible that potassium citrate is more mentally enhancing and potassium chloride is more physically enhancing.
If the effect just depends on the citrate group, then take citric acid, which naturally occurs, along with vitamins and dietary fiber, in most fruits and vegetables. This saves you money and prevents you from messing with your electrolyte balance. Vitamin and mineral overdose is almost impossible if you eat whole foods (other than the livers of certain species), but it's relatively easy if you mess around with supplements.
Anyway, until I see some evidence that citrate has cognitive enhancing properties, I'm going with the placebo effect hypothesis.
Well, if you're interested, I recently ordered a package of 00 gel caps and 1lb of potassium citrate powder. If it works for me as it seemed to work for Kevin, I can then do one of my double-blind experiments with dual n-back or something.
Sounds interesting.
A sample size of one would be too small for scientific significance, but it still seems worth trying.
A sample size of one is more than enough for internal validity, assuming I take enough data points to detect an effect. (I typically do 20+ pairs; the stronger the effect, the less you need.)
What n=1 threatens is any kind of external validity, that is, whether one would observe the effect or lack of effect in someone other than me. I may find that potassium is fantastic for me, but that tells me very little about whether potassium would help you or anyone else on this page.
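For anyone curious what "enough data points to detect an effect" cashes out to, here is a minimal sketch of analyzing such a blinded n=1 experiment, using an exact sign test on paired supplement/placebo days. All the numbers (effect size, noise, 20 pairs) are hypothetical, not anyone's actual data:

```python
import random
from math import comb

random.seed(0)  # reproducible simulated data

def paired_sign_test_p(diffs):
    """Two-sided exact sign test on paired differences (zeros dropped)."""
    diffs = [d for d in diffs if d != 0]
    n = len(diffs)
    wins = sum(d > 0 for d in diffs)
    m = min(wins, n - wins)
    # Exact binomial tail probability under the null (p = 0.5)
    tail = sum(comb(n, k) for k in range(m + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical: 20 blinded pairs of dual-n-back sessions,
# supplement-day score minus placebo-day score.
pairs = 20
true_effect = 1.0   # assumed mean score boost (arbitrary units)
day_noise = 2.0     # assumed day-to-day standard deviation
diffs = [random.gauss(true_effect, day_noise) for _ in range(pairs)]
print(paired_sign_test_p(diffs))
```

The stronger the assumed effect relative to the day-to-day noise, the smaller the p-value tends to be, which matches the point that a stronger effect needs fewer pairs.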
Agree.
The company's website seems to indicate that it is actually potassium iodide (under nutritional facts). Are you aware of this? Do you know if it's accurate?
The product I received matches the Wikipedia article; the labeling on the bag is 'potassium citrate', and the nutritional breakdown per 100 grams matches what the Wikipedia article claims (e.g. the bag claims 36.2g of potassium per 100g of powder while WP says "Pure potassium citrate contains 38.28% potassium.", which makes sense if the formula is C6H5K3O7 - as compared to KI). I haven't noticed any of the side-effects which are listed for potassium iodide, despite taking what would by now have been a serious dose of potassium iodide. Finally, that description is identical to their informational page for potassium iodide crystals. So my guess is that it's some sort of copy-and-paste error, but definitely worth me emailing them to ask...
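The mass fractions are easy to check from the formulas. A small sketch using standard atomic masses (the KI comparison is my own addition, just to contrast the two candidates):

```python
# Standard atomic masses, g/mol
masses = {"C": 12.011, "H": 1.008, "K": 39.098, "O": 15.999, "I": 126.904}

def mass_fraction(formula, element):
    """Mass fraction of `element` in a compound given as {element: count}."""
    total = sum(masses[el] * n for el, n in formula.items())
    return masses[element] * formula.get(element, 0) / total

potassium_citrate = {"C": 6, "H": 5, "K": 3, "O": 7}  # C6H5K3O7
potassium_iodide = {"K": 1, "I": 1}                   # KI

print(round(100 * mass_fraction(potassium_citrate, "K"), 2))  # 38.28
print(round(100 * mass_fraction(potassium_iodide, "K"), 2))   # 23.55
```

The bag's claimed 36.2% potassium is close to pure citrate's 38.28% and nowhere near KI's ~23.6%, which supports the copy-and-paste-error guess.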
I'd think it'd be more like gold-plating the car's electrical system (imagine you could do that without disassembling the whole thing).
The first time I took supplemental potassium (50% US RDA in a lot of water), it was like a brain fog lifted that I never knew I had, and I felt profoundly energized in a way that made exercise seem reasonable and prudent, which resulted in me and the roommate who had also just supplemented potassium going for an hour-long walk at 2AM.
Experiences since then have not been quite so profound (the first was probably so stark because I was likely fixing an acute deficiency), but I can still count on a moderately large amount of potassium to give me a solid, nearly side-effect-free performance boost for a few hours.
How much is that in milligrams? Googling, I see different values for RDA. Also, I take it you were using some sort of bulk powder? (The gel caps all seem to be ridiculously small, like 99mg when the RDA is >3000mg!)
I think I've been going with 4.7 grams as RDA. And yes, bulk powder. Due to potential digestive upset, I typically don't administer more than 25% RDA now.
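As a sanity check on these doses (assuming the 4.7 g RDA figure used here, the bag label's 36.2% potassium by weight mentioned elsewhere in the thread, and the 99 mg gel caps from the question above):

```python
rda_g = 4.7             # adult potassium RDA figure used in the comment, grams
dose_fraction = 0.25    # 25% of RDA per dose, per the digestive-upset caution
k_per_g_powder = 0.362  # grams of potassium per gram of citrate powder (bag label)
cap_mg = 99             # typical tiny gel cap from the question above

potassium_g = rda_g * dose_fraction          # potassium per dose
powder_g = potassium_g / k_per_g_powder      # powder needed per dose
caps_for_rda = rda_g * 1000 / cap_mg         # caps needed to hit the full RDA

print(round(powder_g, 1))     # ~3.2 g of powder per 25%-RDA dose
print(round(caps_for_rda))    # ~47 caps for the full RDA
```

Which makes concrete why bulk powder beats the 99 mg caps: you would need several dozen caps a day to reach the RDA.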
I had a similar experience the first time I supplemented magnesium. Long lasting, non-jittery energy spike. I felt stronger (and empirically could in fact lift more weight), felt better, and was extremely happy. The effect decreased the next few times. After 4 doses (of 50% RDA, spread out over 2 weeks) I began to have adverse effects, including heart palpitation, weakness, and "sense of impending doom".
I wonder if there is a general physiological response to a sudden swing in electrolyte balance that causes the positive effect, rather than the removal of a deficiency.
Is there a typo/something I'm missing, or who the hell set the RDA that high?
Thanks! Did you have any other indication at that time that you might have had a potassium deficiency, such as recent blood work?
(I had blood work done relatively recently, which turned up excessive levels of ferritin, but normal potassium at 4.2 compared to a 3.5-5.0 normal range.)
No, but I had been doing Bikram yoga on and off, and I think I wasn't keeping up the practice because I wasn't able to properly rehydrate myself.
Isn't that as wrong and misleading as using 'Rational Dieting'? Wouldn't 'Optimal' imply that this is the very best way to diet, when the article is actually about comparing the evidence for four diets? Just as 'Rational Dieting' carries an implication that the post discusses the cognitive algorithm for dieting (as opposed to four contributing things to keep in mind, which is why 'Four Biases Screwing Up Your Diet' is the better title), doesn't 'Optimal' imply the wrong thing? It seems to me you are committing different fallacies (or errors) while trying to fix the previous fallacies (or errors) committed through misuse of the word 'rational'.
And, if you want to get technical, optimal implies both an objective function to measure the solution by, and a proof that no solutions are superior. "Optimize your diet" seems better than "optimal diets," but even then "four proven diets" seems superior to both of those.
"A directed search of the space of diet configurations" just doesn't have the same ring to it.
Depends. If it's an article describing how to evaluate different diets to pick the optimal one, then it is indeed an article about optimal dieting, even if it doesn't identify an optimal diet.
Meta: I suggest creating a sequence index, and putting a link to the next post in the sequence at the bottom of each post, like you already have for all your other sequences.
This is possible via "Article Navigation", through use of the "seq_epistemology" tag. Maybe the tag was only added after your comment? In any case, it works quite well!
Thanks for pointing that out. I forgot to check for tags, so I'm not sure whether it was already there. I still think it should be made more direct, though.
Do you think Dmytry might be a good case study for this? I thought he had some interesting and novel ideas about processes/algorithms that at least didn't seem obviously wrong as well as some technical understanding of things like Solomonoff Induction, and also had strong disagreements with many of us regarding FAI and AI Risk. Should we have "extended our hands" to him more (at least before he became increasingly trollish), and if so how? (How would you taboo "extend hands" generally and in this specific instance?) If not, do you have someone else in mind who could serve as a concrete example?
It's my impression that yes, more hand extension would have been good, but I didn't follow his threads that closely.
I wonder if the trivial inconvenience of him not being that great of a communicator might have put people off from following his threads.
Does somebody want to post one part of Dmytry that seems new and true? My impression on a quick skim was not favorable.
This comment, on a drawback of donating primarily to the charities you think are best: doing so can make it profitable for charities to invest in being, or merely appearing, better by your standards, if various empirical parameters (availability of honest signals, your ability to distinguish different signals, the quantity of funds allocated by decision rules like yours, the costs of dishonest signals) fall in a narrow region. I am skeptical that this is a real issue in practice (e.g. GiveWell channels money to a top charity rather than diversifying), separate from the problem of assessing evidence (which normally focuses on finding signals that are costly to fake in any case), but it's still an interesting theoretical point which I hadn't seen made on Less Wrong before.
From an obituary for Ronald Hamowy.
Thank you for explaining that, I had no idea where he got Ayn Rand from.
Not necessarily. Your model could have been quite reasonable, and yet something weird happened in the world. Sometimes, people win the lottery twice on the same day.
I think the point is that if something happens, it has probability 1 of having happened, so it doesn't make sense to call it "unlikely." A perfect model could have predicted it with probability 1. If you failed to predict it, it's because your model was imperfect.
I think, however, that plenty of reasonable models of group interactions given our current knowledge would have failed to predict the rise of Objectivism.
Indeed. The impression I get is that in calling Objectivism "the unlikeliest cult in history", the intent of "unlikeliest" isn't as a further insult to Objectivism. Rather, it's to show that the author is discussing something exceptional, and therefore interesting.
I think EY is pointing to the case of somebody winning the lottery twice in a lifetime, which people would think is incredibly weird, despite it being very normal - see http://www.amazon.com/Understanding-Probability-Chance-Rules-Everyday/dp/0521833299. I suspect that the "looks weird" due to having the wrong model is more common than "looks weird" due to being an outlier.
It would be very sad to get lynched by the Greens while discussing the likelihoods of Many Words vs Many Worlds :)
In comments on this thread, the issue of diet and "consensus" came up. Why I consider this topic important here, quite in line with what EY asserted in his post, is shown in this New York Times column by John Tierney.
The issue is not this or that alleged fact. ("Saturated Fat is Harmful," or "Saturated Fat is Good" or even "We don't know") The issue is how we know what we know, and what we don't know, and how individual and social fallacies lead to possible error.
Tierney writes about cascades, social phenomena that can afflict scientists, whom we might imagine would know better, creating the appearance of a "scientific consensus" that is not rooted in science and the scientific method.
Usually, most scientists get it right most of the time, but I've seen several such cascades create a false "scientific consensus" that is almost invulnerable, and it can take generations for that false consensus to unravel, so strong are the social mechanisms that maintain it. A few who are willing to risk their careers in pursuit of real science eventually prevail -- the scientific method is ultimately powerful -- but the cost can be enormous to all of us in terms of poor decisions and delayed benefits.
We might consider creating some case studies. Unless we reach back to old controversies, these will be, by nature, controversial-in-the-present. The goal would not be an answer about "the truth." The value would be in examining the reasoning, the sources and processes of what people (including experts) believe or trust.
Many people readily fix on conclusions, and politics ("importance") easily leads to belief that anyone with a contrary conclusion -- or even who only presents contrary evidence -- is a positive danger, a menace to health or science, to be condemned and sanctioned.
Why is this a good thing? It seems to me that people give up too easily just as much as—if not more than—the opposite, especially when they're trying something that they don't expect to work. You have to stick with it long enough to collect a reasonable amount of data.
This is true. Wouldn't it be beneficial, though, for any particular community to focus on upholding rationalist principles? If the LW community is specifically committed to rationality, other communities should be committed to rationality as a side effect—as an optimization heuristic.
The effective altruism community, for example, already does a pretty good job of this. Effective altruists tend to be aware of the sorts of biases that get in the way of effective giving. On the other hand, most charities and charity-based communities don't have this focus on rationality. The breast cancer movement, for instance, does not give the same attention to rationality as the effective altruism movement.
Of course, if the breast cancer movement did give attention to rationality, it probably wouldn't be the breast cancer movement anymore; it would be the effective altruism movement. If you're looking for the optimal method for preventing breast cancer, why not generalize that and just look for the optimal method for helping people (which is almost certainly not breast cancer research)?
Is it the intention here to exclude people from the community who have doubts as to the universal applicability of Rationalism, in the general sense? Or someone who argues (even from a non-Rational standpoint) that a non-Rational method is optimal in a specific case? Or even someone who believes that from a Rational standpoint, a non-Rational method is optimal in general?
Obviously someone who uses non-Rational methods to conclude that non-Rational methods are in general superior has nothing to contribute on that subject. How closely do decisions about Rationality have to conform to the local norm for someone to be a real community member?
This community shouldn't be for rational people, it should be for irrational people.
That's true in a shallow sense because we're all irrational to some degree. But I think that there needs to be a division between sites that take irrational people and get them on the path of rationality, and sites that take people on the path of rationality and spur them along. I've seen LessWrong as the first kind of site.
Thumbs up for this; I might even suggest making it a "tl;dr". In print, I think sometimes "very short abstraction - concrete examples - moderate-length abstraction" works well.
I don't think that Go constitutes a good metaphor. Go isn't so much a game about preventing certain outcomes; it's a game where you trade territory for influence, a game about leaving aji open.
I feel like there's a small inferential gap between the Ayn Rand anecdote and the Objectivist failure mode as you've presented it: the anecdote establishes unjustified personal antipathy on the part of a group leader, but, absent context, that doesn't lead inevitably to its target getting voted off the island.
People who've read about the history of Objectivism would probably be able to fill this gap with their own knowledge. People who've internalized Objectivism's reputation as a cult would probably fill it with an assumption (a correct one, as it turns out). But I don't think those sets cover the space of possible readers all that well.
True. I didn't understand how the anecdote related to the article, although Daniel Burfoot's comment helped to clarify.
This post should really (also) be a part of the Craft and the Community sequence. The insight it conveys seems very relevant and very valuable, and I don't recall it being stated anywhere near as explicitly.
I'm starting to get very confused about what Eliezer means by "deflates to". I thought he meant "has the same meaning as" or "conveys the same meaning as", but now I think maybe he means "most of the time when you want to use the former, you should use the latter instead". Sorry if I'm still stuck on the by-now-not-quite-central topic of semantics, but I don't see how "rational" has the same meaning as "should", either according to my own understanding, or according to definitions given by Eliezer in the past. (My understanding is that "should" conveys some hard-to-define sense of normativity, whereas "rationality" is a subset of normativity that seems more objective than the other parts, which we usually call "morality".)
I had in mind, "I think you were really trying to say X" which is closer to your second meaning, not "This means X under all possible circumstances even when actually used correctly".
I think, in general, these statements amount to the same thing. "It's rational to __" generally means the same thing as the "deflated" statement; the key difference is its use of the word "rational" or "rationality" that ends up weakening the term.
FWIW, I understood "X deflates to Y" in this context to mean something like "most of the time when people say X, their beliefs about the world are such that if their goal was to express those beliefs maximally accurately they should instead say Y."
I also expect that Eliezer would say that the bundle of computation to which "should" properly refers includes but is not limited to what "rational" refers to, but I don't think that's relevant to what we're looking at here.
My own thoughts about these and related words are here.
One possible strategy for making this easier is explicitly having sub-communities for each optimal thing, each of which explicitly includes some non-rationalists and excludes some rationalists. This is just based on the naive model that people want to identify their behaviour with a community or it will feel odd, and that there is some pressure not to have overlapping signals of membership in different tribes, since that would be confusing.
I like that idea, but I think there can be too much granularity. The feeling of "people who agree with me on X also agree with me on completely unrelated Y" is awesome.
The halo effect may be awesome ... but it's deadly!
The halo effect is not necessarily either a cause or a consequence of the quoted phenomenon.
Do you agree, then, that it is a potential explanation? If so, what's a more plausible one? It may be a limitation of my imagination, but I don't see one.
Try.
I smell a recommender system. Think of what sites like amazon.com do with "people who like X also liked Y".
This is just an observation. I'm not saying that we should go out and build a system to match these people and these Xs and Ys.
I posted a comment with a similar sentiment. I think it's not necessarily important to explicitly include non-rationalists in communities (although I'm not sure that's what you're saying, so forgive me if I misinterpreted you). But I do think it's a good idea to promote rationalist leanings in groups that don't necessarily identify as rationalist.
In fact, that's how I discovered LW. I participate in the utilitarianism community, and a large proportion of utilitarians (on the internet, at least) also identify as rationalist. I started reading LW as an indirect result of my reading about utilitarianism. Utilitarians certainly seem to perform better as rationalists, and other communities should, too.
Good post.
What about using the word 'rational' for alliterative purposes? :)
Here are some, but they're not great. As I mentioned in an early draft of How to Fall in Love with Modern Classical Music, Nero's "Doomsday" samples from my favorite piece of contemporary classical music, John Adams' Harmonielehre. Also see Rudebrat's "Amadeus" and this Fur Elise dubstep remix. The best I could find in 5 minutes was Dubstep Beethoven.
The "Dubstep Beethoven" is very funny, but doesn't actually appear to be dubstep or even slightly dubstep-like. Admittedly, everything I know about dubstep I know from having just read the Wikipedia article about it, so maybe I'm just confused.
Wow! Luke, I somehow totally missed that you had an interest in this subject. You even have Ferneyhough on that page! (And Murail -- who was actually a teacher of mine.)
I'm about ready to forgive you for every sin you've ever committed -- maybe even including the use of the word "classical". :-)
SI + $10.
</affective spiral>
[veering off topic]
So, since you guys know something about music...
I think I have fairly poor taste in music. Perhaps as a result of growing up listening to NES- and SNES-era video game music all the time, I have an inordinate fondness for the sound of MIDI files, which are supposed to be one of those things everyone hates. As a matter of fact, I tend to feel that video game music has gotten notably worse as the technical capabilities of game consoles have gotten better. (I have three hypotheses that could explain this. One is that the music has improved but my taste in music sucks. The second is that voice acting competes with music for players' attention, and that it's no coincidence that the music stopped being as interesting at the same time voice acting became more common. The third is that improvements in technology "freed" composers from having to rely on melodic complexity alone to hold gamers' attention, so melodies have gotten less interesting.)
Anyway, what I'm really asking is, are those old game soundtracks actually any good, or do I just have no taste?
I think a lot of other people have good points. I DO still think video game music is often excellent, but not universally. I think modern video game music is higher variance - games where someone obviously cared about the music have both good melodies AND good instrumentation. But there's a lot of games where nobody cared at all.
The best video-game music I've heard recently was from Braid and Bastion. In both cases, the music is clearly a central "character" of the game, obviously cared deeply about by the creator. Braid falls into the "silent protagonist" category. Bastion oddly enough has a lot of dialog, but the narration is intertwined with the audio in a pretty deliberate fashion.
I grew up listening to classical music because of the influence of my parents, and I was heavily involved in the classical music subculture because I shared a house with a music student.
I gradually stopped listening to classical music as I realized I didn't really enjoy it and merely associated it with high status. Now I almost exclusively listen to chiptunes and 90s electronic dance music. This music is much simpler than the music I previously listened to (I also listened to metal and jazz), but I've made a conscious decision to listen to music purely for enjoyment. I now spend far less effort on thinking about music, but I'm equally happy with it, so I think it's a win.
Were you listening to classical music of all periods, or just to "modern" classical music? I personally believe that the latter doesn't cause much pleasure in most people who listen to it, and its (limited) appeal is instead explained largely in terms of self- and public signaling. At the same time, I find that certain works of a few classical composers of earlier periods (such as Bach, Mozart, Chopin, Debussy and Vaughan Williams [click for examples]) induce in me intensely pleasant experiences.
All periods.
I still like some of J. S. Bach's keyboard works (especially as MIDI played with FM synthesis), and some minimalist compositions (Steve Reich etc.).
IIRC, J.S. Bach wrote his keyboard pieces for the harpsichord, which, unlike the modern piano, can't change its volume based on how the performer presses the keys. MIDI is usually played with a similarly constant volume, so the MIDI version may actually be closer to how a piece was intended to sound than the same piece played by a concert pianist.
If you are fond of synthesized versions of Bach, you should check out Wendy Carlos's "Switched-On Bach" albums. (Wendy Carlos was formerly Walter Carlos and you might possibly find old copies under that name.)
To what extent are you aware of the wonderful world of chiptune music and video game music rearrangements? By video game music rearrangements I mean things like symphonic concerts (my favorite is "Symphonic Fantasies", which you can listen to on YouTube and is much better than some others, like "Video Games Live"), the Final Fantasy piano collections, ocremix.org, and the works fans put on YouTube.
If you have no taste, you are not alone: Nobuo Uematsu is quite popular. Personally I found those exact videos quite annoying, though that might have to do with sound quality, and some filters might improve them. I do, however, like the piano arrangements this one guy made of pieces from the same series: Burning Blood and a Legend 1 medley. I also love the Link's Awakening OST and this album, which has a similar (Game Boy) sound.
What do you think is a good taste in music?
My first answer would be that the concept is quite silly. (That also seems politically correct to me)
My second answer would be that it's not about what music you like but about how varied your taste is. It seems to me that one gets a greater total amount of enjoyment, and more different kinds of experiences, from liking different kinds of music and different qualities in music.
If you are interested in developing your taste, I'd suggest you listen to this Final Fantasy medley (I'm assuming you are familiar with the Final Fantasy music; for me, the Chrono medley from the same concert was the first time I really appreciated symphonic music). Try to recognize the themes. That should be fun (at least it is for me), and you might like the piece better after having concentrated on it. You could then pay attention to different aspects, like the variation in loudness (sometimes the music whispers, sometimes it's loud and impressive), which is something pop music or chiptunes don't really have. If you take a liking to this type of music, you can then listen to something like Beethoven's 5th and notice that it's really the same type of music.