In Defense of Moral Investigation

-5 MTGandP 04 November 2012 04:26AM

Cross-posted from my blog.

Some argue that certain claims about the nature of reality could cause people to become more immoral. Examples of such suppositions include:

1. People should follow Christianity because we will be more moral if we have to avoid eternal damnation.
2. The theory of evolution says that since people evolved from bacteria and have no immortal souls, human lives are worthless. Therefore, we can rape and kill each other and there’s nothing wrong with that.
3. The theory of evolution says that people should act selfishly all the time.
4. If free will doesn’t exist, people will be free to hurt and kill each other and won’t be held responsible.

Such arguments are bogus. Any new information about reality, if properly understood (that part is important), can only cause people to become more ethical. Morality is contingent upon the nature of the universe; the better we understand the universe, the better we understand morality.

Some people fear that if we investigate reality, we will discover truths that cause us to behave unethically. In some cases, people even wish to discount discoveries that already have been made—such as natural selection or the nonexistence of free will [1]—on the basis that these discoveries may lead to immoral behavior.

Someone may take the theory of evolution and use that as evidence that it is morally justified to behave selfishly at the expense of others. However, such a person would be misinterpreting evolution. Nowhere does the theory of evolution say that we should attempt to propagate our genes at the expense of every other living being; it merely explains that beings that do so tend to survive and reproduce. Evolution tells us nothing about what we ought to value.

On the other hand, evolution (and, in fact, all branches of science) does tell us something about how to achieve what we do value. Once we understand how the world works, we can take it into account and more effectively work towards our goals. (This is Sam Harris’ thesis in The Moral Landscape.) For example, positive psychology provides insights into how best to make ourselves happy; and biology tells us which animals can feel pain and therefore deserve moral consideration.

It is possible to make a discovery that changes our conceptions about what is or is not moral. If such a discovery is made, what was previously thought to be immoral may be found to be moral, or vice versa. Some Europeans justified slavery by claiming that Africans were stupid or unable to take care of themselves, and that having a master was good for them; when science proved such claims to be false, it was impossible to scientifically support slavery.

Race and Intelligence

When anthropologist Samuel Morton found that Africans had smaller craniums than Europeans, he stirred up considerable controversy. Evolutionary biologist Stephen Jay Gould argued that Morton’s findings were the result of bias, but later studies affirmed Morton’s results and concluded that Gould had in fact been biased by his desire to affirm racial equality.

As modern neuroscience has shown, there is no correlation between cranial size and intelligence (at least between individuals of the same species). But imagine that it were discovered that Africans and those of African descent are indeed less intelligent on average than Europeans. What would that say about how we should treat them?

It certainly would not justify slavery: a person’s moral worth has nothing to do with her intelligence. If people took an enlightened perspective about this new discovery, it could only serve to improve the world. There would be no question that people of African descent could still be happy and contribute to society. If brains truly functioned differently for different races, a strong understanding of those differences could empower us to improve the education system by teaching in different and more appropriate learning styles. An outcome where a particular race becomes less happy could only arise because the science was not properly understood.

As I write this, I feel some stigma attached to discussing the possibility that people of African descent are less intelligent. I see three main reasons for this. The first is that, not so long ago, African-Americans were considered unintelligent by a large portion of Western society, and currently it is taboo even to raise a hypothetical scenario in which they are less intelligent. The second reason is that they probably are not. Races tend to score differently on IQ tests (with Asians scoring the highest), but (a) this could be the result of environmental influences and biases (including stereotype threat) and (b) IQ is a very limited measure of intelligence (I find myself continually surprised when news articles use the terms “IQ” and “intelligence” interchangeably). No robust evidence has ever demonstrated that one race is more or less intelligent than another. If we treated black people as though they were less intelligent than other races, that would clearly be a problem. The third reason is that even if some races are more or less intelligent on average, there would still be a large amount of overlap. Africans tend to be taller than Asians, for example, but there are plenty of tall Asians and plenty of short Africans. Therefore, it would be unfair to treat all Asians as though they are short and all Africans as though they are tall. (Obviously this example is a bit silly since one can immediately assess how tall a person is, but it is meant only as an illustrative analogy.)

But when people are truly different, treating them differently is not a bad thing. Consider dyslexia. People with dyslexia generally perform more poorly on certain tasks than people without dyslexia. However, they are not stigmatized or oppressed (for the most part, at least); instead, they are given specialized education programs designed to help them learn more effectively. Such programs help dyslexics more easily perform certain tasks that they would otherwise have difficulty performing. And although dyslexics are treated differently, it would not make sense to create separate bathrooms for them or require them to sit at the back of the bus. People with dyslexia are normal in every way, and where they are not, society does not stigmatize but helps them (for the most part, anyway). And where society does fail to help them, it is not because we know too much about them; indeed, it is often because we know too little.

Irrationality

Sometimes knowing the truth makes things worse, but only if one holds irrational beliefs. For example, one may believe that the theory of evolution dictates that people should act selfishly all the time. If one held such a belief, it may be better to ignore the evidence in favor of evolution. Of course, such a belief has no rational basis.

Unfortunately, even mostly-rational people may have difficulty avoiding irrational emotional reactions to facts [2]. A rational person can sometimes override an emotional response, but even the best of us cannot behave completely rationally. Given what we know about human irrationality, how should we adjust our behavior?

Even if we acknowledge that humans behave in predictably irrational ways, we should still err on the side of investigating truth too much rather than too little. Knowing the truth rarely hurts; when it does, it is because we are behaving irrationally; when we are, we can often overcome our irrationality. Indeed, uncovering the truth may actually help us overcome our irrationality.

A particular truth can only hurt someone if he holds a false belief. For example, if he believes that if African-Americans are less intelligent then slavery is justified, it is better for him to believe that black people are not less intelligent. However, the best solution is to rectify the false premise: even if African-Americans are less intelligent, slavery is not justified.

As Eliezer Yudkowsky put it, “Doing worse with more knowledge means you are doing something very wrong.”

To close, here is a quote by Richard Feynman:

Poets say science takes away from the beauty of the stars—mere globs of gas atoms. Nothing is ‘mere’. I too can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination—stuck on this carousel my little eye can catch one-million-year-old light. A vast pattern—of which I am a part… What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it. For far more marvelous is the truth than any artists of the past imagined it. Why do the poets of the present not speak of it? What men are poets who can speak of Jupiter if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent?

Notes

[1] Free will is a persistent illusion, and many readers may doubt me when I claim that it does not exist. Sam Harris offers an eloquent and accessible explanation of free will, found here and continued here. I have also written on the subject.

[2] This is not to say that emotions are always irrational, or that rationality is opposed to emotion. Rather, some particular emotional responses can arise for irrational reasons. See Eliezer Yudkowsky, “Feeling Rational”.

An anecdote about names

25 Manfred 30 October 2012 03:40AM

I've always been bad at names. But this semester, as part of my duties as a physics teacher, I tried to learn the names of 100 students. It went alright - two months later and there are still some I have trouble with. At the start, I was a bit worried. What if I've just read too many books and things, and used up most of my abnormally small allotment of memorized names? Then wasting names on students would be a really bad idea.

But things turned out not to work that way (as might be expected from how big the human brain is, viz. people with eidetic memories). I've started remembering people's names after just one introduction, sometimes two. And the reasonable culprit just seems to be practice. You practice learning names, you get better at doing so. Before, I made occasional conversational detours if I couldn't remember someone's name. Now, I ask them again, because I'm confident that I'll remember it without tons of further awkwardness. If I'm really having trouble, sometimes I've written down a name with a short description, and that usually cements it. Remembering names isn't the most important social skill I've learned, but it's a surprisingly dramatic one - it makes me feel closer to people, and vice versa. And it only took having to learn the names of 100 people to get started.

The lesson from this is not necessarily just about learning names, though that might be useful to you. The lesson is about how ordinary people get a lot of practice at doing things like remembering names - if you can't do something that someone else can do, practice may be an effective place to start.  What seems like a property of yourself ("bad at names") may turn out to be quite mutable with the kind of practice those other people are doing.

Incentives to Make Money More Effectively, Should We List Them?

0 diegocaleiro 30 October 2012 12:28AM

There has been some nice recent discussion going on about money here: "money" in the common peasant's sense of "I want more money," not in the senses of "unit of caring," "utilon buyer," "trade-able entity unavailable for acausal trade," etc.

There are also, on the web and in your local bookstore, hundreds of thousands of pieces of advice on how to make money: fast, more, passively, by selling your body, or brain, or virtual structures.

People who have money (entrepreneurs, owners), study money (economists), or focus on money (stockbrokers) have a lot to tell you about something closely related: incentives. They have studied and understand when incentives work and when they don't. Some have become masters of creating the right incentives, for employees, customers, CEOs, everyone.

I won't lie: usually, the incentive is money. Pure, abstract, trade-able, feeling-of-power-causing money. Sometimes (Citibank, Safra Bank) it is also travelling.

What I don't see around much, and would like to ask you to help me brainstorm and list (later I'll edit by adding suggestions and maybe commenting), are the incentives to make money that actually work, and those that don't.

There are a lot of LWers, I know, who excel at domain X and are emotionally averse to making money out of X, or sometimes at all. Money-making is, in a word, impure! Item 6 will deal with that.

I'll say some random unconventional things and open the discussion for those with more experience:

1) Belonging to a society that lacks money and money-making altogether is a very powerful incentive for wanting money. The Indians of Brazil (my homeland) frequently spend more time on legal battles and bureaucracy to obtain money from government agencies than they would dedicate to getting medical attention, for instance.

2) I've heard that having a lot of money is an incentive to make more, by making it easier to do so. The Spiteful Billionaire meme makes fun of this hypothesis.

3) I've also heard that not having money is a great incentive to make more. This seems more logical. When you see the empty plate on the horizon, or the prospect of being thrown into the streets, some sort of alarm must trigger. (If you have experience with this, please relate it.)

Yet I find that even though there is a strong correlation (within the Brazilian cultural elite I hang around with) between having a job and not having money at ages 18-28, there is no correlation at all between not having money and efficiency at making money (either by already making a lot, or by being in a career with such prospects). It is as if the emergency button can get you to hike a hill, but not to climb a mountain.

4) Morals against mooching off others. Traditional, moral, or religious frameworks of mind will make an individual believe (probably truly) that mooching off friends, family, couchsurfers, Franciscans, altruists, and other exploitable folk is a bad/undesirable thing to do. One who doesn't want to mooch has a stronger incentive to get his own money.

5) Pride. Lots of people are very, very proud of having money, their own or their family's. I never understood that, but here it is, just a fact, stated.

6) Nobility times. My mum frequently points out that in our day and age (in Brazil, but also in the States) there is an idealization of making money. Sometimes it is great, sometimes it is nauseating. Her point, though, is that during the Middle Ages, when social class was fundamentally determined at birth, working was seen as a lower activity. It was nearly the opposite of Self-Madesmanship. Only the needy, with little status, would work. The same view was held by Aristotle, who, given his conception of work, never worked a day in his life. Something like how we see tough manual labour nowadays, but looking down on every worker. Some people, her and me included, were raised somehow in this anomic (mislocated) environment, which persists in many European colonies and perhaps among the descendants, within Europe, of past nobles. Worse for them in a changing world.

7) Your suggestions will be listed here soon...

I ask you to suggest things which are incentives for, or against, making money. And I don't mean "Good Reasons" and "Bad Reasons"; I mean which incentives, in economic jargon, work effectively to get people to make more money. Then, after that, you can write about their goodness and badness.


Argument by lexical overloading, or, Don't cut your wants with shoulds

6 PhilGoetz 23 October 2012 12:03AM

I used the word "cut" in the title to mean the Prolog operator "cut" (written !), which commits to the choices made so far in evaluating a goal and prevents backtracking into the remaining alternatives.
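For readers who haven't used Prolog, here is a loose Python analogue of the metaphor (the function names are mine, purely illustrative): ordinary evaluation explores every alternative, like Prolog backtracking, while a "cut" commits to the first success and discards the rest.

```python
# A loose Python analogue of Prolog's cut (!). Plain generation explores
# every alternative, like Prolog backtracking; the "cut" version commits
# to the first success and never revisits the other choices.

def all_solutions(xs, pred):
    """Yield every x satisfying pred (full backtracking)."""
    for x in xs:
        if pred(x):
            yield x

def first_solution(xs, pred):
    """Commit to the first x satisfying pred, like a cut after the test."""
    for x in xs:
        if pred(x):
            return x  # the '!': remaining alternatives are pruned
    return None

print(list(all_solutions(range(10), lambda x: x % 3 == 0)))  # [0, 3, 6, 9]
print(first_solution(range(10), lambda x: x % 3 == 0))       # 0
```

In the essay's metaphor, a cached should plays the role of the cut: once it fires, the alternatives (your actual wants) are never evaluated at all.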

Fiction writers often complain, "I keep procrastinating from writing," and, "Nobody reads what I write."  These complaints are usually the result of shoulds stopping them from thinking about their wants.

I've never heard anyone say, "I keep putting off playing baseball," or, "I keep putting off eating ice cream."  People who keep putting off writing don't want to write, they want to have written.  If you have to try to write more often than you have to try not to write, you've probably told yourself that you should write in order to attain some reward.  There's nothing wrong with that, but writers who complain that they keep putting off writing are often writing things with little potential payoff, like fan-fiction.  They don't stop and think how to improve the payoff that they want, because they get stuck on the should that they've cached in their heads.

I've repeatedly tried to help writers who complain that not enough people read what they write.  I explain that if you want to be read by a lot of people, you need to write something that a lot of people want to read.  This seems obvious to me, but I'm always immediately attacked by indignant writers saying that they want to write great fiction, and that one should write only to please oneself in order to write great fiction.  Sometimes these are the same people who complained that they want more people to read what they write.

Why does their desire to write great fiction take complete precedence over their desire to have readers?  Because they have cached that desire as a should.  (They haven't cached a should for their goal to get more readers because that goal arose much later, after they had already learned to write well and discovered, to their horror, that just writing well doesn't bring you readers.)  For a moral agent, shoulds trump wants, by definition.

I've explained before that I don't think there is any deep difference between wants and shoulds.  The English language doesn't pretend there is; we say "I should do X" both to mean "I have a moral obligation to do X" and "I need to do X to satisfy my goals."  The problem is that most people think there is a difference, and that shoulds are more important.  They have a want, they figure out what they need to do to satisfy it, they think aloud to themselves that they should do it, and boom, they have lexically convinced themselves that they have a moral obligation to do it.

The raw-experience dogma: Dissolving the “qualia” problem

2 metaphysicist 16 September 2012 07:15PM

[Cross-posted.]

1. Defining the problem: The inverted spectrum

Philosophy has been called a preoccupation with the questions entertained by adolescents, and one adolescent favorite concerns our knowledge of other persons’ “private experience” (raw experience or qualia). A philosophers’ version is the “inverted spectrum”: how do I know you see “red” rather than “blue” when you see this red print? How could we tell when we each link the same terms to the same outward descriptions? We each will say “red” when we see the print, even if you really see “blue.”

The intuition that allows us to be different this way is the intuition of raw experience (or of qualia). Philosophers of mind have devoted considerable attention to reconciling the intuition that raw experience exists with the intuition that inverted-spectrum indeterminacy has unacceptable dualist implications, making the mental realm publicly unobservable. But it's time for nihilism about qualia, whose claim to exist rests solely on the strength of a prejudice.

A. Attempted solutions to the inverted spectrum.

One account would have us examine which parts of the brain are activated by each perception, but then we rely on an unverifiable correlation between brain structures and “private experience.” With only a single example of private experience—our own—we have no basis for knowing what makes private experience the same or different between persons.

A subtler response to the inverted spectrum is that red and blue as experiences are distinct because red looks “red” due to its being constituted by certain responses, such as affect. Red makes you alert and tense; blue, tranquil or maybe sad. What we call the experience of red, on this account, just is the sense of alertness, and other manifestations. The hope is that identical observable responses to appropriate wavelengths might explain qualitative redness. Then, we could discover we experience blue when others experience red by finding that we idiosyncratically become tranquil instead of alert when exposed to the long wavelengths constituting physical red. This complication doesn’t remove the radical uncertainty about experiential descriptions. Emotion only seems more capable than cognition of explaining raw experience because emotional events are memorable. The affect theory doesn't answer how an emotional reaction can constitute a raw subjective experience.

B. The “substitution bias” of solving the “easy problem of consciousness” instead of the “hard problem.”

As in those examples, attempts at analyzing raw experience commonly appeal to the substitution process that psychologist Daniel Kahneman discovered in many cognitive fallacies. Substitution is the unreflective replacement of a hard question with a related easy one. In the philosophy of mind, the distinct questions are actually termed the “easy problem of consciousness” and the “hard problem of consciousness,” and errors regarding consciousness typically are due to substituting the “easy problem” for the “hard,” where the easy problem is to explain some function that typically accompanies “awareness.” The philosopher might substitute knowledge of one’s own brain processes for raw experience; or, as in the previous example, experience’s neural accompaniments or its affective accompaniments. Avoiding the “substitution bias” is particularly hard when dealing with raw awareness, an unarticulated intuition; articulating it is a present purpose.

2. The false intuition of direct awareness

A. Our sense that the existence of raw experience is self-evident doesn’t show that it is true.

The theory that direct awareness reveals raw experience has long been almost sacrosanct in philosophy. According to the British Empiricists, direct experience consists of sense data and forms the indubitable basis of all synthetic knowledge. For the Continental Rationalist Descartes, too, my direct experience (“I think”) indubitably proves my existence.

We do have a strong intuition that we have raw experience, the substance of direct awareness, but we have other strong intuitions, some of which turn out true and others false. We have an intuition that space is necessarily flat, an intuition proven false only with non-Euclidean geometries in the 19th century. We have an intuition that every event has a cause, which determinists believe but indeterminists deny. Sequestered intuitions aren’t knowledge.

B. Experience can’t reveal the error in the intuition that raw experience exists.

To correct wayward intuitions, we ordinarily test them against each other. A simple perceptual illusion illustrates: the popular Müller-Lyer illusion, where arrowheads on a line make it appear shorter than an identical line with the arrowheads reversed. Invoking the more credible intuition that measuring the lines finds their real length convinces us of the intuitive error that the lines are unequal. In contrast, we have no means to check the truth of the belief in raw experience; it simply seems self-evident, but it might seem equally self-evident if it were false.

C. We can’t capture the ineffable core of raw experience with language because there’s really nothing there.

One task in philosophy is articulating the intuitions implicit in our thinking, and sometimes rejecting the intuition should result from concluding it employs concepts illogically. What shows the intuition of raw experience is incoherent (self-contradictory or vacuous) is that the terms we use to describe raw experience are limited to the terms for its referents; we have no terms to describe the experience as such, but rather, we describe qualia by applying terms denoting the ordinary cause of the supposed raw experience. The simplest explanation for the absence of a vocabulary to describe the qualitative properties of raw experience is that they don’t exist: a process without properties is conceptually vacuous.

D. We believe raw experience exists without detecting it.

One error in thinking about the existence of raw experience comes from confusing perception with belief, which is conceptually distinct. When people universally report that qualia “seem” to exist, they are only reporting their beliefs—despite their sense of certainty. Where “perception” is defined as a nervous system’s extraction of a sensory-array’s features, people can’t report their perceptions except through beliefs the perceptions sometimes engender: I can’t tell you my perceptions except by relating my beliefs about them. This conceptual truth is illustrated by the phenomenon of blindsight, a condition in which patients report complete blindness yet, by discriminating external objects, demonstrate that they can perceive them. Blindsighted patients can report only according to their beliefs, and they perceive more than they believe and report that they perceive. Qualia nihilism analyzes the intuition of raw experience as perceiving less than you believe and report you perceive, the reverse of blindsight.

3. The conceptual economy of qualia nihilism pays off in philosophical progress

Eliminating raw experience from ontology produces conceptual economy. A summary of its conceptual advantages:

    A. Qualia nihilism resolves an intractable problem for materialism: physical concepts are dispositional, whereas raw experiences concern properties that seem, instead, to pertain to noncausal essences. If raw experience were coherent, we could hope for a scientific insight, although no one has been able to define the general character of such an explanation. Removing a fundamental scientific mystery is a conceptual gain.
 
    B. Qualia nihilism resolves the private-language problem. There seems to be no possible language that uses nonpublic concepts. Eliminating raw experience allows explaining the absence of a private language by the nonexistence of any private referents.

    C.  Qualia nihilism offers a compelling diagnosis of where important skeptical arguments regarding the possibility of knowledge go wrong. The arguments—George Berkeley’s are their prototype—reason that sense data, being indubitable intuitions of direct experience, are the source of our knowledge, which must, in consequence, be about raw experience rather than the “external world.” If you accept the existence of raw experience, the argument is notoriously difficult to undermine logically because concepts of “raw experience” truly can’t be analogized to any concepts applying to the external world. Eliminating raw experience provides an effective demolition; rather than the other way around, our belief in raw experience depends on our knowledge of the external world, which is the source of the concepts we apply to fabricate qualia.

4. Relying on the brute force of an intuition is rationally specious.

Against these considerations, the only argument for retaining raw experience in our ontology is the sheer strength of everyone’s belief in its existence. How much weight should we attach to a strong belief whose validity we can't check? None. Beliefs ordinarily earn a presumption of truth from the absence of empirical challenge, but when empirical challenge is impossible in principle, the belief deserves no confidence.

Elitism isn't necessary for refining rationality.

-20 Epiphany 10 September 2012 05:41AM

  Note:  After writing this post, I realized there's a lot I need to learn about this subject.  I've been thinking a lot about how I use the word "elitism" and what it meant to me.  I was unaware that there are a large number of people who use the word to describe themselves and mean something totally different from the definition that I had.  This resulted in my perception that people who were using the word to describe themselves were being socially inept.  I now realize that it's not a matter of social ineptness, that it may be more of a matter of political sides.  I also realized that mind-kill reactions may be influencing us here (myself included).  So, now my goal is to make sure I understand both sides thoroughly to transcend these mind-kill reactions and explain to others how I accomplished this so that none of us has to have them.  I think these sides can get along better.  That is what I ultimately want - for the gifted population and the rest of the world to understand one another better, for the privileged and the disadvantaged to understand one another better, and for the tensions between those groups to be reduced so that we can work together effectively.  I realize that this is not a simple undertaking, but this is a very important problem to me.  I see this being an ongoing project in my life.  If I don't seem to understand your point of view on this topic, please help me update.  I want to understand it.

 

TLDR: OMG a bunch of people seem to want to use the word "elitist" to describe LessWrong but I know that this can provoke hatred.  I don't want to be smeared as an elitist.  I can't fathom why it would be necessary for us to call ourselves "elitists".

 

I have noticed a current of elitism on LessWrong.  I know that not every person here is an elitist, but there are enough people here who seem to believe elitism is a good thing (13 upvotes!?) that it's worth addressing this conflict.  In my experience, the word "elitism" is a triggering word - it's not something you can use easily without offending people.  Acknowledging intellectual differences is a touchy subject also, very likely to invite accusations of elitism.  From what I've seen, I'm convinced that using the word "elitism" casually is a mistake, and referring to intellectual differences incautiously is also risky.

Here, I analyze the motives behind the use of the word elitism, make a suggestion for what the main conflict is, mention a possible solution, talk about whether the solution is elitist, what elitism really means, and what the consequences may be if we allow ourselves to be seen as elitists.

The theme I am seeing echoed throughout the threads where elitist comments surfaced is "We want quality" and "We want a challenging learning environment".  I agree that quality goals and a challenging environment are necessary for refining rationality, but I disagree that elitism is needed.

I think the problem comes in at the point where we think about how challenging the environment should be.  There's a conflict between the website's main vision: spreading rationality (detailed in: Rationality: Common Interest of Many Causes) and striving for the highest quality standards possible (detailed in Well-Kept Gardens Die By Pacifism).

If the discussions are geared for beginners, advanced people will not learn.  If the discussions are geared for advanced people, beginners are frustrated.  It's built into our brains.  Psychologist Mihaly Csikszentmihalyi, author of "Flow: The Psychology of Optimal Experience," regards flow, the feeling of motivation and pleasure you get when you're appropriately challenged, to be the secret to happiness, and he explains that if you aren't appropriately challenged, you're going to feel either bored or frustrated, depending on whether the challenge is too small or too great for your ability level.

Because our brains never stop rewarding and punishing us with flow, boredom and frustration, we strive for that appropriate challenge constantly.  Because we're not all at the same ability level, we're not all going to flow during the same discussions.  We can't expect this to change, and it's nobody's fault.

This is a real conflict, but we don't have to choose between the elitist move of blocking everyone who's not at our level and the flow-killing move of letting the challenge level in discussions decrease to the point of everyone's apathy - we can solve this.

Why bother to solve it?  If your hope is to raise the sanity waterline, you cannot neglect those who are interested in rational thought but haven't yet gotten very far.  Doing so would limit your impact to a small group, failing to make a dent in overall sanity.  If you neglect the small group of advanced rationalists, then you've lost an important source of rational insights that people at every level might learn from and you will have failed to attract the few and precious teachers who will assist the beginners in developing further faster.

And there is a solution, summarized in one paragraph: make several areas divided by level of difficulty.  Advanced learners can learn in the advanced area, beginners in the beginner area.  That way everyone learns.  Not every advanced person is a teacher, but if you put a beginner area and an advanced area on the same site, some people from the advanced area will help get the beginners further.  One-on-one teaching isn't the only option: advanced people might write articles for beginners and reach thousands at once.  They might write practice quizzes for them to do (not hard to implement from a web developer's perspective).  There are other possibilities, but I won't get into them here.
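To illustrate how little a practice quiz demands of a web developer, here is a minimal sketch in Python.  Everything in it is hypothetical (the `QUIZ` data, the `grade` helper, the sample questions); it is not a feature of LessWrong, just a sketch of the kind of self-check an advanced member could put together for beginners.

```python
# Hypothetical sketch: a multiple-choice practice quiz as plain data,
# plus a grader. A real site would store this in a database and render
# it as a form, but the core logic is this small.

QUIZ = [
    {
        "question": "A test has a 5% false-positive rate for a condition "
                    "with 1% prevalence. Is a positive result more likely "
                    "true or false?",
        "choices": ["More likely true", "More likely false"],
        "answer": 1,  # base-rate neglect: most positives are false
    },
    {
        "question": "When you encounter evidence against a belief, should "
                    "you update toward or away from that belief?",
        "choices": ["Toward", "Away"],
        "answer": 1,
    },
]

def grade(responses):
    """Count how many responses match the answer key.

    `responses` is a list of chosen indices, one per quiz item.
    Returns (number correct, number of questions).
    """
    correct = sum(
        1 for item, choice in zip(QUIZ, responses)
        if choice == item["answer"]
    )
    return correct, len(QUIZ)

# Example: a learner misses the first question and gets the second.
print(grade([0, 1]))  # (1, 2)
```

The point is only that the barrier here is willingness, not engineering effort.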

This brings me to another question: if LessWrong separates the learning levels, would the separation qualify as elitism?

I think we can all agree that people don't learn well in classes that are too easy for them.  If you want advanced people to improve, it's an absolute necessity to have an advanced area.  I'm not questioning that.  I'm questioning whether it qualifies under the definition of elitism:

e·lit·ism

noun
1.  practice of or belief in rule by an elite.
2.  consciousness of or pride in belonging to a select or favored group.

(dictionary.com)

Spreading rationality empowers people.  If you wanted power over them, you'd hoard it.  By posting our rational insights in public, we share them.  We are not hoarding them and demanding to be made rulers because of our power; we are giving them away and hoping they improve the world.

Using rationality as a basis for rule makes no sense anyway.  If you have a better map of the territory, people should update because you have a better map (assuming you overcome inferential distances).  Forcing an update because you want to rule would amount only to an appeal to authority or to coercion.  That's not rational.  If you show them a more complete map and they update, that isn't about you; you should be updating your map when the time comes, too.  It's the territory that rules us all.  You are only sharing your map.

For the second definition, there are two pieces.  "Consciousness of or pride in" and "select or favored group".  I can tell you one thing for certain: if you form a group of intellectual elitists, they will not be considered "select or favored" by the general population.  They will be treated as the scum on the bottom of scum's shoe.

For that reason, any group of intellectual elitists will quickly become an oxymoron.  First, they'll have to believe they are "select and favored" when they are not, perhaps justifying this with "we are so deserving of being select and favored that no one can see it but us" (which may leave them hopelessly unable to update).  Second, their attitude of superiority is likely to provoke such anti-intellectual counter-prejudice that the resulting oppression could render them ineffectual.  So hated that they are powerless to get anywhere, their "superiority" will make them second-class citizens.  You don't achieve elite status by being an intellectual elitist.

In the event that LessWrong were considered "select" or "favored" by the outside population, would "consciousness" of that qualify the members as elitists?  If you use the literal definition of "consciousness," you can claim a literal "yes" - but it would mean that simply acknowledging a (hypothetical) fact (established by independent market research surveys, we'll say) should be taken as automatic proof that you're an arrogant scumbag.  That would be committing Yvain's "worst argument in the world": guilt by association.  We can't assume that everyone who acknowledges popularity or excellence is guilty of wrongdoing.

So let's ask this: Why does elitism have negative connotations? What does it REALLY mean when people call a group of intellectuals "elitists"?

I think the answer to this is in Jane Elliott's blue eyes/brown eyes exercise.  If you're not familiar with it, a schoolteacher named Jane Elliott, horrified by the assassination of Martin Luther King, Jr., decided to teach her class a lesson about prejudice.  She divided the class into two groups - brown eyes and blue eyes.  She told them things like brown-eyed kids are smarter and harder-working than blue-eyed kids.  The children reacted dramatically:

"When several of the brown-eyed kids who had problems reading went to their primer that morning, they whizzed through sentences"

"A smart blue-eyed girl, who had never had problems with her multiplication tables, started making all kinds of mistakes."

"During afternoon recess, the girl came running back to Room 10, sobbing. Three brown-eyed girls had ganged up on her, and one had hit her, warning, 'You better apologize to us for getting in our way because we're better than you are.'"

When people complain of elitism, what they seem to be reacting to is a concern that feeling "better than others" will be used as an excuse for abuse - either via coercion, or by sabotaging their sense of self-worth and intellectual performance.

The goal of LessWrong is to spread rationality in order to make a bigger difference in the world.  This has nothing to do with abusing people.  Just because some people with advanced abilities use them as an excuse to abuse others doesn't mean anybody here has to do the same.  Nor does being aware of our own advanced abilities mean we must commit Yvain's "worst argument in the world" by accepting the guilt that comes with elitism.  We can reject this sort of thinking.  If people tell you that you're an elitist because you want a challenging social environment to learn in, or because you want to make the project that is the LessWrong blog as high-quality as it can be, you can refuse to be labeled guilty.

Refusing to be found guilty by association takes more work than accepting the status quo.  But what would happen if we allowed ourselves to be disrespected for challenging ourselves and striving for quality?  If we agree with our critics, we're treating positive character traits as part of the problem.  That encourages people to shoot themselves in the foot - and they can point that same gun at all of humanity's potential, demanding that nobody seek the challenging social environment they need to grow, and that nobody set learning goals to strive for, because quality standards are "elitist."  Allowing a need for challenges and standards to be smeared as elitism will only hinder the spread of rationality.

How many may forgo refining rationality because they worry it will make them look like an elitist?

These are the reasons I choose to be non-abusive and to send a message to the world that non-abusive intellectuals exist.

What do you think of this?

 

A Beginner's Guide to Irrational Behavior - free Coursera class

1 Utopiah 17 July 2012 03:26PM

"learn about some of the many ways in which people behave in less than rational ways, and how we might overcome these problems."

starts 25 March 2013

cf https://www.coursera.org/course/behavioralecon

see also http://lesswrong.com/lw/d3w/coursera_behavioural_neurology_course/

Critical Thinking in Global Challenges - free Coursera class

-5 Utopiah 17 July 2012 03:23PM

"develop and enhance your ability to think critically, assess information and develop reasoned arguments in the context of the global challenges facing society today."

starts 28 January 2013

cf https://www.coursera.org/course/criticalthinking

see also http://lesswrong.com/lw/dni/a_beginners_guide_to_irrational_behavior_free/
and http://lesswrong.com/lw/d3w/coursera_behavioural_neurology_course/

[link] Cargo Cult Debugging

-5 MarkL 09 July 2012 04:05PM

[...] Here is the right way to address this bug:

  1. Learn more about manifests, so I know what a good one looks like.
  2. Take a look at the one we’re generating for Kiln; see if anything obvious screams out.
  3. If so, dive into the build system [blech] and have it fix up the manifest, or generate a better one, or whatever’s involved here. This part’s a second black box to me, since the Kiln Storage Service is just a py2exe executable, meaning that we might be hitting a bug in py2exe, not our build system.
  4. If not, burn a Microsoft support ticket so I can learn how to get some more debugging info out of the error message.

Here’s the first thing I actually did:

  1. Look at the executable using a dependency checker to see what DLLs it was using, then make sure they were present on Windows 2003.

This is not the behavior of a rational man. [...]

http://bitquabit.com/post/cargo-cult-debugging/

 

Let's all learn stats!

13 mstevens 12 June 2012 03:31PM

I want to become Stronger!

Udacity are running an Introduction to Statistics course starting on the 25th June 2012.

Many of us could stand to learn some more stats, I certainly could. This seems like a great opportunity!

It is mandatory for all LWers to enroll in this course.

Update: the last line was a joke. Obviously people are not finding it funny. Sorry.
