Rationality Quotes April 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
In recent years, I've come to think of myself as something of a magician, and my specialty is pulling the wool over my own eyes.
--Kip W
Leonid: Without a purpose, a man is nothing.
Newton: Yes. But we wonder...do you share our gift? Do you have the necessary vision? Do you know the final fate of man?
Leonid: How could anyone know things like that?
Council: The Greater Science. The Quiet Math. The Silent Truth. The Hidden Arts. The Secret Alchemy.
Newton: Every question has an answer. Every equation has a solution.
Isn't one of the implications of Gödel's incompleteness theorem that there will always be unanswerable questions?
Only if the questioner is consistent.
And there's no way to tell whether the questioner is inconsistent, or there exist unanswerable questions, right? [In any case, I would be greatly astonished if "What is the final fate of man?" was found to be isomorphic to a human Gödel sentence ;-) ]
The point of this one isn't clear.
I guess it probably should have been broken up into a couple of shorter ones, but it was a single, short exchange and I just couldn't resist. That the question of the final fate of man can, like any question, be answered with a greater science, with the hidden arts... this is essentially the message of transhumanist rationality, and it was beautifully phrased here. "Without a purpose, a man is nothing"... this really should have been off on its own, in retrospect, but its meaning is a little bit less obscure, I think.
Human beings have been designed by evolution to be good pattern matchers, and to trust the patterns they find; as a corollary their intuition about probability is abysmal. Lotteries and Las Vegas wouldn't function if it weren't so.
-Mark Rosenfelder (http://zompist.com/chance.htm)
-- Bjork
--Razib Khan, source
-- Fahrenheit 451
I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.
Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.
Tips for dealing with people with big egos:
On politeness:
People who are exempted:
Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).
In saying this, I don't know whether I'm expanding on your point or disagreeing with it.
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.
That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!
I appreciate your kind words komponisto! You inspire me to live up to them.
I'll add to this that actually paying attention to wedrifid is instructive here.
My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks, and
3) honoring the obligations of implicit social alliances when an ally is attacked.
I endorse this and have been trying to get better about #3 myself.
The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)
I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it's failing to reflect on whether I endorse A. If I do neither, then the community doesn't degenerate into tribal warfare, it degenerates into chaos.
Admittedly, chaos can be more fun, but I don't really endorse it.
All of that said, I do recognize that explicitly talking about "social alliances" (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn't help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).
(I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.)
Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.
Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.)
I blame the stroke, though.
Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.
It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.
I really like your illustration here. To the extent that this is what you were trying to convey by "3)" in your analysis of wedrifid's style then I endorse it. I wouldn't have used the "alliances" description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I'm happy with it as a simple model.
Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of "Sam", "Pat" and "A". In particular, there are many behaviors "A" whose execution will immediately place the victim of said behavior into the role of "ally that I am obliged to support".
Yeah, agreed about the distracting phrasing. I find it's a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.
Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.
Also, if you have an articulable model of how you make those judgments I'd be interested, especially if it uses more socially acceptable language than mine does.
Edit: Also, I'm really curious as to the reasoning of whoever downvoted that. I commit to preserving that person's anonymity if they PM me about their reasoning.
For what it is worth, sampling over time suggests multiple people - at one point there were multiple upvotes.
I'm somewhat less curious. I just assumed it was people from the 'green' social alliance acting to oppose the suggestion that people acting out the obligations of social allegiance is a desirable and necessary mechanism by which a community preserves that which is desired and prevents chaos.
Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?
Instrumental.
Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don't know how I could begin to itemize it.
To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.
Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.
If you mean to say further that it doesn't affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo "ally."
People's estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don't engage in discussion, because I no longer trust that they will engage reliably.
Not that I can think of, but honestly this question bewilders me, so it's possible that you're asking about something I'm not even considering. What kind of alliances do you have in mind?
It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid, with "being a good discussion partner" a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)
I didn't have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be, for example, that you're looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.
This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn't seem to be a very accurate description of reality. A lot of information - and information I consider important at that - can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive 'timidity' can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you're not timid seems to be a mistake.
In my own experience - from back when I was timid in the extreme - the sort of "sticking up for" someone, jumping to their defense against (unfair or undesirable) aggression, is one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.
Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.
I find that my brain doesn't automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven't found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.
I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I'd be curious to find out.
Fair enough; it may be that I overestimate the value of what I'm calling trust here.
Just for my own clarity, when you say that what I'm doing is signaling my lack of timidity, are you referring to my actual behavior on this site, or are you referring to the behavior we've been discussing on this thread (or are they equivalent)?
I'm not especially looking to make real-life friends, though there are folks here who I wouldn't mind getting to know in real life. Ditto work contacts. I have no interest in working for SI.
I was talking about the abstract behavior that we were discussing.
Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.
Well... I've seen people use nearly that exact phrase to great effect at times... But that's not the sort of thing you'd want to include in a 'basics' list either.
Just as with fashion, it is best to follow the rules until you understand the rules well enough to know exactly how they work and why a particular exception applies!
Sure. Then again, if you'd only intended that for chaosmosis' benefit, I assume you'd have PMed it.
It is what we would call an "instrumental rationality" problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos... which you seem to be taking steps towards now!
This discussion is off-topic for the "Rationality Quotes" thread, but...
If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:
Take an article from Wikipedia on a bias that we don't have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.
My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.
No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process.
Maybe I'll eventually write something like that. Not yet.
One day I will write "How to karmawhore with LessWrong comments" if I can work out how to do it in such a way that it won't get -5000 within an hour.
I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.
Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.
Once that's done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a "strategy" onto a run of comments that happened to succeed.
Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)
Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.
That's like getting a black belt in karate by buying one from the martial arts shop. It isn't karmawhoring unless you're getting karma from real people who really thought your comments worth upvoting.
“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?
It is good to have one's comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute's reward -- money -- is of some actual use.
Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.
This would indeed count as "minimal contribution", but still sounds like a lot of work...
Nitpick: cryptography solves this much more neatly.
Of course, people could accuse you of having an efficient way of factorising numbers, but if you do karma is going to be the least of anyone's concerns.
Factorization doesn't enter into it - to precommit to a message that you will later reveal publicly, publish a hash of the (salted) message.
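For concreteness, a minimal sketch of that commitment scheme, assuming Python and SHA-256 (the strategy string is a made-up placeholder):

```python
import hashlib
import os

def commit(message: str) -> tuple[str, bytes]:
    """Return a public commitment (hash digest) plus the salt needed to verify later."""
    salt = os.urandom(16)  # random salt stops people from brute-forcing short messages
    digest = hashlib.sha256(salt + message.encode()).hexdigest()
    return digest, salt

def verify(message: str, salt: bytes, digest: str) -> bool:
    """Check that a revealed message and salt match the earlier commitment."""
    return hashlib.sha256(salt + message.encode()).hexdigest() == digest

# Publish `digest` now; reveal `message` and `salt` after the karma run.
strategy = "Hypothetical strategy: reply early in quote threads, agree wittily."
digest, salt = commit(strategy)
assert verify(strategy, salt, digest)
```

Publishing only the digest reveals nothing about the strategy, but makes the later reveal verifiable.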
But somewhat less transparently. The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken, and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties but give the trusted parties only the encrypted strategy (or a hash thereof).
What kind of side effects? I have no formal training in cryptography, so please forgive me if this is a naive question.
I mean you still have to give the encrypted data to someone. They can't tell what it is but they can see you are up to something. So you still have to use some additional sort of trust mechanism if you don't want the act of giving encrypted fore-notice to influence behavior.
Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
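A minimal sketch of that idea, assuming Python with the Pillow library and simple least-significant-bit embedding (a real scheme would encrypt the message with the password first; that step is omitted here):

```python
from PIL import Image  # Pillow

def embed(cover_path: str, message: str, out_path: str) -> None:
    """Hide `message` in the least significant bit of each pixel's red channel."""
    img = Image.open(cover_path).convert("RGB")
    payload = message.encode() + b"\x00"        # null byte marks the end of the message
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)      # overwrite the red channel's low bit
    out = Image.new("RGB", img.size)
    out.putdata(pixels)
    out.save(out_path, "PNG")                   # lossless format preserves the bits

def extract(stego_path: str) -> str:
    """Recover the hidden message from a stego image produced by embed()."""
    img = Image.open(stego_path).convert("RGB")
    bits = [r & 1 for r, _, _ in img.getdata()]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        if byte == 0:                           # hit the terminator
            break
        out.append(byte)
    return out.decode()
```

Note that the one-bit tweak per pixel is visually invisible, but it only survives lossless formats; saving as JPEG would destroy the payload.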
My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)
IME, per-comment EV is way higher in the HP:MoR discussion threads.
It so is. Karmawhoring in those is easy.
This suggests measuring posts for comment EV.
Now that is an interesting concept. I like where this subthread is going.
Interesting comparisons to other systems involving currency come to mind.
EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties... for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts appear that they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--
...okay, perhaps some sleep is in order first.
It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.
Oh, you want a Quest, not a goal. :-)
In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.
And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.
I nominate this as the Less Wrong Summer Challenge, for everybody.
(One modification I'd make: it shouldn't necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)
That actually sounds fun now that you put it like that!
You just need a reasonably friendly tone. I have a bunch of karma, and I haven't posted any articles yet (though I'm working on it).
Indeed, that would work if karma was merely the goal. But chaosmosis expressed a desire for a "painful and embarrassing process", meaning that the ante and risk must be higher.
This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.
You mean to the extent that any problem at all is a rationality problem, or something else?
Dealing with others' irrationality is very much a rationality problem.
It's a bias, as far as I'm concerned, and something that needs to be overcome. People with egos can be right, but if one can't deal with the fact that they're either right or wrong regardless of their egotism, then one is that much slower to update.
-- Mark Rippetoe, Starting Strength
He's ignoring that people might not like how larger muscles look.
And personally (though I don't care much) I would only care about practical athletic ability, not weight lifting.
I understand this line of thought, but... strength doesn't have to be developed through weights, strength increase doesn't necessarily mean much hypertrophy, and most importantly strength is a prerequisite/accelerator for increasing pretty much all athletic abilities (power, flexibility, endurance...)
I guess the relation between muscle mass and physical attractiveness is non-monotonic, so a marginal increase in muscle mass would make some people look marginally better and other people look marginally worse. (I suspect the median Internet user is in the former group, though.)
ETA: Judging from the picture on Wikipedia, Rippetoe himself looks like someone who would look better if he lost some weight (but I'm a heterosexual male, so my judgement might be inaccurate).
I'm somewhat annoyed that the comments on this thread are vapid, but this might be worth responding to. It doesn't particularly matter whether or not Rippetoe is himself currently ripped -- see this Wikipedia article of yours for his domain expert credentials:
Secondly, notice that he was a competitive powerlifter thirty years ago. Senescence is a bitch.
Why “of yours”? I've never edited it.
I didn't dispute them. The grandparent and great-grandparent are about “how larger muscles look”. I can't see how the passage you quote is relevant to the fact that I think he's ugly.
Hmm. This sort of thing seems plausible, but I wonder how much of it is strength-specific? I've heard of eudaimonic effects for exercise in general (not necessarily strength training) and for mastering any new skill, and I doubt he's filtering those out properly.
Why was this downvoted?
Sample: men who come to this guy to get stronger, I assume?
--1943 Disney cartoon
--Alan Belkin From the Stock Market to Music, via the Theory of Evolution
This was just the first bit that stood out as LW-relevant; he also briefly mentions cognitive bias and touches on the possible benefits of cognitive science to the arts.
--Jonathan Haidt, source
He also talks about how sacredness is one of the fundamental values for human communities, and how liberal/left-leaning theorists don't pay enough attention to it (and refuse to acknowledge their own sacred/profane areas).
I have more to say about his values theory; I'll post some thoughts later.
Update: I wrote a little something; now I'm just gonna ask Konkvistador whether he thinks it's neutral enough or too political for LW.
Please make sure you do. I suspect it will be interesting. :)
Aaron Sloman
— Poe, The Purloined Letter
— Waiting for God (TV Series)
Is there a point to this quote, besides that this Diana character doesn't understand the term 'moral dilemma'?
That the kinds of "moral dilemmas" philosophers tend to contemplate tend to be very different from the kinds of dilemmas people encounter in practice.
Perhaps that it requires significant time and cognitive energy to make difficult decisions in general or reflectively modify one's moral system in particular?
ETA: can someone explain the downvote?
— Jack Vance, The Languages of Pao
Shorter version:
-- Terence, Phormio
My favorite:
The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...
-- Terry Pratchett, Feet of Clay
Sounds like Vimes doesn't like Sherlock Holmes much.
Gee, you think?
Well, the quote made me think of this. Now that I looked up that post I notice that it is downvoted, so perhaps it isn't relevant. But the behavior that Vimes expresses distrust of in the Pratchett quote is pretty much the exact behavior that is used to show off how intelligent/perceptive Holmes is, and which the poster wants to use as an example for rationalists.
It is relevant and obvious. I suppose it was downvoted for the latter.
Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:
Maybe this song won't get downvoted? It's a little more on-topic for LessWrong, even if it does get political at the end. ;)
-- Pete Seeger, "Waist Deep in the Big Muddy"
Quick question: Is this getting downvoted because of the quote or because I talked about downvoting?
(The song itself is a rather amusing lesson in escalation of commitment and sunk cost fallacy, among other things...)
It's too long. This thread is about quotes, not about making others read a whole piece of work you like. Perhaps use the monthly media thread for that purpose?
For this thread you could have perhaps reduced the quotable to this:
or perhaps possibly even two verses would be acceptable like this:
and just linked to some other page where one could see the whole song.
But not the whole damn thing.
Thanks.
Bruce Sterling
A shortcut for making less-biased predictions, taking base averages into account.
Regarding this problem: "Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?"
If you know the scores of two different golfers on day 1, then you know more than if you know the score of only one golfer on day 1. You can't predict the direction in which regression to the mean will occur if your data set is a single point.
The following all have different answers:
(The answer is 39700; I'm probably not going to improve with practice, and you have no way to know if 39700 is unusually good or unusually bad.)
(The answer is some number less than 39700; knowing that my friend got a lower score gives you a reason to believe that 39700 might be higher than normal.)
(The answer is some number higher than 39700, because I'm no longer an absolute beginner.)
True, a single data point can't give you knowledge of regression effects. In the context of the original problem, Kahneman assumed that you had access to the average score of all the golfers on the first day.
I'm not sure it's true that the answer is higher than 39700 in this case. It depends on whether you have knowledge of how people generally improve, and on whether your score is higher than average for an absolute beginner. Since unknown factors could adjust the score either up or down, I would probably just guess that it will be the same the next day.
The existence of factors which could adjust the score either up or down does not indicate which factors dominate. In this case, you have no information which suggests that 39700 is either above or below the median, and therefore these two cases must be assigned equal probability - canceling out any "regression to the mean" effects you could have predicted. Similar arguments apply to other effects which change the score.
Not quite; you have some background information about the range of scores video games usually employ.
And, I suppose, information about the probability of people mentioning average scores. I concede that either factor could justify arguing that the score should decrease.
So you estimate "regression to the mean" effects as zero, and base your estimate on any other effects you know about and how strong you think they are. That makes sense. Thanks for the correction!
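A minimal sketch of the shrinkage estimate this implies, with made-up golf scores (lower is better) and an assumed day-to-day correlation:

```python
def predict_day2(day1_score: float, field_mean: float, r: float = 0.5) -> float:
    """Regress the day-1 score toward the field mean; r is the assumed
    correlation between day-1 and day-2 scores (skill persists, luck doesn't)."""
    return field_mean + r * (day1_score - field_mean)

print(predict_day2(66, 72))  # 69.0: a great day 1 predicts a merely good day 2
print(predict_day2(78, 72))  # 75.0: a bad day 1 predicts a merely below-average day 2
```

When both the field mean and r are unknown, as in the single-data-point video-game case above, there is no basis for shrinking the estimate in either direction.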
It reminds me of E.T. Jaynes' explanation of why time-reversible dynamic laws for (say) sugar molecules in water lead to a time-irreversible diffusion equation.
--Nietzsche
--Oswald Spengler, The Decline of the West
That sounds deep, but it has nothing to do with rationality.
Not really, for example it is actually pretty clearly connected to fun theory.
Chinese proverb, meaning "the onlooker sees things more clearly", or literally, "the player is lost, the spectator clear"
In personal development workshops, the saying is, "the one with the mike in their hand is the last to see it." Of doctors and lawyers it is said that one who treats himself, or acts in court for himself, has a fool for a client.
Chinese proverb, "three men make a tiger", referring to a semi-mythological event during the Warring States period:
-- Wikipedia
--Joseph Conrad, Heart of Darkness
On specificity and sneaking in connotations; useful for the liberal-minded among us:
-celandine13
How about:
Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of "racist" nevertheless may be true with probabilities significantly above zero.
Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn't believe that making such inferences is grossly immoral as a matter of principle.)
Both (1) and (2) fall squarely under the common usage of the term "racist," and yet I don't see how they would fit into the above cited classification.
Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?
That (1) only makes sense if there is a "standard" definition of racist (and it's based on what people believe rather than/as well as what they do). The point of the celandine13 quote was indeed that there's no such thing.
The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here. And given that we're running on corrupted hardware, I suspect that someone who does try to perform "Bayesian inference that somehow involves probabilities conditioned on the race of a person" ends up subconsciously double-counting evidence and therefore ends up with less accurate results than somebody who doesn't. (As for cases when the evidence from race is not so easy to screen off... well, I've never heard of anybody being accused of racism for pointing out that Africans have longer penises than Asians.)
Minor note: this appears to actually not be the case. Most studies have found no correlation between race and penis size. See for example here. The only group where there may be some substantial difference is Chinese babies, who may have smaller genitalia at birth, but this doesn't appear to carry over to a significant difference by the time the children have reached puberty. Relevant study.
Huh, according to this map the average Congolese penis is nearly twice as long as the average South Korean penis. (ISTR that stretched flaccid length doesn't perfectly correlate with erect length.)
Oddly salient for such a trivial result. Should a study qualify for an Ig Nobel if you can use it to settle bar bets?
I have seen accusations for racism as responses to people pointing that out.
Also, according to the U.S. Supreme Court, even if race is screened off, your actions can still be racist or something.
In real life, you don't have the luxury of gathering forensic evidence on everyone you meet.
I'm not talking about forensic evidence. Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour. Heck, even just knowing what their job is would screen off much of it.
What if verbal ability and quantitative ability are often decoupled?
I wasn't talking about "verbal ability" (which, to the extent that can be found out in ten minutes, correlates more with where someone grew up than with IQ), but about what they say, e.g. their reaction to finding out that I'm a physics student (though for this particular example there are lots of confounding factors), or what kinds of activities they enjoy.
If you're able to drive the conversation like that, you can get information about IQ, and that information may have a larger impact than race. But to "screen off" evidence means making that evidence conditionally independent: once you knew their level of interest in physics, race would give you no information about their IQ. That isn't the case.
Imagine that all races have Gaussian IQ distributions with the same standard deviation, but different means, and consider just the population of people whose IQs are above 132 ('geniuses' for this comment). In such a model, the mean IQ of black geniuses will be smaller than the mean IQ of white geniuses which will be smaller than the mean IQ of Jewish geniuses- so even knowing a lower bound for IQ won't screen off the evidence provided by race!
Huh, sure, if the likelihood is a reversed Heaviside step. If the likelihood is itself a Gaussian, then the posterior is a Gaussian whose mean is the weighted average of that of the prior and that of the likelihood, weighted by the inverse squared standard deviations. So even if the st.dev. of the likelihood was half that of the prior for each race, the difference in posterior means would shrink by five times.
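A quick numeric check of that five-fold shrinkage, as a minimal Python sketch with made-up group means:

```python
def posterior_mean(prior_mu: float, prior_sd: float, obs_mu: float, obs_sd: float) -> float:
    """Gaussian prior times Gaussian likelihood: the posterior mean is the
    precision-weighted average of the two means."""
    w_prior, w_obs = 1 / prior_sd**2, 1 / obs_sd**2
    return (w_prior * prior_mu + w_obs * obs_mu) / (w_prior + w_obs)

# Two hypothetical group priors 10 points apart; the same observation of one
# individual, with likelihood st.dev. half the prior st.dev. (so 4x the weight):
low = posterior_mean(100, 15, 120, 7.5)
high = posterior_mean(110, 15, 120, 7.5)
print(high - low)  # 2.0 -- the 10-point prior gap shrinks by a factor of five
```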
Right - there's lots of information out there that will narrow your IQ estimate of someone else more than their race will, like that they're a professional physicist or a member of MENSA, but evidence only becomes worthless when it's independent of the quantity you're interested in given the other things you know.
Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.
There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.
...
I facepalmed. Really, Eric? Sorry, I don't think that a moral realist is attuned enough to the nuances and ethical knots involved to be a judge on this issue. I don't know, he might be an excellent scientist, but it's extremely stupid to be so rash when you're attempting serious contrarianism.
Yep, let's all try to overcome bias really really hard; there's only one solution, one desirable state, there's a straight road ahead of us; Kingdom of Rationality, here we come!
(Yvain, thank you a million times for that sobering post!)
You know, there are countries where the intentional homicide rate is smaller than in John Derbyshire's country by nearly an order of magnitude.
That thing doesn't exist in all countries. Plus, I think the reason why you don't see that many two-digit-IQ people among (say) physics professors is not that they don't make it, it's that they don't even consider doing that, so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.
That's not the point. The point is that the black physics professor is less smart than the Jewish physics professor.
But the difference is smaller than for the median black person and the median Jewish person. (I said "even just knowing what their job is would screen off much of it", not "all of it".)
A bell curve has both a mean and a standard deviation; you can have a 'race' with a lower mean and a larger standard deviation. If you then filter by some reliable accomplishment, such as solving a problem that the smartest people in the world attempted and failed at, you may end up in a situation where the population with the lower mean and larger standard deviation has fewer people who attain this, but those who do are on average smarter. Set the bar even higher, and the population with the lower mean and larger standard deviation has more people attaining it. Also, the Gaussian distribution can stop being a good approximation very far away from the mean.
edit: and to reply to the great-grandparent: I bet I can divide the world into a category that includes you and a category that does not, in such a way that the category including you has a substantially higher crime rate, or is otherwise bad. Actually, if you are from the US, I have a pretty natural 'cultural' category where your murder rate is about 5-10x of normal for such average income. Another category is the 'racists', i.e. the people who use skin colour as evidence. Those people also behave substantially badly. You of course want to use skin colour as evidence, and don't want me to use your qualities as evidence. See if I care. If you want to use skin colour as evidence, lumping together everyone who's black, I want to use 'use of skin colour as evidence', lumping you together with all the nasty racists.
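A minimal sketch of the tail crossover described above, assuming Python with scipy and made-up parameters:

```python
from scipy.stats import norm

# Hypothetical populations: B has a lower mean but a larger standard deviation.
A = norm(loc=100, scale=15)
B = norm(loc=90, scale=17)

# The tail fractions cross where the z-scores are equal, i.e. at x = 175 here.
for bar in (132, 190):
    print(bar, A.sf(bar), B.sf(bar))
# At 132, fewer of B clear the bar than A; at 190, more of B do.
```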
Apart from race, isn't this a problem with English, or language in general? We use the same words for varying degrees of a certain notion, and people cherry-pick the definitions they want to respond to. If I call someone a conservative, is it a compliment or an insult? That depends on both of our perceptions of the word conservative as well as our outlook on ourselves as political beings; however, beyond that, I could mean to say that the person is fiscally conservative, but as the current conservative candidates are showing conservatism to be far-right extremism, the person may think, "Hey! I'm not one of those guys."
I think if someone wants to argue with you, you'd be hard-pressed to speak eloquently enough to provide an impenetrable phrase that does not open itself to a spectrum of interpretation.
Sure. "Conservative" isn't a fixed political position. Quite often, it's a claim about one's political position: that it stands for some historical good or tradition. A "conservative" in Russia might look back to the good old days of Stalin whereas a "conservative" in the U.S. would not appreciate the comparison. It's also a flag color; your "fiscal conservative" may merely not want to wave a flag of the same color as Rick Santorum's.
This is missing Racist4:
Someone whose preferences result in disparate impact.
So if a minority takes the Implicit Association Test and finds out they're biased against the dominant "race" in their area, are they a Racist1, or not?
I would also really question the validity of the Implicit Association Test. It says "Your data suggest a slight implicit preference for White People compared to Black People.", which, given that blacks have been severely under-represented in my social sub-culture for the last 27 years (Punk/Goth), the school I graduated from (Art School), and my professional environments (IT) for the last 20 years, is probably not inaccurate.
However, it also says "Your data suggest a slight implicit preference for Herman Cain compared to Barack Obama." Which is nonsense. I have a STRONG preference for Herman Cain over Barack Obama.
Looks like we need more "racism"s :D A common definition of racism that reflects the intuitions you bring up is "racism is prejudice plus power" (e.g., here), which isn't very useful from a decision-making point of view but which is very useful when looking at racism as a functional thing experienced by some group.
Where would someone like Steve Sailer fit in this classification?
Indeed, as strange as it might sound (but not to those who know what he usually blogs about), Steve Sailer seems to genuinely like black people more than average, and I wouldn't be surprised at all if a test showed he wasn't biased against them or was less biased than the average white American.
He also doesn't seem like a Racist2 from the vast majority of his writing; painting him as a Racist3 is plainly absurd.
What evidence leads to this conclusion?
He published his IAT results and he's proposed policies that play to the strengths of blacks.
Historically, proposing policies that are set to help the specific strengths of a minority group is not generally indicative of actually positive feelings about those groups.
The IAT is the best measure of 'genuinely like X people' we have now, though that's not saying much. (I believe the only place he published it is VDare, which is currently down.)
What are the competing hypotheses and competing observations, here?
...for a particular value of genuine. (See this, BTW.)
It seems to me the natural interpretation for "genuine" is "unconscious," and if that post is relevant, it seems that it argues for more relative importance for the IAT over stated positions and opinions.
What about a "Racist4", someone who assign different moral values to people of different races all other things being equal?
Depends; if the differences in assigned moral values are large enough, they can easily approach Nazi territory pretty quickly. As a thought experiment, consider how many dolphins you would kill to save a single person.
That would be a paleo-nazi. Not many of them around, anymore, and those that are don't get away with much.
Why make up a new word? Paleoconservatives and smarter white nationalists (think Jared Taylor) seem to often fit the bill.
Based on a couple interviews I've seen with unabashed Racist3s, I think that they would tend to fulfill that criterion.
Edit: Requesting clarification for downvote?
Surely one of the definitions of "racist" should contain something about thinking that some races are better than others. Or is that covered under "neo-Nazi"?
I'm pretty sure that's covered under Racist1. Note the word "negative".
Though it's odd that Racist1 specifically refers to "minorities". The entire suite seems to miss folks that favor a "minority" race.
Not really; it is perfectly possible to be explicitly aware of one's racial preferences and not really be bothered by having such preferences (at least no more than one is bothered by liking salty food or green parks), yet not be a Nazi or prone to violence.
Indeed, I think a good argument can be made not only that a large number of such people lived in the 19th and 20th centuries, but that we probably have millions of them living today in, say, a place like Japan.
And that they are mostly pretty decent and ok people.
Edit: Sorry! I didn't see the later comments already covering this. :)
Negative subconscious attitudes aren't the same thing as (though they might cause or be caused by) conscious opinions that such-and-such people are inferior in some way.
Indeed. For some reason I'm not sure of, I instinctively dislike Chinese people, but I don't endorse this dislike and try to act upon it as little as possible (except when seeking romantic partners -- I think I do get to decide what criteria to use for that).
Can you expand on the difference you see between acting on your (non-endorsed) preferences in romantic partners, and acting on those preferences in, for example, friends?
As for this specific case, I don't happen to have any Chinese friend at the moment, so I can't.
More generally, see some of the comments on this Robin Hanson post: not many of them seem to agree with him.
I don't understand how not having any Chinese friends at the moment precludes you from expanding on the differences between acting on your dislike of Chinese people when seeking romantic partners and acting on it in other areas of your life, such as maintaining friendships.
Yes, the commenters on that post mostly don't agree with him.
That said, I would summarize most of the exchange as:
"Why are we OK with A, but we have a problem with B?"
"Because A is OK and B is wrong!"
Which isn't quite as illuminating as I might have liked.
Since I'm not maintaining any friendships with Chinese people, I can't see what it would even mean for me to act on my dislike of Chinese people in maintaining friendships. As for 'other areas of my life', this means that I attempt to interact with a Chinese-looking beggar the same way I'd interact with a European-looking beggar, to read a paper by an author with a Chinese-sounding name the same way I'd read one by an author with (say) a Polish-sounding name, and so on. (I suspect I might have misunderstood your question, though.)
Ah yes - it's extra-weird that someone isn't allowed in that framework to have conscious racist opinions but not be a jerk about it.
If one has conscious racist opinions, or is conscious that one has unconscious racist opinions (has taken the IAT but doesn't explicitly believe negative things about blacks) but doesn't act on them, it's probably because one doesn't endorse them. I'd class such a person as a Racist1.
I don't think not being an "insensitive jerk" is the same as not acting on one's opinions.
For example, if I think that people who can't do math shouldn't be programmers, and I make sure to screen applicants for math skills, that's acting on my opinions. If I make fun of people with poor math skills for not being able to get high-paying programmer jobs, that's being an insensitive jerk.
That's true. I was taking "racist opinions" to mean "incorrect race-related beliefs that favor one group over another". If people who couldn't do math were just as good at programming as people who could, and you still screened applicants for math skills, that would be a jerk move. If your race- or gender- or whatever-group-related beliefs are true, and you act on them rationally (e.g. not discriminating with a hard filter when there's only a small difference), then you aren't being any kind of racist by my definition.
ETA: did anyone downvote for a reason other than LocustBeamGun's?
(ETA: I didn't downvote, but) I wouldn't call gender differences in math "small" - the genders have similar average skills but their variances are VERY different. As in, Emmy Noether versus ~everyone else.
And if there is a great difference between groups, it would be more rational to apply strong filters (except that, for example, people who are bad at math conveniently aren't likely to become programmers anyway). Perhaps the downvoter(s) thought you only presented the anti-discrimination side of the issue.
I think in most cases the average is more important in deciding how much to discriminate. But I deleted the relevant phrase because I'm not sure about that specific case and my argument holds about the same amount of water without it as with it.
EDIT:
Huh, I was intending to say that it's acceptable to discriminate on real existing differences, to the extent that those differences exist. Not sure how to fix my comment to make that less ambiguous, so just saying it straight out here.
Not to mention a bad business decision.
That too, thanks for pointing it out.
Depends on what you mean by "better". There's a difference between taking the data on race and IQ seriously, and wanting to commit genocide.
(blink)
Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?
Most obvious plausible available meaning for 'better' that fits: "Most satisfies my average utilitarian values".
(Yes, most brands of simple utilitarianism reduce to psychopathy - but since people still advocate them we can consider the meaning at least 'available'.)
Fair enough.
That's the question I was implicitly asking Oscar.
Sure, I just thought it was weird that the definitions given barely even mentioned race.