Why Are Individual IQ Differences OK?
Idang Alibi of Abuja, Nigeria writes on the James Watson affair:
A few days ago, the Nobel Laureate, Dr. James Watson, made a remark that is now generating worldwide uproar, especially among blacks. He said what to me looks like a self-evident truth. He told The Sunday Times of London in an interview that in his humble opinion, black people are less intelligent than the White people...
An intriguing opening. Is Idang Alibi about to take a position on the real heart of the uproar?
I do not know what constitutes intelligence. I leave that to our so-called scholars. But I do know that in terms of organising society for the benefit of the people living in it, we blacks have not shown any intelligence in that direction at all. I am so ashamed of this and sometimes feel that I ought to have belonged to another race...
Darn, it's just a lecture on personal and national responsibility. Of course, for African nationals, taking responsibility for their country's problems is the most productive attitude regardless. But it doesn't engage with the controversies that got Watson fired.
Later in the article came this:
As I write this, I do so with great pains in my heart because I know that God has given intelligence in equal measure to all his children irrespective of the colour of their skin.
This intrigued me for two reasons: First, I'm always on the lookout for yet another case of theology making a falsifiable experimental prediction. And second, the prediction follows obviously if God is just, but what does skin colour have to do with it at all?
To reduce astronomical waste: take your time, then go very fast
While we dither on the planet, are we losing resources in space? Nick Bostrom has an article on astronomical waste, talking about the vast amounts of potentially useful energy that we're simply not using for anything:
As I write these words, suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives.
The rate of this loss boggles the mind. One recent paper speculates, using loose theoretical considerations based on the rate of increase of entropy, that the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization.
On top of that, galaxies are slipping away from us because of the exponentially accelerating expansion of the universe (x axis in years since Big Bang, cosmic scale function arbitrarily set to 1 at the current day):

At the rate things are going, we seem to be losing slightly more than one galaxy a year. One entire galaxy, with its hundreds of billions of stars, is slipping away from us each year, never to be interacted with again. This is many solar systems a second; poof! Before you've even had time to grasp that concept, we've lost millions of times more resources than humanity has even used.
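A quick back-of-envelope sketch of these figures, assuming one galaxy lost per year, roughly 100 billion stars per galaxy, and Bostrom's ~10^46 potential lives per century of delay (all of these are order-of-magnitude assumptions, not precise measurements):

```python
# Back-of-envelope check of the "many solar systems a second" claim.
# Assumptions (rough, order-of-magnitude only):
#   - one galaxy slips beyond reach per year
#   - ~100 billion stars per galaxy
#   - ~1e46 potential lives lost per century of delayed colonization

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds
STARS_PER_GALAXY = 100e9                # order-of-magnitude estimate

stars_lost_per_second = STARS_PER_GALAXY / SECONDS_PER_YEAR
print(f"~{stars_lost_per_second:.0f} star systems slip away each second")

LIVES_PER_CENTURY = 1e46                # Bostrom-style loose upper figure
lives_lost_per_second = LIVES_PER_CENTURY / (100 * SECONDS_PER_YEAR)
print(f"~{lives_lost_per_second:.1e} potential lives per second of delay")
```

On these assumptions, a few thousand star systems per second vanish beyond the reach of any future civilization, which is what makes the "take your time, then go very fast" conclusion so counterintuitive.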
So it would seem that the answer to this desperate state of affairs is to rush things: start expanding as soon as possible, greedily grab every hint of energy and negentropy before they vanish forever.
Not so fast! Nick Bostrom's point was not that we should rush things, but that we should be very very careful:
However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.
How I Became More Ambitious
Follow-up to How I Ended Up Non-Ambitious
Living with yourself is a bit like having a preteen and watching them get taller; the changes happen so slowly that it's almost impossible to notice them, until you stumble across an old point of comparison and it becomes blindingly obvious. I hit that point a few days ago, while planning what I might want to talk about during an OkCupid date. My brain produced the following thought: "well, if this topic comes up, it might sound like I'm trying to take over the world, and that's intimidating. ...Wait. What?"
I'm not trying to take over the world. It sounds like a lot of work, and not my comparative advantage. If it seemed necessary, I would point out the problems that needed solving and delegate them to CFAR alumni with more domain-specific expertise than me.
However, I went back and reread the post linked at the beginning, and I no longer feel much kinship with that person. This is a change that happened maybe 25-50% deliberately, and the rest by drift, but I still changed my mind, so I will try to detail the particular changes, and what I think led to them. Introspection is unreliable, so I'll probably be at least 50% wrong, but what can you do?
1. Idealism versus practicality
I would still call myself practical, but I no longer think that this comes at the expense of idealism. Idealism is absolutely essential, if you want to have a world that changes because someone wanted it to, as opposed to just by drift. Lately in the rationalist/CFAR/LW community, there's been a lot of emphasis on agency and agentiness, which basically mean the ability to change the world and/or yourself deliberately, on purpose, through planned actions. This is hard. The first step is idealism: being able to imagine a state of affairs that is different and better. Then comes practicality, the part where you sit down and work hard and actually get something done.
It's still true that idealism without practicality doesn't get much done, and practicality without idealism can get a lot done, but it matters what problems you're working on, too. Are you being strategic? Are you even thinking, at all, about whether your actions are helping to accomplish your goals? One of the big things I've learned, a year and a half and two CFAR workshops later, is how automatic and easy this lack of strategy really is.
I had a limited sort of idealism in high school; I wanted to do work that was important and relevant, but I was lazy about it. I wanted someone to tell me what was important to be doing right now. Nursing seemed like an awesome solution. It still seems like a solution, but recently I've admitted to myself, with a painful twinge, that it might not be the best way for me, personally, to help the greatest number of people using my current and potential skill set. It's worth spending a few minutes or hours looking for interesting and important problems to work on.
I don't think I had the mental vocabulary to think that thought a year and a half ago. Some of the change comes from having dated an economics student. Come to think of it, I expect some of his general ambition rubbed off on me, too. The rest of the change comes from hanging out with the effective altruism and similar communities.
I'm still practical. I exercise, eat well, go to bed on time, work lots of hours, spend my money wisely, and maintain my social circle mostly on autopilot; it requires effort but not deliberate effort. I'm lucky to have this skill. But I no longer think it's a virtue over and above idealism. Practical idealists make the biggest difference, and they're pretty cool to hang out with. I want to be one when I grow up.
2. Fear of failure
Don't get me wrong. If there's one deep, gripping, soul-crushing terror in my life, one thing that gives me literal nightmares, it's failure. Making mistakes. Not being good enough. Et cetera.
In the past few years, the main change has been admitting to myself that this terror doesn't make a lot of sense. First of all, it's completely miscalibrated. As Eliezer pointed out during a conversation on this, I don't fail at things very often. Far from being a mark of success, this is more likely a sign that the things I'm trying aren't nearly challenging enough.
My threshold for what constitutes failure is also fairly low. I made a couple of embarrassing mistakes during my spring clinical. Some part of my brain is convinced that this equals permanent failure; I wasn't perfect during the placement, and I can't go back and change the past, thus I have failed. Forever.
I passed the clinical, wrote the provincial exam (results aren't in but I'm >99% confident I passed) (EDIT: Passed! YEAAHHH!!!), and I'm currently working in the intensive care unit, which has been my dream since I was about fifteen. The part of my brain that keeps telling me I failed permanently obviously isn't saying anything useful.
I think 'embarrassing' is a key word here. The first thing I thought, on the several occasions that I made mistakes, was "oh my god did I just kill someone... Phew, no, no harm done." The second thought was "oh my god, my preceptor will think I'm stupid forever and she'll never respect me and no one wants me around, I'm not good enough..." This line of thought never goes anywhere good. It says something about me, though, that "I'm not good enough" is very directly connected to people wanting me around, to belonging somewhere. For several personality-formative years of my life, people didn't want me around. Probably for good reason; my ten-year-old self was prickly and socially inept and miserable. I think a lot of my determination not to seek status comes from the "uncool kids trying to be cool are pathetic" meme that was so rampant when I was in sixth grade.
Oh, and then there's the traumatic swim team experience. Somewhere, in a part of my brain where I don't go very often nowadays, there is a bottomless whirlpool of powerless rage and despair around the phrase "no matter how hard I try, I'll never be good enough." So when I make an embarrassing mistake, my ten-year-old self is screaming at me "no wonder everyone hates you!" and my fourteen-year-old self is sadly muttering that "you know, maybe you just don't have enough natural talent," and none of it is at all useful.
The thing about those phrases is that they refer to complex and value-laden concepts, in a way that makes them seem like innate attributes, à la Fundamental Attribution Error. "Not good enough" isn't a yes-or-no attribute of a person; it's a magical category that only sounds simple because it's a three-word phrase. I've gotten somewhat better at propagating this to my emotional self. Slightly. It's a work in progress.
During a conversation about this with Anna Salamon, she noted that she likes to approach her own emotions and ask them what they want. It sounds weird, but it's helpful. "Dear crushing sense of despair and unworthiness, what do you want? ...Oh, you're worried that you're going to end up an outcast from your tribe and starve to death in the wilderness because you accidentally gave an extra dose of digoxin? You want to signal remorse and regret and make sure everyone knows you're taking your failure seriously so that maybe they'll forgive you? Thank you for trying to protect me. But really, you don't need to worry about the starving-outcast thing. No one was harmed and no one is mad at you personally. Your friends and family couldn't care less. This mistake is data, but it's just as much data about the environment as it is about your attributes. These hand-copied medication records are the perfect medium for human error. Instead of signalling remorse, let's put some mental energy into getting rid of the environmental conditions that led to this mistake."
Rejection therapy and having a general CoZE [Comfort Zone Expansion] mindset helped remove some of the sting of "but I'll look stupid if I try something too hard and fail at it!" I still worry about the pain of future embarrassment, but I'm more likely to point out to myself that it's not a valid objection and I should do X anyway. Making "I want to become stronger" an explicit motto is new to the last year and a half, too, and helps by giving me ammunition for why potential embarrassment isn't a reason not to do something.
In conclusion: failure still sucks. I'm a perfectionist. But I failed in a lot of small ways during my spring clinical, and passed/got a job anyway, which seems to have helped me propagate to my emotional self that it's okay to try hard things, where I'm almost certain to make mistakes, because mistakes don't equal instant damnation and hatred from all of my friends.
3. The morality of ambition
While I was in San Francisco a month ago, volunteering at the CFAR workshop and generally spending my time surrounded by smart, passionate, and ambitious people (thus convincing my emotional system that this is normal and okay), I had a conversation with Eliezer. He asked me to list ten areas in which I was above average.
This was a lot more painful than it had any reason to be. After bouncing off various poorly-formed objections in my mind, I said to myself "you know, having trouble admitting what you're good at doesn't make you virtuous." This was painful; losing a source of feeling-virtuous always is. But it was helpful. Yeah, talking all the time about how awesome you are at X, Y, Z makes you a bit of a bore. People might even avoid you (oh! the horror!). However, this doesn't mean that blocking even the thought of being above average makes you a good person. In fact, it's counterproductive. How are you supposed to know what problems you're capable of solving in the world if you can't be honest with yourself about your capabilities?
This conversation helped. (Even if some of the effect was "high status person says X -> I believe X," who cares? I endorsed myself changing my mind about this a year and a half ago. It's about time.)
HPMOR helped, too; specifically, the idea that there are four houses which have different positive qualities. Slytherins are demonized in canon, but in HPMOR their skills are recognized as essential. I can easily recognize the Ravenclaw and Hufflepuff and even the Gryffindor in myself, but not much of Slytherin. Having a word for the ambition-cunning-strategic concept cluster is helpful. I can ask myself "now what would a Slytherin do with this information?" I can think thoughts that feel very un-virtuous. "I'm young and prettier than average. What's a Slytherin way to use this... Oh, I suppose I can leverage it to get high-status men to pay attention to me long enough for me to explain the merits of an idea I have." This thought feels yuck, but the universe doesn't explode.
Probably the biggest factor was going to the CFAR workshops in the first place. Not from any of the curriculum, particularly, although the mindset of goal factoring helped me to realize that the mental action of "feeling unvirtuous for thinking in ambitious or calculating ways" wasn't accomplishing anything I wanted. Mostly the change came from social normalization, from hanging out with people who talked openly about their strengths and weaknesses, and no one got shunned.
[Silly plan for taking over the world: Arrange to meet high-status people and offer to give their children swimming lessons. Gain their trust. Proceed from there.]
4. Laziness
Nope. Still lazy. If anything, akrasia and procrastination are more of a problem now that I'm trying to do harder things more deliberately.
I've been keeping written goals for about a year now. This means I actually notice when I don't accomplish them.
I use Remember the Milk as a GTD system, and some other productivity/organization software (RescueTime, Mint.com, etc). I finally switched to Gmail, where I can use Boomerang and other useful tools. My current openness to trying new organization methods is high.
My general interest in trying things is higher, mainly because I have lots of community-endorsed-warm-fuzzies positive affect around that phrase. I want to be someone who's open to new experiences; I've had enough new experiences to realize how exhilarating they can be.
Conclusion
I now have a wider range of potentially high-value personal projects ongoing. I now have an explicit goal of being well-known for non-fiction writing, probably in a blog form, in the next five years. (Do I have enough interesting things to say to make this a reality? We'll see. Is this goal vague? Yes. Working on it. I used to reject goals if they weren't utterly concrete, but even vague goals are something to build on).
I'm more explicit with myself about what I want from CFAR curriculum skills. (The general problem of critical thinking in nursing? Solvable! Why not?)
I think I've finally admitted to myself that "well, I'll just live in a cozy little house near my parents and work in the ICU and raise kids for the next forty years" might not be particularly virtuous or fun. There are things I would prefer to be different in the world, even if I can only completely specify a few of them. There are exciting scary opportunities happening all the time. I'm lucky enough to belong to a community of people that can help me find them.
I don't have plans for much beyond the next year. But here's to the next decade being interesting!
Learned Blankness
Related to: Semantic stopsigns, Truly part of you.
One day, the dishwasher broke. I asked Steve Rayhawk to look at it because he’s “good with mechanical things”.
“The drain is clogged,” he said.
“How do you know?” I asked.
He pointed at a pool of backed up water. “Because the water is backed up.”
We cleared the clog and the dishwasher started working.
I felt silly, because I, too, could have reasoned that out. The water wasn’t draining -- therefore, perhaps the drain was clogged. Basic rationality in action.[1]
But before giving it even ten seconds’ thought, I’d classified the problem as a “mechanical thing”. And I’d remembered I “didn’t know how mechanical things worked” (a cached thought). And then -- prompted by my cached belief that there was a magical “way mechanical things work” that some knew and I didn’t -- I stopped trying to think at all.
“Mechanical things” was for me a mental stopsign -- a blank domain that stayed blank, because I never asked the obvious next questions (questions like “does the dishwasher look unusual in any way? Why is there water at the bottom?”).
When I tutored math, new students acted as though the laws of exponents (or whatever we were learning) had fallen from the sky on stone tablets. They clung rigidly to the handed-down procedures. It didn’t occur to them to try to understand, or to improvise. The students treated math the way I treated broken dishwashers.
Martin Seligman coined the term "learned helplessness" to describe a condition in which someone has learned to behave as though they were helpless. I think we need a term for learned helplessness about thinking (in a particular domain). I’ll call this “learned blankness”[2]. Folks who fall prey to learned blankness may still take actions -- sometimes my students practiced the procedures again and again, hired a tutor, etc. But they do so as though carrying out rituals to an unknown god -- parts of them may be trying, but their “understand X” center has given up.
Don't Revere The Bearer Of Good Info
Follow-up to: Every Cause Wants To Be A Cult, Cultish Countercultishness
One of the classic demonstrations of the Fundamental Attribution Error is the 'quiz study' of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we're not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because the former only interact with the latter in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.
Specifically, Eliezer's writing at Overcoming Bias has provided nice introductions to many standard concepts and arguments from philosophy, economics, and psychology: the philosophical compatibilist account of free will, utility functions, standard biases, and much more. These are great concepts, and many commenters report that they have been greatly influenced by their introductions to them at Overcoming Bias, but the psychological default will be to overrate the messenger. This danger is particularly great in light of his writing style, and when the fact that a point is already extant in the literature, and is either being relayed or reinvented, isn't noted. To address a few cases of the latter: Gary Drescher covered much of the content of Eliezer's Overcoming Bias posts (mostly very well), from timeless physics to Newcomb's problems to quantum mechanics, in a book back in May 2006, while Eliezer's irrealist meta-ethics would be very familiar to modern philosophers like Don Loeb or Josh Greene, and isn't so far from the 18th century philosopher David Hume.
If you're feeling a tendency to cultish hero-worship, reading such independent prior analyses is a noncultish way to diffuse it, and the history of science suggests that this procedure will be applicable to almost anyone you're tempted to revere. Wallace invented the idea of evolution through natural selection independently of Darwin, and Leibniz and Newton independently developed calculus. With respect to our other host, Hans Moravec came up with the probabilistic Simulation Argument long before Nick Bostrom became known for reinventing it (possibly with forgotten influence from reading the book, or its influence on interlocutors). When we post here we can make an effort to find and explicitly acknowledge such influences or independent discoveries, to recognize the contributions of Rational We, as well as Me.
Train Philosophers with Pearl and Kahneman, not Plato and Kant
Part of the sequence: Rationality and Philosophy
Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.
Bertrand Russell
I've complained before that philosophy is a diseased discipline which spends far too much of its time debating definitions, ignoring relevant scientific results, and endlessly re-interpreting old dead guys who didn't know the slightest bit of 20th century science. Is that still the case?
You bet. There's some good philosophy out there, but much of it is bad enough to make CMU philosopher Clark Glymour suggest that on tight university budgets, philosophy departments could be defunded unless their work is useful to (cited by) scientists and engineers — just as his own work on causal Bayes nets is now widely used in artificial intelligence and other fields.
How did philosophy get this way? Russell's hypothesis is not too shabby. Check the syllabi of the undergraduate "intro to philosophy" classes at the top 5 U.S. philosophy departments — NYU, Rutgers, Princeton, Michigan Ann Arbor, and Harvard — and you'll find that they spend a lot of time with (1) old dead guys who were wrong about almost everything because they knew nothing of modern logic, probability theory, or science, and with (2) 20th century philosophers who were way too enamored with cogsci-ignorant armchair philosophy. (I say more about the reasons for philosophy's degenerate state here.)
As the CEO of a philosophy/math/compsci research institute, I think many philosophical problems are important. But the field of philosophy doesn't seem to be very good at answering them. What can we do?
Why, come up with better philosophical methods, of course!
Scientific methods have improved over time, and so can philosophical methods. Here is the first of my recommendations...
Leave a Line of Retreat
"When you surround the enemy
Always allow them an escape route.
They must see that there is
An alternative to death."
—Sun Tzu, The Art of War, Cloud Hands edition
"Don't raise the pressure, lower the wall."
—Lois McMaster Bujold, Komarr
Last night I happened to be conversing with a nonrationalist who had somehow wandered into a local rationalists' gathering. She had just declared (a) her belief in souls and (b) that she didn't believe in cryonics because she believed the soul wouldn't stay with the frozen body. I asked, "But how do you know that?" From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her. I don't say this in a bad way—she seemed like a nice person with absolutely no training in rationality, just like most of the rest of the human species. I really need to write that book.
Most of the ensuing conversation was on items already covered on Overcoming Bias—if you're really curious about something, you probably can figure out a good way to test it; try to attain accurate beliefs first and then let your emotions flow from that—that sort of thing. But the conversation reminded me of one notion I haven't covered here yet:
"Make sure," I suggested to her, "that you visualize what the world would be like if there are no souls, and what you would do about that. Don't think about all the reasons that it can't be that way, just accept it as a premise and then visualize the consequences. So that you'll think, 'Well, if there are no souls, I can just sign up for cryonics', or 'If there is no God, I can just go on being moral anyway,' rather than it being too horrifying to face. As a matter of self-respect you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it."
Why safety is not safe
June 14, 3009
Twilight still hung in the sky, yet the Pole Star was visible above the trees, for it was a perfect cloudless evening.
"We can stop here for a few minutes," remarked the librarian as he fumbled to light the lamp. "There's a stream just ahead."
The driver grunted assent as he pulled the cart to a halt and unhitched the thirsty horse to drink its fill.
It was said that in the Age of Legends, there had been horseless carriages that drank the black blood of the earth, long since drained dry. But then, it was said that in the Age of Legends, men had flown to the moon on a pillar of fire. Who took such stories seriously?
The librarian did. In his visit to the University archive, he had studied the crumbling pages of a rare book in Old English, itself a copy a mere few centuries old, of a text from the Age of Legends itself; a book that laid out a generation's hopes and dreams, of building cities in the sky, of setting sail for the very stars. Something had gone wrong - but what? That civilization's capabilities had been so far beyond those of his own people. Its destruction should have taken a global apocalypse of the kind that would leave unmistakable record both historical and archaeological, and yet there was no trace. Nobody had anything better than mutually contradictory guesses as to what had happened. The librarian intended to discover the truth.
Forty years later he died in bed, his question still unanswered.
The earth continued to circle its parent star, whose increasing energy output could no longer be compensated by falling atmospheric carbon dioxide concentration. Glaciers advanced, then retreated for the last time; as life struggled to adapt to changing conditions, the ecosystems of yesteryear were replaced by others new and strange - and impoverished. All the while, the environment drifted further from that which had given rise to Homo sapiens, and in due course one more species joined the billions-long roll of the dead. For what was by some standards a little while, eyes still looked up at the lifeless stars, but there were no more minds to wonder what might have been.
Identity Isn't In Specific Atoms
Continuation of: No Individual Particles
Followup to: The Generalized Anti-Zombie Principle
Suppose I take two atoms of helium-4 in a balloon, and swap their locations via teleportation. I don't move them through the intervening space; I just click my fingers and cause them to swap places. Afterward, the balloon looks just the same, but two of the helium atoms have exchanged positions.
Now, did that scenario seem to make sense? Can you imagine it happening?