Q&A with Harpending and Cochran

Post author: MBlume, 10 May 2010 11:01PM

Edit: Q&A is now closed. Thanks to everyone for participating, and thanks very much to Harpending and Cochran for their responses.

In response to Kaj's review, Henry Harpending and Gregory Cochran, the authors of The 10,000 Year Explosion, have agreed to a Q&A session with the Less Wrong community.

If you have any questions for either Harpending or Cochran, please reply to this post with a question addressed to one or both of them. Material for questions might be drawn from their blog for the book, which includes stories about hunting animals in Africa with an eye toward the evolutionary implications (and which came to Jennifer's attention via Steve Sailer's prior coverage).

Please do not kibitz in this Q&A... instead go to the kibitzing area to talk about the Q&A session itself. Eventually, this post will be edited to note that the process has been closed, at which time there should be no new questions.

 

Comments (103)

Comment author: Kaj_Sotala 17 May 2010 07:27:35AM 6 points [-]

The discussion's been going on for a while and it's been slowing down, so I think it's time to close down the official Q&A session. Henry and Gregory, you're of course still free to check the post and write comments if you want to, but there's no "official" expectation for that. Of course, you're also free to familiarize yourself with the rest of the site, if you think it's interesting enough, but that's entirely up to you. :)

I would like to take the chance to thank you for your excellent answers. There was a lot of interesting stuff in there, and I greatly enjoyed reading both your answers and the discussion that followed. I hope you did not find the initial cross-interrogation by Shulman and Salamon too horrible. :)

If you happen to have any closing statements you'd like to make, replying to this comment is probably the best way of doing it.

Also, special thanks to JenniferRM for arranging this.

Comment author: RobinHanson 14 May 2010 07:28:40PM 4 points [-]

[I waited until I could get a copy of the book and read it before making my point here.]

In the book you say that foragers had little reason to fight wars or to be patient for long-term investments. But forager wars are often about grabbing women, and they might also make long-term investments in particular women or in developing skills, like singing, that can attract women.

Comment author: harpend 16 May 2010 08:17:45PM 8 points [-]

I don't agree with you except a little bit. And there are foragers who do have some low time preference, like on the US Northwest Coast where they harvested lots of salmon that they smoked and stored. Interior Eskimo slaughtered migrating caribou herds and stored the meat by freezing.

But in general forager life has been almost literally hand to mouth. I have spent a lot of wasted time pulling my hair out about this. We have had lots of Bushman employees in the Kalahari, well compensated. We have spent hours pointing out that since we would go back to America, they should invest in goats or cattle and build up a herd, so they would have something to live on after we left. Everyone agreed with us, but the minute Aunt Nellie got sick everything was slaughtered. Again and again and again. Aargghh......

Henry

Comment author: RobinHanson 17 May 2010 02:37:52PM 3 points [-]

My point was theoretical, not empirical. If you say that foragers often seem remarkably uninterested in making sacrifices for the future I'll believe you. But I'm questioning how well we understand that data, by noting that there are some aspects of their lives where they seem to make long term investments. Maybe they just don't have a consistent time preference, maybe it varies by type of behavior; for some areas like learning an art they evolved behaviors that respect future consequences, and for other areas like food storage they did not.

Comment author: harpend 22 May 2010 02:56:18PM 3 points [-]

Yes, of course, I will give you that. You are suggesting that "time preference" is way too global and vague a concept and I can't disagree.

HCH

Comment author: NancyLebovitz 16 May 2010 10:27:04PM 1 point [-]

Is your point that they couldn't imagine investing for the future, or that they had so little slack that they couldn't afford to?

Comment author: harpend 17 May 2010 01:56:33PM 6 points [-]

They could certainly imagine investing: they have been invaded by cattle people over the last half century and they see husbandry all around. And they certainly could have afforded to keep their animals. But they just didn't (seem to) have it in them to "delay gratification". I think that our ability to invest and save resources must be new and different in our evolution.

Comment author: NancyLebovitz 17 May 2010 04:18:31PM 2 points [-]

My impression is that hunter-gatherers have a huge amount of social pressure towards short-term sharing.

You mentioned "Aunt Nellie" getting sick as a reason to slaughter cattle. Was it food for her? Expensive medical care or rituals? Something else?

Comment author: harpend 22 May 2010 02:58:13PM 3 points [-]

Food for her and to support a ritual gathering of folks for support. There is no medical care out in the bush, but if there were people would certainly chip in to help pay for it.

HCH

Comment author: Jack 14 May 2010 03:53:08AM 1 point [-]

For both/either: What are you working on right now?

Comment author: harpend 15 May 2010 05:52:48PM 4 points [-]

I am trying to think about the genesis and maintenance of social class, and about the dimensionality of class. We know from the biometricians at the end of the nineteenth century that cognitive ability is essentially a single dimension, while athletic ability, for example, is multidimensional. I want to start with a purely inductive approach and do the same kind of analysis for class in North America. Fat chance, I have found, since every time I get started I get sucked back into genetics.

Henry

Comment author: LauraABJ 12 May 2010 04:46:36PM 1 point [-]

I have always been curious about the effects of mass-death on human genetics. Is large scale death from plague, war, or natural-disaster likely to have much effect on the genetics of cognitive architecture, or are outcomes generally too random? Is there evidence for what traits are selected for by these events?

Comment author: gcochran 14 May 2010 04:02:17AM 7 points [-]

Too random to have much effect, I should think. And at the same time, not awful enough to reduce the population to the point where drift would become important. Unless we're talking asteroid impacts.

One can imagine exceptions. For example, if alleles that gave resistance to some deadly plague had negative side effects on intelligence, then you'd see an effect. Note that negative side effects are much more likely than positive side effects.

I know of some neat anecdotal exceptions. Von Neumann got out of Germany in 1930, while the getting was good. When a friend said that Germany was oh-so-cultured and that there was nothing to worry about, Von Neumann didn't believe it. He started quoting the Melian dialogue - pointed out that the Athenians had been pretty cultured. High intelligence helped save his life.

Comment author: Nanani 13 May 2010 03:02:26AM 0 points [-]

Seconded, but with a request for contrast, if possible, with human-caused mass death such as invasion by conquering hordes. What effect do such phenomena have at the genetic level with respect to cognition, as opposed to cultural or linguistic transmission?

Comment author: Yvain 14 May 2010 09:22:31PM 4 points [-]

And what about human-caused mass death selecting for specific characteristics? For example, the Cambodian purges of intellectuals or the Communist purges of successful businesspeople. Are these too tenuous a proxy for genes to cause long-term change in alleles, or did the Cambodians and Communists do long-term harm to their genetic legacy?

Comment author: gcochran 15 May 2010 07:24:45AM *  5 points [-]

Purges in Cambodia might have changed average genotypes because they hit such a high fraction of the population. Generally it's hard to change things much in one generation, though - particularly because of loose correlations between genotypes and dreadful political fates. In the future dictators should be better at this. Now if Stalin had taken all the smartest people in the Soviet Union and forcibly paired them up, artificially inflating assortative mating for intelligence, you would have seen an effect. If you were a billionaire, you could maybe bribe people into something similar.

Comment author: twl 13 May 2010 11:27:38PM *  2 points [-]

In AD 175 Marcus Aurelius brought 5,500 Sarmatian heavy cavalry warriors to northern Britain where, after twenty years' service, they "settled in a permanent military colony in Lancashire" which was "still mentioned almost 250 years later." You remind us of the possibility that the colony could have influenced the legend of King Arthur, and go on to add something new: it also "could have introduced several thousand copies of that hypothetical allele into Lancashire" and the average Englishman "might be mostly Sarmatian in a key gene or two." I'm English, and intrigued! Are you able to expand on this? (Book pp. 146-148) I hope it is something good like increased unruliness (independence streak) and aggressiveness in battle and not something naff like Sarmatian lewdness...!!

Comment author: harpend 14 May 2010 01:45:33AM 2 points [-]

I have no further knowledge or insight about that, but Greg might. I will call this question to his attention and we may see what he knows.

HCH

Comment author: CronoDAS 12 May 2010 04:09:06AM 8 points [-]

What are your thoughts on the Flynn effect?

Comment author: harpend 12 May 2010 04:46:30PM 7 points [-]

It is an interesting puzzle. This was a secular rise in cognitive test scores, well documented in a number of countries during the 20th century. It has stopped and even reversed in the last few decades. There seem to be several plausible ideas out there.

One is that social changes have had the effect of "training" people for cognitive tests: more magazines, radio, chatter everywhere, advertising, etc. Hard idea to test. I do fieldwork in Southern Africa. Forty years ago there were no radios in the backcountry, no books, no magazines. Today radio, newspapers, magazines are everywhere. I expect that this changes people a lot but I have no evidence.

Flynn himself thinks nutrition got better, but the data are not clear about that. I would favor vaccination and antibiotics as an explanation. Infectious disease and the inflammation associated with it do seem to damage people (Caleb Finch, Eileen Crimmins, others). We have cut the intensity of childhood insults way down everywhere.

My two cents........

Comment author: cupholder 12 May 2010 07:09:32PM 3 points [-]

Flynn himself thinks nutrition got better but the data are not clear about that.

I doubt Flynn thinks much of the nutrition hypothesis any more; his recent paper 'Requiem for nutrition as the cause of IQ gains' argues against nutrition as a major cause of IQ gains in developed nations. He would likely agree with you that the kinds of social changes you're thinking of had a big impact; I seem to remember him writing in his book from three years back that contemporary people make more of a habit of thinking about things abstractly, and learn more of the mental tools needed to do well on IQ tests.

Comment author: harpend 14 May 2010 01:51:45AM 9 points [-]

When I did fieldwork in the late 1960s in backcountry Botswana I hit upon the idea of asking my sister (a dairy farmer) to send me a box of back issues of American cattle magazines. It was unbelievable: I could have made a fortune selling pictures from them, not to mention whole issues, to the local cattle people. At that time people carefully hoarded little scraps of paper to use for writing messages.

In the late 1980s I brought some more such magazines with me, and no one was interested at all. The media storm had penetrated and everyone had school textbooks, magazines, radios, etc.

Comment author: JoshSN 15 May 2010 07:44:55PM 1 point [-]

My main interest is how language barriers control how information, like cattle-farming best practices, bounces around.

Comment author: cupholder 14 May 2010 02:38:42PM *  1 point [-]

Interesting. If mass media have only started to penetrate parts of Southern Africa in the last 40 years or so, I wonder if the Flynn effect is still happening there.

Editing this comment to add - I did a quick Google scholar search and didn't find Flynn effect studies for Southern Africa. The best I could get were papers on IQ rises in Sudan and rural Kenya.

Comment author: NancyLebovitz 13 May 2010 12:30:30AM 1 point [-]

I think there's some evidence that the Flynn effect isn't just about IQ tests: for example, I think it's only within the past 30 years that popular books about popular culture have appeared.

Comment author: PhilGoetz 14 May 2010 02:40:22AM *  4 points [-]

How are popular books about popular culture an indicator of rising IQ? You mean, e.g., a book about Michael Jackson?

Science fiction blossomed in the 1930s. Educational books became big in the 1950s, I think. Self-help books became huge 40 or 50 years ago. Parenting books became huge in the 1960s. Popular sociology books date back to before Future Shock, printed 40 years ago.

I have the impression of a big increase in IQ when I listen to old radio comedy shows, pre-World War II. The humor is so simple and repetitive and uninteresting that I get the feeling the US must have consisted of adult-sized children. Maybe it's because radio was a new medium; but a lot of it was just a restaging of vaudeville humor that had been successful for decades.

Comment author: Vladimir_M 14 May 2010 03:36:40AM *  13 points [-]

PhilGoetz

I have the impression of a big increase in IQ when I listen to old radio comedy shows, pre-World War II. The humor is so simple and repetitive and uninteresting that I get the feeling the US must have consisted of adult-sized children. Maybe it's because radio was a new medium; but a lot of it was just a restaging of vaudeville humor that had been successful for decades.

I have the same impression, though it could be partly due to the growth and specialization in the pop-culture market, so that the sample you happen to see today is mainly from the output targeted at smarter audiences. But the difference seems too large to explain just by that effect; the old shows are often truly mind-numbingly dull, as you describe. There was a post about this topic a few years ago on Marginal Revolution with some striking diagrams: http://www.marginalrevolution.com/marginalrevolution/2005/04/tv_and_the_flyn.html

What makes it even more puzzling is that these apparent huge increases in average folks' sharpness were not accompanied by anything similar at the higher levels of intellectual accomplishment. In many countries, a teacher or professor who taught for, say, 30 years during the second half of the 20th century would have dealt with generations of students whose average raw IQ test scores increased by more than a whole standard deviation in that period. Yet there have been no reports of striking proliferation of super-smart students at any educational level. (Consider that if the average of a normal distribution increases by 1SD, it will, ceteris paribus, boost the percentage of scores exceeding the previous +3SD threshold by about 16 times!)
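The arithmetic in that parenthetical can be checked directly. Here is a small sketch of my own (not from the original comment), using only Python's standard library:

```python
from statistics import NormalDist

# Shift a standard normal distribution up by 1 SD and compare the share
# of scores above the old +3 SD cutoff, before and after the shift.
base = NormalDist(mu=0, sigma=1)
shifted = NormalDist(mu=1, sigma=1)

tail_before = 1 - base.cdf(3)     # P(Z > 3), about 0.00135
tail_after = 1 - shifted.cdf(3)   # equals P(Z > 2), about 0.02275
print(f"fold increase above old +3 SD: {tail_after / tail_before:.1f}")  # ~16.9
```

The ratio comes out to roughly 16.9, consistent with the "about 16 times" figure above.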

So basically, we're seeing pop culture getting more mentally demanding, along with a dramatic increase in average non-verbal IQ test scores, but no visible increase in the number of exceedingly brilliant individuals. At the same time, the tests apparently remain strong predictors of all sorts of intellectual performance. I suspect that the procedures by which IQ tests are constantly re-normed to produce neat normal distributions lead to a scoring system that is seriously misleading in at least some ways. This is also a serious objection I have to a lot of research in this area: it starts and ends assuming that we're dealing with a variable (IQ) which is normally distributed through the population, like height, even though it's in fact artificially made that way, and we still have no idea what's really underneath.

Comment author: Kaj_Sotala 14 May 2010 11:05:20PM *  3 points [-]

The explanation that Flynn describes in his book, What is Intelligence? is basically that modern culture gives us extra practice in many of the subskills that require a lot of intelligence. That, however, doesn't increase intelligence itself - it only makes us better at doing tasks that require those subskills.

This doesn't mean that IQ tests would have lost their value, either - if, say, everyone in the population ends up exercising an additional five hours per week, then everyone's athletic ability does go up, but it's still the ones who were the most athletically talented in the beginning who end up having the best results. The same principle applies for IQ: "general intelligence" + "domain-specific talent" + "amount of practice had" is probably a pretty good formula for figuring out how good you are at something, and if everyone gets roughly the same amount of extra practice, the tests remain a good way of distinguishing the one with the highest IQ.

In practice, the IQ tests' validity might be even better than only this would imply. The obvious question this raises is, "but does the whole population get the same amount of extra practice?". In all likelihood, the answer is no - but it's very possible that for a lot of things, those with the highest IQ get the largest amount of extra practice, since they will naturally find simple things boring and seek out the most complex things. Thus the amount of practice, itself, likely correlates with IQ.
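The ranking-preservation argument above can be made concrete with a toy model (entirely my own construction; the distribution and the size of the "practice" boost are arbitrary illustrative choices):

```python
import random

random.seed(42)

# Toy model: observed score = underlying ability + extra cultural practice.
# If everyone gets the same practice boost, absolute scores rise, but the
# ordering of test-takers -- what the test is actually used for -- is
# unchanged, so the test still picks out the same people.
ability = [random.gauss(100, 15) for _ in range(1000)]
boost = 20  # hypothetical uniform "cultural practice" effect

before = ability
after = [a + boost for a in ability]

ranking = lambda scores: sorted(range(len(scores)), key=scores.__getitem__)
print(ranking(before) == ranking(after))  # True: same people come out on top
```

Of course, the interesting empirical question is exactly the one raised above: whether the boost really is uniform across the population.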

Comment author: NancyLebovitz 14 May 2010 09:14:55AM 1 point [-]

One possibility is that our educational systems haven't caught up to the increase in general intelligence.

Another is that people who could be making major contributions are distracted by the complexity of popular culture. :-/

Comment author: cupholder 14 May 2010 01:59:34PM 1 point [-]

Another is that people who could be making major contributions are distracted by the complexity of popular culture. :-/

I'd generalise that: maybe a more complex and IQ-oriented culture means people have to run faster just to stay in the same place, intellectually.

Comment author: Vladimir_M 14 May 2010 08:14:23PM *  2 points [-]

That may be the case, but I still don't find the explanation satisfactory from the point of view of the classic general intelligence theory (not that I have a better alternative, though).

To clarify, the traditional theory of general intelligence, which is taken as a background assumption in most IQ-related research, assumes that general intelligence is normally distributed in the general population, and any reasonable measure of it will be highly correlated with IQ test scores (which are themselves artificially crafted to produce a normal distribution of scores). Moreover, it assumes that people whose intellects stand out as strikingly brilliant are drawn -- as a necessary condition, and not too far from sufficient -- from the pool of those whose general intelligence is exceptionally high. Now, if the scores on IQ tests are rising, but there is no visible increase in outstanding genius, it could mean one or more of these things (or something else I'm not aware of?):

  • We're applying higher criteria for genius. But are we really? Has the number of people at the level of von Neumann, Ramanujan, or Goedel really increased by two orders of magnitude since their time, as it should have if the distribution of general intelligence has simply moved up by 2SD since their time? (Note that for any increase in average, ceteris paribus, the increase in the rate of genius should be greater the higher the threshold we're looking at!)

  • The average has moved up, but the variance has shrunk. But this would have to be implausibly extreme shrinkage, since the average of IQ scores today is roughly at the z-score of +2 from two generations ago.

  • The modern culture is making common folks smarter, but it drags geniuses down. I believe there might be some truth to this. The pop culture everyone's supposed to follow, however trashy, has gotten more demanding mentally, but true intellectual pursuits have lost a lot of status compared to the past. Still, such effects can't explain the severity of the effect -- remember, the Flynn increase is greater than the difference between borderline retardation and being above average in the way the scores are used for diagnostics!

  • The IQ scores say a lot about people who are average or below average, but not much about smart people. This seems like the most plausible option to me, and the only one compatible with evidence. But this means that the standard model based on the normal distribution is seriously broken when it comes to the right side of the distribution, and it also makes the results of many heritability studies much more murky.
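The magnitude claimed in the first bullet can be sanity-checked numerically. A sketch of my own (not from the comment): under a pure 2 SD upward shift of a normal distribution, the fold increase in scores above old thresholds is indeed around two orders of magnitude at +3 SD, and grows with the threshold.

```python
from statistics import NormalDist

# If the whole distribution simply moved up 2 SD, how much more common
# would scores above the old +2, +3, +4 SD thresholds become?
base = NormalDist(mu=0, sigma=1)
shifted = NormalDist(mu=2, sigma=1)

for z in (2, 3, 4):
    fold = (1 - shifted.cdf(z)) / (1 - base.cdf(z))
    print(f"old +{z} SD threshold: {fold:.0f}x more scores above it")
# Roughly 22x at +2 SD, 118x at +3 SD, 718x at +4 SD: the fold increase
# grows with the threshold, as the parenthetical note above says.
```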

All in all, the situation is confusing, and unlikely to get clearer in the near future.

Comment author: mattnewport 14 May 2010 10:01:24PM *  8 points [-]

This study (which HughRistik originally pointed to here) suggests that IQ distribution might be better modeled as two overlapping normal distributions, one for people who are not suffering from any conditions disrupting normal intelligence development (such as disease, nutritional problems, maternal drug or alcohol use during pregnancy, etc.) and the other for those who suffered developmental impairment. If this model has some validity the Flynn effect could perhaps be explained as a reduction in the number of people falling into the 'impaired' distribution due to improved health and nutrition in the population. This would seem to explain an increase in the average score without a corresponding increase in the number of 'geniuses'.
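A toy version of that two-distribution model (the parameters here are my own illustrative guesses, not taken from the cited study) shows how shrinking the impaired share raises the mean without a comparable jump in the far right tail:

```python
from statistics import NormalDist

healthy = NormalDist(mu=100, sigma=15)
impaired = NormalDist(mu=80, sigma=15)
GENIUS_CUTOFF = 145  # +3 SD of the healthy component

def mixture(p_impaired):
    """Mean and share above the cutoff for the mixed population."""
    mean = (1 - p_impaired) * healthy.mean + p_impaired * impaired.mean
    tail = ((1 - p_impaired) * (1 - healthy.cdf(GENIUS_CUTOFF))
            + p_impaired * (1 - impaired.cdf(GENIUS_CUTOFF)))
    return mean, tail

then = mixture(0.30)  # many developmentally impaired
now = mixture(0.05)   # improved health and nutrition
print(f"mean IQ: {then[0]:.0f} -> {now[0]:.0f}")  # rises 5 points
print(f"share above {GENIUS_CUTOFF}: {then[1]:.5f} -> {now[1]:.5f}")
```

With these numbers the mean rises by 5 points while the share above the cutoff rises only modestly, nothing like the orders-of-magnitude increase a pure shift of a single normal distribution would imply.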

Comment author: cupholder 14 May 2010 10:16:37PM *  2 points [-]

We're applying higher criteria for genius. But are we really?

I think this is more likely than not, but I couldn't quantify it. I think it's more likely for the simple reason that what earlier geniuses (like von Neumann etc.) did has already been done. To me, that implies the genius bar has been raised, in absolute terms, at least in the hard sciences and math.

The average has moved up, but the variance has shrunk. But this would have to be implausibly extreme shrinkage,

Agree.

The modern culture is making common folks smarter, but it drags geniuses down. I believe there might be some truth to this. The pop culture everyone's supposed to follow, however trashy, has gotten more demanding mentally, but true intellectual pursuits have lost a lot of status compared to the past. Still, such effects can't explain the severity of the effect --

Agree. It's hard for me to imagine many geniuses getting derailed just by trash TV and ostracism.

The IQ scores say a lot about people who are average or below average, but not much about smart people. This seems like the most plausible option to me, and the only one compatible with evidence.

I believe IQ still correlates positively with performance among very high-achievers, just not as well as for normal people. The biggest factor here might be touched on in your second paragraph:

Moreover, it assumes that people whose intellects stand out as strikingly brilliant are drawn -- as a necessary condition, and not too far from sufficient -- from the pool of those whose general intelligence is exceptionally high.

I would bet that the standouts you're talking about would have higher average IQ, but would not actually be 'exceptionally' high, because IQ doesn't correlate that well with success. Also, many of the geniuses we're thinking of would probably be specialists, and it's harder to track specialized performance with the (relatively) generalist metric of IQ. If the IQ threshold for genius is lower than you think, an upward shift in the mean makes less difference. (Of course it can't explain the effect away entirely; something else is happening. But it could be a part.)

Comment author: Vladimir_M 14 May 2010 10:48:57PM *  5 points [-]

cupholder:

I think it's more likely for the simple reason that what earlier geniuses (like von Neumann etc.) did has already been done. To me, that implies the genius bar has been raised, in absolute terms, at least in the hard sciences and math.

That could well be the case. However, it fails to explain the lack of apparent genius at lower educational stages. For example, if you look at a 30 year period in the second half of the 20th century, the standard primary and high school math programs probably didn't change dramatically during this time, and they certainly didn't become much harder. Moreover, one could find many older math teachers who worked with successive generations throughout this period -- in which the Flynn IQ increase was above 1SD in many countries. If the number of young potential von Neumanns increased drastically during this period, as it should have according to the simple normal distribution model, then the teachers should have been struck by how more and more kids find the standard math programs insultingly easy. This would be true even if these potential von Neumanns have subsequently found it impossible to make the same impact as him because all but the highest-hanging fruit is now gone.

I would bet that the standouts you're talking about would have higher average IQ, but would not actually be 'exceptionally' high, because IQ doesn't correlate that well with success.

Yes, that's basically what I meant when I speculated that IQ might be significantly informative about intellectually average and below-average people, but much less about above-average ones. Unfortunately, I think we'll have to wait for further major advances in brain science to make any conclusions beyond speculation there. Psychometrics suffers from too many complications to be of much further use in answering such questions (and the politicization of the field doesn't help either, of course).

Comment author: NancyLebovitz 14 May 2010 09:01:03PM 1 point [-]

It's conceivable that there are institutional barriers to genius expressing itself-- partly that there really is more knowledge to be assimilated before one can do original work, and partly that chasing grants just sucks up too much time and makes it less likely for people to work on unfashionable angles.

Comment author: Vladimir_M 14 May 2010 09:30:18PM *  2 points [-]

Still, it's not like historical geniuses all grew up as pampered aristocrats left to pursue whatever they liked. Many of them grew up as poor commoners destined for an entirely unremarkable life, but their exceptional brightness as kids caught the attention of the local teacher, priest, or some other educated and influential person who happened to be around, and who then used his influence to open an exceptional career path for them. Thus, if the distribution of kids' general intelligence is really going up all the way, we'd expect teachers and professors to report a dramatic increase in the number of such brilliant students, but that's apparently not the case.

Moreover, many historical geniuses had to overcome far greater hurdles than having to chase grants and learn a lot before reaching competence for original work. Here I mean not just the regular life hardships, like when Tesla had to dig ditches for a living or when Ramanujan couldn't afford paper and pencil, but also the intellectual hurdles like having to become professionally proficient in the predominant language of science (whether English today or German, French, or Latin in the past), which can take at least as much intellectual effort as studying a whole subfield of science thoroughly.

So, while your hypothesis makes sense, I don't think it can fully explain the puzzle.

Comment author: NancyLebovitz 14 May 2010 09:22:32AM 0 points [-]

How are popular books about popular culture an indicator of rising IQ? You mean, e.g., a book about Michael Jackson?

I should have explained: I was thinking about books looking at the physics or philosophy implications of particular popular shows or books.

It could just be that such books would have been popular a century ago, but no one thought to write and/or publish them.

Comment author: harpend 14 May 2010 01:46:56AM 1 point [-]

Can you elaborate your comment--sounds fascinating. HCH

Comment author: NancyLebovitz 14 May 2010 09:11:46AM 0 points [-]

I don't have titles handy, but I think the first one I noticed was essays about Stephen King. Since then, there've been books about the physics of Star Trek and the ethics of Buffy.

I'm curious about whether anyone knows of such books addressed to popular audiences from more than a few decades ago, or of studies of the genre.

Comment author: CronoDAS 12 May 2010 07:15:19PM 2 points [-]

One is that social changes have had the effect of "training" people for cognitive tests: more magazines, radio, chatter everywhere, advertising, etc.

A similar argument was made in the book Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter.

Comment author: NancyLebovitz 13 May 2010 02:18:16AM *  6 points [-]

I suspect that people wanting more complex popular culture because they've gotten smarter is at least as big a factor as the more complex culture making them smarter by accident.

Anyone have any actual knowledge of why TV shows started doing longer, more complex story arcs?

Comment author: RobinZ 13 May 2010 02:30:11AM 5 points [-]

I have no such knowledge, but allow me to add "better recording and rewatching options" to the list of candidates. Ready access to the backlog is certainly a factor in the success of serials in webcomics over newspaper comics, for example. (Yes, there are serials in both, but they are the norm in webcomics and the exception in print.)

Comment author: Nanani 13 May 2010 03:04:57AM 6 points [-]

Not to mention viewer-base fragmentation. There is less need to appeal to the so-called lowest common denominator when there are hundreds or thousands of avenues for transmission. Those without patience for long story arcs can watch a different program more easily today than they could before cable, satellite, and the internet.

Comment author: JoshSN 15 May 2010 07:28:27PM -1 points [-]

Well, now it is four cents. Parents even teach to IQ tests.

Childhood insults? I'm sure you meant childhood disease.

Comment author: cupholder 15 May 2010 09:20:20PM 2 points [-]

I think Harpending was using the word 'insults' in the (less common nowadays) sense of 'injuries.'

Comment deleted 11 May 2010 11:20:19AM *  [-]
Comment author: gcochran 14 May 2010 06:25:28AM *  14 points [-]

I would say that it is in some sense obvious that higher intelligence is possible, because the process that led to whatever intelligence we have was haphazard (path-dependent, stochastic, and all that) and because what optimization did occur was under severe constraints - some of which no longer apply. Clearly, the best possible performance under severe constraints is inferior to the best possible with fewer constraints.

So, if C-sections allow baby heads to get bigger, or if calories are freely available today, changes in brain development that take advantage of those relaxed constraints ought to be feasible. In principle this does not have to result in people who are damaged or goofy, although they would not do well in ancestral environments. In practice, since we won't know what the hell we are doing... of course it will.

Still, that's too close to an existence proof: it doesn't really tell you how to do it.

You could probably get real improvements by mining existing genetic variation: look at individuals and groups with unusually high IQs and search for causal variants. Plomin and company haven't had any real success (in terms of QTLs that explain much of the variance), but for this purpose one doesn't care about variance explained, just effect size. A rare allele that does the job would be useful. I'd look at groups with high average IQ, but at others also.

There are other possible approaches. If you could error-correct the genome, fix all the mutational noise, you might see higher IQ. You could dig up Gauss and clone him. My favorite idea is finding two human ethnic groups that 'nick' - whose F1 offspring exhibit hybrid vigor.

As for the singularity: I could, I think, make a pretty good case that scientific and technological progress is slowing down.

Comment deleted 14 May 2010 12:32:44PM *  [-]
Comment author: Jack 14 May 2010 12:59:50PM *  7 points [-]

If the problem is Kurzweil's message then it probably doesn't help SIAI's brand that he's listed second.

Anecdotally, I'd say you're absolutely right and that SIAI's prospects could be substantially improved by jettisoning the term "singularity". I'm someone who SIAI should want to target as a supporter, and I've mostly come around but the term singularity just radiates bad juju for me. I think I'm going to apply for a visiting fellow spot but frankly, I'm not especially comfortable telling friends and family that I'm planning to work at a place called the Singularity Institute for Artificial Intelligence and not get paid for it (I'm hoping they don't have the same reaction to the word that I did). I suspect I would have been more supportive earlier if SIAI had been called something else.

Comment author: SilasBarta 14 May 2010 02:24:03PM *  1 point [-]

I concur. Whenever I describe what I would be doing if I volunteered for SIAI, I avoid mentioning its name entirely and just say that they deal in "robotics" (which I tend to use instead of AI) at the "theoretical level", that they want to bring it to the "level of human intelligence", and that they study "risks to humanity".

Of course, this is all "counting chickens 'fore they're hatched" at this point, because I haven't sent my email/CV to Anna Salamon yet...

Comment author: whpearson 14 May 2010 05:34:51PM *  2 points [-]

But current predictions of what happens when smarter than human AI is made, somewhat rely on there being a positive relation between brain/processing power and technological innovation.

The brain power and processing power of humanity is ever increasing, more human population, more educated humans and more computing power. We can crunch ever bigger data sets. The science we are trying to do requires us to use these bigger data sets as well (LHC, genomic analysis, weather prediction). Perhaps we have nearly exhausted the simple science and we are left with the increasingly complex, and similar problems will happen to AI if it tries to self-improve. The question would be whether the rate of self-improvement would be greater than or less than the rate of increasing difficulty of the problems it had to solve to self-improve.

Comment author: harpend 12 May 2010 04:58:53PM 8 points [-]

I have heard discussion about the singularity on the web but I have never had any idea at all what it is, so I can't say much about that.

I do not think there is much prospect for dramatic IQ elevation without producing somewhat damaged people. We talk a lot in our book about the ever-present deleterious consequences of the strong selection that follows any environmental change. Have a look for example at the whippet homozygous for a dinged version of myostatin. Even a magic pill is likely to do the same thing. OTOH scientists don't have a very good track record at predicting the future. Now, I am going to hop into my flying car and go to the office -:)

HCH

Comment deleted 12 May 2010 05:18:57PM *  [-]
Comment author: harpend 14 May 2010 02:25:55AM 10 points [-]

I don't know but I can give you some candidates. One is torsion spasm (Idiopathic Torsion Dystonia). It will give you about a ten point IQ boost just by itself. Most of the time the only effect of the disease is vulnerability to writer's cramp, but 10% of the time it puts you in a wheelchair. So you could do science just fine.

Similarly the Ashkenazi form of Gaucher's disease is not ordinarily all that serious but it also gives a hefty IQ boost. Asperger-like stuff would probably also increase: many super-bright people seem to be a bit not quite. Of course lots of other super-brights seem to be completely normal.

I am just babbling, I have no special insight at all...

HCH

Comment author: Kaj_Sotala 11 May 2010 03:58:15AM 7 points [-]

Michael Vassar is having trouble accessing this site right now, so asked me to relay this question:

You mention in your book (p. 69) that from 100,000 BC to 12,000 BC, the human population increased from half a million to six million thanks to better hunting tools and techniques. On the other hand, from page 100 onwards, you discuss Malthusian limits to population, implying that the sizes of primitive populations were proportional to the amount of food available. In other words, you seem to be saying that from 100,000 BC to 12,000 BC, the human population grew because better hunting techniques increased the availability of food.

But better hunting technologies won't generally tend to raise Malthusian limits strongly. While hunting better will mean that new prey become exploitable, it also means that old prey are continually hunted to extinction. The net result isn't a systematic trend. How strong is the evidence for any prehistoric population sizes? How do the implied population densities compare to those for other large omnivores, such as black bears and pigs, in their territories, or to the population densities at which Chimps live? Why would human densities have been much lower?

Comment author: gcochran 11 May 2010 08:21:32PM *  9 points [-]
 Better hunting techniques can significantly raise Malthusian limits.

First, you have to remember that old-fashioned humans were one predator among many: improved hunting techniques could raise our share of the pot, as well as decreasing other predators' tendency to eat us. Also, modern humans seem to have used carcasses more efficiently than Neanderthals: they had permafrost storage pits and drying racks, so could have preserved meat for long periods. Neanderthals didn't, and I think they must have wasted a lot. Next, moderns used snares, traps, nets, bows etc to catch smaller game not much harvested by Neanderthals: they also made more use of fish and molluscs. And lastly, more plant foods. Altogether, their innovations gave them a larger share of the game, used that share more efficiently, tapped marine resources (lots of salmon in Europe), and harvested resources at a lower trophic level ( plants for example), which are always more abundant.

Hunting to extinction happened in some places, but not everywhere: it hardly happened in Africa at all. It happened most in places with no previous hominid occupation.

Implied population densities are, I think, extrapolations from known hunter gatherers, salted with some archeological info. Estimated densities range from 0.01/km sq to 1/km sq, strongly dependent upon resources. A lot like those of bears. Lower than chimps, probably: but then chimps manage with a lower-quality diet than humans. We're probably not as good at digesting leaves as they are.

Probably you have to consider Pleistocene climate as well: the world was generally nastier - colder, drier, lower plant productivity due to low CO2 levels.

Comment author: AnnaSalamon 11 May 2010 12:05:22AM *  10 points [-]

I haven't read your book yet, so forgive me if you discuss this there. But I’ve been wondering:

Simple traits (such as an organism's height) are probably relatively easy to alter via genetic mutations, without needing to combine many different genes chosen from huge populations. So, e.g., dog breeding altered dogs’ size relatively easily.

Complex adaptations aren’t nearly so easy to come by.

If intelligence is a conceptually simple thing, there might be simple mutations that create “more intelligence” -- it might be possible to make smarter people/mice/etc. by tuning a setting on an adaptation we already have. (E.g., “make more brain cells”).

If intelligence is instead something that requires many information-theoretic bits to specify, e.g. because “intelligence” is a matter of fit between an organism’s biases and the details of its environment, it shouldn’t be easy to create much more intelligence from a single mutation. (Just as if the target was a long arbitrary string in binary, and the genetic code specified that string digit by digit, simple mutations would increase fit by at most one digit.)
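The binary-string analogy can be made concrete with a toy hill-climber (an illustrative sketch only; the target length, step count, and all names here are invented for the example): when fitness is bit-by-bit match to an arbitrary target, an accepted single-bit mutation improves fit by at most one bit per step, so a high-fit genome can only be reached through many small steps.

```python
import random

random.seed(0)

TARGET = [random.randint(0, 1) for _ in range(64)]  # arbitrary 64-bit "environment"

def fitness(genome):
    """Number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

genome = [random.randint(0, 1) for _ in range(64)]
history = [fitness(genome)]

for _ in range(1000):
    # A "simple mutation": flip one random bit; keep it if fit does not drop.
    i = random.randrange(64)
    mutant = genome[:]
    mutant[i] ^= 1
    if fitness(mutant) >= fitness(genome):
        genome = mutant
    history.append(fitness(genome))

# No single mutation ever gains more than one bit of fit.
assert all(b - a <= 1 for a, b in zip(history, history[1:]))
```

After a thousand such steps the genome matches the full target, but only because each of the many mismatched bits was fixed one at a time - which is the point of the analogy.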

From the manner in which modern human intelligence evolved, what’s your guess at how simple human (or animal) intelligence is?

Comment author: harpend 12 May 2010 05:50:24PM 6 points [-]

It must be simple in some way since it is so heritable. People with IQs of 90 and IQs of 140 both prosper and do fine, although there are lots of statistical differences between two such groups.

On the other hand, if we take a trait like "propensity to learn language in childhood", this seems to me to be relatively invariable and fixed and so probably very complex.

Certainly one could breed for IQ and raise the population mean a lot. But what would we be doing to our children? People with 140 IQ seem to do all right but I would worry a lot about the kind of life a kid with an IQ of 220 would have.

Comment author: alliumnsk 23 March 2015 08:08:52AM 3 points [-]

An IQ 220 kid will do just fine in the company of other IQ 220 kids and teachers.

Comment author: NancyLebovitz 13 May 2010 02:08:48AM 9 points [-]

Do you see any difficulties for very high IQ children other than isolation?

It's a little much to expect people to have so much patience, but doing moderate IQ increases generation by generation, with large numbers of increased IQ children in each generation would do a lot to solve the social problems.

Comment author: harpend 11 May 2010 03:46:26AM 13 points [-]

You are even meaner than Shulman. We don't know how human intelligence evolved and we need to know it in order to answer your question I think. This is where evolutionary psychology and differential psychology (Am I using that term right?) must come together to work this out.

We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence. BRCA1 is another early CNS growth suppressor, and we speculate in the book that a mildly broken BRCA1 is an IQ booster even though it gives you cancer later. BTW Greg tells me that there is a high correlation between IQ and the risk of brain cancer, perhaps because of the same mechanism.

But these ways of boosting IQ are Red Green engineering. (Red Green is a popular North American comedy on television. The hero is a do-it-yourselfer who does everything shoddily.)

On the other hand IQ seems to behave like a textbook quantitative trait and it ought to respond rapidly to selection. We suggest that it did among Ashkenazi Jews and probably Parsis. IQ does not seem to have a downside in the general population, e.g. it is positively correlated with physical attractiveness, health, lifespan, and so on. Do we get insight into the costs of high IQ by looking at Ashkenazi Jews? Do they have overall higher rates of mental quirks? Cancer? I don't know.

HCH

Comment author: NaN 11 May 2010 11:55:11AM 8 points [-]

We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence.

There is now substantial evidence that there is a causal link between prolonged focusing on close objects - of which probably the most common case is reading books (it appears that monitors are not close enough to have a substantial effect) - and nearsightedness/myopia, though this is still somewhat controversial. This is the typical explanation for the correlation between myopia and IQ and academic achievement.

A genetic explanation is possible, and would be fascinating, but I wouldn't want to accept that without further evidence. If the genetic explanation is true and environment makes no contribution, then I think one should find that IQ is more highly correlated with myopia than academic achievement -- I don't know if this has been found or not.

Comment author: alliumnsk 23 March 2015 08:06:49AM 0 points [-]

If the genetic explanation is true and environment makes no contribution, then I think one should find that IQ is more highly correlated with myopia than academic achievement

It's like saying "if evolution is true, crocoducks should exist". You are (deliberately?) misrepresenting your opponent's views. He meant that of all the genetic variation affecting IQ, only a small, but non-negligible, subset affects both myopia and IQ. However I still don't quite get how a larger brain can cause myopia rather than hyperopia.

Comment author: Jiro 23 March 2015 07:17:20PM 0 points [-]

Maybe the larger brain leads to more intelligence, and people with more intelligence read more, and reading more leads to myopia. (Whether reading actually leads to myopia can be questioned, but that doesn't affect the point.)

Comment author: PhilGoetz 11 May 2010 10:42:04PM *  0 points [-]

I think one should find that IQ is more highly correlated with myopia than academic achievement

More correlated than academic achievement is correlated with IQ, or with myopia?

Your comment is a very good point. But IQ may be more-closely correlated with academic achievement than academic achievement is with reading books; so this comparison might not help. (And you want to talk about the variance in X accounted for by Y but not by Z, rather than place a bet on whether Y or Z has a higher correlation with X.)

Comment author: harpend 12 May 2010 05:55:55PM 3 points [-]

Yes, of course. But remember that in science we are not in the business of "accepting" one thing or another. That is the domain of religion and politics. The only thing that matters is finding good hypotheses and testing them.

HCH

Comment author: Wei_Dai 11 May 2010 06:14:29AM 7 points [-]

We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence.

That's interesting. I found a 2006 paper which argued that a genetic mutation is responsible for myopia, and that it also increases intelligence, but the specific gene and mechanism involved were apparently still unknown at that time. Has there been some more recent research results on this topic?

Comment author: harpend 12 May 2010 05:57:06PM 4 points [-]

There is apparently a research group in China that has some solid results but I have not seen them and do not know if they are out yet.

HCH

Comment author: Will_Newsome 11 May 2010 04:06:54AM 9 points [-]

You are even meaner than Shulman.

They're engaged. :)

Comment author: John_Maxwell_IV 11 May 2010 05:54:36AM 7 points [-]

From The 2% Difference, an article by Robert Sapolsky:

Given the outward differences, it seems reasonable to expect to find fundamental differences in the portions of the genome that determine chimp and human brains—reasonable, at least, to a brainocentric neurobiologist like me. But as it turns out, the chimp brain and the human brain differ hardly at all in their genetic underpinnings. Indeed, a close look at the chimp genome reveals an important lesson in how genes and evolution work, and it suggests that chimps and humans are a lot more similar than even a neurobiologist might think.

...

... Still, chimps and humans have very different brains. So which are the brain-specific genes that have evolved in very different directions in the two species? It turns out that there are hardly any that fit that bill. This, too, makes a great deal of sense. Examine a neuron from a human brain under a microscope, then do the same with a neuron from the brain of a chimp, a rat, a frog, or a sea slug. The neurons all look the same: fibrous dendrites at one end, an axonal cable at the other. They all run on the same basic mechanism: channels and pumps that move sodium, potassium, and calcium around, triggering a wave of excitation called an action potential. They all have a similar complement of neurotransmitters: serotonin, dopamine, glutamate, and so on. They're all the same basic building blocks.

The main difference is in the sheer number of neurons. The human brain has 100 million times the number of neurons a sea slug's brain has. Where do those differences in quantity come from? At some point in their development, all embryos—whether human, chimp, rat, frog, or slug—must have a single first cell committed toward generating neurons. That cell divides and gives rise to 2 cells; those divide into 4, then 8, then 16. After a dozen rounds of cell division, you've got roughly enough neurons to run a slug. Go another 25 rounds or so and you've got a human brain. Stop a couple of rounds short of that and, at about one-third the size of a human brain, you've got one for a chimp. Vastly different outcomes, but relatively few genes regulate the number of rounds of cell division in the nervous system before calling a halt. And it's precisely some of those genes, the ones involved in neural development, that appear on the list of differences between the chimp and human genomes.

That's it; that's the 2 percent solution. What's shocking is the simplicity of it. Humans, to be human, don't need to have evolved unique genes that code for entirely novel types of neurons or neurotransmitters, or a more complex hippocampus (with resulting improvements in memory), or a more complex frontal cortex (from which we gain the ability to postpone gratification). Instead, our braininess as a species arises from having humongous numbers of just a few types of off-the-rack neurons and from the exponentially greater number of interactions between them. The difference is sheer quantity: Qualitative distinctions emerge from large numbers. Genes may have something to do with that quantity, and thus with the complexity of the quality that emerges. Yet no gene or genome can ever tell us what sorts of qualities those will be. Remember that when you and the chimp are eyeball to eyeball, trying to make sense of why the other seems vaguely familiar.

Comment author: MugaSofer 22 January 2013 10:17:52AM -2 points [-]

If that's actually correct, we should be able to just breed a superintelligence. Maybe not one as powerful as an AI gone foom, but still orders of magnitude higher than us mortals.

Unless he claims at some point that humans reached some sort of hard limit, but it seems vastly more likely that huge brains are costly and we're the point where the tradeoffs balanced.

Comment author: John_Maxwell_IV 23 January 2013 07:57:05AM *  1 point [-]

Supposedly human brain size is limited by the skulls that will fit out of our mothers, and human babies are actually born premature relative to other species because it's only when we are premature that our skulls will still fit out.

Of course, we have cesarean births now, so...

Comment author: MugaSofer 23 January 2013 01:24:58PM *  -2 points [-]

Great points.

And, since we're born premature as you said, there's already a partial workaround even if you need "natural" births for some reason (potential complications from the surgery?)

Comment author: Fronken 22 January 2013 04:59:07PM -1 points [-]

If that's actually correct, we should be able to just breed a superintelligence.

That's not really a new idea :P all those sci fi worlds with brain bugs and future humans worshiping the morlock king knew that.

Comment author: mattnewport 11 May 2010 12:23:40AM *  2 points [-]

Moved to the kibitzing thread.

Comment author: CarlShulman 11 May 2010 12:14:24AM 8 points [-]

If a trait is being selected for, the alleles with large positive effects will compound with a faster growth rate than those with small effects (even if there are initially many more small-effect alleles) and tend to account for a large portion of the heritability of that trait (at least until they have almost swept the population).

You suggest that psychological traits such as personality and cognition have been subject to recent positive selection, so why haven't GWAS (or targeted investigations, e.g. microcephalin) found much in the way of common large effect alleles for psychological traits? What are your best guesses on the genetic architectures of personality and cognition?
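The compounding claim in the first paragraph can be illustrated with the standard deterministic single-locus selection recursion (a sketch with made-up starting frequencies and effect sizes, not figures from the book or the discussion):

```python
def freq_after(p0, s, generations):
    """Deterministic genic selection: the allele's odds p/(1-p)
    are multiplied by (1 + s) each generation."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + s * p)
    return p

# A rare large-effect allele versus a commoner small-effect allele,
# both selected for 300 generations.
big = freq_after(p0=0.001, s=0.05, generations=300)
small = freq_after(p0=0.010, s=0.005, generations=300)

# The large-effect allele has nearly swept, despite starting 10x rarer;
# the small-effect allele is still at a few percent.
```

Because the odds grow as (1 + s)^t, a tenfold difference in s eventually swamps any plausible difference in starting frequency - which is why large-effect alleles, if present, should dominate the heritability mid-sweep.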

Comment author: harpend 11 May 2010 03:35:18AM 11 points [-]

Yikes! This is worse than my PhD orals.

There have been some (tentatively) identified like the 7-repeat version of the D4 dopamine receptor, the serotonin transporter, and others that Greg will be able to dredge up from his memory.

We may have found others but not identified them. Imagine that it would be highly beneficial to have a little bit less of substance s. If so then a mutation that broke the gene producing s would be favored a lot and would sweep until people with two copies of broken s started being born. How likely is it now that two broken copies of s will still work? A lot of the sweeps identified from SNP scans seem to have stalled out at intermediate frequencies (as opposed to going to fixation) suggesting that heterozygote advantage is widespread.

If so, genome-wide association studies ought to find them, and they do find a lot, but many of the findings are not replicable. So after all the above I have no coherent answer to your question!

Comment author: Kaj_Sotala 11 May 2010 03:41:24AM *  4 points [-]

I thought RichardKennaway's previous comment was interesting, and would appreciate hearing your comments on it. Commenting on the hypothesis that life under the rule of others may have selected for submissiveness, he wrote:

On the other hand, submissiveness is surely selected against in rulers, who as noted in the posting leave more descendants than proles. So perhaps in a society in which the strong rule and the weak submit there is some evolutionarily stable distribution along a submissive/aggressive spectrum, rather than favouring one or the other?

Comment author: jmmcd 13 May 2010 10:55:14AM *  4 points [-]

I don't think it's correct to assume a pure strategy (ie each male is either dominant OR submissive). It might make more sense for males to be able to switch when the opportunity arises from submissive to dominant (a mixed strategy in game theory terms). I think outsider orang-utans can become alphas (adding the distinctive cheek-flaps etc) when they find a group that will let them join, for example.

We do see humans making the same transition (and in the other direction too) when they move between groups, and when opportunities arise.

Comment author: harpend 12 May 2010 05:42:35PM 10 points [-]

My feeling is that the dichotomy between societies where males are threatening and violent and societies where males are submissive and not threats to each other is the most interesting social dichotomy we have. In some societies where males are threats there is a clear alternative niche like the Berdache on the Great Plains. In urban ghettos with drug dealers and street corner males there is a significant set of males who hold down jobs and, often, bring the proceeds to support their matrifocal families. How much such males reproduce is not clear. A wonderful description of this, with a zany analysis, is (Sharff, J. W. (1981). Free enterprise and the ghetto family. Psychology Today, 15, 41-8.)

There may well be stable distributions lurking in the social system but they are likely different everywhere: that for Bushmen would be quite different from that for Mundurucu.

Rulers do not always leave more descendants than proles. I highly recommend Gregory Clark's "A Farewell to Alms", in which he shows that the medieval ruling class in Britain essentially all killed each other and have no descendants today. On the other end, peasants and laborers did not reproduce themselves, so almost everyone in the UK today is descended from the medieval gentry, prosperous merchants, and so on.

Comment author: TobyBartels 08 October 2012 10:37:19PM *  0 points [-]

the medieval ruling class in Britain essentially all killed each other and have no descendants today

I'm one of their descendants.

I rather assumed that every Anglo-Saxon was (excluding the royal family through Charles, whose ancestry is German, but including Diana Spencer and her children), and that I only knew how because I had wealthy ancestors who kept track. But even if that's not so, they don't have no descendants.

ETA: On second thought, perhaps the scope of ‘essentially’ was meant to extend to the end of the sentence.

Comment author: CarlShulman 11 May 2010 12:19:21AM 6 points [-]

The Neanderthal genomics work showing a few percent of non-African human genomes inherited from Neanderthals suggests that any individual handy Neanderthal alleles would have needed only a few doublings to reach fixation. Any news on whether the Neanderthal variants show more or less post-mixture selection than you would have expected?

Comment author: harpend 11 May 2010 02:58:16AM *  10 points [-]

Hi Carl:

No word on that yet. They identified regions of the genome where there are (1) deep gene trees in Europe and/or Asia, (2) we share variants with Neanderthals, and (3) these shared variants are absent in Africa, and they found a lot of them. But if some variants in Neanderthals were positively selected in humans very early on then they would have spread through all humanity, and no one has scanned for those yet.

Our favorite candidate is the famous FOXP2 region, without which one has no speech. Every human has it, and the diversity near it on the chromosome suggests that it is 42,000 years old in humans. Neanderthals have the human version (so far), so a likely scenario is that we stole it from Neanderthals.

HCH

Comment author: gcochran 14 May 2010 08:46:17AM 9 points [-]

Paabo seems to think it unlikely that any of these introgressed alleles had a significant selective advantage in humans, but that's unlikely. I'll bet money on this.

To be fair, I should explain why that is a sucker bet. John Hawks and I discussed a situation with just a few tens of matings over all time: we were making the point that even in that minimal scenario, alleles with large advantages (on the order of 5%) could jump over to modern humans. The Max Planck estimate of 2% Neanderthal admixture is far more favorable to introgression: with that much of a start, and with at least 50,000 years to grow in, any allele with a selective advantage > 0.2% is likely to be over 50% today. Many such Neanderthal alleles should be fixed in Eurasians - or in some Eurasian populations in the right environments - or even in Africans, if the allele conferred global advantages. Of course we'd have trouble proving this in Africans: the Science study really shows how much more Neanderthal ancestry Eurasians have than Africans, not the absolute amount in either population.

Note that the Fisher-wave velocity goes as the square root of the selective advantage: a Neanderthal allele with an advantage of 0.2% might have spread as far as the European lactase persistence variant, which probably had a selective advantage > 10%. Today we find that allele from north India, to Iceland, to the southern fringe of the Sahara.
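Cochran's figures can be checked with the textbook selection recursion (assuming, for illustration, generations of roughly 25 years, so 50,000 years is about 2,000 generations):

```python
def freq_after(p0, s, generations):
    """Deterministic single-locus selection: the allele's odds p/(1-p)
    grow by a factor (1 + s) each generation."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + s * p)
    return p

# An introgressed allele starting at the 2% admixture fraction, with a
# 0.2% selective advantage, after roughly 50,000 years:
p = freq_after(p0=0.02, s=0.002, generations=2000)
# p comes out a little over one half, matching the "over 50% today" claim.

# Fisher-wave speed scales as sqrt(s): relative geographic spread rate of
# the s = 0.002 allele versus lactase persistence at roughly s = 0.10.
speed_ratio = (0.002 / 0.10) ** 0.5  # about 1/7 the speed
```

So even a weakly favored Neanderthal allele would by now be common, though its geographic footprint would be several times smaller than that of the lactase-persistence sweep.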

Introgressing advantageous alleles derived from Neanderthals are probably more likely to go to fixation than most new favorable mutations. We now suspect that the majority of alleles that give large advantages to heterozygotes give smaller advantages to homozygotes and thus never go to fixation, using a variant of Fisher's geometric argument. In the long run they are replaced by alleles that work better in homozygotes and do go to fixation - but when we stole alleles from Neanderthals, we were mostly getting old tested ones, rather than flash-in-the-pan alleles like sickle-cell.

There had to be such advantageous alleles because Neanderthals had been in Europe and west Asia for hundreds of thousands of years - they were well-adapted to that different, non-African ecology.

When there is introgression between species, transmission of adaptive alleles seems to always happen. We know a lot about some cases: one good example is introgression in cattle. Taurine cattle were domesticated in the Middle East, Zebu cattle in India, from ancestral stocks that diverged about half a million years ago. Zebu genes have introgressed a lot into African taurine cattle, in part due to known advantages in heat/aridity tolerance and rinderpest resistance. Creeping zebuization has been going on in the Middle East for thousands of years. If you go as far west as Egypt, cattle are about 25% Zebu in the nuclear genome, while you don't see any zebu mtDNA or Y-chromosomes. This kind of discordance between the introgression of mtDNA/Y chromosomes and nuclear genomes is more common than not: looks like the same thing happened to us. Plausible when you think about it. Neanderthal mtDNA may well have had a selective disadvantage: they may have been blatant heat-wasters, since they had crummy clothing. Small population size might also have resulted in somewhat bunged up mtDNA, since selection is less efficient then.

Obviously some Neanderthal alleles had a selective disadvantage in humans, for example those that determined their different body form. Many more must have been effectively neutral, with no noticeable advantage over the version in anatomically modern humans. But some must have been useful - and the more useful they were, the more common they are today.

There seems to be a pattern in which an invasive species shows up, hangs around in an unspectacular way for some time while it's picking up alleles from local sister species, and then spreads out irresistibly. We generally call those cosmopolitan species weeds.

Were some of those introgressing Neanderthal genes adaptive? Had to be. Do they account for the cultural big bang somewhat later? It would make sense, but it's not a lock. I'd call it likely.

Comment author: CarlShulman 11 May 2010 12:22:50AM *  5 points [-]

The mathematical models for an acceleration of human evolution seem like they could have been developed earlier. Would more researchers, or more 'maverick' researchers have much advanced progress in the field? Or would an increased stock of mathematical analysis have simply sat around unused until the advent of the new genomics tools and their ability to measure selection?

Comment author: harpend 11 May 2010 03:25:51AM 13 points [-]

That is a big and interesting question. I do not think that evolutionary biology needed more math at all: they would have done better with less I think. The only math needed (so far) in thinking about acceleration is the result that the fixation probability of a new mutant is 1/2N if it is neutral and 2s if it has selective advantage s. The other important equation is that the change in a quantitative trait is the product of the heritability and the selection differential (the difference between the mean of the selected parents and the mean of the population).
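Those two textbook results are compact enough to write out directly (the numbers below are invented for illustration, not from the book):

```python
def fixation_prob(N, s=0.0):
    """Fixation probability of a single new mutant in a diploid population
    of effective size N: roughly 1/(2N) if neutral, roughly 2s if it has
    a small selective advantage s (valid for small s, large N)."""
    return 1.0 / (2 * N) if s == 0 else 2 * s

def selection_response(h2, S):
    """Breeder's equation R = h^2 * S: the per-generation change in a
    quantitative trait is heritability times the selection differential S."""
    return h2 * S

# A neutral mutant in a population of 10,000 almost always disappears:
p_neutral = fixation_prob(10_000)           # 1 in 20,000
# Even a 1% advantage fixes only about 2% of the time:
p_selected = fixation_prob(10_000, s=0.01)  # 0.02
# If parents average 10 trait points above the mean and heritability is 0.5,
# the next generation shifts about 5 points:
gain = selection_response(0.5, 10)          # 5.0
```

The acceleration argument needs nothing more: a bigger population supplies more new mutants for the 2s lottery, and sustained selection differentials move heritable traits fast.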

The history is that there was a ruckus in the 1960s between the selectionists and the new sect of neutralism, and neutralism more or less won. Selectionists persisted but that literature has a focus on bacteria in chemostats, plants, yeast, and such. Neutralism answered lots of questions and is associated with some lovely math, but as we took it up we (many of us) lost sight of real evolutionary issues.

Milford Wolpoff, in a review of our book in the American Journal of Physical Anthropology points out that his student Dave Frayer collected a lot of data on changes in European skull size and shape that implied very rapid evolution. In other words we "knew it all along" but never paid attention. In fact Cochran and I "knew" it but never put it together with the new findings from SNP chips. John Hawks did, right away.

So fashion rules, and it is difficult to get away from it, I suppose.

Comment author: gcochran 12 May 2010 04:23:09AM 13 points [-]

Hawks and I were talking about new genetic studies that showed a surprising number of sweeps, more than you'd expect from the long-term rate of change - and simultaneously noticed that there sure are a lot more people than there used to be - all potential mutants.
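Cochran's point is that the supply of new mutations scales linearly with population size. A minimal sketch of that arithmetic, with a made-up mutation rate and population figures chosen only for illustration:

```python
def new_mutants_per_generation(N, mu):
    """Expected number of new copies of a given mutation arising per
    generation in a diploid population of size N, with per-locus,
    per-generation mutation rate mu: 2N chromosomes, each mutating
    with probability mu."""
    return 2 * N * mu

mu = 1e-8  # illustrative per-locus mutation rate (assumption, not a measured value)
small = new_mutants_per_generation(100_000, mu)     # hypothetical pre-agriculture population
large = new_mutants_per_generation(10_000_000, mu)  # hypothetical post-agriculture population
print(round(large / small))  # 100
```

A hundredfold growth in population means a hundredfold more new mutants per generation for selection to act on, which is the mechanism behind the "more sweeps than expected" observation.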

As for why someone didn't point this out earlier - say in 1930, when key results were available - I blame bad traditions in biology. Biologists mostly don't believe in theory: even when its predictions come true, they're not impressed.

My advantage, at least in part, comes from having had exactly one biology course in my entire life, which I took in the summer of my freshman year of high school, in a successful effort to avoid dissecting. If I ever write a scientific autobiography, it will be titled "Avoiding the Frog".

Comment author: CarlShulman 12 May 2010 08:22:11PM 2 points [-]

Biologists mostly don't believe in theory: even when its predictions come true, they're not impressed.

Because theory in the field is so often wrong that they treat successes as a stopped clock being right twice a day? Or something more complex?

Comment author: gcochran 14 May 2010 06:59:07AM *  7 points [-]

There are sub-patterns. There are facts about natural selection that every plant geneticist knows that few human geneticists will accept without a fight. I mean, really, Henry, when a prominent human geneticist says "You don't really believe that bit about lactase persistence being selected, do you?", or when someone even more famous asks "So why would there be more mutations in a bigger population?" - their minds ain't right.

Comment author: NancyLebovitz 14 May 2010 09:31:57AM 2 points [-]

There are facts about natural selection that every plant geneticist knows that few human geneticists will accept without a fight.

Could you expand on that?

Comment author: harpend 14 May 2010 02:33:13AM 10 points [-]

I think Greg's 'biologists' are a special subset of biologists. As I see it, CP Snow was right about the two cultures. But within science there are also two cultures, one of which speaks mathematics and the other of which speaks organic chemistry. Speakers of organic chemistry share a view that enough lab work and enough data will answer all the questions. They don't need no silly equations.

In our field the folks who speak mathematics tend to view the lab rats as glorified techs. This is certainly not right, but it is there, and it leads to a certain amount of mutual disdain.

This kind of mutual disdain is apparently just not there in physics, between the theoretical and experimental people. I wish evolutionary biology were more like physics.

Comment author: TobyBartels 08 October 2012 08:04:52PM 1 point [-]

It goes further; there are even two cultures of mathematics!

Comment author: JenniferRM 11 May 2010 01:09:57AM *  3 points [-]

When I think of evolutionary psychology I generally jump to sharp and well defined claims that "mental modules" exist that (1) enable superior cognitive performance in specific domains relative to what typical people can do when they rely on "general reasoning" faculties, (2) evolved due to positive selection on our ancestors to deal with problems we faced over and over in our evolutionary history, and (3) should be pretty much universal among humans who don't have too many deleterious mutations.

When I think of people who focus specifically on innate human differences, I generally think of them studying much more abstract "traits" like performance on game theoretic tasks, or personality measures, or IQ. The sharp claims here mostly have to do with heritability numbers and whether a trait is "highly heritable" or "not very heritable".

Having this perception of two broad "kinds of research" my impression is that they do not seem to "play nicely" together. In some senses they don't address the same issues and in some senses they may have contradictory predictions.

First I'm curious as to whether you think different scientists really take generally different approaches in the way described here?

Assuming yes, do you think their research programs actually predict that people should see different things when examining "the same" phenomena?

Assuming yes, do you have a preference for one set of claims versus the other?

No matter where you step off of the train of questions, "Why or why not?" :-)

Comment author: harpend 11 May 2010 03:10:48AM 4 points [-]

I think your perception is correct, but I am no expert. I sense that evolutionary psychologists are really interested in human universals: the famous experiments of Tooby and Cosmides go right to that point. Why are we all afraid of snakes? Why are our babies so hard to toilet train? But they generally don't have a lot to say about variation among humans in these traits.

The other sort that you and I both perceive are interested in human diversity and aren't much concerned with the bigger questions of the ev psych people.

No, they don't "play nice" with each other mostly. It is an exaggeration to say that each regards the phenomena of the other as nuisances. They certainly should see different things: C&T see evolved cheater detection in a logic game, while psychologists of the London school see g, the general intelligence factor, playing itself out in the diversity of correct answers.

The two areas will come together soon: they are already starting. As some of the comments here indicate, we can't really understand what "Neanderthal intelligence" might mean until we understand the evolution(s) of intelligence. We can examine data all day and still have not an iota of insight about that bigger issue.

Comment author: Nanani 11 May 2010 01:58:00AM 0 points [-]

Wow! I haven't got any questions (yet) but I am very eager to dive into this Q&A. Thanks to everyone involved in organizing this.

By the way, you spelled Steve SailEr's name wrong.

Comment author: harpend 11 May 2010 02:59:26AM 6 points [-]

And thank you all for the honor of your invitation.

HCH