
How minimal is our intelligence?

54 points · Post author: Douglas_Reay · 25 November 2012 11:34PM

Gwern suggested that, if it were possible for civilization to have developed when our species had a lower IQ, then we'd still be dealing with the same problems, but we'd have a lower IQ with which to tackle them.   Or, to put it another way, it is unsurprising that living in a civilization has posed problems that our species finds difficult to tackle, because if we were capable of solving such problems easily, we'd probably also have been capable of developing civilization earlier than we did.

How true is that?

In this post I plan to look in detail at the origins of civilization with an eye to considering how much the timing of it did depend directly upon the IQ of our species, rather than upon other factors.

Although we don't have precise IQ test numbers for our immediate ancestral species, the fossil record is good enough to give us a clear idea of how brain size has changed over time:

[Figure: brain mass as a percent of body mass, plotted against time]

and we do have archaeological evidence of approximately when various technologies (such as pictograms, or using fire to cook meat) became common.

The First City

[Image: The Ziggurat of Ur-Nammu]

About 6,000 years ago (4000 BCE), Ur was a thriving trading village on the flood plain near the mouth of the river Euphrates in what is now called southern Iraq and what historians call Sumeria.

By 3000 BCE it was the heart of a city-state with a core built-up area covering 37 acres, and would go on over the following thousand years to lead the Sumerian empire, raise a great brick Ziggurat to its patron moon god, and become the largest city in the world (65,000 people concentrated in 54 acres).

It was eventually doomed by desertification and soil salination, caused by its own success (over-grazing and land clearing) but, by then, cities had spread throughout the fertile crescent of rivers at the intersection of the European, African and Asian land masses.

Ur may not have been the first city, but it was the first one we know of that wasn't part of a false dawn - one whose culture and technologies did demonstrably spread to other areas.  It was the flashpoint.

We don't know for certain what it was about the culture surrounding the dawn of cities that made that particular combination of trade, writing, specialisation, hierarchy and religion communicable, when similar cultures from previous false dawns failed to spread. We can trace each of those elements to earlier sources; none of them was original to Ur. Perhaps it was a case of a critical mass achieving a self-sustaining reaction.

What we can look at is why the conditions that allowed a village to grow into a city large enough for such a critical mass of developments to accumulate occurred at that particular time and place.

From Village to City

Motivation aside, the chief problem with sustaining large numbers of people together in a small area over several generations, while keeping them healthy enough for the population to grow without continual immigration, is ensuring access to a scalable, renewable, predictable source of calories.

To be predictable means surviving famine years, which requires crops that can be stored for several years, such as grasses (wheat, barley and millet) with large seeds, and good storage facilities to store them in.   It also means surviving pestilence, which requires having a variety of such crops.    To be scalable and renewable means supplying water and nutrients to those crops on an ongoing basis, which requires irrigation and fertiliser from domesticated animals (if you don't have handy regular floods).

Having large mammals available to domesticate, which can provide fertiliser and traction (pulling ploughs and harrows), certainly makes things easier, but doesn't seem to have been a large factor in the timing of the rise of civilisation, or particularly dependent upon the IQ of the human species. Research suggests that domestication may have been driven as much by the animals' own behaviour as by human intention, with those animals daring to approach humans more closely getting first choice of discarded food.

Re-planting seeds, to ensure there would be plants to gather in following years, led low-nutrition grasses to adapt into grains with high protein concentrations in their seeds. This does seem to have been a mainly intentional human activity, in that we can trace most of the gain in seed size of such plant species to locations where humans had transitioned from the palaeolithic hunter-gatherer culture (from about 2.5 million years ago to about 10,000 years ago) to the neolithic agricultural culture (from about 10,000 years ago onwards).

Good grain storage seems to have developed incrementally starting with crude stone silo pit designs in 9500 BCE, and progressing by 6000 BCE to customised buildings with raised floors and sealed ceramic containers which could store 80 tons of wheat in good condition for 4 years or more.  (Earthenware ceramics date to 25,000 BCE and earlier, though the potter's wheel, useful for mass production of regular storage vessels, does date to the Ubaid period.)

The main key to the timing of the transition from village to city seems to have been not human technology but the confluence of climate and biology. Jared Diamond points the finger at the geography of the region: the fertile crescent farmers had access to a wider variety of grains than anywhere else in the world, because that area links, and has access to the species of, three major land masses. The Mediterranean climate has a long dry season with a short period of rain, which made it ideal for growing grains (which are much easier to store for several years than, for instance, bananas). And everything kicked off when the climate stabilised after the most recent ice age ended about 12,000 years ago.

Ice Ages

Strictly speaking, we're actually talking about the end of a "glacial period" rather than the end of an entire "ice age".  The timeline goes:

200,000 years ago - 130,000 years ago : glacial period
130,000 years ago - 110,000 years ago : interglacial period
110,000 years ago -  12,000 years ago : glacial period
  12,000 years ago - present : interglacial period


So the question now is: why didn't humanity spawn civilisation in the fertile crescent 130,000 years ago, during the last interglacial period? Why did it happen in this one? Did we get significantly brighter in the meantime?

It isn't, on the face of it, an implausible idea.  100,000 years is long enough for evolutionary change to happen, and maybe inventing pottery or becoming farmers did take more brain power than humanity had back then.  Or, if not IQ, perhaps it was some other mental change like attention span, or the capacity to obey written laws, live as a specialist in a hierarchy, or similar.

But there's no evidence that this is the case, nor is there a need to hypothesise it, because there is at least one genetic change we do know about from that time period that is by itself sufficient to explain the lack of civilisation 130,000 years ago. And it has nothing to do with the brain.

Brains, Genes and Calories

Using the San Bushpeople as a guide to the palaeolithic diet, hunter-gatherer culture was able to support an average population density of one person per acre. Not that they ate badly, as individuals. Indeed, they seem to have done better than the early Neolithic farmers. But they had to be free to wander to follow nomadic food sources, and they were limited by access to food that the human body could use to create docosahexaenoic acid (DHA), a fatty acid required for human brain development. Originally humans got this from fish living in the lakes and rivers of central Africa. However, about 80,000 years ago, we developed a gene that let us synthesise the same acid from other sources, freeing humanity to migrate away from the wet areas, past the dry northern part, and out into the fertile crescent.

But there is a link between diet and brain.  Although the human brain represents only 2% of the body weight, it receives 15% of the cardiac output, 20% of total body oxygen consumption, and 25% of total body glucose utilization.  Brains are expensive, in terms of calories consumed.  Although brain size or brain activity that uses up glucose is not linearly related to individual IQ, they are linked on a species level.
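To put rough numbers on "expensive" (using an assumed 2,000 kcal/day total budget, and treating the ~20% figure as the brain's share of energy use; both are illustrative assumptions, not measurements):

```python
# Back-of-envelope cost of the human brain.
# Assumed figures: ~2,000 kcal/day total budget (illustrative),
# brain = 2% of body mass, ~20% of total energy consumption.
total_kcal_per_day = 2000.0
brain_mass_share = 0.02
brain_energy_share = 0.20

brain_kcal_per_day = total_kcal_per_day * brain_energy_share
# Energy use per unit mass, relative to the body-wide average:
relative_cost = brain_energy_share / brain_mass_share

print(f"Brain burns ~{brain_kcal_per_day:.0f} kcal/day")
print(f"Per gram, that is ~{relative_cost:.0f}x the body average")
```

On these assumptions the brain burns around 400 kcal/day, roughly ten times the per-gram rate of the rest of the body, which is the sense in which a bigger or busier brain is a real metabolic trade-off at the species level.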

IQ is polygenic, meaning that many different genes are relevant to a person's potential maximum IQ. (Note: there are many non-genetic factors that may prevent an individual reaching that potential.) Algernon's Law suggests that genes affecting IQ which still have multiple alleles common in the human population are likely to have a cost associated with the alleles tending to increase IQ; otherwise those alleles would have displaced their competitors. An animal species that can grow a fur coat in response to cold weather is more adaptable than one whose genes strictly determine that it will have a thick fur coat at all times, whether the weather is cold or hot. In the same way, the polygenic nature of human IQ gives human populations the ability to adapt on the time scale of just a few generations, increasing or decreasing the average IQ of the population as the environment changes to reduce or increase the penalties of particular trade-offs for particular alleles contributing to IQ. In particular, if the trade-off for some of those alleles is increased energy consumption, and we look at a population of humans moving from an environment where calories are the bottleneck on how many offspring can be produced and survive to an environment where calories are more easily available, then we might expect to see something similar to the Flynn effect.
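That last trade-off can be sketched with a toy one-locus selection model. All the numbers here are illustrative assumptions, not measured values; the point is only the direction of the effect: an allele that raises IQ at a calorie cost spreads when food is abundant and recedes when calories are the bottleneck.

```python
def allele_frequency(generations, p0, fitness_iq, fitness_base):
    """Deterministic one-locus haploid selection:
    each generation, p' = p * wA / (p * wA + (1 - p) * wB)."""
    p = p0
    for _ in range(generations):
        mean_w = p * fitness_iq + (1 - p) * fitness_base
        p = p * fitness_iq / mean_w
    return p

# Illustrative fitnesses for a calorie-hungry "IQ allele":
# when calories are abundant its benefit outweighs its cost (w = 1.05),
# when calories are the bottleneck the cost dominates (w = 0.95).
p_rich = allele_frequency(20, 0.5, 1.05, 1.00)
p_scarce = allele_frequency(20, 0.5, 0.95, 1.00)
print(f"after 20 generations: rich {p_rich:.2f}, scarce {p_scarce:.2f}")
# prints: after 20 generations: rich 0.73, scarce 0.26
```

Starting from the same 50% frequency, twenty generations (a few hundred years) are enough to move the allele substantially in either direction, which is the sense in which a polygenic trait can track the environment far faster than new mutations could arise.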

Summary

There is no cause to suppose, even if the human genome 100,000 years ago had the full set of IQ-related alleles present in our genome today, that they would have developed civilisation much sooner.

 


Comment Navigation Aide

link - DuncanS              - animal vs human intelligence
link - DuncanS              - brain size & brain efficiency
link - JaySwartz            - adaptability vs intelligence
link - RichardKennaway - does more intelligence tend to bring more societal happiness?
link - mrglwrf               - Ur vs Uruk
link - NancyLebovitz     - does decreased variance of intelligence tend to bring more societal happiness?
link - fubarobfusco       - victors writing history
link --                            consequentialist treatment of library burning
link --                            the average net contribution to society of people working in academia
link - John_Maxwell_IV - independent development of civilisation in the Americas
link - shminux              - How much of our IQ is dependent upon Docosahexaenoic acid?
link - army1987            - implications for the Great Filter
link - Vladimir_Nesov    - genome vs expressed IQ
link - Vladimir_Nesov    - Rhetorical nitpick
link - Vaniver               - IQ & non-processor-speed components of problem solving
link - JoshuaZ              - breakthroughs don't tend to require geniuses in order to be made
link - Desrtopa             - cultural factors



Comments (214)

Comment author: JoshuaZ 19 November 2012 04:19:44PM 11 points [-]

Also, this piece seems to be of high enough quality and of general interest that it probably makes sense to move it to main.

Comment author: Douglas_Reay 19 November 2012 05:25:22PM 5 points [-]

Ok, moved.

Comment author: Vladimir_Nesov 19 November 2012 04:45:07PM *  10 points [-]

There is no cause to suppose, even if the human genome 100,000 years ago had the full set of IQ-related-alleles present in our genome today, that they would have developed civilisation much sooner.

The original point was not about genomes, it was about expressed IQ. Suppose the reasons for absence of the currently normal IQ in the past were environmental. If I understand correctly, your argument in particular suggests that it's the environmentally-mediated increase in IQ that might have enabled the rise of civilization (in this interglacial period). Then it's still the case that present IQ level is about as low as it can be.

The distinction your argument makes seems to be about the reason for the recent rise in IQ (environmental, not genetic, at least not with changes in genes directly related to brains), not about the level of expressed IQ necessary to spark a technological civilization.

Comment author: gwern 19 November 2012 05:54:46PM 9 points [-]

Yes, I think this would be my past self's reply (I don't remember making that particular argument, but it does sound like something I would say). Even if we granted that IQ-linked alleles were identical 100kya, we still wouldn't have to grant that IQ was the same! We know of many powerful environmental effects on phenotypic IQ: to give a recent example of interest to me, just iodine & iron deficiency will cost on average 15 IQ points. One might expect random diseases and parasites to cost even more. (And remember that aside from the effect on the mean, the tails of the bell curve are going to be affected even more outrageously.)

And we know IQ connects in all sorts of way to economic attitudes, activity, growth, etc, with patterns indicative of bidirectional causality; see http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r01

More importantly, we have the equivalent of natural experiments on the importance of national IQ averages: African countries. There are countries where the limited samples suggest particularly low IQs; these are also the countries where economic growth is least, and anecdotally, charitable efforts like installing new infrastructure fail frequently.

Logically, it should be easier to leapfrog or catch up in growth based on existing technologies & methods, and this explains things like why it could take hundreds of millennia to go from apes to sub-Saharan Africa levels of wealth but South Korea could then go from sub-Saharan levels to industrialized democracy in something like 40 years. So, if the African countries with the least average intelligence can hardly maintain the existing infrastructures or per capita wealth, then this doesn't bode well for the prospects of them taking off, and is perfectly consistent with the observation of ~90 millennia of stagnation. (Now there's a 'great stagnation' for you!)

Comment author: JulianMorrison 24 November 2012 11:23:42AM 3 points [-]

The trouble with epigenetic IQ drop as a theory is that hunter gatherers were (IIRC, anthropologists please confirm) better fed, taller and healthier than early farmers. This being due to a combination of better diet (not a monoculture of one or two staples) and also due to the beginnings of the peasant/ruler classes and taxation of surplus. You would expect the farmers to be the ones with epigenetic lower IQ.

Comment author: gwern 25 November 2012 12:23:58AM 4 points [-]

I don't think 'epigenetic' means what you think it means. But anyway: yes, there is anthropological evidence of that sort (covered in Pinker's Better Angels and in something of Diamond's, IIRC), and height and mortality are generally believed to correlate with health and presumably then to IQ.

The problem is that this is a problem for all theories of civilization formation: if early farming was so much worse than hunter-gathering that we can tell just from the fossils, then why did civilization ever get started? There must have been something compelling or self-sustaining or network effects or something about it.

So, suppose it takes less IQ to maintain a basic civilization than to start one from scratch (as I already suggested in my Africa example), and suppose civilization has some sort of self-reinforcing property where it will force itself to remain in existence even when superior alternatives exist (as it seems it must, factually, given the poorer health of early farmers/civilizationers compared to hunter-gatherers sans civilization).

Then what happened was: over a very long period of time hunter-gatherers slowly accumulated knowledge or tools and IQs rose from better food or perhaps sexual selection or whatever, until finally relatively simultaneously multiple civilizations arose in multiple regions, whereupon the farmer effect reduced their IQ but not enough to overcome the self-sustaining-civilization effect. And then history began.

Comment author: Nornagest 26 November 2012 01:26:49AM *  3 points [-]

if early farming was so much worse than hunter-gathering that we can tell just from the fossils, then why did civilization ever get started? There must have been something compelling or self-sustaining or network effects or something about it.

I tend to think of this by analogy with gene-centered evolution. Just as natural selection selects for genes which are particularly good at reproducing themselves without any special regard for the well-being of their carriers, cultural evolution selects for similarly potent memetic systems without any particular regard for the well-being of the people propagating them.

From skeletal evidence forager lifestyles seem on average a lot healthier, but they also require much lower population densities. You can fit a lot more people per unit area with an agriculturalist lifestyle: if skeletal proxies are to be believed they'll individually be weaker, sicker, and shorter-lived, but they'll be populous enough that the much rarer foragers are going to have trouble displacing them. Cycle that over a few thousand years and eventually civilization ends up ruling the world, with the few remaining foragers pushed into little enclaves where agriculture is unsustainable for one reason or another. We'd occasionally see defections from one lifestyle to the other, but historically they don't seem very common.

The tricky part of this model seems to be figuring out how forager populations self-limit without lowering quality of life to agriculturalist levels. I'm not anthropologist enough to have a definitive answer to this, but I'd speculate that forager resource acquisition isn't as linearly dependent on population as agriculture is: put too many people in a given area and you end up scaring off game, overconsuming food plants, et cetera. Over time I'd expect this to inform territorial behavior and intuitions about optimal group size. Violence is probably also part of the answer.

Comment author: Vaniver 26 November 2012 01:43:30AM 1 point [-]

We'd occasionally see defections from one lifestyle to the other, but historically they don't seem very common.

Or, at least, they end up becoming irrelevant for the same reasons that the agriculturalists won in the first place. If Roanoke disappeared because all of the settlers decided to ditch the farm and live as Indians, there were still way more Europeans coming than the few Europeans that defected, and the new colonists could support a much higher population density than the ones that went native.

Comment author: RomeoStevens 26 November 2012 12:36:02AM 3 points [-]

if early farming was so much worse than hunter-gathering that we can tell just from the fossils, then why did civilization ever get started?

and why did European settlers in the Americas, when presented with the direct juxtaposition of the hunter-gatherer lifestyle with their own, often 'go native'?

Farming solves military coordination problems that allow them to conquer neighbors. It would be a mistake to think that civilizations were successful because they provided a better quality of life for their denizens. We should expect to see the most successful civilization to be that which is able to devote a larger amount of wealth towards expansion.

Comment author: gwern 26 November 2012 01:14:46AM 2 points [-]

and why did European settlers in the Americas, when presented with the direct juxtaposition of the hunter-gatherer lifestyle with their own, often 'go native'?

Uh, going native is exactly what the vein of thought is predicting. The question is not why did some go native, but why didn't all the rest?

Farming solves military coordination problems that allow them to conquer neighbors. It would be a mistake to think that civilizations were successful because they provided a better quality of life for their denizens. We should expect to see the most successful civilization to be that which is able to devote a larger amount of wealth towards expansion.

An old suggestion, but just as old is the point that civilizations routinely fail at military matters: it's a trope of history going back at least as far as Ibn Khaldun that amazingly often the barbarians roll over civilization, and conquer everything, only to fall victim to the next barbarians themselves.

Comment author: Nornagest 26 November 2012 07:12:32AM 2 points [-]

it's a trope of history going back at least as far as Ibn Khaldun that amazingly often the barbarians roll over civilization, and conquer everything, only to fall victim to the next barbarians themselves.

That does happen a lot, but the barbarians in question tend to be nomadic pastoralists, very rarely foragers. About the only exceptions I can think of happened in immediately post-contact North America, and that was a fantastically turbulent time culturally -- between the introduction of horses and 90+% of the initial population getting wiped out by disease, pretty much everything would likely have been up for grabs.

I don't know offhand how healthy or long-lived pastoralist cultures tended to be by comparison with sedentary agriculturalists. I do know that they generally fell somewhere between foragers and agriculturalists in terms of sustainable population density.

Comment author: RomeoStevens 26 November 2012 03:47:04AM 0 points [-]

why didn't all the rest?

Insufficient opportunity and brainwashing.

Barbarian hordes consume great amounts of the fruits of civilization and destroy the infrastructure that created it in their wake. They are self limiting.

Comment author: gwern 26 November 2012 04:00:20AM 3 points [-]

Barbarian hordes consume great amounts of the fruits of civilization and destroy the infrastructure that created it in their wake.

What civilization-wide infrastructure did the Mongols destroy in the process of creating the greatest land empire in history which then doomed them and limited their spread?

Comment author: RomeoStevens 26 November 2012 04:33:14AM 0 points [-]

The Mongols were emphatically not barbarians; they introduced systems that were in most cases improvements over what they destroyed.

Comment author: Oligopsony 26 November 2012 04:49:06AM 3 points [-]

I suspect the connotations of "barbarian" are getting in the way here. The Mongols were highly mobile pastoralists and raiders; this did not get in the way of setting up sophisticated and creative institutions. (Nor did the latter undo the considerable net loss in population and extent of cultivation that accompanied the Mongol conquests.)

Comment author: Nornagest 26 November 2012 07:23:11AM 1 point [-]

Insufficient opportunity and brainwashing.

I think this is basically correct, but I'd express it in terms of cultural inertia rather than brainwashing. It's not (usually) part of a planned campaign of retention, it's just that learning a completely different culture and language and set of survival skills is a huge risk and would take a huge amount of effort: it might be attractive in marginal cases, but most people would likely feel they had too much to lose. Particularly if the relationship between the cultures is already adversarial.

Comment author: JulianMorrison 26 November 2012 12:23:37PM 2 points [-]

There are probably pure-win half steps, like the kind of farming where you plant in the seasonal area you always come back to at a certain time of the year, as you follow the herds, or the kind where game is so plentiful you can afford to settle, hunt, and dabble in farming vegetables beside your settlement (such as in the American Pacific north west). Farming seems to be tied to settlement. Farms stabilize settlements; settlements nurture farms. And farms domesticate crops, making farming easier and supporting a larger population.

In the Mesopotamia region, there were settlements in the rainy hills where the local wildlife was conveniently easy to domesticate but farming was hard. Those moved down centuries later into the rainless flood plain between the Tigris and Euphrates, where only group effort could ensure irrigation, and group surpluses were needed to stave off bad harvests, but farming worked well. The "Ubaid period" (neolithic) was pretty egalitarian, but centralization emerges in the "Jemdet Nasr period" and kingship in the "early dynastic period" (Sumerian for king is "lugal", "lu"=man, "gal"=big, and initially it seems to have been just a word for "boss"). With centralization and kingship, empires follow fast. Civilization was co-existing with non-farming groups, but civilization tempts even non-farmers to switch from hunting to raiding. Sumer got sacked repeatedly by nearby tribes.

I am thinking there was a demographic transition point, probably quite early, when the number of people that could be kept alive - not as healthy, but alive - by farming or equally by raiding the surplus of farmers, exceeded the carrying capacity of the local game and wild plants. At that point walking away from the fields was not possible. Therefore agriculture has a ratchet effect.

Comment author: JoshuaZ 19 November 2012 07:00:35PM 0 points [-]

How much of the failure of the African countries is due to their average lower intelligence and how much is that a consequence of other systemic problems (e.g. lack of institutions) that also make the maintenance of modern technologies difficult?

Comment author: gwern 19 November 2012 07:09:04PM 11 points [-]

In graphs of interacting cause & effects, that's not necessarily the best way to ask that question. Because IQ is predictive at least of general economic growth (but also increased by growth, 'bidirectional'), those systemic problems can be perfectly real and also rooted in lower IQs.

Comment author: Vaniver 19 November 2012 07:39:11PM *  9 points [-]

How much of the failure of the African countries is due to their average lower intelligence and how much is that a consequence of other systemic problems (e.g. lack of institutions) that also make the maintenance of modern technologies difficult?

I get the impression that "average lower intelligence" is a big cause of systemic problems, like lack of institutions. I'm reminded of Yvain's example that, in Haiti, they could not understand sorting things numerically or alphabetically. This meant bureaucratic institutions were basically worthless: "where is your file? Let me look at all of the files and try to find yours."

Edit: Also, see this paper.

Comment author: [deleted] 20 November 2012 12:36:40AM *  7 points [-]

I was going to say, “well, maybe that's a failure of education, not of intelligence”, but...

Not just "they don't want to do it" or "it never occurred to them", but after months and months of attempted explanation they don't understand that sorting alphabetically or numerically is even a thing. [emphasis added]

Okay, I'm shocked. (It might still be something that people with IQ between (say) 70 and 90 can learn if they're taught it in elementary school but couldn't ever learn as adults if they haven't, but the “privileging the hypothesis” warning light in my brain is on.)

Comment author: TorqueDrifter 20 November 2012 01:15:52AM *  5 points [-]

Tangentially, and specifically because I followed the link from LessWrong, this jumped out at me:

"Haitians have a culture of tending not to admit they're wrong[.]"

(Pretend that this sentence is a list of reasonable caveats about what to conclude from that.)

Comment author: CCC 21 November 2012 08:01:07AM 2 points [-]

I'm pretty sure that that's a failure of education, not of intelligence. Education has the best effect at a young age, when habits are formed; it's a lot harder to educate someone later, unless that someone really wants to be educated.

Looking at that specific example, I can see why someone who is lazy, and unfamiliar with alphabetic sorting, might not want to try it. Mainly, that step one would be to figure out this whole 'alphabet' thing and memorise what order things go in (a significant neural effort, done now; somewhat easier if already literate, but note that 'literate' does not necessarily mean 'familiar with the order of the alphabet'); step two would be to sort in alphabetical order everything that's already in the office (a significant physical effort, done now); step three would be to actually bother to put new things in order instead of just toss them in a random drawer (an ongoing effort).

So much easier to just pretend to understand less than one does (admittedly, it does mean a bit more time searching for a piece of paper when someone asks, but that's a minor task, and won't have to be done immediately in any case).

Comment author: Viliam_Bur 22 November 2012 09:35:57AM *  2 points [-]

You can get some benefit even without learning the order of the alphabet. If you divide things into groups by their first letter, even if the groups are ordered randomly, and the things within each group are ordered randomly, the search time should be at least 10 times shorter.

As a bonus, you can switch to this system gradually. Create an empty group for each first letter, and treat everything else as an "unsorted" group. When searching, first look in the group for the given letter, then in the "unsorted" group. When you are done with a thing, always put it into the group starting with its letter. Your system will sort itself gradually.
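A minimal sketch of that gradual scheme (all names here are hypothetical): look in the letter bucket first, fall back to a linear scan of the legacy pile, and refile anything you touch so the next lookup is fast.

```python
from collections import defaultdict

class GradualFiles:
    """File store that migrates toward first-letter buckets over time:
    check the letter bucket first, then the legacy 'unsorted' pile;
    whenever a file is found in the pile, refile it under its letter."""

    def __init__(self, legacy_files):
        self.buckets = defaultdict(list)    # first letter -> files
        self.unsorted = list(legacy_files)  # the pre-existing pile

    def find(self, name):
        letter = name[0].upper()
        if name in self.buckets[letter]:    # fast path: already bucketed
            return name
        if name in self.unsorted:           # slow path: linear scan
            self.unsorted.remove(name)
            self.buckets[letter].append(name)  # refile for next time
            return name
        return None
```

Every successful slow-path lookup shrinks the unsorted pile by one, so the office converges on the bucketed system exactly in proportion to how often files are actually needed.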

Comment author: CCC 22 November 2012 02:24:46PM 1 point [-]

You are correct. This methodology will work, as long as we assume that no-one will put a piece of paper in the wrong (apparently sorted) file.

Was it ever explained to the Haitians in this way, though?

Comment author: Vaniver 21 November 2012 04:00:38PM 0 points [-]

I'm pretty sure that that's a failure of education, not of intelligence.

By this you mean that you think P(Can't understand sorting | low education, moderate intelligence)>P(Can't understand sorting | moderate education, low intelligence)?

If you had said "this might be the result of low energy," then I would have agreed that's a likely partial explanation, as you argued for that fairly well and it fits the rigors of a tropical climate. But I'm concerned that you're conflating education and energy.

Comment author: CCC 21 November 2012 06:20:55PM 2 points [-]

No, I mean that I think that P(low education|group of humans who can't understand sorting)>P(low intelligence|group of humans who can't understand sorting).

The 'group' part is important; while education is often constant or near-constant among a community, intelligence is often not; thus, something that is true for an entire group is more likely a result of education than intelligence. Similarly, 'humans' is an important word, because I know that many humans are capable of sorting, and thus there is no species barrier.

Having said that, "low energy" is almost certainly also a contributing factor.
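The "group" argument can be put numerically. With purely illustrative probabilities (assumed, not measured): suppose each individual independently has low intelligence with probability 0.3, while education is shared by the whole community and is low with probability 0.5. For a whole office to fail, the individual-level explanation requires every member to fail at once:

```python
# Illustrative assumptions: intelligence varies independently across
# individuals (P(low) = 0.3 each); education is shared community-wide
# (P(low) = 0.5 for the whole group at once).
p_low_int_individual = 0.3
p_low_edu_community = 0.5

n = 20  # an office of 20 people, all of whom fail to sort
p_all_low_int = p_low_int_individual ** n  # individual-level cause
print(f"P(all {n} have low intelligence) = {p_all_low_int:.2e}")
print(f"P(community has low education)  = {p_low_edu_community}")
```

Even with these generous numbers, the chance that all twenty independently have low intelligence is astronomically smaller than the chance of a single shared cause, which is why a uniform group-wide failure points at education (or some other shared factor) rather than intelligence.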

Comment author: Izeinwinter 04 January 2013 10:36:20PM 1 point [-]

Childhood malnutrition reduces IQ. Major childhood trauma reduces IQ. No childhood education makes you massively unlikely to grasp formal logic. Etc., etc. Most third world countries are profoundly crippling places to grow up. The good news is that any such place that manages to not be a circle of hell for a straight 20-year stretch should see its economy do a hard takeoff as a generation reaches adulthood that was not lobotomized.

Comment author: JoshuaZ 19 November 2012 06:59:17PM 2 points [-]

If I understand correctly, your argument in particular suggests that it's the environmentally-mediated increase in IQ that might have enabled the rise of civilization (in this interglacial period).

This doesn't seem to be all that Douglas_Reay is arguing. There's also an aspect to his argument of the right environmental aspects being available for an extended period of time, along with the slow development of the right technologies for society to take off. See in particular these two paragraphs:

Good grain storage seems to have developed incrementally starting with crude stone silo pit designs in 9500 BCE, and progressing by 6000 BCE to customised buildings with raised floors and sealed ceramic containers which could store 80 tons of wheat in good condition for 4 years or more. (Earthenware ceramics date to 25,000 BCE and earlier, though the potter's wheel, useful for mass production of regular storage vessels, does date to the Ubaid period.)

The main key to the timing of the transition from village to city seems to have been not human technology but the confluence of climate and biology. Jared Diamond points the finger at the geography of the region - the fertile crescent farmers had access to a wider variety of grains than anywhere else in the world because that area links and has access to the species of three major land masses. The Mediterranean climate has a long dry season with a short period of rain, which made it ideal for growing grains (which are much easier to store for several years than, for instance bananas). And everything kicked off when the climate stabilised after the most recent ice age ended about 12,000 years ago.

Comment author: fubarobfusco 20 November 2012 08:14:02AM *  24 points [-]

Ur may not have been the first city, but it was the first one we know of that wasn't part of a false dawn - one whose culture and technologies did demonstrably spread to other areas. It was the flashpoint.

A contrary view — and I'm stating this deliberately rather strongly to make the point vivid:

"False dawn" is a retrospective view; which is to say an anachronistic one; which is to say a mythical one. And myths are written by the victors.

It's true that we perceive more continuity from Ur to today's civilization than from Xyz (some other ancient "dawn of civilization" point) to today. But why? Surely in part because the Sumerians and their Akkadian and Babylonian successors were good at scattering their enemies, killing their scribes, destroying their records, and stealing credit for their innovations. Just as each new civilization claimed that their god had created the world and invented morality, each claimed that their clever forefather had invented agriculture, writing, and tactics. If the Xyzzites had won, they would have done the same.

What's the evidence? Just that that's how civilizations — particularly religious empires — have generally behaved since then. The Hebrews, Catholics, and Muslims, for instance, were all at one time or another pretty big on wiping out their rivals' history and making them out to be barbaric, demonic, subhuman assholes — when they weren't just mass-murdering them. So our prior for the behavior of the Sumerians should be that they were unremarkable in this regard; they did the same wiping-out of rivals' records that the conquistadors and the Taliban did.

Today we have anti-censorship memes; the idea that anyone who would burn books is a villain and an enemy of everyone. But we also have the idea that mass censorship and extirpation of history is "Orwellian" — as if it had been invented in the '40s! This is backwards; it's anti-censorship that is the new, weird idea. Censorship is the normal behavior of normal rulers and normal priests throughout normal history.

Damnatio memoriae, censorship, burning the libraries (of Alexandria or the Yucatán), forcible conversion & assimilation — or just mass murder — are effective ways to make the other guy's civilization into a "false dawn". Since civilizations prior to widespread literacy (and many after it) routinely destroyed the records and lore of their rivals, we should expect that the first X that we have records of is quite certainly not the first X that existed, especially if its lore makes a big deal of claiming that it is.

Put another way — quite a lot of history is really a species of creationism, misrepresenting a selective process as a creative one. So we should not look to "the first city" for unique founding properties of civilization, since it wasn't the first and didn't have any; it was just the conquering power that happened to end up on top.

Comment author: Douglas_Reay 20 November 2012 09:50:55AM *  16 points [-]

Thus my caveat "we know of".

However, while it would be quite possible for a victor to erase written mention of a rival, it is harder to erase beyond all archaeological recovery the signs of a major city that's been stable and populated for a thousand years or more. For instance, if we look at Jericho, which was inhabited earlier than Ur was, we don't see archaeological evidence of it becoming a major city until much later than Ur (see link and link).

If there was a city large enough and long-lived enough, around before Ur, that passed onto Ur the bundle of things like writing and hierarchy that we know Ur passed on to others, then I'm unaware of it, and the evidence has been surprisingly thoroughly erased (which isn't impossible, but neither is it a certainty that such a thing happened).

See also the comment about Uruk. There were a number of cities in Sumer close together that would have swapped ideas. But the things said about calories and types of grain apply to all of them.

Comment author: [deleted] 20 November 2012 07:35:50PM 5 points [-]

burning the libraries (of Alexandria

Whaaat? Did people do that on purpose? What the hell is wrong with my species?

Comment author: Nornagest 20 November 2012 07:57:30PM *  7 points [-]

It's not entirely clear. Wikipedia lists four possible causes of or contributors to the Library of Alexandria's destruction; all were connected to changes in government or religion, but only two (one connected to Christian sources, the other to Muslim) appear deliberate. Both of them seem somewhat dubious, though.

The destruction of Central American literature is a more straightforward case. Bishop Diego de Landa certainly ordered the destruction of Mayan codices where found, of which only a few survived.

Comment author: fubarobfusco 20 November 2012 09:33:36PM *  4 points [-]

Wikipedia lists four possible causes of or contributors to the Library of Alexandria's destruction; all were connected to changes in government or religion, but only two (one connected to Christian sources, the other to Muslim) appear deliberate.

The Library also wasn't one building; and had some time to recover between one attack and the next. (As an analogy: Burning down some, or even most, of the buildings of a modern university wouldn't necessarily lead to the institution closing up shop.)

I'd been thinking of the 391 CE one, though, which I'd thought was widely understood to be an attack against pagan sites of learning. Updating in progress.

The destruction of Central American literature is a more straightforward case.

It's worth noting that there were people on the Spanish "team" who regretted that decision and spoke out against it, most famously Bartolomé de las Casas.

Comment author: Salemicus 20 November 2012 09:01:50PM 2 points [-]

It's all consequentialism around here... until someone does something to lower the social standing of academia.

Comment author: [deleted] 20 November 2012 09:30:27PM 1 point [-]

What?

Comment author: Salemicus 20 November 2012 10:06:50PM *  6 points [-]

A consequentialist would ask, with an open mind, whether burning the libraries led to good or bad consequences. A virtue ethicist would express disgust at the profanity of burning books. Your comment closely resembles the latter, whereas most discussion here on other topics tries to approximate the former.

I think it is no coincidence that this switch occurs in this context. Oh no, some dusty old tomes got destroyed! Compared to other events of the time, piddling for human "utility." But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some - including, I would suggest, most on this site.

Comment author: JoshuaZ 21 November 2012 12:58:06AM *  23 points [-]

I think it is no coincidence that this switch occurs in this context. Oh no, some dusty old tomes got destroyed! Compared to other events of the time, piddling for human "utility." But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some - including, I would suggest, most on this site.

We have good reason to think that the missing volumes of Diophantus were at Alexandria. Much of what Diophantus did was centuries before his time. If people in the 1500s and 1600s had complete access to his and other Greek mathematicians' work, math would have likely progressed at a much faster pace, especially in number theory.

We also have reason to think that Alexandria contained the now-lost Greek astronomical records, which likely contained comet and possibly also historical nova observations. While we have some nova and supernova observations from slightly later (primarily thanks to Chinese and Japanese records), the Greeks were doing astronomy well before. This sort of thing isn't just an idle curiosity: understanding the timing of supernovae connects to understanding the most basic aspects of our universe. The chemical elements necessary for life are created and spread by supernovae. Understanding the exact ratios, how common supernovae are, and understanding more how supernovae spread out, among other issues, are all important to understanding very important questions like how common life is, which is directly relevant to the Great Filter. We do have a lot of supernova observations in the last few years, but historical examples are few and far between.

Compared to other events of the time, piddling for human "utility."

On the contrary. Kill a few people or make them suffer and it has little direct impact beyond a few years in the future. Destroying knowledge has an impact that resonates down for far longer.

But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some - including, I would suggest, most on this site.

This is an interesting argument, and I find it unfortunate that you've been downvoted. The hypothesis is certainly interesting. But it may also be taboo for another reason: in many historical cases, book burning has been a precursor to killing people. This is a cliche, but it is a cliche that happens to have historical examples behind it. Another consideration is that a high status of academics is arguably quite a good thing from a consequentialist perspective. People like Norman Borlaug, Louis Pasteur, and Alvin Roth have done more lasting good for humanity than almost anyone else. Academics are the main people who have any chance of having a substantial impact on human utility beyond their own lifespans (the only other groups are people who fund academics or people like Bill Gates who give funding to implement academic discoveries on a large scale). So even if it is purely an issue of status and taboo, there's a decent argument that those are taboos which are advantageous to humanity.

Comment author: Salemicus 21 November 2012 07:18:12PM *  1 point [-]

Number theory might have progressed faster... we might better understand the “Great Filter”

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

In many historical cases, book burning has been a precursor to killing people.

Perhaps, but note that this wasn’t a precursor to killing people; people were being widely killed regardless. But the modern attention is not on the rape, murder, pillage, etc... it’s on the book-burning. Why the distorted values?

a high status of academics is arguably quite a good thing from a consequentialist perspective

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd. You’re right that Bill Gates has made a huge impact – but his lasting good was achieved by selling computer software, not through the mostly foolish experimentation done by his foundation. Sure, some academics have done some good (although you wildly overstate it) but you have to consider the opportunity cost. The high status of academics causes us to get more academic research than otherwise, but it also encourages our best and brightest to waste their lives in the study of arcana. Can anyone seriously doubt that, on the margin, we are oversupplied with academics, and undersupplied with entrepreneurs and businessmen generally?

Comment author: JoshuaZ 21 November 2012 09:14:54PM 12 points [-]

Follow up reply in a separate comment since I didn't notice this part of the remark the first time through (and it is substantial enough that it should probably not just be included as an edit):

... we might better understand the “Great Filter”

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

If this falls into that category, then the archetype of knowledge that doesn't contribute to human welfare is massively out of whack. Figuring out how much of the Great Filter is in front of us or behind us is extremely important. If most of it is behind us, we have a lot less to worry about. If most of the Great Filter is in front of us, then existential risk is a severe danger to humanity as a whole. Moreover, if it is in front of us, then it is most likely some form of technological risk, caused by some sort of technological change (since natural disasters aren't common enough to wipe out every civilization that gets off the ground). Since we're just beginning to travel into space, it is likely that if there is heavy Filtration in front of us, it isn't very far ahead but is in the next few centuries.

If there is heavy Filtration in front of us, then it is vitally important that we figure out what that Filter is and what we can do to avert it, if anything. This could be the difference between the destruction of humanity and humanity spreading to the stars. Of all the contributions to the welfare of humanity, those which involve our existence as a whole should be high up on the list.

Comment author: JoshuaZ 21 November 2012 08:42:13PM *  12 points [-]

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

Well, no. In modern times number theory has been extremely relevant for cryptography, for example, and pretty much all e-commerce relies on it. Other areas of math also have direct, useful applications and have turned out to be quite important. For example, engineering in the late Middle Ages and Renaissance benefited a lot from things like trig and logarithms. Improved math has led to much better understanding of economies and financial systems as well. These are but a few limited examples.

But the modern attention is not on the rape, murder, pillage, etc... it’s on the book-burning

You are missing the point: in this context, having the taboo against book burning is helpful because it is something one can use as a warning sign.

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd.

So I'm curious as to how you are defining "good" in any useful sense that you can reach this conclusion. Moreover, the sort of thing that Roth does is in the process of being more and more useful. His work allowing for organ donations for example not only saves lives now but will go on saving lives at least until we have cheap cloned organs.

You're right that Bill Gates has made a huge impact – but his lasting good was achieved by selling computer software, not through the mostly foolish experimentation done by his foundation.

This is wrong. His work with malaria saves lives. His work with selling computer software involved making mediocre products and making up for that by massive marketing along with anti-trust abuses. There's an argument to be made that economic productivity can be used as a very rough measure of utility, but that breaks down in a market where advertising, marketing, and network effects of specific product designs matter more than quality of product.

Can anyone seriously doubt that, on the margin, we are oversupplied with academics, and undersupplied with entrepreneurs and businessmen generally?

Yes, to the point where I have to wonder how drastically far off our unstated premises about the world are. If anything, it seems like we have the exact opposite problem. We have a massive oversupply of "quants" and the like who aren't actually producing more utility or even actually working with real market inefficiencies but are instead doing things like moving servers a few feet closer to the exchange so they can shave a fraction of a second off of their transaction times. There may be an "oversupply" of how many academics there are compared to the number of paying positions, but that's simply connected to the fact that most research has results that function as externalities (technically, public goods), and thus the lack of academic jobs is a market failure.

Comment author: Salemicus 21 November 2012 10:33:33PM *  3 points [-]

No-one is disputing that mathematics can be useful. The question is, if we had slightly more advanced number theory slightly earlier in time, would that have been particularly useful? Answer - no.

You are missing the point in this context having the taboo against book burning is helpful because it is something one can use as a warning sign.

No, I am not missing the point. I am perfectly willing to concede that a taboo against book-burning might be helpful for that reason. But here we have an example where people were, at the same time as burning books, doing the exact worse stuff that book burning is allegedly a warning sign of. But no-one complains about the worse stuff, only the book burning. Which makes me disbelieve that people care about the taboo for that reason.

People say that keeping your lawn tidy keeps the area looking well-maintained and so prevents crime. Let's say one guy in the area has a very messy lawn, and also goes around committing burglaries. Now suppose the Neighbourhood Watch shows no interest at all in the burglaries, but is shocked and appalled by the state of his lawn. We would have to conclude that these people don't care about crime, what they care about is lawns, and this story about lawns having an effect on crime is just a story they tell people because they can't justify their weird preference to others on its own terms.

Moreover, the sort of thing that Roth does is in the process of being more and more useful. His work allowing for organ donations for example not only saves lives now but will go on saving lives at least until we have cheap cloned organs.

Or, we could just allow a market for organ donations. Boom, done. Where's my Nobel?

Now, if you specify that we have to find the best fix while ignoring the obvious free-market solutions, I don't deny that Alvin Roth has done good work. And I'm certainly not blaming Roth personally for the fact that academia exists as an adjunct to the state - although academics generally do bear the lion's share of responsibility for that. But I am definitely questioning the value of this enterprise, compared to bringing cheap food, clothes, etc, to hundreds of millions of people like Sam Walton did.

This is wrong. His work with malaria saves lives. His work with selling computer software involved making mediocre products and making up for that by massive marketing along with anti-trust abuses. There's an argument to be made that economic productivity can be used as a very rough measure of utility, but that breaks down in a market where advertising, marketing, and network effects of specific product designs matter more than quality of product.

I don't see why "saves lives" is the metric, but I bet that Microsoft products have been involved in saving far more lives. Moreover, people are willing to pay for Microsoft products, despite your baseless claims of their inferiority. Gates's charities specifically go around doing things that people say they want but don't bother to do with their own money. I don't know much about the malaria program, but I do know the educational stuff has mostly been disastrous, and whole planks have been abandoned.

Yes, to the point where I have to wonder how drastically far off our unstated premises about the world are.

Obviously very far indeed.

Comment author: JoshuaZ 21 November 2012 11:43:10PM 11 points [-]

No-one is disputing that mathematics can be useful. The question is, if we had slightly more advanced number theory slightly earlier in time, would that have been particularly useful? Answer - no.

Answer: Yes. Even today, number theory research highly relevant to efficient crypto is ongoing. A few years of difference in when that shows up would have large economic consequences. For example, as we speak, research is ongoing into practical fully homomorphic encryption, which if it is implemented will allow cloud computing and deep processing of sensitive information, as well as secure storage and retrieval of sensitive information (such as medical records) from clouds. This is but one example.
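Fully homomorphic schemes are far more involved, but the basic idea — that number-theoretic structure lets you compute on encrypted data — can be shown with a deliberately insecure toy. Unpadded RSA is multiplicatively homomorphic; the tiny parameters below are purely illustrative, not anything from the discussion:

```python
# Toy, insecure RSA (tiny primes, no padding), to illustrate the
# multiplicative homomorphism: Enc(a) * Enc(b) mod n == Enc(a*b mod n).
p, q = 61, 53
n = p * q                   # modulus, 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 6, 7
product_ct = (enc(a) * enc(b)) % n   # multiply the ciphertexts only
assert dec(product_ct) == (a * b) % n  # 42, computed "under encryption"
```

The assertion holds because (a^e)(b^e) = (ab)^e mod n, so decrypting the product of ciphertexts recovers the product of plaintexts — exactly the kind of property that pure number theory delivered long before anyone had a use for it.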

But no-one complains about the worse stuff, only the book burning. Which makes me disbelieve that people care about the taboo for that reason.

Well, there is always the danger of lost purposes. But it may help to keep in mind that the book-burnings and genocides in question both occurred a long time ago. It is easier for something to be at the forefront of one's mind when one can see more directly how it would have impacted one personally.

Or, we could just allow a market for organ donations. Boom, done. Where's my Nobel?

So, I'm generally inclined to allow for organ donation markets (although there are I think legitimate concerns about them). But since that's not going to happen any time soon, I fail to see its relevance. A lot of problems in the world need to be solved given the political constraints that exist. Roth's solution works in that context. The fact that a politically untenable better solution exists doesn't make his work less beneficial.

But I am definitely questioning the value of this enterprise, compared to bringing cheap food, clothes, etc, to hundreds of millions of people like Sam Walton did.

So, Desrtopa already gave some reasons to doubt this. But it is also worth noting that Walton died in 1992, before much of Walmart's expansion. Also, there's a decent argument that Walmart's success was due not to superior organization but rather a large first-mover advantage (one of the classic ways markets can fail): Walmart takes advantage of its size in ways that small competitors cannot do. This means that smaller chains cannot grow to compete with Walmart in any fashion, so even if a smaller competitor is running something more efficiently, it won't matter much. (Please take care to note that this is not at all the mom-and-pop-store argument, which I suspect you and I would both find extremely unconvincing.)

I don't see why "saves lives" is the metric

Ok. Do you prefer quality-adjusted life years? Bill is doing pretty well by that metric.

but I bet that Microsoft products have been involved in saving far more lives

"Involved with" is an extremely weak standard. The thing is that even if Microsoft had never existed, similar products (such as software or hardware from IBM, Apple, Linux, Tandy) would have been in those positions.

Moreover, people are willing to pay for Microsoft products, despite your baseless claims of their inferiority.

Let's examine why people are willing to do so. It isn't efficiency. For example, by standard benchmarks, Microsoft browsers have been some of the least efficient (although more recent versions of IE have performed very well by some metrics such as memory use). Microsoft has had a massive marketing campaign to make people aware of their brand (classically, marketing in a low-information market is a recipe for market failure). And Microsoft has engaged in bundling of essentially unrelated products. Microsoft has also lobbied governments for contracts to the point where many government bids are phrased in ways that make non-Microsoft options essentially impossible. Most importantly: Microsoft gains a network effect. This occurs when the more common a product is, the more valuable it is compared to other similar products. In this context, once a single OS and set of associated products is common, people don't want other products, since they will run into both a learning curve with the "new" product and compatibility issues when trying to get the new product to work with the old.

Gates's charities specifically go around doing things that people say they want but don't bother to do with their own money.

That some people make noise about wanting to help charity but don't doesn't mean the people who actually do it are contributing less utility. Or is there some other point here I'm missing?

I don't know much about the malaria program, but I do know the educational stuff has mostly been disastrous, and whole planks have been abandoned.

Yes, there's no question that the education work by the Gates foundation has been profoundly unsuccessful. But the general consensus concerning malaria is that they've done a lot of good work. This may be something you may want to look into.

Comment author: Bugmaster 22 November 2012 01:07:06AM 6 points [-]

No-one is disputing that mathematics can be useful. The question is, if we had slightly more advanced number theory slightly earlier in time, would that have been particularly useful? Answer - no.

My answer is "probably yes". Mathematics directly enables entire areas of science and engineering. Cathedrals and bridges are much easier to build if you know trigonometry. Electricity is a lot easier to harness if you know trigonometry and calculus, and easier still if you are aware of complex numbers. Optics -- and therefore cameras and telescopes, among many other things -- is a lot easier with linear algebra, and so are many other engineering applications. And, of course, modern electronics are practically impossible without some pretty advanced math and science, which in turn requires all these other things.

If we assume that technology is generally beneficial, then it's best to develop the disciplines which enable it -- i.e., science and mathematics -- as early as possible.
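A one-line illustration of the kind of leverage Bugmaster describes: with trigonometry, measuring a structure's height needs only a baseline and an angle. (The numbers here are hypothetical, chosen just to show the calculation:)

```python
import math

# Surveying a tower without climbing it: stand a known distance from
# its base, measure the elevation angle to the top, and trigonometry
# gives the height directly.
distance = 50.0       # metres from the base (hypothetical)
elevation_deg = 30.0  # measured angle to the top (hypothetical)

height = distance * math.tan(math.radians(elevation_deg))
# height is about 28.87 metres
```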

Comment author: Desrtopa 21 November 2012 10:54:28PM 8 points [-]

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd.

I wouldn't be so sure about that. I'm not about to investigate the economics of their entire supply chain (I already don't shop at Walmart simply due to location, so it doesn't even stand to influence my buying decisions,) but I wouldn't be surprised if Walmart is actually wealth-negative in the grand scheme. They produce very large profits, but particularly considering that their margins are so small and their model depends on dealing in such large bulk, I think there's a fair likelihood that the negative externalities of their business are in excess of their profit margin.

It's impossible for a business to be GDP negative, but very possible for one to be negative in terms of real overall wealth produced when all externalities are accounted for, which I suspect leads some to greatly overestimate the positive impact of business.

Comment author: Salemicus 22 November 2012 12:51:42AM 2 points [-]

Why focus on the negative externalities rather than the positive? And why neglect all the partner surpluses - consumer surplus, worker surplus, etc? I'd guess that Walmart produces wealth at least an order of magnitude greater than its profits.

Comment author: Desrtopa 22 November 2012 02:26:45AM 7 points [-]

Why focus on the negative externalities rather than the positive?

Because corporations make a deliberate effort to privatize gains while socializing expenses.

GDP is a pretty worthless indicator of wealth production, let alone utility production; the economists who developed the measure in the first place protested that it should by no means be taken as a measure of wealth production. There are other measures of economic growth which paint a less optimistic picture of the last few decades in industrialized nations, although they have problems of their own with making value judgments about what to measure against industrial activity, but the idea that every economic transaction must represent an increase in well-being is trivially false both in principle and practice.

Comment author: TorqueDrifter 21 November 2012 07:36:53PM 4 points [-]

Number theory might have progressed faster... we might better understand the “Great Filter”

Isn’t this kind of thing archetypal of knowledge that in no way contributes to human welfare?

I don't think you'll find many here to agree that math doesn't help with human welfare.

Comment author: Peterdjones 23 November 2012 08:43:23PM *  0 points [-]

Alvin Roth is no doubt a bright guy, but the idea that he has done more lasting good for humanity than, say, Sam Walton, is absurd.

Apples and oranges. Business is there to make money. Money is instrumental; it is there to be spent on terminal values, things of intrinsic worth. People spend their excess on entertainment, art, hobbies, family life, and, yes, knowledge. All these things are terminal values.

Comment author: FluffyC 24 November 2012 01:00:16AM *  6 points [-]

Surely a consequentialist could come to a conclusion about book-burning being bad and then write an outraged comment about it: the potential negatives in the long term of the burning of such a library are debatable, but the potential positives in the long term are AFAICT non-existent. Such a catastrophic failure of cost-benefit analysis would be something a consequentialist could in fact be quite outraged about.

Incidentally,

Compared to other events of the time, piddling for human "utility."

...it seems self-evident to me that this is not in any way an interesting or meaningful comparison to ask people to make (ETA: in light of the above, anyway). It's "good" rhetoric but seems to be abysmal rationality; it's a "there are starving children in Africa, eat your peas" argument.

Comment author: [deleted] 21 November 2012 12:26:58AM 6 points [-]

I don't need to carry out expected utility calculations explicitly to guess that burning down a library is way more likely to be bad than good. My "What?" was because I can't see any obvious reason to suspect that actually carrying it out would yield a substantially different answer than my guess, and wondered whether you had such a reason in mind.

Comment author: Bugmaster 21 November 2012 07:43:04PM 3 points [-]

A consequentialist would ask, with an open mind, whether burning the libraries lead to good or bad consequences. A virtue ethicist would express disgust at the profanity of burning books.

Despite being a consequentialist (*), I believe that the act of burning libraries possesses such a massive disutility that it is almost always the wrong thing to do. I can elaborate on my reasoning if you're interested, but my main point is that consequentialism and virtue ethics can sometimes come to the same conclusion; this does not invalidate either philosophy.

(*) Or as close to being one as I can accomplish given my biases.

Comment author: Peterdjones 23 November 2012 08:46:34PM 0 points [-]

Compared to other events of the time, piddling for human "utility."

Since there is no gain whatsoever, it is still negative consequentially.

Comment author: JoshuaZ 23 November 2012 09:07:20PM 0 points [-]

This doesn't seem that relevant. If you look above you'll see that Salemicus's primary argument concerning the library wasn't that it was necessarily a good thing to do but that it wasn't severe compared to much worse things that happened in the same time period. His other argument about the role of academics was a subthread of that.

Comment author: Peterdjones 23 November 2012 09:10:03PM 1 point [-]

How do you quantify the worth of knowledge when you don't know what it is?

Comment author: JoshuaZ 23 November 2012 09:35:04PM 1 point [-]

How do you quantify the worth of knowledge when you don't know what it is?

With difficulty. If you read the rest of this thread, specific examples based on what is suspected to have been at Alexandria have been discussed. One can make reasoned guesses based on was known and what was referenced elsewhere as being studied topics. See the earlier discussion about Diophantus (in the same subthread) for example.

Comment author: Peterdjones 23 November 2012 09:45:39PM 0 points [-]

Ok. The comment wasn't directed at you. It's just another of the many problems of trying to evaluate everything by monetary worth.

Comment author: thomblake 20 November 2012 07:47:43PM 0 points [-]

That one was probably an accident. Caesar had to burn his own ships so the enemy couldn't keep him from using them, and the fire got out of control.

Comment author: [deleted] 20 November 2012 08:21:57PM *  1 point [-]

Since civilizations prior to widespread literacy (and many after it) routinely destroyed the records and lore of their rivals, we should expect that the first X that we have records of is quite certainly not the first X that existed, especially if its lore makes a big deal of claiming that it is.

This would work if conquerors were also effective at destroying archeological evidence. But they seem not to have been, and a complete archeological record would just settle the question of which city was really first, given some appropriate and agreed upon standard. And that city would be the true dawn.

Comment author: John_Maxwell_IV 20 November 2012 01:47:25AM 5 points [-]

Didn't civilization develop independently in several different places, e.g. the Aztec or Inca civilizations in the Americas?

Comment author: Nornagest 20 November 2012 02:04:13AM 2 points [-]

Yeah, the agricultural transition (and resulting centralization of living, development of hierarchical government, etc.) is thought to have happened in about a half-dozen places between around 9000 BC and 1000 BC.

Comment author: Douglas_Reay 20 November 2012 07:01:08AM 6 points [-]

Looking at the Americas, we have evidence of cultures with agriculture and pottery, roughly equivalent to Europe's Linear A, going back about 6000 years ago (4000 BCE). We have writing dating back to about 3000 years ago (1000 BCE), though this was probably delayed by much of their function earlier being usurped by quipu (which date back at least to 2600 BCE). This corresponds to the emergence of the first long term stable cities in the Americas starting at about 1500 BCE and the growth, about 1000 years later, of Teotihuacan, a truly majestic city rivalling ancient Ur in size and influence.

So yes, that is an independent (but later) development of civilization, which I think endorses the idea that once the climate settled down after the interglacial, our species was going to develop civilization on a fairly quick timescale (compared to biological evolutionary timescales), and that it wasn't lack of intelligence holding us up.

Comment author: Vaniver 19 November 2012 04:26:23PM *  5 points [-]

Having large mammals available to domesticate, who can provide fertiliser and traction (pulling ploughs and harrows) certainly makes things easier, but doesn't seem to have been a large factor in the timing of the rise of civilisation, or particularly dependent upon the IQ of the human species.

How would we test this? If human IQ matters, it seems like we would need some animal which is in contact with low-IQ humans and higher IQ humans, which the first couldn't tame but the second could. You already link to an example of recent man domesticating the fox, and there's quite a bit of European zebra-taming, though fully domesticating a species takes generations to breed out deleterious traits. The example of deer seems like weak support as well; some strains were somewhat domesticated by northern peoples, but that evidence is only weak to me because it's not clear the economic value of deer was the same across climates.

And it has nothing to do with the brain.

which is a fatty acid required for human brain development.

Er... what?

It seems to me that most IQ-related alleles are "build the brain this way," and so a gene that creates a necessary acid out of whatever you have lying around seems like it's an IQ-related allele. Among those with sufficient diets, there will be no effect, but among those with deficient diets, there will be a positive effect; unless the entire population has sufficient diets, that'll lead to a positive correlation.

If you're making the claim that "processing speed is not the only factor," then sure! The best example of that is neanderthals, who probably were around as good (if not better) at abstract problem-solving, foresight, tool-making, and so on, but contributed only a small percentage of genes to current humans. It's not certain why that's the case yet, but a strong partial explanation is they didn't have trade networks, and so were making the best of local materials while their competitors were able to use superior materials acquired from far away. Another good example is that developed, large civilizations moved northward with agriculture, even though there's strong evidence that IQs are higher among groups that spent significant timescales in colder (i.e. more northern) climates.

But it seems really odd to me to claim that if you dropped current humans into the world at the start of the previous interglacial period with no extra capital besides their genes, you would expect them to take twelve thousand years to reach the state of development we're at now. They've already got neat things like lactase persistence (developed 5-10k years ago), and while their adaptations to modern society might be a handicap during their hunter-gatherer phase, supposing they survive it should speed up the progress of their civilization afterward.

Comment author: JoshuaZ 19 November 2012 04:29:41PM 6 points [-]

The best example of that is neanderthals, who probably were better at abstract problem-solving, foresight, tool-making, and so on

What evidence do we have for this?

Comment author: Vaniver 19 November 2012 04:39:34PM *  4 points [-]

I'll have to check my original source for that when I get home; I was under the impression it was because their forebrains were larger, but looking now I'm primarily finding claims that their whole brains were larger (which, given their larger body size, doesn't mean all that much).

This looks like the closest thing in the relevant wiki article to my claim:

The quality of tools found at archaeological sites is further said to suggest that Neanderthals were good at "expert" cognition, a form of observational learning and practice acquired through apprenticeship that relies heavily on long-term procedural memory.

but it's also tempered by things that might be evidence the other way. (Neanderthal tools changed little in thousands of years: is that because they found the local optimum early, or because they were bad at innovating?)

[edit] This argument wasn't in the book I thought it was in, so I'm slightly less confident in it. I think there's strong evidence that the primary differential between neanderthals and their successors was social, not mental processing speed / memory / etc., and will edit the grandparent to reflect that.

Comment author: RichardKennaway 20 November 2012 05:27:43PM 4 points [-]

Gwern suggested that, if it were possible for civilization to have developed when our species had a lower IQ, then we'd still be dealing with the same problems, but we'd have a lower IQ with which to tackle them. Or, to put it another way, it is unsurprising that living in a civilization has posed problems that our species finds difficult to tackle, because if we were capable of solving such problems easily, we'd probably also have been capable of developing civilization earlier than we did.

And to put it yet another way, by something like a Peter Principle ("people are promoted to their level of incompetence"), we create problems up to our capacity to deal with them. However stupid or intelligent we are, we will always be dealing with problems at the edge of what we can deal with.

This, btw, makes me sceptical about predictions of radical increases in intelligence (of us or of our creations) bringing about paradise.

Comment author: [deleted] 21 November 2012 07:38:53AM 3 points [-]

we create problems up to our capacity to deal with them. However stupid or intelligent we are, we will always be dealing with problems at the edge of what we can deal with.

Were you thinking of any specific societal problems when you wrote this?

Most societal problems of today had smaller scale analogues in the past. Foreign relations, warfare, and internal security should have existed at least as long as there have been city states. Unsustainable development and overpopulation relative to available resources are nothing new; they were even cited in the main post as contributors to Ur's downfall. Likewise, public sickness, waste management, violent and coercive crime, inadequate housing, and unfavorable economic climates would all be familiar to, say, the Indus Valley Civilization. A few examples of modern anthropogenic risks: climate change, unfriendly intelligence explosion, nuclear warfare, nanotech. And then of course negentropy opportunity cost is an old problem we didn't create; we just didn't know about it back in the day.

In short, smart societies make a few new difficult problems, but mostly make larger societies which have larger versions of the old problems.

Comment author: Vaniver 20 November 2012 06:17:06PM 2 points [-]

This, btw, makes me sceptical about predictions of radical increases in intelligence (of us or of our creations) bringing about paradise.

To the extent that a boring place probably isn't paradise, sure. But a world in which almost all of your effort is spent tussling with other minds at your level seems much better than, say, the present world, where much of your effort is spent on the annoyances of corporeal existence.

Comment author: RichardKennaway 20 November 2012 06:55:15PM 6 points [-]

Yes, things can get better. Better than we can barely imagine. But by that standard, we're already living in the paradise of the past, and it's not exactly happy ever after, is it?

Comment author: johnlawrenceaspden 20 November 2012 07:27:39PM 5 points [-]

It's ok! Fix death and I'd be cool with it.

Comment author: Douglas_Reay 21 November 2012 02:21:14AM 1 point [-]

Do you include in the scope of that 'fix' dealing with problems associated with population, promotion, ambition, recidivist criminals who (after serving a few terms in jail) have time to learn to be good at crime, etc?

Comment author: johnlawrenceaspden 22 November 2012 10:31:19PM 1 point [-]

We would, as they say, have as long as we liked to sort out those sorts of issues.

Comment author: Douglas_Reay 23 November 2012 09:38:33AM 0 points [-]

How does that differ from saying "Given unlimited time to fix all social problems, society will eventually become a paradise" or "The root problem with current society is that we have not yet had sufficient time to fix all the other problems with it"? Couldn't the same be said about any imperfect society? I don't see how it is praise for the state of our current society versus previous societies.

Comment author: Desrtopa 19 November 2012 03:20:20PM 8 points [-]

If the gene for the synthesis of docosahexaenoic acid arose 80kya, and the current interglacial period began 12kya, that still leaves four thousand years between the end of the glacial period and the beginning of city-based civilization, which, keep in mind, is a long time.

If the civilization developments followed within a hundred years or so of the necessary biological and environmental factors coming into place, I wouldn't be so skeptical that our intelligence already exceeded the minimum necessary to produce those developments. But we already had domesticated grazing animals thousands of years before the foundation of Ur, and grains earlier than that. Don't forget that when we're dealing with cultural rather than biological evolution, a millennium is no longer a relative eyeblink.

Comment author: John_Maxwell_IV 20 November 2012 12:11:50PM *  3 points [-]

Humans are relatively conformist, and we often have a hard time translating abstract/revolutionary ideas into practice. It seems likely that many humans had ideas for things resembling civilization, or things that could've led to the development of a civilization, before the first actual civilization, in the same way more people dream about starting businesses than actually start businesses.

Paul Graham seems to think that local culture plays a huge role in startup success. Now consider that even the cultures Paul Graham considers pretty bad are still American city cultures, and America has a reputation for individualism, rags-to-riches success, etc. and that's all on a foundation of enlightenment values related to progress, questioning authority, and so on. And we've got a long and storied history of society changing on a large scale, within our lifetimes even.

So yeah, stagnant cultures are not necessarily being held back by lack of intelligence. It could be the standard akrasia/agency-failure type stuff that we're still struggling with today. (Arguably something similar is going on for people's failure to appreciate the possible magnitude of human-level AGI--it's just way too bizarre relative to historical norms for most of us to take it seriously.)

Comment author: John_Maxwell_IV 20 November 2012 11:48:39AM *  1 point [-]

Still, if you figure that our intelligence was increasing in a linear fashion, it seems slightly unlikely that it would trip over the civilization-making threshold during one of the relatively shorter interglacial periods. So I think we probably bought ourselves at least a little head start because of the ice age thing.

By the way, I wonder how well racial IQ correlates with civilization formation. Are people of Sumerian descent unusually smart, for instance? If not, maybe civilization formation has more of an element of serendipity than we're giving it credit for? Arguably sticking to a hunter-gatherer lifestyle might actually be smarter than forming a civilization in the short run.

Comment author: gwern 20 November 2012 07:33:56PM 4 points [-]

By the way, I wonder how well racial IQ correlates with civilization formation. Are people of Sumerian descent unusually smart, for instance?

I have no idea how we would check this, unfortunately, short of a lot of digging up extremely old bones. The area that is now Sumeria has been swept by invasion after invasion after invasion from pretty much every direction, and over 4000 years there'd be a lot of drift even if there were no invasions and no immigration. IQ being highly polygenic makes matters worse: a few generations of dysgenic selection (extensive cousin marriage?) could wipe out much of the faint signal one is looking for, and the poor quality DNA from digs might have the same issue.

Comment author: mrglwrf 20 November 2012 05:22:45PM 7 points [-]

Historical quibble- in "The First City" section, you seem to be partially confusing Ur with Uruk. Uruk is generally regarded as the first city in Sumeria, during the eponymous Uruk period (4000-3100 BC). Also generally believed to be the center of the "Uruk phenomenon" during which cuneiform writing and a number of other features of Mesopotamian civilization were developed. Ur was the capital of the Neo-Sumerian Ur III empire c.2000 BC, which built the Great Ziggurat of Ur shown in the picture.

Comment author: Douglas_Reay 20 November 2012 08:27:22PM 2 points [-]

Yep, they were both big and in the same area around the same time. I gave the tip of the hat to Ur being the flashpoint because we can document, via the spread of the Code of Ur-Nammu, its influence upon others. But it could be argued either way.

Comment author: mrglwrf 23 November 2012 01:41:55AM 0 points [-]

In the earlier period, Uruk was in fact substantially larger, thus the quibble. Marc Van De Mieroop, The Ancient Mesopotamian City, p.37:

But many aspects of Uruk show its special status in southern Mesopotamia. Its size greatly surpasses that of contemporary cities: around 3200 it is estimated to have been about 100 hectares in size, while in the region to its north the largest city measured only 50 hectares, and in the south the only other city, Ur, covered only 10-15 hectares. ... And Uruk continued to grow: around 2800 its walls encircled an area of 494 hectares and occupation outside the walls was likely.

Comment author: DuncanS 22 November 2012 12:25:18AM *  3 points [-]

What is the essential difference between human and animal intelligence? I don't actually think it's just a matter of degree. To put it simply, most brains are once-through machines. They take input from the senses, process it in conjunction with memories, and turn that into actions, and perhaps new memories. Their brains have lots of special-purpose optimizations for many things, and a surprising amount can be achieved like this. The brains are once-through largely because that's the fastest approach, and speed is important for many things. Human brains are still mostly once-through.

But we humans have one extra trick, which is to do with self-awareness. We can to an extent sense the output of our brains, and that output then becomes new input. This in turn leads to new output which can become input again. This apparently simple capability - forming a loop - is all that's needed to form a Turing-complete machine out of the specialized animal brain.

Without such a loop, an animal may know many things, but it will not know that it knows them. Because it isn't able to explicitly sense what it was just thinking about, it can't then start off a new thought based on the contents of the previous one.

The divide isn't absolute, I'm sure - I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought. And that small difference makes all the difference in the world.

Comment author: orthonormal 22 November 2012 06:17:37AM 4 points [-]

Chimps can suss out recursive puzzles where you have color-coded keys and locks, and you need to unlock Box A to get Key B to unlock Box B to get Key C to unlock Box C which contains food. They even choose the right box to unlock when one chain leads to the food and the other doesn't.

Sorry, there's not a difference of kind to be found here.
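The nested structure of that task can be rendered as a toy model (the box contents and names here are invented for illustration, not taken from the actual experiment): unlocking a box yields the key to the next one, and choosing correctly means following each chain to see whether it ends in food.

```python
# Toy model of the chained box/key task (contents hypothetical).
def follow(start_key, contents):
    item = start_key
    while item.startswith("key_"):             # keep unlocking while we hold a key
        item = contents[item.removeprefix("key_")]
    return item                                # whatever the last box held

contents = {"A": "key_B", "B": "key_C", "C": "food",   # chain that pays off
            "X": "key_Y", "Y": "rock"}                 # dead-end chain

# The chimp's choice: pick the starting key whose chain ends in food.
best = next(k for k in ("key_A", "key_X") if follow(k, contents) == "food")
print(best)  # key_A
```

The point of the model is only that solving the task requires tracking an arbitrary-depth chain of intermediate results, which is what makes the chimps' success notable.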

Comment author: jsteinhardt 22 November 2012 08:21:21PM 1 point [-]

How much training is necessary for them to do this? Humans can reason this out without any training. If the chimps had to be trained substantially (e.g. first starting with one box, being rewarded with food, then starting with two boxes, etc.), then I think this would constitute a difference.

Comment author: [deleted] 22 November 2012 08:27:22PM 4 points [-]

Well, one could argue that humans "train" for similar problems throughout their lives... Would you expect a feral child to figure that out straight away?

Comment author: MugaSofer 22 November 2012 06:28:14AM 0 points [-]

But then, there are plenty of examples of chimps exhibiting behavior that implies intelligence.

Comment author: JoshuaZ 22 November 2012 12:33:26AM 0 points [-]

The divide isn't absolute, I'm sure - I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought.

If dolphins or chimps did or did not have chains of thought how would be able to tell the difference?

Comment author: DuncanS 22 November 2012 12:56:33AM *  -2 points [-]

Because of what you can do with a train of thought.

"That mammoth is very dangerous, but would be tasty if I killed it."

"I could kill it if I had the right weapon"

"What kind of weapon would work?"

As against.... "That mammoth is very dangerous - run!"

Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don't have the ability to read your own output, you can't.

If dolphins or chimps did have arbitrarily long chains of thought, they'd be able to do general reasoning, as we do.
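The computer-science claim above can be made concrete with a minimal sketch (the instruction set and program are invented for illustration): given only memory, a conditional test, and the ability to loop back over your own state, you get general computation. This toy counter machine computes a + b by repeatedly feeding its own output back in as input.

```python
# A tiny counter machine: memory (registers), conditionals (jz), and
# loops (jmp) are enough for general-purpose computation.
def run(program, registers):
    pc = 0  # program counter: the machine reads back its own state
    while True:
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1; pc += 1
        elif op[0] == "dec":
            registers[op[1]] -= 1; pc += 1
        elif op[0] == "jz":    # conditional: jump if the register is zero
            pc = op[2] if registers[op[1]] == 0 else pc + 1
        elif op[0] == "jmp":   # unconditional loop-back
            pc = op[1]
        elif op[0] == "halt":
            return registers

# Addition by looping: move register "b" into "a" one unit at a time.
add = [
    ("jz", "b", 4),   # if b == 0, we're done
    ("dec", "b"),
    ("inc", "a"),
    ("jmp", 0),       # loop back and re-test
    ("halt",),
]
print(run(add, {"a": 3, "b": 4}))  # {'a': 7, 'b': 0}
```

A once-through machine, in this picture, is the same loop with the `jmp` instruction deleted: it can still map input to output, but it can never revisit its own results.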

Comment author: PeterisP 26 November 2012 11:52:14AM 2 points [-]

The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (placement of an otherwise inaccessible treat) seem to demonstrate such chains of thought.

Comment author: JoshuaZ 22 November 2012 01:03:36AM 0 points [-]

So what do you expect to be the signs of arbitrary general reasoning? Humans run out of memory eventually. If a dolphin or a chimp could do arbitrary reasoning but lacked the capacity to hold long chains of thought, what would you expect to see? I'm still not sure what actual testable distinction would occur in these cases, although in so far as I can think of what might arguably be evidence, it looks like dolphins pass, as you can see in this article already linked to in this thread.

Comment author: DuncanS 25 November 2012 10:00:15PM 2 points [-]

Let's think about the computer that you're using to look at this website. It's able to do general purpose logic, which is in some ways quite a trivial thing to learn. It's really quite poor at pattern matching, where we and essentially all intelligent animals excel. It is able to do fast data manipulation, reading its own output back.

As I'm sure you know, there's a distinction between computing systems which, given enough memory, can simulate any other computing system and computing systems which can't. Critical to the former is the ability to form a stored program of some description, and read it back and execute it. Computers that can do this can emulate any other computer, (albeit in a speed-challenged way in some cases).

Chimps and dolphins are undoubtedly smart, but for some reason they aren't crossing the threshold to generality. Their minds can represent many things, but not (apparently) the full gamut of what we can do. You won't find any chimps or dolphins discussing philosophy or computer science. My point actually is that humans went from making only relatively simple stone tools to discussing philosophy in an evolutionary eye-blink - there isn't THAT much of a difference between the two states.

My observation is that when we think, we introspect. We think about our thinking. This allows thought to connect to thought, and form patterns. If you can do THAT, then you are able to form the matrix of thought that leads to being able to think about the kinds of things we discuss here.

This only can happen if you have a sufficiently strong introspective sense. If you haven't got that, your thoughts remain dominated by the concrete world driven by your other senses.

Can I turn this on its head? A chimp has WAY more processing power than any supercomputer ever built, including the Watson machine that trounced various humans at Jeopardy. The puzzle is why they can't think about philosophy, not why we can. Our much vaunted generality is pretty borderline at times - humans are truly BAD at being rational, and incredibly slow at reasoning. Why is such a powerful piece of hardware as us so utterly incompetent at something so simple?

The reason, I believe, is that our brains are largely evolved to do something else. Our purpose is to sense the world, and rapidly come up with some appropriate response. We are vastly parallel machines which do pattern recognition and ultra-fast response, based on inherently slow switches. Introspection appears largely irrelevant to this. We probably evolved it only as a means of predicting what other humans and creatures would do, and only incidentally did it turn into a means of thinking about thinking.

What is the actual testable distinction? Hard to say, but once you gain the ability to reason independently from the senses, the ability to think about numbers - big numbers - is not that far away.

Something like the ability to grasp that there is no largest number is probably the threshold - the logic's simple, but requires you to think of a number separately from the real world. Hard to know how to show whether dolphins might know this or not, I appreciate that. I think it's essentially proven that dolphins are smart enough to understand the logical relationships between the pieces of this proof, as the relationships are simple, and they can grasp things of that complexity that are driven by the external world. But perhaps they can't see their internal world well enough to be able to pull 'number' as an idea out from 'two' and 'three' (which are ideas that dolphins are surely able to get), and then finish the puzzle.
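For concreteness, the "no largest number" argument referred to here is just one step of abstraction, sketched below:

```latex
% Suppose some natural number N were the largest.
% Then N + 1 is also a natural number, and N + 1 > N,
% contradicting the assumption. Hence there is no largest number.
\forall N \in \mathbb{N},\ \exists M \in \mathbb{N} : M > N
\quad (\text{take } M = N + 1)
```

Each individual relation ("N + 1 is a number", "N + 1 > N") is concrete; only the quantification over *all* numbers requires thinking of 'number' as an idea detached from the world.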

Perhaps it's not chains that are the issue, but the ability to abstract clear of the outside world and carry on going.

Comment author: DuncanS 22 November 2012 12:06:06AM 3 points [-]

Evolution, as an algorithm, is very much better as an optimizer of an existing design than it is as a creator of a new design. Optimizing the size of the brain of a creature is, for evolution, an easy problem. Making a better, more efficient brain is a much harder problem, and happens slowly, comparatively speaking.

The optimization problem is essentially a kind of budgeting problem. If I have a budget of C calories per day, I can spend it on X kilos of muscle, or Y grams of brain tissue. Both will cost me the same amount of calories, and each brings its own advantages. Since evolution is good at this kind of problem, we can expect that it will correctly find the point of tradeoff - the point where the rate of gain of advantage for additional expenditure on ANY organ in the body is exactly the same.

Putting it differently, a cow design could trade larger brain for smaller muscles, or larger muscles for smaller brain. The actual cow is found at the point where those tradeoffs are pretty much balanced.
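That equal-marginal-gain condition can be illustrated numerically (the benefit functions here are invented for illustration; the only assumption that matters is diminishing returns for both organs):

```python
import math

# Toy budgeting model: a fixed calorie budget B split between
# brain tissue (b) and muscle (B - b), each with diminishing returns.
B = 100
def brain_benefit(b): return 2 * math.sqrt(b)
def muscle_benefit(m): return math.sqrt(m)

# Evolution-as-optimizer: pick the split with the highest total benefit.
best_b = max(range(B + 1),
             key=lambda b: brain_benefit(b) + muscle_benefit(B - b))

# At the optimum, the marginal gain per extra calorie is (nearly) equal
# for both organs -- the balance point described above.
eps = 1e-6
marg_brain = (brain_benefit(best_b + eps) - brain_benefit(best_b)) / eps
marg_muscle = (muscle_benefit(B - best_b + eps) - muscle_benefit(B - best_b)) / eps
print(best_b, round(marg_brain, 3), round(marg_muscle, 3))  # 80 0.112 0.112
```

Note what happens if the brain benefit function improves (say the coefficient 2 rises to 3): the optimal split shifts toward more brain, which is exactly the "better brains therefore bigger brains" direction of the argument.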

A whale has a large brain, but it's quite small in comparison to the whale as a whole. If a whale were to double the size of its brain, it wouldn't make a huge dent in the overall calorie budget. However, evolution's balance of the whale body suggests that it wouldn't be worth it. Making a whale brain that much bigger wouldn't make the whale sufficiently better to justify the cost.

Where this argument basically leads is to turn the conventional wisdom on its head. People say that big brains are better because they are bigger. However, the argument that evolution can balance the size of body structures efficiently and quickly leads to the opposite conclusion. Modern brains are bigger because they are better. Because modern brains are better than they used to be - because evolution has managed to create better brains - it becomes more worthwhile making them bigger. Because brains are better, adding more brain gives you a bigger benefit, so the tradeoff point moves towards larger brain sizes.

Dinosaur brains were very much smaller, on the whole, than the brains of similar animals today. We can infer from this argument that this is because their brains were less effective, and that in turn lowered any advantage that might have been gained from making the size of the brain larger. Consequently, dinosaurs must have been even more stupid than the small size of their brains suggests.

Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger. Speculative, perhaps. But on the larger scale, looking at the sweeping increase in brain sizes across the whole of the geological record, the qualitative improvement in brains has to be seen in the gradual increase in size.

Comment author: JoshuaZ 22 November 2012 12:17:54AM 9 points [-]

Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger.

Human brains have been shrinking.

Comment author: Vladimir_Nesov 19 November 2012 04:36:24PM *  6 points [-]

(Summary:) There is no cause to suppose, even if the human genome 100,000 years ago had the full set of IQ-related-alleles present in our genome today, that they would have developed civilisation much sooner.

(Rhetorical nitpick:) You gave an argument against one such cause. This doesn't mean there aren't other causes, and it's not clear that your argument is decisive.

Comment author: Luke_A_Somers 26 November 2012 08:01:06PM 2 points [-]

it is unsurprising that living in a civilization has posed problems that our species finds difficult to tackle, because if we were capable of solving such problems easily, we'd probably also have been capable of developing civilization earlier than we did.

I'd more say, it's unsurprising that life poses problems our species finds difficult to tackle, because we have moving goalposts of satisfaction in terms of our problems being solved.

Comment author: JaySwartz 21 November 2012 02:34:52AM 2 points [-]

200k years ago when Homo Sapiens first appeared, fundamental adaptability was the dominant force. The most adaptable, not the most intelligent, survived. While adaptability is a component of intelligence, intelligence is not a component of adaptability. The coincidence with the start of the ice age is consistent with this. The ice age is a relatively minor extinction event, but nonetheless the appearance and survival of Homo Sapiens fits the pattern, where less adaptable life forms did not survive.

Across the Hominidae family Homo Sapiens proved to be most adaptable. During the ice age the likely focus was simply to survive. When a temperate climate returned there are some who believe Homo Sapiens, much as future Aztecs and others, began to systematically eliminate their competition.

Concurrently, another phenomenon was occurring. Homo Sapiens was learning and steadily increasing their understanding of the world. While no direct evidence has survived the years, it would be reasonable to posit that learning continued in much the same fashion as today: new knowledge building on established knowledge. Being less organized than in later eras, it would have progressed more slowly.

Our improved knowledge likely increased our survival rates through the second ice age. When temperate climates returned, the stage was set for the advancement of mankind to organized farming, written language and Ur.

Somewhere in this time frame, intelligence began to overtake adaptability as the dominant force. This also marked the shift from evolutionary pressure to societal pressure as the underlying force behind advancement and survivability. The random nature of evolutionary advances gave way to a more complex society-driven selection process.

It's also important to draw a subtle distinction. The advances were not a function of increase in general IQ. They were a function of integration of the concepts envisioned by a subset of high IQ individuals into society; i.e., a societal variant of evolutionary adaptability.

Comment author: chaosmage 26 November 2012 04:32:04PM *  1 point [-]

We don't know for certain what it was about the culture surrounding the dawn of cities that made that particular combination of trade, writing, specialisation, hierarchy and religion communicable, when similar cultures from previous false dawns failed to spread. We can trace each of those elements to earlier sources, none of them were original to Ur, so perhaps it was a case of a critical mass achieving a self-sustaining reaction.

I suggest that the decisive ingredient was an explicit, somewhat accurate understanding of how children are conceived, and following from this, a concept of fatherhood.

Many hunter-gatherer societies didn't have that when we contacted them. They all had figured out it had something to do with childbearing age and menstruation. Some had narrowed it down to the pregnant woman having recently had sex with a man. But you don't need to know ejaculation inside the vagina is what counts, and that it matters who ejaculates there, unless you're trying to domesticate mammals.

From my superficial understanding of anthropology, it appears that in hunter-gatherer societies, the men have very little responsibility for the kids. Of course they contribute food and protection, which is commonly shared among the whole group including the kids. They'll teach the boys the essential skills, but any man will teach any boy the same set of skills; there's no personal connection and no specialization of labor. As a man in a hunter-gatherer society, you essentially need not worry about the next generation. And we do find that in these societies, the men (as well as the kids) tend to have a lot of spare time between hunts.

I imagine a hunter-gatherer, experimenting with domestication, first realizing he could be a father. That gives him one hell of an evolutionary advantage, and he's probably not the dumbest member of his group, so he may have good intelligence-related traits that he can now spread more effectively. But I think what's far more important is that this realization creates a lot of new priorities for him, and for everyone he tells about this. Because he'd naturally start to measure his own success by the well-being of his children, much as the success of mothers was measured before. So he starts to invest much more time (both his own and the kids') into teaching them skills that mothers can't teach because they're busy mothering. He could pass on more knowledge than a hunter-gatherer would, he'd prefer to teach his own kids over others, and boom: he invents trades, family businesses, distribution of labor. Now knowledge can accumulate, inventions can be copied and spread, memetic/cultural evolution kicks in. Both the technologies that allow cities, and the refined fighting skills of the nomadic raiders, follow from intensified education.

Education increases expressed IQ. However, it also increases the value of expressed IQ in sexual selection. So I don't think we're quite as dumb as we were when civilization began. But I do think you won't find significant division of labor in any society that doesn't know about domestication of animals.

So when you ask why people accept the comparatively bad living conditions of early civilization, the answer is simple: they do it for the kids. You don't do that when you think that being a man, you can't have any.

Comment author: NancyLebovitz 20 November 2012 04:50:29PM 1 point [-]

if it were possible for civilization to have developed when our species had a lower IQ, then we'd still be dealing with the same problems, but we'd have a lower IQ with which to tackle them.

On the other hand, so many of our problems are caused by other people, and some of them are caused by smart people. It took a lot of intelligence to make the financial crisis happen.

Now I'm wondering whether a more equal distribution of intelligence would lead to fewer problems.

Comment author: CCC 21 November 2012 08:11:45AM 3 points [-]

I strongly suspect that fewer idiots would lead to fewer problems (but someone who knows that they are an idiot, and listens closely to the advice of more intelligent people, may cause fewer problems than an arrogant but more intelligent person who believes that no-one can give him good advice). However, I don't think that fewer geniuses would dramatically reduce problems (on the basis that a problem caused by a genius is often temporary - like the financial crisis - while a problem solved by a genius - like the invention of X for a given X - is often solved permanently).

Comment author: shminux 19 November 2012 08:41:21PM 1 point [-]

But they had to be free to wander to follow nomadic food sources, and they were limited by access to food that the human body could use to create Docosahexaenoic acid, which is a fatty acid required for human brain development. Originally humans got this from fish living in the lakes and rivers of central Africa. However, about 80,000 years ago, we developed a gene that let us synthesise the same acid from other sources, freeing humanity to migrate away from the wet areas, past the dry northern part, and out into the fertile crescent.

So your point is that expressed IQ was DHA-bound for those living away from the shore, making them not smart enough to develop civilization where the conditions were ripe, right? Why DHA specifically? Wouldn't there be workarounds, given that "IQ is polygenic", if the evolutionary pressure was toward higher IQ? I'm wondering if this is but one of a multitude of possible explanations, and how one would attempt to falsify it.

Comment author: Strange7 20 November 2012 12:56:44AM 3 points [-]

The workaround that ended up being selected for was a new DHA synthesis pathway.

Comment author: [deleted] 19 November 2012 08:05:32PM 1 point [-]

Yep. As I implied elsewhere, I think that the step between intelligence and civilization is an important though overlooked one in the Great Filter.

Comment author: gwern 19 November 2012 08:45:12PM 4 points [-]

That's a paper I'd like to see someone do at some point: given the scaling information about human-level brains in the very interesting recent paper "The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost", Herculano-Houzel 2012 (quotes from it are in my essay linked in OP), and something like OP and my African points, estimate how close to the break-even point we are: how few calories/day of brain consumption are we away from being able to support civilization development on any timescale at all?

Comment author: John_Maxwell_IV 20 November 2012 04:43:06AM *  2 points [-]

Given that the neolithic revolution happened in more than one place, I don't see how it could be a very significant filter. Or are you referring to "civilization" in a sense not achieved by the Incans or the Aztecs? It's interesting to wonder how far the Aztecs & subsequent civilizations could've gone if they hadn't been interrupted by the Europeans.

Comment author: IlyaShpitser 20 November 2012 05:01:02AM *  2 points [-]

The Aztecs were an interesting society. I wonder how much of their gratuitous sacrifices were politically calculated to keep the city states in line, and how much was due to their genuine and profound existential anxiety ("we owe the Gods for their continued sacrifice to keep the world alive -- so we better keep sacrificing to them or the sun may not come up tomorrow!")

I don't think Aztecs are a good candidate for an alternative history civ, they feel to me like a failure mode. Incas make more sense (they also had potatoes, quinoa, llamas, etc.)

Comment author: tut 14 December 2012 06:20:54PM 0 points [-]

Both Judaism and Hinduism also started out as cosmic-maintenance religions, so that might be a stage that civilizations need to pass through rather than a specific failure mode of only one of them.

Comment author: CCC 21 November 2012 08:14:25AM *  1 point [-]

That just means that the right conditions were a worldwide (or close to worldwide) phenomenon on Earth. This does not imply that the right conditions for the development of civilisation are necessarily common given the right conditions for the formation of intelligent life.

Unfortunately, we only have one example of a planet having the right conditions for the formation of intelligent life. Drawing statistical inferences from a single example is not a good idea.

Comment author: [deleted] 20 November 2012 09:47:22AM 1 point [-]

Given that the neolithic revolution happened in more than one place, I don't see how it could be a very significant filter.

But it probably wouldn't have happened anywhere if there hadn't been an interglacial period. My point is that intelligent life is unlikely to develop a technological civilization unless the planet it's on allows it to achieve very high population densities (e.g. by artificially growing more food than would otherwise be available), which ISTM Earth before the interglacial period didn't.

Or are you referring to "civilization" in a sense not achieved by the Incans or the Aztecs?

From what I read on http://en.wikipedia.org/wiki/Aztec#Economy they definitely do count as a civilization by my standards.

Comment author: CarlShulman 20 December 2012 07:28:05PM 0 points [-]

then we might expect to see something similar to the Flynn effect.

The Flynn Effect has been an order of magnitude too fast to be accounted for by such factors.

Comment author: JoshuaZ 19 November 2012 04:03:21PM 0 points [-]

Technologies allow more technologies to be built. For example, writing greatly bootstraps the ability to pass on knowledge. Similarly, larger populations give a higher chance that people will make discoveries.

The toy model I sometimes use to describe this is a biased coin with a chance of turning up heads of something like 1 - 1/(C(k + n)), where C and k are constants, with C very small and k very large, and n is the number of previous heads. Here a heads denotes a discovery or invention. If, for example, C=1 and k=10^5, then it will take a long time to get the first few heads, but once one has a few, discoveries will start to become increasingly common. C essentially denotes intelligence, so a smarter species will start getting heads faster.
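
As a sketch, the qualitative behaviour this toy model describes (discoveries rare at first, then accelerating as each heads makes the next heads more likely) can be simulated directly. The functional form p = min(1, C·(k + n)) and the parameter values below are illustrative assumptions chosen to reproduce that behaviour, not the exact formula or constants in the comment:

```python
import random

def simulate_discoveries(C, k, flips, seed=0):
    """Flip a biased coin `flips` times. The heads probability grows with
    n, the number of previous heads (discoveries). Returns the flip
    indices at which each discovery occurred."""
    rng = random.Random(seed)
    n = 0
    discovery_times = []
    for t in range(flips):
        p = min(1.0, C * (k + n))  # assumed form: p rises as discoveries accumulate
        if rng.random() < p:
            n += 1
            discovery_times.append(t)
    return discovery_times

# Illustrative parameters: typically the first heads takes thousands of
# flips, but later discoveries arrive faster and faster.
times = simulate_discoveries(C=1e-4, k=1, flips=50_000)
gaps = [b - a for a, b in zip(times, times[1:])]
```

In this sketch, doubling C (the species' intelligence) roughly halves the expected waiting time at every stage, matching the claim that a smarter species starts getting heads faster, while a species with lower C still gets there eventually, just on a longer timescale.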

Of course, this sort of thing only works if a species has a chance at getting to civilization at all, which the vast majority don't. But it does suggest that decreased intelligence could still result in a civilization. It doesn't seem implausible that if you took out a few of the genes that occasionally come together to produce Isaac Newtons and Terry Taos, you'd still be able to get progress at a decent pace. Even Newton, for example, was doing work that was largely also being investigated by other people like Leibniz and Hooke.

Comment author: Decius 19 November 2012 08:37:25PM 1 point [-]

Breakthroughs do cluster, but that's because of the tendency for a group to be working on a lot of related problems at once, and a breakthrough in any one area might resolve a key issue in any number of other areas.

For example, the motor/generator is a moderate breakthrough in the field of mechanics that solves several larger issues in electrical distribution. The relay, created for electrical distribution, led to the vacuum tube and then the transistor.

In a purer sense, better smelting practices provided more consistent steel, which allowed the polishing of more precise lenses, developing better telescopes which provided more information about the crystalline structure of metals yielding better metallurgy. The cycle doesn't recurse infinitely because we virtually never have some project that is just waiting on a development that is two steps ahead of current understanding.

Comment author: JoshuaZ 20 November 2012 03:55:45AM 1 point [-]

Breakthroughs do cluster, but that's because of the tendency for a group to be working on a lot of related problems at once, and a breakthrough in any one area might resolve a key issue in any number of other areas.

This is an explanation for clustering in modern breakthroughs. But there's a different sort of clustering: discoveries and inventions happening more and more rapidly. A few thousand years ago they happened at best every few hundred years. By the late Middle Ages they happened every few decades. In the 19th century discoveries and inventions occurred at a breakneck pace. There's a decent argument that things have slowed down again in the last few years (possibly with a peak around 1900 and a decline since then), but it is this sort of ever more rapid pace at the large scale that suggests this type of model.

Comment author: Decius 20 November 2012 05:05:51AM 0 points [-]

So, there were more than 20 clusters of related discoveries in the 19th century? What were they?

A large number of related discoveries about e.g. electromagnetism should count the same as the large number of related discoveries about food preparation, or chipping flint, or masonry, or architectural engineering.

Comment author: JoshuaZ 20 November 2012 05:19:43AM *  1 point [-]

So, there were more than 20 clusters of related discoveries in the 19th century? What were they?

Well, electricity is one area where there were easily at least 20. Volta made the eponymous pile, Ohm discovered his law, Faraday discovered induction, Maxwell formulated his equations (and noted that the propagation speed of an electromagnetic field is the observed speed of light), Faraday invented the first generators, Siemens refined them, Seebeck discovered the thermoelectric effect, Edison made a practical lightbulb, Edison built large-scale electric grids, Hertz transmitted radio waves, Marconi used them to transmit signals, Daniell made the first practical battery (later improved into the gravity cell), and lead-acid batteries also date from this period. Etc.

But this is missing part of the primary point: discoveries help out even in not directly related areas. Better communication helps all areas. Thus, for example, the ease of modern transportation and communication meant that the late 19th-century transits of Venus were observed with far more careful coordination than previous transits. And Darwin and other 19th-century naturalists were able to do much of their work because sea travel had become substantially faster and more reliable than it had been earlier. This is part of a general pattern: technologies and developments beget more technologies and insights, even in areas that aren't directly connected.

Comment author: Decius 20 November 2012 06:06:28PM 1 point [-]

If fire and composting each count as one cluster, then electricity, electromagnetic radiation, and the relationship between the two are each one cluster. Also, I think that Newtonian physics and Aristotelian physics count equally as major developments, along with a very large number of developments that have since been completely abandoned and forgotten. Combined with the developments that 'everybody knows' now (e.g. how to create and extinguish fires, till soil, make plants edible), I think that the rate of new discoveries has remained roughly proportional to the number of people alive and the degree by which they exceed subsistence living.

Granted, that is a huge increase in absolute rate, but it isn't strictly linked to an increase in intelligence or reasoning abilities.

Comment author: JoshuaZ 20 November 2012 07:39:55PM 0 points [-]

Even if it is an increase proportional to the population, that still supports a model where increased technology (which allows a larger population) is responsible for further increases. So the upshot is still the same: it is highly plausible, in that context, that other species had enough intelligence to build civilization but never got the first few lucky technologies.

Comment author: Decius 21 November 2012 07:23:05AM 2 points [-]

A dolphin's ability to invent novel behaviours was put to the test in a famous experiment by the renowned dolphin expert Karen Pryor. Two rough-toothed dolphins were rewarded whenever they came up with a new behaviour. It took just a few trials for both dolphins to realise what was required. A similar trial was set up with humans. The humans took about as long to realise what they were being trained to do as did the dolphins. For both the dolphins and the humans, there was a period of frustration (even anger, in the humans) before they "caught on". Once they figured it out, the humans expressed great relief, whereas the dolphins raced around the tank excitedly, displaying more and more novel behaviours.

source

And cue the Douglas Adams reference.

Comment author: CAE_Jones 21 November 2012 10:17:08AM 2 points [-]

I have to wonder how much dolphin anatomy factors into their apparent lack of civilization-building. Then again, I haven't read anything about dolphins developing anything like agriculture (whereas some social insects seem to manage some impressive achievements, such as ants domesticating other insects, farming fungi, and building vast inter-connected colonies). Yet it seems pretty clear that social insects are nothing like intelligent in the way that primates and dolphins are.

Comment author: Decius 21 November 2012 05:46:21PM 1 point [-]

Well, there is the complex hunting behavior, and indications of limited tool use. Why is agriculture special?

Comment author: Vaniver 19 November 2012 11:30:10PM 1 point [-]

developing better telescopes which provided more information about the crystalline structure of metals

That doesn't sound like the history of solid state physics / materials engineering that I know; what do you have in mind here?

Comment author: Decius 20 November 2012 04:35:08AM 0 points [-]

Sorry - the need of optics for metals with certain properties is part of any history of optics, and in order to understand metallurgy one needs to see metals as crystalline, which requires optics superior to those that could be created without applied metallurgy.

There's a certain advantage in that much of materials science can be cheated by experimentation without understanding, such that it is possible to work steel without knowing what steel is.

Comment author: Vaniver 20 November 2012 01:25:21PM *  1 point [-]

I was under the impression that the discovery that metals were crystalline was due to Bragg in 1912, and the wide angles involved don't require significant lens quality.

Metals do have microstructure that's very metallurgically relevant, which can be seen under a microscope (and there lens quality is rather relevant). While understanding the underlying crystalline structure helps the analysis, as you point out the experimentalists were able to find useful alloys and cooling recipes without knowing about the crystalline structure, with some help from knowing the microstructure.

I think the word "crystalline" was what was throwing me off from your description, though it is unclear to me how much advances in optics helped experimental metallurgists.

Comment author: Decius 20 November 2012 05:54:52PM 0 points [-]

Most of the alloying and cooling was developed without even looking at what you call the microstructure. Current-generation optical microscopes are easily capable of observing individual surface crystals under elastic and inelastic deformation.

The effects of a given heat treatment on a given object are fairly simple to measure, but predicting the effect of an untested combination requires deeper understanding. Trial and error can create isolated useful developments, but understanding the next level down allows accurate prediction of interesting developments. For example, the effects of alloying agents in iron are still determined experimentally rather than predicted.