Abstract: In the FOOM debate, Eliezer emphasizes 'optimization power', something like intelligence, as the main thing that makes both evolution and humans so powerful. A different choice of abstractions says that the main thing that has been giving various organisms - from single-celled creatures to wasps to humans - an advantage is the ability to form superorganisms, thus reaping the gains of specialization and shifting evolutionary selection pressure to the level of the superorganism. There seem to be several ways in which a technological singularity could involve the creation of new kinds of superorganisms, which would then reap benefits above and beyond those that individual humans can achieve, and which would quite likely have very different values. This strongly suggests that even if one is not worried about the intelligence explosion (e.g. because one finds a hard takeoff improbable), one should still be worried about the co-operative explosion.
After watching Jonathan Haidt's excellent new TEDTalk yesterday, I bought his latest book, The Righteous Mind: Why Good People Are Divided by Politics and Religion. At one point, Haidt discusses evolutionary superorganisms - cases where previously separate organisms have joined together into a single superorganism, shifting evolution's selection pressure to operate on the level of the superorganism and avoiding the usual pitfalls that block group selection (excerpts below). With their increased ability to co-operate, these new superorganisms can often out-compete simpler organisms.
Haidt's argument is that color politics and other political mind-killingness are due to a set of adaptations that temporarily let people merge into a superorganism and set individual interests aside. To a lesser extent, so are moral intuitions about things such as fairness and proportionality. Yes, it's a group selection argument. Haidt acknowledges that group selection has been unpopular in biology for a while, but notes that it has also been making a comeback recently, and cites e.g. the work on multi-level selection as supporting his thesis. I mention some of his references (which I have not yet read) below.
Anyway, the reason I'm bringing this up is that I've been re-reading the FOOM debate of late, and in Life's Story Continues, Eliezer references some of the same evolutionary milestones as Haidt does. And while Eliezer also mentions that cells provided a major co-operative advantage that allowed for specialization, he views this merely through the lens of optimization power, and dismisses e.g. unicellular eukaryotes with the words "meh, so what".
The interesting thing about the FOOM debate is that both Eliezer and Robin seem to talk a lot about the significance of co-operation, but they never quite take it up explicitly. Robin talks about the way that isolated groups typically aren't able to take over the world, because it's much more effective to co-operate with others than to try to do everything yourself, or because information within the group tends to leak out to other parties. Eliezer talks about the way that cells allowed for specialization, and how writing allowed human culture to accumulate and people to build on each other's inventions.
Even as Eliezer talks about intelligence, insight, and recursion, one could view this too as a discussion of the power of specialization, co-operation and superorganisms - for intelligence seems to consist of a large number of specialized modules, all somehow merged to work in the same organism. And Robin seems to view large groups of people as acting as a kind of loose superorganism, beating smaller groups that try to do things alone:
Robin has also explicitly made the point that it is the difficulty of co-operation that suggests we can keep ourselves safe from uploads or AIs with hostile intentions:
Situations like war or violent rebellions are, arguably, cases where the "human superorganism adaptations" kick in the strongest - where people have the strongest propensity to view themselves primarily as part of a group, and where they are the most ready to sacrifice themselves for the interests of the group. Indeed, Haidt quotes (both in the book and the TEDTalk) former soldiers who say that there's something unique about the states of consciousness that war can produce:
So Robin, in If Uploads Come First, seems to basically be saying that uploads are dangerous if we let them become superorganisms. Usually, individuals have a large number of their own worries and priorities, and even if they had much to gain from co-operating, they can't trust each other, or resist the temptation to free-ride, well enough to really work together and become dangerous.
Incidentally, this provides an easy rebuttal to the "corporations are already superintelligent" claim - while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when these conflict with those of the company. It's certainly nothing like the situation within a cell, where the survival of each organelle depends on the survival of the whole cell. If the cell dies, the organelles die; if the company fails, the employees can just get a new job.
It would seem to me that, whatever your take on the intelligence explosion, evolutionary history so far strongly suggests that new kinds of superorganisms - larger and more cohesive than human groups, and less dependent on crippling their own rationality in order to maintain group cohesion - would be a major risk for humanity. This is not to say that an intelligence explosion wouldn't be dangerous as well - I have no idea what a mind that could think 1,000 times faster than me could do - but a co-operative explosion should be considered dangerous even if you thought a hard takeoff via recursive self-improvement (say) was impossible. And many of the ways of creating a superorganism (see below) seem to involve processes that could conceivably lead to the superorganisms having quite different values from humans. Even if no single superorganism could take over, that's not much of a comfort for the ordinary humans caught in the crossfire.
How might a co-operative explosion happen? I see at least three possibilities:
Below are some more excerpts from Haidt's book:
Haidt's references on this include, though are not limited to, the following:
Okasha, S. (2006) Evolution and the Levels of Selection. Oxford: Oxford University Press.
Hölldobler, B., and E. O. Wilson. (2009) The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies. New York: Norton.
Bourke, A. F. G. (2011) Principles of Social Evolution. New York: Oxford University Press.
Wilson, E. O., and B. Hölldobler. (2005) “Eusociality: Origin and Consequences.” Proceedings of the National Academy of Sciences of the United States of America 102:13367–71.
Tomasello, M., A. Melis, C. Tennie, E. Wyman, E. Herrmann, and A. Schneider. (Forthcoming) “Two Key Steps in the Evolution of Human Cooperation: The Mutualism Hypothesis.” Current Anthropology.