(FWIW various LessWrongers who have studied the issue don't agree with Eliezer's bias against arguments from group selection. (Um, me, for example; what finally convinced me was a staggeringly impressive chapter (IIRC 'twas "The Coevolution of Institutions and Preferences") from Microeconomics: Behavior, Institutions, and Evolution, though I also remember being swayed by various papers published by NECSI.) I'd be very interested in any opinions from well-read biologists or economists.)
The analogies between biological and social evolution are limited. Not only does group selection work in social evolution, but social evolution is Lamarckian in that it retains acquired traits. So you need to be careful when reasoning from one to the other; I think that is one reason people keep trying to "justify" group selection in biology.
The "new" group selection (e.g. here and here) works with both organic and cultural evolution.
Dogs pass on fleas they acquired during their lifespan to their offspring - much as humans pass on ideas they acquired during their lifespan to their offspring. Both the fleas and the ideas can mutate inside their hosts - and those changes are passed on as well.
The differences between organic and cultural evolution are thus frequently overstated. Critically, Darwinian evolutionary theory applies to both realms.
Except it's more like viruses than fleas: significant amounts of evolution can happen within a single host generation, and entirely different species can cross-pollinate if they end up within the same host.
Not only does group selection work in social evolution, but social evolution is Lamarckian in that it retains acquired traits
Isn't modern opinion that vanilla natural selection is also non-negligibly Lamarckian? (I suppose it's very possible that the sources I've read over-stated the Lamarckian factors.)
When you have a parenthetical inside a parenthetical inside a parenthetical, is it time to break out the square brackets?
No, it's time to take out some of the round ones.
I find that even the trivial heuristic "delete all parentheses" usually improves what I write.
The heuristic I generally use is "use parentheses as needed, but rewrite if you find that you're needing to use square brackets." Why? Thinking about it, I believe this is because I see parentheses all the time in professional texts, but almost never parentheticals inside parentheticals.
But as I verbalize this heuristic, I suddenly feel like it might lend the writing a certain charm or desirable style to defy convention and double-bag some asides. Hmm.
(A related heuristic for those with little time is to assume that lots of parentheses is correlated with lack of writing ability is correlated with low intelligence is correlated with inability to contribute interesting ideas, thus allowing you to ignore people that (ab-)use lots of parentheses. I admit to using this heuristic sometimes.)
I find that people who use a lot of parentheses tend to be intelligent, and I think this screens off the alleged inference from lots of parentheses to inability to contribute interesting ideas.
I don't know whether I'm right in thinking there's a parentheses/intelligence correlation, but if I am there's a reasonably plausible explanation. Why would someone use lots of parens? Because when they think about something, a bunch of other related things occur to them too and they want to avoid oversimplifying. Of course it's even better to think of the related things and then find ways to express yourself that don't depend on overloading your prose with parentheses, but most people who use few parentheses aren't in that category.
I don't know whether I'm right in thinking there's a parentheses/intelligence correlation, but if I am there's a reasonably plausible explanation. Why would someone use lots of parens?
((Well), that's (easy).) (((Heavy) users (of (parentheses))) tend to (be ((LISP) weenies))), and ((learning (LISP)) gives ((a) boost (of) ((15) to (30) (IQ) points), (at least))).
(First impression: You're talking about the 130 vs. 145 distinction whereas I'm talking about the 145 vs. 160 distinction (which you characterize as "even better"). (Can-barely-stand-up drunk (yet again!), opinions may or may not be reflectively endorsed, let alone right.))
Yes, it's plausible that we're talking about different distinctions. But even in the range 145-160 I am very, very unconvinced that using fewer parens is a good sign of intelligence. Perhaps you have some actual evidence? Unfortunately, people with an IQ of 160 are scarce enough that it'll probably be difficult to distinguish a real connection from a spurious one where it just happens that the smartest people are also being careful about writing style.
(Increasingly contemptuous of your too-drunk-to-stand signalling extravaganza; my comments may be distorted in consequence.)
Yes, I think I have evidence -- of the roughly 5 people I know of 160+ IQ, none use many parentheses, whereas more than 1 in 6 of those I know in the immediately preceding S.D. fall into the parenthesis-(ab)using category. Of course, even I myself don't put much faith in that data.
(Is my drunkenness-signaling (failed) signaling or (failed) counter-signaling (ignoring externalities in the form of diminished credibility)? I can't tell.)
Is treating "data" as plural rather than singular correlated with difference between high and very high IQs in your experience? :-)
(I wonder whether I'm evidence one way or another here. I'm somewhere around 150, I think, and I used to use an awful lot of parens and have forced myself not to because I think not doing so is better style. But I'm more concerned with writing style than many other people I know who are about as clever as I am.)
((Counter-signalling is a special case of signalling. It isn't necessarily (failed) just because I don't like it.))
((()))
Is treating "data" as plural rather than singular correlated with difference between high and very high IQs in your experience? :-)
In my experience that seems to correlate a lot more with conscientiousness and caring about writing style after screening off intelligence. (Also: fuck!—I hate when I forget to treat "data" as plural.)
I used to use an awful lot of parens and have forced myself not to because I think not doing so is better style.
Same here, at least when it comes to writing for a truly general audience or for myself.
(Side note: another thing that confuses me is that intelligence doesn't seem to me to be overwhelmingly correlated with spelling ability. Not quite sure what to make of this; thus far I've attributed it to unrelated selection effects on who I've encountered. Would be interested in others' impressions.)
I have found entirely the opposite; it's very strongly correlated with spelling ability - or so it seems from my necessarily few observations, of course. I know some excellent mathematicians who write very stilted prose, and a few make more grammatical errors than I'd have expected, but they can all at least spell well.
I have the opposite impression, but now that I have that correlation it's hard to make further unbiased observations.
I know many very intelligent good spellers, several very intelligent mediocre spellers, and one or two very intelligent, apparently incorrigibly atrocious spellers. I don't know any moderate-intelligence good spellers, know a few moderate-intelligence atrocious spellers, and know quite a few moderate-intelligence mediocre spellers. I don't know very many dumb people socially, and mostly don't know how good their spelling is, as they don't write much. People I've met on the Internet don't really count, as I filter too much on spelling ability to begin with.
(Since you two seem to be mostly using the mentioned IQ scores as a way to indicate relative intelligence, rather than speaking of anything directly related to IQ and IQ tests, this is somewhat tangential; however, Mr. Newsome does mention some actual scores below, and I think it's always good to be mindful when throwing IQ scores around. So when speaking of IQ specifically, I find it helpful to keep in mind the following.
There are many different tests, which value scores differently. In some tests, scores higher than about 150 are impossible or meaningless; and in all tests, the higher the numbers go the less reliable [more fuzzy] they are. One reason for this, IIRC, is that smaller and smaller differences in performance will impact the result more, on the extreme ends of the curve; so the difference in score between two people with genius IQs could be a bad day that resulted in a poorer performance on a single question. [There is another reason, the same reason that high enough scores can be meaningless; I believe this is due to the scarcity of data/people on those extreme ends, making it difficult or impossible to normalize the test for them, but I'm not certain I have the explanation right. I'm sure someone else here knows more.])
(Hence my use of parentheses: it's a way of saying, "you would be justified in ignoring this contribution". Nesov does a similar thing when he's nitpicking or making a tangential point.)
No, that time passed when you merely had a single parenthetical inside a parenthetical. But when you have a further parenthetical inside the former two, is it then time to break out the curly brackets?
The "new" group selection (e.g. here, here and here) has been demonstrated to be pretty-much equivalent to the standard and uncontroversial inclusive fitness framwork in a raft of papers.
Here's Marek Kohn writing in 2008:
There is widespread agreement that group selection and kin selection — the post-1960s orthodoxy that identifies shared interests with shared genes — are formally equivalent.
That's not to say that group selection is useless - since it involves different models and accounting methods.
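For reference, the formal equivalence is usually shown with a multilevel Price-equation decomposition - a standard textbook result, sketched here for the reader rather than taken from the linked papers, with transmission bias omitted:

$$\bar{w}\,\Delta\bar{z} \;=\; \underbrace{\operatorname{Cov}\!\left(\bar{w}_g,\ \bar{z}_g\right)}_{\text{between-group selection}} \;+\; \underbrace{\operatorname{E}_g\!\left[\operatorname{Cov}_i\!\left(w_{ig},\ z_{ig}\right)\right]}_{\text{within-group selection}}$$

Here $z$ is a trait value, $w$ is fitness, $g$ indexes groups and $i$ individuals within a group. Kin-selection (inclusive fitness) bookkeeping rearranges the same total change into direct and relatedness-weighted indirect components, which is why the two frameworks, when correctly formulated, make identical predictions while differing as accounting methods.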
There are still a few dissenters. E.g. Nowak, Tarnita and Wilson (2010) apparently disagree - saying:
Group selection models, if correctly formulated, can be useful approaches to studying evolution. Moreover, the claim that group selection is kin selection is certainly wrong.
These folk apparently don't grok the topic too well.
For a more modern and knowledgeable group selection critique, see:
Here's Stuart West on video, covering much the same topic.
It's important to remember that a given quantity of intelligence / brain matter / computational power is much more powerful as a single organism than it is as a collection of them. There are problems that a human can solve easily that five cats never could, no matter how they cooperated.
Well, to be fair, you're taking that out of context: you're comparing cat intelligence with human intelligence, which isn't a fair comparison. Compare cat intelligence in groups against cat intelligence in isolation and you'd have a fair test. "Standing on the shoulders of giants": no matter how clever or superintelligent you might be, you cannot invent everything all by yourself. The keyboard you type on, the monitor you use, and the internet itself are examples of a co-operative explosion. Each of us creates a piece of the puzzle that becomes the big picture; none of us alone is capable of pulling off the internet or space travel. It took the research of millions of individuals, piled up over centuries, to make these happen, and I think that is what the author is trying to convey here. He is not comparing artificial narrow intelligence (animal-level) vs. artificial general intelligence (human-level) vs. artificial intelligence (perfected human intelligence) vs. artificial superintelligence (exponentially capable intelligence). This blog/forum is itself a live example of a co-operative explosion: it evolved over a few years from ideas contributed by thousands of intelligent individuals at different points in time, all with different points of view, and the net result is a beautiful collection of knowledge. Imagine hiring a few people and asking them to build this forum; think how foolish that proposition would sound!
Haidt's argument is that color politics and other political mind-killingness are due to a set of adaptations that temporarily lets people merge into a superorganism and set individual interest aside.
This seems more likely to be part of a general set of adaptations and norms for being nice to those like you (often kin or tribe members who you have reciprocal relationships with) and not so nice to strange-looking outsiders, who are not in reciprocal relationships either with you, or with other group members - and are thus poorly motivated to cooperate with you. Such explanations are based on kin selection and reciprocity - and typically make little or no mention of group selection or "superorganisms".
There's a field known as "tag-based cooperation" - which is all about the game-theoretic basis of color politics. Here's one of the papers that launched that field:
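For concreteness, here is a minimal sketch, in Python, of the kind of tag-based donation game that literature studies - loosely in the spirit of Riolo, Cohen and Axelrod's model, with every parameter value invented for illustration rather than taken from the paper:

```python
import random

# Toy tag-based cooperation model: agents donate to strangers whose
# arbitrary "tag" is close enough to their own -- no memory, no
# reciprocity.  All parameters are illustrative.
POP, PAIRINGS, GENERATIONS = 100, 3, 300
BENEFIT, COST, MUTATION = 1.0, 0.1, 0.1

def new_agent():
    # Each agent has a tag (its "colour", in the politics analogy)
    # and a tolerance for how different a partner's tag may be.
    return {"tag": random.random(), "tol": random.random()}

agents = [new_agent() for _ in range(POP)]

for _ in range(GENERATIONS):
    scores = [0.0] * POP
    # Donation phase: each agent meets a few random partners and
    # helps anyone whose tag is within its tolerance.
    for i, a in enumerate(agents):
        for _ in range(PAIRINGS):
            j = random.randrange(POP)
            if j != i and abs(a["tag"] - agents[j]["tag"]) <= a["tol"]:
                scores[i] -= COST
                scores[j] += BENEFIT
    # Selection phase: each agent compares itself with a random peer
    # and imitates the peer if the peer scored higher, with mutation.
    next_gen = []
    for i in range(POP):
        j = random.randrange(POP)
        parent = agents[j] if scores[j] > scores[i] else agents[i]
        child = dict(parent)
        if random.random() < MUTATION:
            child["tag"] = random.random()
            child["tol"] = min(1.0, max(0.0, child["tol"] + random.gauss(0, 0.01)))
        next_gen.append(child)
    agents = next_gen

print("mean tolerance:", sum(a["tol"] for a in agents) / POP)
```

Running this repeatedly shows cooperation rising and falling in waves as tag-sharing cliques form, get invaded by intolerant defectors, and are replaced by new cliques - the basic dynamic behind "colour politics" in this framework.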
For some who were less impressed, here's my blog post on Jonathan Haidt's talk - and here's Jerry Coyne's response - on the same topic.
Interesting, thanks. I'm revising my faith in Haidt's theories downwards.
EDIT: Just noticed, Haidt defends himself in the comments.
EDIT: Just noticed, Haidt defends himself in the comments.
From there:
The debate is whether group-level selection (GS) played ANY role, or whether everything about our moral/political/religious lives can be explained straightforwardly, without contortions, at the level of the individual.
This way of framing the debate just seems daft to me. Individuals care for others. In particular, they care for their kin. We have known the details of why individuals care for kin since the 1960s. It should not be group selection vs individual selection - it should be group selection vs kin selection - and kin selection basically won this battle back in the 1980s. That is not to say that group selection is wrong, it's just not a favoured set of models and terminology.
EDIT2: David S. Wilson comments on Coyne's post.
FWIW, that's about a different post by Coyne, from some time back.
Glenn Gray: Many veterans will admit that the experience of communal effort in battle has been the high point of their lives. "I" passes insensibly into a "we," "my" becomes "our" and individual faith loses its central importance. I believe that it is nothing less than the assurance of immortality that makes self-sacrifice at these moments so relatively easy. I may fall, but I do not die, for that which is real in me goes forward and lives on in the comrades for whom I gave up my life.
...
Incidentally, this provides an easy rebuttal to the "corporations are already superintelligent" claim - while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when they conflict with those of the company. It's certainly nothing like the situation with a cell, where the survival of each cell organ depends on the survival of the whole cell. If the cell dies, the cell organs die; if the company fails, the employees can just get a new job.
This seems to be a testable claim: are military groups more efficient than companies at jobs companies typically do, given equivalent money/resources? For extra credit, do the same test for life-threatening jobs in which cooperation is paramount, such as coal mining or working on overhead power lines. I don't think this is the case; otherwise the military would want to contract for such jobs with private-sector businesses.
Police corps and fire departments may qualify here, since they do exhibit some similarity to the military. But they occupy small niches - they surely do not justify a claim that "superorganisms" are always more efficient.
while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when they conflict with those of the company. It's certainly nothing like the situation with a cell, where the survival of each cell organ depends on the survival of the whole cell. If the cell dies, the cell organs die; if the company fails, the employees can just get a new job.
These observations might not hold for uploads running on hardware paid for by the company, which would give the combination of company + upload tech superior cooperation options compared to current forms of collaboration. Also, company-owned uploads will have most of their social network inside the company as well, and in particular not with uploads owned by competitors. Hence the natural group boundary would not be "uploads" versus "normals", but company boundaries.
Hence the natural group boundary would not be "uploads" versus "normals", but company boundaries.
Or maybe governments - if they get their act together.
Dividing your country into competing companies hardly seems very efficient.
Robin has a post that in part addresses the question of how much value sharing can improve cooperation:
On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.
This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.
In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.
My own intuition is that high fidelity value sharing (the kind made possible by mind copying / resets) would be a major breakthrough, and not just an incremental improvement as Robin suggests.
My own intuition is that high fidelity value sharing (the kind made possible by mind copying / resets) would be a major breakthrough, and not just an incremental improvement as Robin suggests.
Wouldn't the indexicality of human values lead to Calvin problems, if that's the kind of mind you're copying?
My own intuition is that high fidelity value sharing (the kind made possible by mind copying / resets) would be a major breakthrough, and not just an incremental improvement as Robin suggests.
We do have high fidelity copying today. We can accurately copy anything we can represent as digital information - including values. While we can copy values, one problem is that we can't easily convince people to "install" them. Instead the values of others often get rejected by people's memetic immune system as attempts at manipulation.
If we can copy values, or represent them as digital information, I haven't heard about it.
The closest thing I've seen is tools for exporting values into an intersubjective format like speech, writing, art, or behavior. As you point out, the subsequent corresponding import often fails... whether that's because of explicit defense mechanisms, or because the exported data structure lacks key data, or because the import process is defective in some way, or for some other reason, is hard to tease out.
Maybe you mean something different from me by the term 'values'. The values I was referring to are fairly simple to write down. Many of them are so codified in legal systems and religious traditions.
If I tell you that I like mulberries more than blackberries, then that's information about my values represented digitally. The guts of the value information really is in there. Consequently, you can accurately make predictions about what I will do if presented with various food choices - without actually personally adopting my values.
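A deliberately tiny sketch, in Python, of the kind of digital encoding being described here - the berries are the example from this thread, the utility numbers are invented:

```python
# Value information stored as plain, copyable data (numbers invented;
# higher = more preferred), then used to predict a choice without
# the predictor adopting the values itself.
preferences = {"mulberry": 2, "blackberry": 1}

def predict_choice(options):
    """Predict the pick of an agent with the encoded preferences."""
    return max(options, key=lambda o: preferences.get(o, 0))

print(predict_choice(["blackberry", "mulberry"]))  # -> mulberry
```

The point of the sketch is only that once value information is written down like this, copying it around with high fidelity is trivial; whether such a representation is anywhere near complete is exactly what the replies below dispute.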
Yeah, we apparently mean different things. To my mind the statement "I like mulberries more than blackberries" is not even close to being a high-fidelity copy of my relative preferences for mulberries and blackberries; I could not reconstruct the latter given only the former.
I would classify it as being value information which can be accurately copied. I never meant to suggest that it was an accurate copy of what was in your mind. For instance, your mental representation could be considered to include additional information about what mulberries and blackberries are - broadly related to what might be found in an encyclopedia of berries. The point is that we can represent people's values digitally - to the point where we can make quite good predictions about what choices they will make under controlled conditions involving value-derived choices. Values aren't especially mysterious, they are just what people want - and we have a big mountain of information about that which can be represented digitally.
You mean "values aren't especially mysterious", I expect.
I agree that they're not mysterious. More specifically, I agree that what it means to capture my value information about X and Y is to capture the information someone else would need in order to accurately and reliably predict my relative preferences for X and Y under a wide range of conditions. And, yes, a large chunk of that information is what you describe here as encyclopedic knowledge.
So for (X,Y)=(mulberry, blackberry) a high-fidelity copy of my values, in conjunction with a suitable encyclopedia of berries, would allow you to reliably predict which one I would prefer to eat with chocolate ice cream, which one I would prefer to spread as jam on rye bread, which one I would prefer to decorate a cake with, which one I would prefer to receive a pint of as a gift, how many pints of one I'd exchange for a pint of the other, etc., etc., etc.
Yes?
Assuming I've gotten that right... so, when you say:
We do have high fidelity copying today. We can accurately copy anything we can represent as digital information - including values
...do you mean to suggest that we can, today, create a high-fidelity copy of my values with respect to mulberries and blackberries as described above?
(Obviously, this is a very simple problem in degenerate cases like "I like blackberries and hate mulberries," but that's not all that interesting.)
If so, do you know of any examples of that sort of high-fidelity copy of someone's values with respect to some non-degenerate (X,Y) pair actually having been created? Can you point me at one?
I can't meet your "complex value extraction" challenge. I never meant to imply "complete" extraction - just that we can extract value information (like this) and then copy it around with high fidelity. Revealed preferences can be good, but I wouldn't like to get into quantifying their accuracy here.
OK.
I certainly agree that any information we know how to digitally encode in the first place, we can copy around with high fidelity.
But we don't know how to digitally encode our values in the first place, so we don't know how to copy them. That's not because value is some kind of mysterious abstract ethereal "whatness of the if"... we can define it concretely as the stuff that informs, and in principle allows an observer to predict, our revealed preferences... but because it's complicated.
I'm inclined to agree with Wei_Dai that high-fidelity value sharing would represent a significant breakthrough in our understanding of and our ability to engineer human psychology, and would likely be a game-changer.
But we don't know how to digitally encode our values in the first place, so we don't know how to copy them.
Well, we do have the idea of revealed preference. Also, if you want to know what people value, you can often try asking them. Between them, these ideas work quite well.
What we can't do is build a machine that optimises them - so there is something missing, but it's mostly not value information. We can't automatically perform inductive inference very well, for one thing.
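As a minimal sketch of the revealed-preference idea mentioned above (in Python, with the observations invented for illustration), one can run the prediction in reverse - infer an ordering from observed choices:

```python
from collections import Counter

# Each observation: (options offered, option actually picked).
observed_choices = [
    (["mulberry", "blackberry"], "mulberry"),
    (["blackberry", "raspberry"], "blackberry"),
    (["mulberry", "raspberry"], "mulberry"),
]

wins = Counter()
for options, picked in observed_choices:
    for other in options:
        if other != picked:
            wins[picked] += 1  # picked is revealed-preferred to `other`

# Rank items by how often they were chosen over some alternative.
ranking = [item for item, _ in wins.most_common()]
print(ranking)  # -> ['mulberry', 'blackberry']
```

This captures the bookkeeping side of revealed preference; the hard, unsolved part is everything the choices depend on that never shows up in the choice log.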
I suspect I agree with you about what information we can encode today, and you seem to agree with me that there's additional information in our brains (for example, information about berries) that we use to make those judgments which revealed preferences (and to a lesser extent explicitly articulated preferences) report on, which we don't yet know how to encode.
I don't really care whether we call that additional information "value information" or not; I thought initially you were claiming that we could in practice encode it. Thank you for clarifying.
Also agreed that there are operations our brains perform that we don't know how to automate.
This sounds like a semantic quibble to me. Okay, maybe the main problem is not in copying but in "installing", but wouldn't mind copying effectively make "installation" much easier, as well?
It wasn't intended as a semantic quibble - the idea was more to say: is there really a "major breakthrough" here? If so, what does it consist of? I was arguing against it being "high fidelity value sharing".
Mind copying would indeed bypass the "installation" issue.
Abstract: In the FOOM debate, Eliezer emphasizes 'optimization power', something like intelligence, as the main thing that makes both evolution and humans so powerful. A different choice of abstractions says that the main thing that has been giving various organisms - from single-celled creatures to wasps to humans - an advantage is the capability to form superorganisms, thus reaping the gains of specialization and shifting evolutionary selection pressure to the level of the superorganism. There seem to be several ways in which a technological singularity could involve the creation of new kinds of superorganisms, which would then reap benefits above and beyond those that individual humans can achieve, and which would likely have quite different values. This strongly suggests that even if one is not worried about the intelligence explosion (because of e.g. finding a hard takeoff improbable), one should still be worried about the co-operative explosion.
After watching Jonathan Haidt's excellent new TEDTalk yesterday, I bought his latest book, The Righteous Mind: Why Good People Are Divided by Politics and Religion. At one point, Haidt has a discussion of evolutionary superorganisms - cases where previously separate organisms have joined together into a single superorganism, shifting evolution's selection pressure to operate on the level of the superorganism and avoiding the usual pitfalls that block group selection (excerpts below). With an increased ability for the previously-separate organisms to co-operate, these new superorganisms can often out-compete simpler organisms.
Haidt's argument is that color politics and other political mind-killingness are due to a set of adaptations that temporarily lets people merge into a superorganism and set individual interest aside. To a lesser extent, so are moral intuitions about things such as fairness and proportionality. Yes, it's a group selection argument. Haidt acknowledges that group selection has been unpopular in biology for a while, but notes that it has also been making a comeback recently, and cites e.g. the work on multi-level selection as supporting his thesis. I mention some of his references (which I have not yet read) below.
Anyway, the reason why I'm bringing this up is that I've been re-reading the FOOM debate of late, and in Life's Story Continues, Eliezer references some of the same evolutionary milestones as Haidt does. And while Eliezer also mentions that the cells provided a major co-operative advantage that allowed for specialization, he views this merely through the lens of optimization power, and dismisses e.g. unicellular eukaryotes with the words "meh, so what".
The interesting thing about the FOOM debate is that both Eliezer and Robin seem to talk a lot about the significance of co-operation, but they never quite take it up explicitly. Robin talks about the way that isolated groups typically aren't able to take over the world, because it's much more effective to co-operate with others than try to do everything yourself, or because information within the group tends to leak out to other parties. Eliezer talks about the way that cells allowed the ability for specialization, and how writing allowed human culture to accumulate and people to build on each other's inventions.
Even as Eliezer talks about intelligence, insight, and recursion, one could view this too as discussion about the power of specialization, co-operation and superorganisms - for intelligence seems to consist of a large number of specialized modules, all somehow merged to work in the same organism. And Robin seems to take the view of large groups of people acting as some kind of a loose superorganism, thus beating smaller groups that try to do things alone:
Robin has also explicitly made the point that it is the difficulty of co-operation which suggests that we can keep ourselves safe from uploads or AIs with hostile intentions:
Situations like war or violent rebellions are, arguably, cases where the "human superorganism adaptations" kick in the strongest - where people have the strongest propensity to view themselves primarily as a part of a group, and where they are the most ready to sacrifice themselves for the interest of the group. Indeed, Haidt quotes (both in the book and the TEDTalk) former soldiers who say that there's something very unique in the states of consciousness that war can produce:
So Robin, in If Uploads Come First, seems to basically be saying that uploads are dangerous if we let them become superorganisms. Usually, individuals have a large number of their own worries and priorities, and even if they did have much to gain by co-operating, they can't trust each other enough nor avoid the temptation to free-ride enough to really work together well enough to become dangerous.
Incidentally, this provides an easy rebuttal to the "corporations are already superintelligent" claim - while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when they conflict with those of the company. It's certainly nothing like the situation with a cell, where the survival of each cell organ depends on the survival of the whole cell. If the cell dies, the cell organs die; if the company fails, the employees can just get a new job.
It would seem to me that, whatever your take on the intelligence explosion, evolutionary history to date strongly suggests that new kinds of superorganisms - larger and more cohesive than human groups, and less dependent on crippling their own rationality in order to maintain group cohesion - would be a major risk for humanity. This is not to say that an intelligence explosion wouldn't be dangerous as well - I have no idea what a mind that could think 1,000 times faster than me could do - but a co-operative explosion should be considered dangerous even if you thought a hard takeoff via recursive self-improvement (say) was impossible. And many of the ways of creating a superorganism (see below) seem to involve processes that could conceivably lead to the superorganisms having quite different values from humans. Even if no single superorganism could take over, that's not much of a comfort for the ordinary humans who are caught in the crossfire.
How might a co-operative explosion happen? I see at least three possibilities:
Below are some more excerpts from Haidt's book:
Haidt's references on this include, though are not limited to, the following:
Okasha, S. (2006) Evolution and the Levels of Selection. Oxford: Oxford University Press.
Hölldobler, B., and E. O. Wilson. (2009) The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies. New York: Norton.
Bourke, A. F. G. (2011) Principles of Social Evolution. New York: Oxford University Press.
Wilson, E. O., and B. Hölldobler. (2005) “Eusociality: Origin and Consequences.” Proceedings of the National Academy of Sciences of the United States of America 102:13367–71.
Tomasello, M., A. Melis, C. Tennie, E. Wyman, E. Herrmann, and A. Schneider. (Forthcoming) “Two Key Steps in the Evolution of Human Cooperation: The Mutualism Hypothesis.” Current Anthropology.