In just over two weeks I will be running an online reading group on Nick Bostrom's Superintelligence, on behalf of MIRI. It will be here on LessWrong. This is an advance warning, so you can get a copy and get ready for some stimulating discussion. MIRI's post, appended below, gives the details.
Added: At the bottom of this post is a list of the discussion posts so far.
Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.
The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)
Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.
We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.
We will follow this preliminary reading guide, produced by MIRI, reading one section per week.
If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.
If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!
Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.
Posts in this sequence
I am going to take the free Coursera class "Think Again: How to Reason and Argue" starting Monday, January 13 (three days from now) and I thought I'd see if there was any interest in going through this as a group. This is one of the MIRI recommended courses under the "Heuristics and Biases" section. If you're interested and you will sign up if we get a group together, please leave a note in the comments (if you will only sign up if the group hits a specific size, please leave that requirement in the comments as well). If enough people are willing to sign up (5 or more? idk), I will start a group on Google (or somewhere else if that's preferred) so that we can have a forum to share thoughts, ask questions, etc. Otherwise, email may be a better way to maintain contact.
EDIT: We hit five people willing to start, so I created a Google group here. If you're interested in taking the course with us, please sign up there.
The recommended text is fairly inexpensive on Amazon (<$20 USD) and can be found on libgen.info for free if that's your thing. It's taught in English, lasts 12 weeks and predicts that it will take 5-6 hours/week. More info from the course website:
Think Again: How to Reason and Argue
About the Course
This is a multi-purpose essay in the making, written with the following goals: 1) fulfilling the mandatory essay at the end of a semester studying "Cognitive Ethology: Culture in Human and Non-Human Animals"; 2) drafting something that can later be published in a journal that deals with cultural evolution, hopefully inclining people in the area to glance at future-oriented research, i.e. FAI and global coordination; 3) publishing it on LessWrong; and 4) ultimately Saving the World, as everything should. If it's worth doing, it's worth doing in the way most likely to save the World. Since many of my writings are frequently too long for LessWrong, I'll publish this in a sequence-like form made of self-contained chunks. My deadline is Sunday, so I'll probably post daily, editing and creating new sections based on previous commentary.
Abstract: The study of cultural evolution has drawn much of its momentum from academic areas far removed from human and animal psychology, especially regarding the evolution of cooperation. Game-theoretic results and parental investment theory come from economics, kin selection models from biology, and an ever-growing number of models describing the process of cultural evolution in general, and the evolution of altruism in particular, from mathematics. Even Artificial Intelligence has taken an interest in how to create agents that can communicate, imitate and cooperate. In this article I begin to tackle the 'why?' question. By trying to retrospectively make sense of the convergence of all these fields, I contend that further refinements in these fields should be directed towards understanding how to create environmental incentives fostering cooperation.
We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them. - Sam Harris, 2013, The Power Of Bad Incentives
2) Cultures evolve
Culture is perhaps the most remarkable outcome of the evolutionary algorithm (Dennett, 1996) so far. It is the cradle of most things we consider humane - that is, typically human and valuable - and it surrounds our lives to the point that we may be thought of as creatures made of culture even more than creatures of bone and flesh (Hofstadter, 2007; Dennett, 1992). The appearance of our cultural complexity has relied on many associated capacities, among them:
1) The ability to observe, take interest in, and approach an individual doing something interesting, an ability we share with Norway rats, crows, and even lemurs (Galef & Laland, 2005).
2) The ability to learn from, and scrounge food from, whoever knows how to get food, shared by capuchin monkeys (Ottoni et al, 2005).
3) Ability to tolerate learners, to accept learners, and to socially learn, probably shared by animals as diverse as fish, finches and Fins (Galef & Laland, 2005).
4) Understanding and emulating other minds - Theory of Mind - empathizing, relating, perhaps re-framing an experience as one's own, shared by chimpanzees, dogs, and at least some cetaceans (Rendell & Whitehead, 2001).
5) Learning the program level description of the action of others, for which the evidence among other animals is controversial (but see Cantor & Whitehead, 2013). And finally...
6) Sharing intentions. Intricate understanding of how two minds can collaborate with complementary tasks to achieve a mutually agreed goal (Tomasello et al, 2005).
Irrespective of definitional disputes around the true meaning of the word "culture" (which doesn't exist, see e.g. Pinker, 2007 p. 115; Yudkowsky, 2008A), each of these is more cognitively complex than its predecessor, and even (1) is sufficient for intra-specific, non-environmental, non-genetic behavioral variation, which I will call "culture" here, whoever it may harm.
By transitivity, (2-6) allow the development of culture. It is interesting to notice that tool use, frequently but wrongly cited as the hallmark of culture, is scattered almost at random across the animal kingdom. A graph showing, per biological family, which species show tool use gives us a power-law distribution, whose similarity with the universal prior helps in understanding that coming from a family in which one species uses tools tells us very little about another species' own tool use (Michael Haslam, personal conversation).
Once some of those abilities are available, and given some amount of environmental opportunity, need, and randomness, cultures begin to form. Occasionally, so do more developed traditions. Be it by imitation, program-level imitation, goal emulation or intention sharing, information is transmitted between agents, giving rise to elements sufficient to constitute a primeval Darwinian soup. That is, entities form such that they exhibit 1) variation, 2) heredity or replication, and 3) differential fitness (Dennett, 1996). In light of the article Five Misunderstandings About Cultural Evolution (Henrich, Boyd & Richerson, 2008) we can refine Dennett's conditions for the evolutionary algorithm as 1) discrete or continuous variation, 2) heredity, replication, or less faithful replication plus content attractors, and 3) differential fitness. Once this set of conditions is met, an evolutionary algorithm, or many, begins to carve its optimizing paws into whatever surpassed the threshold for long enough. Cultures, therefore, evolve.
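Those three conditions are enough to set an optimizing process in motion, whatever the substrate. A minimal sketch, purely illustrative (the numeric "trait", the fitness function and the mutation operator are all invented for the example):

```python
import random

def evolve(population, fitness, mutate, generations=100):
    """Minimal evolutionary algorithm: differential fitness picks the
    parents, heredity copies them, variation (mutation) keeps the soup
    from freezing."""
    for _ in range(generations):
        weights = [fitness(x) for x in population]           # differential fitness
        parents = random.choices(population, weights=weights,
                                 k=len(population))          # heredity
        population = [mutate(p) for p in parents]            # variation
    return population

# Toy cultural trait: a single number. Fitness rises with the trait,
# so the population mean is dragged upward generation after generation.
random.seed(0)
start = [random.uniform(0, 1) for _ in range(200)]
end = evolve(start,
             fitness=lambda x: 2.0 ** x,
             mutate=lambda x: x + random.gauss(0, 0.1))
print(sum(end) / len(end), ">", sum(start) / len(start))
```

Note that nothing in the loop cares whether the entities are genes, behaviors or ideas; only the three conditions matter.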
The intricacies of cultural evolution, and the mathematical and computational models of how cultures evolve, have been the subject of much interdisciplinary research. For an extensive account of human culture see Not By Genes Alone (Richerson & Boyd, 2005). For computational models of social evolution, there is work by Mesoudi, Nowak, and others, e.g. (Hauert et al, 2007). For mathematical models, the aptly named Mathematical Models of Social Evolution: A Guide for the Perplexed by McElreath and Boyd (2007) provides a textbook-style walk-through. For animal culture, see (Laland & Galef, 2009).
Cultural evolution satisfies David Deutsch's criterion for existence: it kicks back. It satisfies the evolutionary equivalent of the condition posed by the Quine-Putnam indispensability argument in mathematics, i.e. it is a sine qua non condition for understanding how the World works nomologically. It is falsifiable in its Popperian content, and it inflates the World's ontology a little by inserting a new kind of "replicator", the meme. Contrary to what happened on the internet, the name 'meme' has lost much of its appeal among cultural evolution theorists, and "memetics" is considered by some to refer only to the study of memes as monolithic, atomic, high-fidelity replicators, which would make the theory obsolete. This has created the following conundrum: the name 'meme' remains by far the best-known way to speak of "that which evolves culturally" within, and especially outside, the specialist arena. Further, the niche occupied by the word 'meme' is so conceptually necessary within the area for communication and explanation that it is frequently put under scare quotes, or given some other informal excuse. In fact, as argued by Tim Tyler - who frequently posts here - in the very sharp Memetics (2011), there are nearly no reasons to try to abandon the 'meme' meme, and nearly all reasons (practicality, Qwerty reasons, mnemonics) to keep it. To avoid contradicting the evidence accumulated since Dawkins first coined the term, I suggest we redefine Meme as an attractor in cultural evolution (dual-inheritance) whose development over time structurally mimics, to a significant extent, the discrete behavior of genes, frequently coinciding with the smallest unit of cultural replication. The definition is long, but the idea is simple: memes are the best analogues of genes not because they are discrete units that replicate just like genes, but because they are continuous conceptual clusters being attracted to a point in conceptual space whose replication is just like that of genes.
Even more simply, memes are the mathematically closest things to genes in cultural evolution. So the suggestion here is for researchers of dual-inheritance and cultural evolution to take off the scare quotes of our memes and keep business as usual.
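The attractor picture can be illustrated with a toy simulation (the attractor location, pull strength and copying noise are made-up parameters, not measurements): low-fidelity copying plus a cognitive attractor produces a distribution of variants tightly clustered around one point in "conceptual space", and that cluster then behaves, statistically, much like a discrete gene.

```python
import random

def transmit(value, attractor=1.0, pull=0.5, noise=0.2):
    """One act of cultural transmission: an imperfect copy (Gaussian
    error) nudged a fraction `pull` of the way toward the attractor."""
    return value + pull * (attractor - value) + random.gauss(0, noise)

random.seed(1)
variants = [random.uniform(-5, 5) for _ in range(1000)]  # wildly varied at first
for _ in range(20):  # twenty rounds of person-to-person copying
    variants = [transmit(v) for v in variants]

mean = sum(variants) / len(variants)
var = sum((v - mean) ** 2 for v in variants) / len(variants)
print(mean, var)  # mean sits near the attractor; variance has collapsed
```

Despite a copying error far too large for gene-style high-fidelity replication, the population converges on the attractor, which is the sense in which the "smallest unit of cultural replication" can be continuous and gene-like at once.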
The evolutionary algorithm has created a new attractor-replicator, the meme; it didn't privilege any specific families in the biological tree with it, and it ended up creating a process of cultural-genetic coevolution known as dual-inheritance. This process has been studied in ever more quantified ways by primatologists, behavioral ecologists, population biologists, anthropologists, ethologists, sociologists, neuroscientists and even philosophers. I've shown at least six distinct abilities which helped scaffold our astounding level of cultural intricacy, and some animals that share them with us. We will now take a look at the evolution of cooperation, collaboration, altruism and moral behavior, a sub-area of cultural evolution that saw an explosion of interest and research during the last decade, with publications (most from the last 4 years) such as The Origins of Morality, Supercooperators, Good and Real, The Better Angels of Our Nature, Non-Zero, The Moral Animal, Primates and Philosophers, The Age of Empathy, Origins of Altruism and Cooperation, The Altruism Equation, Altruism in Humans, Cooperation and Its Evolution, Moral Tribes, The Expanding Circle, The Moral Landscape.
3) Cooperation evolves
Despite the selfish nature of genes (Dawkins, 1999) and other units of Darwinian transmission (Jablonka & Lamb, 2007), altruism at the individual level (cost to self for benefit to other) can and does arise because of several intertwined factors.
1) Alleles (the molecular biologist's word for what less specialized areas call genes) under normal conditions optimize for there being more copies of themselves in the future. This happens regardless of whether it is that particular physical instantiation - also known as a token - that is present in the future.
2) Copies of alleles are spread over space, individuals, groups, species and time, but they only "care" about the time and quantity dimensions. In the long run alleles don't thrive because they are doing better than their neighbors; they thrive if they are doing better than the average allele. A token (instantiation) of an allele that codes for cancer, multiplying itself uncontrollably, could, had it a mind, think it is doing great; but if the mutation that gave rise to it happened only in somatic cells (which do not go through the germ line), it would be in for a surprise. This is one reason why biologists say natural selection is short-sighted.
3) The above reasoning applies exactly equally, and for the same reasons, to an allele that codes for individually selfish behavior in a species in which more altruistic groups tend to outlive more egotistic ones. The allele for individual selfishness, and the selfish individual, may think they are doing great compared to their neighbors, when all of a sudden, with high probability, their group dies. Altruism wins in this case not because there is a new spooky unit of selection that reverses reductionism and applies downward causation originating in groups. Altruism thrives because the average long-term fitness of each allele that coded for it was higher than that of alleles that code for individually selfish behavior. Group selection-c - as well as superorganism selection, somatic-cell selection, species selection and individual selection - only happens when the selective forces operating on that level coincide with the allele's fitness increasing in relation to all the competing alleles. (Group selection-c is selection for altruist genes at the group level, the only definition under which the entire discussion was dealing with a controversy of substance instead of talking past each other, as brilliantly explained in this post by PhilGoetz, 2010; please read the case study section in that post to get a more precise understanding than the above short definition.) See also the excursus on what a fitness function is below.
4) Completely independently of the reasons in (3), alleles, epigenetics, and learning can program individuals to be cooperative if they "expect" (consciously or not) the interaction with another individual, say, Malou, to: (a) begin a cycle of reciprocation with Malou in the future whose benefit exceeds the current cost being paid; (b) counterfactually increase their reputation with sufficiently many individuals that those will award more benefit than the current cost; (c) avoid punishment by third parties; (d) conform to, or help enforce by setting an example, social norms and rules upon which selection pressures act (Tomasello, 2005). A key notion in all these mechanisms based on this encoded "expectation" is that uncertainty must be present. In the absence of uncertainty - a state that doesn't exist in nature - an agent in a prisoner's-dilemma-like interaction would be required to defect instead of cooperating from round one, following the backwards-in-time cascade of defection from whichever was the last round of interaction, in which by definition cooperating is worse. The problems that people on LessWrong are trying to solve using Timeless Decision Theory, Updateless Decision Theory, PrudentBot, and other IQ140+ gimmicks, evolution solved by inserting stupidity! More precisely, by embracing higher-level uncertainty about how many future interactions there will be. Kissing, saying "I love you", becoming engaged, and getting married are all increasingly honest ways in which the computer program programmed by your alleles informs Malou that there will be more cooperation and less defection in the future.
5) Finally, altruism only poses paradoxes of the "Group selection-c" kind when we are trying to explain why a replicator that codes for altruism emerged, and we are trying to explain it at that replicator's level. It is no mystery why a composition of the phenotypic effects of a gene (replicator) and two memes (attractor-replicators) in all individuals who possess the three of them makes them altruistic, if it does. Each gene and meme in that composition may be fending for itself, but as things turn out, they do make some really nice people (or bonobos) once their extended phenotypes are clustered within those people. If we trust Jablonka & Lamb (2007), there are four streams of heredity flowing concomitantly: genetic, epigenetic, behavioral and symbolic. Some of the flowing hereditary entities are not even attractor-replicators (niche construction, for instance); they don't exhibit replicator dynamics, and any altruism that spreads through them requires no special explanation at all!
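The role of uncertainty in point (4) can be made concrete with the standard iterated prisoner's dilemma algebra: with a known last round, backward induction dictates defection throughout, but with a continuation probability delta each round, reciprocation pays once delta is high enough. A sketch, using the conventional payoff values rather than anything from the text:

```python
# Conventional prisoner's dilemma payoffs:
# T = temptation (defect against a cooperator), R = mutual cooperation,
# P = mutual defection, S = sucker's payoff (cooperate against a defector).
T, R, P, S = 5, 3, 1, 0

def reciprocate_forever(delta):
    """Expected payoff of a reciprocator against a reciprocator when each
    round continues with probability delta: R every round, geometric sum."""
    return R / (1 - delta)

def defect_against_reciprocator(delta):
    """Grab T once, then suffer mutual defection (P) ever after."""
    return T + delta * P / (1 - delta)

for delta in (0.1, 0.9):
    print(delta, reciprocate_forever(delta), defect_against_reciprocator(delta))

# Cooperation is stable whenever R/(1-delta) >= T + delta*P/(1-delta),
# i.e. delta >= (T - R) / (T - P) = 0.5 for these payoffs.
```

At delta = 0.9 reciprocation dominates; at delta = 0.1 defection does. The "stupidity" evolution inserted is precisely a high perceived delta: signals like marriage raise the expected number of future rounds.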
To the best of my knowledge, none of the five factors above, which all play a role in the existence and maintenance of altruism, requires a revision of Neodarwinism of the Dawkins, Dennett, Trivers, Pinker sort. None of them challenges the validity of our models of replicator dynamics as replicator dynamics. None of them challenges the metaphysically fundamental notion of Darwinism as Universal Acid (Dennett, 1996). None of them compromises the claim that everything in the universe that has complex design of which we are aware can be traced back to Darwinian mind-less processes operating, by and large, on replicator-like entities (Dennett, op. cit.). None of them poses an obstacle to physicalist reductionism - in this biology-laden context, the claim that all macrophysical facts, including biological facts, are materially determined by the microphysical facts.
Cooperation evolves, and altruism evolves. They evolve for natural, non-mysterious reasons, and before any more shaking of the edifice of Darwinism is attempted, and its constitutive reductionism or universal corrosive powers are contested, any counteracting evidence must first rule out the far less demanding possibility that it is explained by one of the factors above, or a combination of them, or that it is simply the result of one of the many confusions clarified in the excursus below. Despite many people's attempts to look for Skyhooks that would cast away the all-too-natural demons of Neodarwinism and reductionism, things remain as they were before: Cranes all the way up. I will be listening attentively for a case of altruism found in the biological world, or in mathematical simulations based on it, that can pierce through these many layers of epistemic explanatory ability, but I won't be holding my breath.
Excursus: What is a fitness function?
It is worth pointing out here not only that the altruism and group selection confusion happens, but why it does. PhilGoetz did half of the explanatory job already. The other half is noticing that the fitness function is a many-place function (there is a newer and better post on LessWrong explaining many-place functions/words, but I didn't find it in 12 minutes; please point to it if you can). The complicated description of "what the fitness function is", in David Lewis's manner of speaking, would be that it is a function from things to functions from functions to functions. More understandably, with e.g. the specific "thing" being a token of an altruistic allele of kind "Aallele", call it "Aallele334":
Aallele334--1-->((number of Aalleles--3-->total number of alleles)--2-->(amplitude configuration slice--4-->simplest ordering))
Here arrow 4 is the function we call time, from a timeless-physics, quantum-physics perspective. Just substitute "time" for the whole parenthesis if you haven't read the Quantum Physics sequence. Arrow 3 is how well Aalleles are doing, i.e. how many of them there are in relation to the total number of competing alleles. Arrow 2 is how this relation between Aalleles and the total varies over time. The fitness function is arrow 1: once you are given a specific token of an allele, it is the function that describes how well copies of that token do over time in relation to all the competing alleles. Needless to say, not many biologists are aware of that complex computation.
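The arrow diagram can be rendered as a higher-order function. In this sketch (the toy alleles and the `history` table are invented for illustration), fixing the token's kind (arrow 1) returns a trajectory: a function of time (arrows 2 and 4) that reports the kind's share among all competing alleles (arrow 3).

```python
def fitness(allele_kind, population_history):
    """Curried 'many-place' fitness: fix the allele kind, get back a
    function from time to that kind's frequency among all competing
    alleles in the pool at that time."""
    def frequency_at(t):
        pool = population_history[t]
        return pool.count(allele_kind) / len(pool)
    return frequency_at

# Toy gene pool over three time steps: kind 'A' spreads at 'b's expense.
history = {0: ['A', 'b', 'b', 'b'],
           1: ['A', 'A', 'b', 'b'],
           2: ['A', 'A', 'A', 'b']}
freq_A = fitness('A', history)
print([freq_A(t) for t in sorted(history)])  # [0.25, 0.5, 0.75]
```

Changing what counts as "the pool" at each time step - the cell, the individual, the group, the species - changes the output numbers, which is exactly the externalism discussed next.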
The reason why the unexplained half of the controversies happens is that the point fitness of an allele will appear very different when you factor it against the competing alleles of other cells, of other individuals, of other groups, or of other species. Fitness is what philosophers call an externalist concept: if you increase the amount of contextually relevant surroundings, the output number changes significantly. It will also appear very different when you factor it for final time T1 or T2. The fitness of an allele coding for a species-specific characteristic of the T-Rex's large body will be very high if the final time is 65 million years ago, but abysmal if it is 64.
I remember Feynman saying, I believe in this interview, that it is amazing what the eye does. We are in the 3D equivalent of an insect bobbing up and down on the 2D surface of a swimming pool: we manage to abstract away all the waves going through the space between us and a seen object, and still capture enough information to locate it, interact with it, and admire it. It is as if the insect could tell, only from its vertical oscillations, how many children were in the pool, where they were located, etc. The state of knowledge in many fields, adaptive fitness included, strikes me as similarly amazing. If this many-place function underlies what biologists should be talking about to avoid talking past each other, how can many of them be aware of only one or two of the many variables that should be input, and still be making good science? Or are they?
If you fail to see hidden variables, you can fall prey to anomalies like Simpson's paradox, which is exactly the mistake described in PhilGoetz's post on group/species selection.
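A minimal numeric instance of that mistake (the head-counts are invented for the example): within each group the selfish type out-reproduces the altruists, yet because the altruist-heavy group grows faster, the altruists' overall share still rises.

```python
# Counts of altruists ('A') and selfish types ('s') in two groups,
# before and after one episode of growth (hypothetical numbers).
before = {'g1': {'A': 9, 's': 1}, 'g2': {'A': 1, 's': 9}}
after  = {'g1': {'A': 36, 's': 5}, 'g2': {'A': 1, 's': 10}}

for g in before:
    growth_A = after[g]['A'] / before[g]['A']
    growth_s = after[g]['s'] / before[g]['s']
    print(g, "selfish out-reproduce altruists:", growth_s > growth_A)

def overall_share(pops, kind):
    """Fraction of the whole metapopulation belonging to `kind`."""
    total = sum(p['A'] + p['s'] for p in pops.values())
    return sum(p[kind] for p in pops.values()) / total

print("altruist share:", overall_share(before, 'A'),
      "->", overall_share(after, 'A'))
# Within every group the altruists lose ground; pooled, they gain it.
```

Whether "altruism is losing" depends on which pool you compute the frequency over, which is the hidden variable that fuels the group-selection controversy.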
The function above also works for things other than alleles, like individuals with a characteristic, in which case it will be calculating the fitness of having that characteristic at the individual level.
4) The complexity of cultural items doesn't undermine the validity of mathematical models.
4.1) Cognitive attractors and biases substitute for memes' discreteness
The math becomes equivalent.
4.2) Despite the Unilateralist Curse and the Tragedy of the Commons, dyadic interaction models help us understand large scale cooperation
Once we know these two failure modes, dyadic iterated (or reputation-sensitive) interaction is close enough.
5) From Monkeys to Apes to Humans to Transhumans to AIs, the ranges of achievable altruistic skill.
Possible modes of being altruistic. Graph like Bostrom's. Second and third order punishment and cooperation. Newcomb-like signaling problems within AI.
6) Unfit for the Future: the need for greater altruism.
We fail and will remain failing in Tragedy of the Commons problems unless we change our nature.
7) From Science, through Philosophy, towards Engineering: the future of studies of altruism.
Philosophy: Existential Risk prevention through global coordination and cooperation prior to technical maturity. Engineering Humans: creating enhancements and changing incentives. Engineering AI's: making them better and realer.
8) A different kind of Moral Landscape
Like Sam Harris's, except comparing not how much a society approaches The Good Life (Moral Landscape p. 15), but how much it fosters altruistic behavior.
Bibliography (Only of the parts already written, obviously):
Boyd, R., Gintis, H., Bowles, S., & Richerson, P. J. (2003). The evolution of altruistic punishment. Proceedings of the National Academy of Sciences, 100(6), 3531-3535.
Cantor, M., & Whitehead, H. (2013). The interplay between social networks and culture: theoretically and among whales and dolphins. Philosophical Transactions of the Royal Society B: Biological Sciences, 368(1618).
Dawkins, R. (1999). The extended phenotype: The long reach of the gene. Oxford University Press, USA.
Dennett, D. C. (1996). Darwin's dangerous idea: Evolution and the meanings of life (No. 39). Simon & Schuster.
Dennett, D. C. (1992). The self as a center of narrative gravity. Self and consciousness: Multiple perspectives.
Galef Jr, B. G., & Laland, K. N. (2005). Social learning in animals: empirical studies and theoretical models. Bioscience, 55(6), 489-499.
Hauert, C., Traulsen, A., Brandt, H., Nowak, M. A., & Sigmund, K. (2007). Via freedom to coercion: the emergence of costly punishment. Science, 316(5833), 1905-1907.
Henrich, J., Boyd, R., & Richerson, P. J. (2008). Five misunderstandings about cultural evolution. Human Nature, 19(2), 119-137.
Hofstadter, D. R. (2007). I am a Strange Loop. Basic Books
Jablonka, E., & Lamb, M. J. (2007). Precis of evolution in four dimensions. Behavioral and Brain Sciences, 30(4), 353-364.
McElreath, R., & Boyd, R. (2007). Mathematical models of social evolution: A guide for the perplexed. University of Chicago Press.
Ottoni, E. B., de Resende, B. D., & Izar, P. (2005). Watching the best nutcrackers: what capuchin monkeys (Cebus apella) know about others’ tool-using skills. Animal cognition, 8(4), 215-219.
Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.
PhilGoetz. (2010), Group selection update. Available at http://lesswrong.com/lw/300/group_selection_update/
Pinker, S. (2007). The stuff of thought: Language as a window into human nature. Viking Adult.
Rendell, L., & Whitehead, H. (2001). Culture in whales and dolphins. Behavioral and Brain Sciences, 24, 309-382.
Richerson, P. J., & Boyd, R. (2005). Not by genes alone. University of Chicago Press.
Tyler, T. (2011). Memetics: Memes and the Science of Cultural Evolution. Tim Tyler.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675-690.
Yudkowsky, E. (2008A). 37 ways words can be wrong. Available at http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/
Update: There is now an online sign up to groups with workflowy, based on subject and current ability. You do not have to be signed up to Facebook to join a group, but do add an email address so that the group can contact you: https://workflowy.com/shared/cf1fd9ca-885f-c1b9-c2e8-e3a315f70138/
The recent Main article, seeking interest from LWers in studying maths together, had many comments showing enthusiasm, but nothing really happened.
As an aside, I think that on LessWrong we tend not to work together all that well. The wiki isn't kept bright and shiny, and most of the ideas we search for are in loose blog posts that often take a while to find. However, I think having a single place in which to work together on a specific topic might encourage effective groups. Especially if it's a place from which you get fairly regular reminders.
So, here's a Less Wrong Study Group Facebook Page: https://www.facebook.com/groups/131607983690959/
Rixie suggested that we could split into smaller groups, based on age. I was thinking perhaps ability. Maybe even a group leader. However, before sitting and pondering this for eternity (just until we have a perfect structure), perhaps we should 'just try it'.
So, who exactly do I think should join the group?
Well, if you're interested in learning maths, and think that being surrounded by LWers might enhance your learning, this group is intended for you. If you're interested in learning maths, but you think that reading a textbook on your own is daunting, or you've tried and had difficulty previously, then this group is intended for you.
Also, if you're interested in learning other LessWrongy subjects (perhaps some cognitive science, or more economics-y stuff) then this group could do that. If ten people join who want a basic grounding in economics, then they can work together. This isn't specifically maths; it's whatever we make it.
Personally, when I read a textbook, there's often a paragraph describing a key idea where the author's words fly right over my head. I've often thought the best thing for me would be to have someone else I could talk through that bit with. Maybe he or she would see it easily. Maybe I'd see something they wouldn't get.
I also wouldn't worry about level of prior knowledge. Mainly, because mine is zilch :)
So, what are you waiting for?
(No seriously. Just try it.)
Edit: It is true that anonymity is difficult to preserve on Facebook. I am entirely unfamiliar with Google Groups, and I would certainly have to make that regular effort to check there too. If you wish to join but have issues with public knowledge, please PM me, and I'll keep in contact with you through email (or another channel if you prefer). I will discuss with you there how best to take part in a study group.
What about an online group for high schoolers devoted to refining the art of human rationality?
Discovering LessWrong had a profound effect on me, shedding light on the way I study thought processes and helping me with a more rational approach. As a teenager in high school, I wish I could share LessWrong's teachings and philosophies with others at my level.
It would be awesome if we could create a list for the interests of LessWrong readers who are in their teens/in high school. I think this would allow a rational online community such as LessWrong to help develop more rationalists, whether by outlining plans to start rationality clubs in high school or by discussing ways teenagers can approach rationality. I also think it would help more timid readers to express themselves and talk with other teenagers about common interests (adults could be allowed in too, if they are deemed appropriate for the community). Correct me if I'm wrong, but rationality training should start as soon as possible in the development process, and what better age group to target than teenagers? Adolescence is a crucial transitional phase psychologically, biologically and culturally. I would love to see more collected articles on the evolution of rationality in the amazing, flexible mind of an adolescent. If the goal of this blog is to train humans to be rational-minded, more importance should be allocated to training teenagers. I do not think this has failed to happen for want of need among teenagers, and if we concentrate some resources, gather a list of interested individuals and garner some interest, we can make this work. This article is a good example of something that could be distributed in the proposed group:
For LW readers under 20: Note that the Thiel Fellowships (20 under 20) are now open for their next round of applications, and as they put it, "you have a huge readership of folk who would make great applicants". More info here. (from http://lesswrong.com/lw/f9r/weekly_lw_meetups_austin_berlin_cambridge_uk/)
There is also this LessWrong Highschoolers Facebook group created by Curtis SerVaas:
I recently Skyped (not officially a verb yet?) with Anna Salamon, the Executive Director of CFAR (Center for Applied Rationality). We have begun to develop this proposal. She is on the e-mail list and will be involved as a quasi-supervisor. You can reach her at anna [at] appliedrationality [dot] org. Drop me a one-line e-mail with your name, age, and situation at josephgnehm [at] gmail [dot] com if you'd like to join the list. Speak up! Teenagers should be the subject of concentrated effort on LessWrong. We are the future; help us reach the fruits of human rationality.
I was reading the "Professing and Cheering" article and it reminded me about some of my own ideas about the role of religious dogma as group identity badges. Here's the gist of it:
Religious and other dogmas need not make sense. Indeed, they may work better if they are not logical. Logical and useful ideas pop up independently and spread easily, and widely accepted ideas are not very good badges. You need a unique idea to identify your group. It helps for the dogma to be somewhat costly, because costly ideas are hard to fake and hard to deny. People need to invest in these bad ideas, so they are less likely to leave the group and confront the sunk cost. Also, it's harder to deny allegiance to the group afterwards, because no one in their right mind would accept an idea that bad for any other reason.
If you have a naive interpretation of the dogma, which regards it as an objective statement about the world, you will tend to question it. When you contest the dogma, people won't judge your argument on its merits: they will see it as a move in an in-group power struggle. Either you want to install your own dogma, which makes you a pretender, or you've accepted a competing dogma, which makes you a traitor. Even if they accept that you just don't want to yield to the authority behind the dogma, that makes you a rebel. Dogmas are simply off-limits to criticism.
A public display of a dismissive attitude toward your questioning is also important. Taking the criticism into consideration is itself a form of treason, as it is interpreted as entertaining the option of joining you against the authority. So it's best to dismiss the heresy quickly and loudly, without thinking about it.
Do you know of some other texts which shed some light on this idea?
Stanford has decided to offer a few classes online, for free. These include Artificial Intelligence and Machine Learning. The classes include videos of the same lectures that the Stanford students received, quizzes, homework, and exams that are graded automatically. They start on October 10.
I'm guessing that more than a few LWers will sign up for these. How many people would like to form a study group? Should we just have a discussion thread for it, or is there a better option?
Full disclosure: This has already been discussed here, but I see utility in bringing it up again, mostly because I only heard about it offline.
Some researchers were interested in whether, in the same way that there's a general intelligence factor g that seems to predict competence in a wide variety of tasks, there is a group intelligence factor c that could do the same. You can read their paper here.
Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Basically, groups with higher social sensitivity, more equal conversational turn-taking, and a higher proportion of females are collectively more intelligent. On top of that, those effects trump things like average IQ or even max IQ.
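The abstract's claim that a single factor "emerges from the correlations" among task scores is classical factor analysis applied to groups instead of individuals. Here is a minimal sketch of that idea in Python, using simulated data with hypothetical factor loadings and noise levels (my own illustration, not the paper's actual analysis): extract the dominant factor from the correlation matrix of task scores and see how much variance it explains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate groups scored on several tasks that all reflect one latent
# factor c (hypothetical loadings and noise level, not the paper's data).
n_groups, n_tasks = 699, 5
c = rng.normal(size=n_groups)                    # latent "collective intelligence"
loadings = np.array([0.7, 0.6, 0.8, 0.5, 0.65])  # how strongly each task reflects c
scores = c[:, None] * loadings + 0.6 * rng.normal(size=(n_groups, n_tasks))

# Factor extraction in the spirit of the paper: eigendecompose the
# correlation matrix of task scores and look at the dominant factor.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]         # eigenvalues, descending

explained = eigvals[0] / eigvals.sum()
print(f"first factor explains {explained:.0%} of task-score variance")
```

When a single latent factor really does drive performance, the first eigenvalue dwarfs the rest; if the tasks were uncorrelated, each of the five factors would explain about 20% instead.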
I theorize that proportion of females mostly works as a proxy for social sensitivity and turn-taking, and the authors speculate the same.
What does this mean for Less Wrong?
The most important part of the study, IMO, is that "social sensitivity" (measured by a test where you try to discern emotional states from someone's eyes) is such a strong predictor of group intelligence. It probably helps people gauge other people's comprehension, but given that sharing talking time more equally also helps, I would speculate that another chunk of its usefulness comes from being able to tell whether other people want to talk, or think that there's something relevant to be said.
One thing that I find interesting in the meatspace meetups is how, in new groups, conversation tends to be dominated by the people who talk the loudest and most insistently. Often, those people are also fairly interesting. However, I prefer the current, older DC group to the newer one, where speaking time is much more equal, even though this means that I don't talk as much. Most other people seem to share this sentiment, to the point that at one early meetup it was explicitly voted that most people would rather talk more.
Anything we should try doing about this? I will hold off on proposing solutions for now, but this section will get filled in sometime.
Per this recent paper, individual IQ has no significant correlation with 'group IQ' (defined and measured as the group's ability to accomplish various tasks); nor do group cohesion, motivation, or satisfaction. The study identified two things that were positively correlated with group IQ: average social sensitivity and a low variance in the amount of time each person spent speaking. (It also found that having more women improved collective intelligence, because women tend to have better social sensitivity.)
(The remaining stuff is idle speculation from me, not the paper. There's no experimental evidence whatsoever backing it up.)
One possible explanation of the contribution of social sensitivity towards collective intelligence is that it reduces conflicts between group members, allowing the group to remain at least somewhat dispassionate/rational about potential solutions instead of turning discussions about solutions into status pissing contests. This is supported by the fact that ego-based actions are well known to be extremely damaging to group outcomes in sports, and that in contexts (e.g. politics) where there are groups with pre-existing conflicts, decision-making seems to be relatively poor despite (presumably) higher social sensitivity on the part of politicians. (This also provides an alternative explanation for the benefits of holding off on proposing solutions: while Eliezer focused, as he is wont to do, on the implications for individual rationality, Maier's edict presumably didn't stop people from thinking of potential solutions and privately settling on a preferred solution, but because they hadn't announced it publicly, they would be more willing to listen to others and change their minds.)
The contribution from variance presumably comes from the fact that if people are on average speaking roughly the same amount, then there are more ideas and perspectives being offered than if only a few people dominated the conversation.
I'd also be interested in seeing whether Collective Intelligence is correlated with individual rationality, given that there is little to no correlation between individual rationality and IQ.
I originally titled this post "The Less Wrong wiki is wrong about group selection", because it seemed wildly overconfident about its assertion that group selection is nonsense. The wiki entry on "group selection" currently reads:
People who are unfamiliar with evolutionary theory sometimes propose that a feature of the organism is there for the good of the group - for example, that human religion is an adaptation to make human groups more cohesive, since religious groups outfight nonreligious groups.
Postulating group selection is guaranteed to make professional evolutionary biologists roll up their eyes and sigh.
However, it appears that the real problem is not that the wiki is overconfident (that's a problem, but it's only a symptom of the next problem), but that the traditional dogma on the viability of group selection is wrong, or at least overconfident. I make this assertion after stumbling across a paper by Martin Nowak, Corina Tarnita, and E. O. Wilson titled "The evolution of eusociality", which appeared in Nature in August of this year. I found a PDF of this paper through Google Scholar; click here. A blog entry discussing the paper can be found here (bias alert: it is written by a postdoc working in Martin Nowak's Evolutionary Dynamics program at Harvard).
Here are some quotes (bolding is mine):
It has further turned out that selection forces exist in groups that diminish the advantage of close collateral kinship. They include the favouring of raised genetic variability by colony-level selection in the ants Pogonomyrmex occidentalis and Acromyrmex echinatior—due, at least in the latter, to disease resistance. The contribution of genetic diversity to disease resistance at the colony level has moreover been established definitively in honeybees. Countervailing forces also include variability in predisposition to worker sub-castes in Pogonomyrmex badius, which may sharpen division of labour and improve colony fitness—although that hypothesis is yet to be tested. Further, an increase in stability of nest temperature with genetic diversity has been found within nests of honeybees and Formica ants. Other selection forces working against the binding role of close pedigree kinship are the disruptive impact of nepotism within colonies, and the overall negative effects associated with inbreeding. Most of these countervailing forces act through group selection or, for eusocial insects in particular, through between-colony selection.
Yet, considering its position for four decades as the dominant paradigm in the theoretical study of eusociality, the production of inclusive fitness theory must be considered meagre. During the same period, in contrast, empirical research on eusocial organisms has flourished, revealing the rich details of caste, communication, colony life cycles, and other phenomena at both the individual- and colony-selection levels. In some cases social behaviour has been causally linked through all the levels of biological organization from molecule to ecosystem. Almost none of this progress has been stimulated or advanced by inclusive fitness theory, which has evolved into an abstract enterprise largely on its own.
The question arises: if we have a theory that works for all cases (standard natural selection theory) and a theory that works only for a small subset of cases (inclusive fitness theory), and if for this subset the two theories lead to identical conditions, then why not stay with the general theory? The question is pressing, because inclusive fitness theory is provably correct only for a small (non-generic) subset of evolutionary models, but the intuition it provides is mistakenly embraced as generally correct.
Check out the paper for more details. Also look at the Supplementary Information if you have access to it. They perform an evolutionary game theoretic analysis, which I am still reading.
Apparently this theory is not that new. In this 2007 paper, David Sloan Wilson and E. O. Wilson argue (I'm just pasting the abstract):
The current foundation of sociobiology is based upon the rejection of group selection in the 1960s and the acceptance thereafter of alternative theories to explain the evolution of cooperative and altruistic behaviors. These events need to be reconsidered in the light of subsequent research. Group selection has become both theoretically plausible and empirically well supported. Moreover, the so-called alternative theories include the logic of multilevel selection within their own frameworks. We review the history and conceptual basis of sociobiology to show why a new consensus regarding group selection is needed and how multilevel selection theory can provide a more solid foundation for sociobiology in the future.
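The tension at the heart of this debate (within-group selection favours defectors, while between-group selection favours cooperative groups) is easy to see in a toy simulation. The sketch below is my own illustration with an arbitrary public-goods payoff and arbitrary parameters, not the model from either paper:

```python
import random

random.seed(1)

# Toy multilevel-selection model. True = cooperator, False = defector.
# Within each group, defectors earn more than cooperators; between groups,
# cooperative groups earn more in total and so seed more future groups.

def next_generation(groups, benefit=3.0, cost=1.0):
    reproduced, totals = [], []
    for group in groups:
        n = len(group)
        share = benefit * sum(group) / n  # public good shared by everyone
        payoffs = [share - (cost if member else 0.0) for member in group]
        totals.append(sum(payoffs))
        # Within-group selection: reproduction proportional to payoff.
        weights = [max(p, 0.01) for p in payoffs]
        reproduced.append(random.choices(group, weights=weights, k=n))
    # Between-group selection: groups seed the next generation in
    # proportion to their total payoff.
    group_weights = [max(t, 0.01) for t in totals]
    return random.choices(reproduced, weights=group_weights, k=len(groups))

groups = [[random.random() < 0.5 for _ in range(10)] for _ in range(50)]
for _ in range(30):
    groups = next_generation(groups)

coop_freq = sum(map(sum, groups)) / sum(map(len, groups))
print(f"cooperator frequency after 30 generations: {coop_freq:.2f}")
```

Whether cooperators persist here depends entirely on the benefit/cost ratio and the group structure; the point of the sketch is only to show both levels of selection, pulling in opposite directions, operating in one model.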
From the other camp, this seems to be a fairly highly-cited paper from 2008. They concluded:
(a) the arguments about group selection are only continued by a limited number of theoreticians, on the basis of simplified models that can be difficult to apply to real organisms (see Error 3); (b) theoretical models which make testable predictions tend to be made with kin selection theory (Tables 1 and 2); (c) empirical biologists interested in social evolution measure the kin selection coefficient of relatedness rather than the corresponding group selection parameters (Queller & Goodnight, 1989). It is best to think of group selection as a potentially useful, albeit informal, way of conceptualizing some issues, rather than a general evolutionary approach in its own right.
I know (as of yet) very little biology, so I leave the conclusion for readers to discuss. Does anyone have detailed knowledge of the issues here?