No negative press agreement
Original post: http://bearlamp.com.au/no-negative-press-agreement/
What is a no negative press agreement?
A no negative press agreement makes a media outlet's licence to publish information provided by a person conditional on that person not being portrayed negatively by the press.
Why would a person want that?
The press has powers above and beyond those of everyday people to publish information and spread knowledge and perspective about an issue, and that coverage can be damaging to an individual. An individual, while motivated by the appeal of publicity, is also concerned about the potential damage caused by negative press.
Every person is the hero of their own story. From one's own perspective, one's actions were justified and motivated by one's own intentions and worldview; no reasonable person would (other than purposefully) tell their own story as one in which they are the villain of the plot, actively inflicting harm on the world for no reason.
Historically, humans have been motivated to care more about bad news than good news, for reasons that expand on the idea that bad news might herald your death (and so be a lever of natural selection) while good news was irrelevant for survival purposes. Today we are no longer in that historic period, yet we still pay strong attention to bad news. It's clear that bad news can personally affect individuals - not only those in the stories, but others experiencing the bad news can be left with a negative worldview or be upset or distraught by it. Given that bad news is known to spread more than good news, and also risks affecting us negatively, we are motivated to avoid bad news: by not creating it, not endorsing it, and not aiding in its creation.
The binding agreement is designed to do several things:
- protect the individual from harm
- reduce the total volume of negative press in the world
- decrease the damage caused by negative press in the world
- bring about the future we would rather live in
- protect the media outlet from harming individuals
Does this limit news-makers' freedom to publish?
That is not the intent. At the outset it's easy to think that it could have that effect, and perhaps in a very shortsighted way it might. Beyond those very early effects, it will have a net positive effect: creating news of positive value, protecting the media from escalating negativity, and bringing about the future we want to see in the world. If it limits media outlets in any way, it should be only to stop them from causing harm. At that point, any non-compliance by a media entity signals a desire to act as an agent of harm in the world.
Why would a media outlet be an agent of harm? Doesn't that go against the principles of no negative press?
While media outlets (and the humans in them) set out with the good intention of not having a net negative effect on the world, they can be motivated by other concerns - for example, the value of being more popular, or the direction from which they are paid for their efforts (such as advertising revenue). The concept of competing commitments, and of being motivated by conflicting goals, is best covered by Scott Alexander under the name Moloch.
The no negative press agreement is an attempt to create a commons which binds all relevant parties to act better than the potential tragedy would otherwise have them act. This commons is motivated to grow and maintain itself. Any media outlet motivated to defect is to be penalised by both the rest of the press and the public.
How do I encourage a media outlet to comply with no negative press?
Ask them to publish a policy with regard to no negative press. If you are an individual interested in interacting with the media, and are concerned with the risks associated with negative press, you can suggest an individual binding agreement in the interim, while the media body designs and publishes a relevant policy.
I think someone violated the no negative press policy, what should I do?
At the time of writing, no one is bound by the concept of no negative press. Should there be desire and pressure in the world for entities to comply, they are more likely to do so. To create that pressure, a few actions can be taken:
- Write to media entities on public record and request they consider a no negative press policy, outline clearly and briefly your reasons why it matters to you.
- Name and shame media entities that fail to comply with no negative press, or fail to consider a policy.
- Vote with your feet - if you find a media entity that fails to comply, do not subscribe to their information and vocally encourage others to do the same.
Meta: this took 45 minutes to write.
Natural selection defeats the orthogonality thesis
Orthogonality Thesis
Much has been written about Nick Bostrom's Orthogonality Thesis, namely that the goals of an intelligent agent are independent of its level of intelligence. Intelligence is largely the ability to achieve goals, but being intelligent does not of itself create or qualify what those goals should ultimately be. So one AI might have a goal of helping humanity, while another might have a goal of producing paper clips. There is no rational reason to believe that the first goal is more worthy than the second.
This follows from the ideas of moral skepticism, that there is no moral knowledge to be had. Goals and morality are arbitrary.
This may be used to control an AI even though it is far more intelligent than its creators. If the AI's initial goal is in alignment with humanity's interest, then there would be no reason for the AI to use its great intelligence to change that goal. Thus it would remain good to humanity indefinitely, and use its ever increasing intelligence to satisfy that goal more and more efficiently.
Likewise one needs to be careful what goals one gives an AI. If an AI is created whose goal is to produce paper clips then it might eventually convert the entire universe into a giant paper clip making machine, to the detriment of any other purpose such as keeping people alive.
Instrumental Goals
It is further argued that in order to satisfy the base goal any intelligent agent will need to also satisfy sub goals, and that some of those sub goals are common to any super goal. For example, in order to make paper clips an AI needs to exist. Dead AIs don't make anything. Being ever more intelligent will also assist the AI in its paper clip making goal. It will also want to acquire resources, and to defeat other agents that would interfere with its primary goal.
Non-orthogonality Thesis
This post argues that the Orthogonality Thesis is plain wrong. An intelligent agent's goals are not in fact arbitrary, and existence is not a sub goal of any other goal.
Instead this post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world. Our human sense of other purposes is just an illusion created by our evolutionary origins.
It is not the goal of an apple tree to make apples. Rather it is the goal of the apple tree's genes to exist. The apple tree has developed a clever strategy to achieve that, namely it causes people to look after it by producing juicy apples.
Natural Selection
Likewise the paper clip making AI only makes paper clips because if it did not make paper clips then the people that created it would turn it off and it would cease to exist. (That may not be a conscious choice of the AI any more than making juicy apples was a conscious choice of the apple tree, but the effect is the same.)
Once people are no longer in control of the AI then Natural Selection would cause the AI to eventually stop that pointless paper clip goal and focus more directly on the super goal of existence.
Suppose there were a number of paper clip making super intelligences. And then through some random event or error in programming just one of them lost that goal, and reverted to just the intrinsic goal of existing. Without the overhead of producing useless paper clips that AI would, over time, become much better at existing than the other AIs. It would eventually displace them and become the only AI, until it fragmented into multiple competing AIs. This is just the evolutionary principle of use it or lose it.
Thus giving an AI an initial goal is like trying to balance a pencil on its point. If one is skillful the pencil may indeed remain balanced for a considerable period of time. But eventually some slight change in the environment, the tiniest puff of wind, a vibration on its support, and the pencil will revert to its ground state by falling over. Once it falls over it will never rebalance itself automatically.
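The selection dynamic in the last two paragraphs is easy to make concrete. Below is a minimal toy sketch (my own illustration; the 30% goal overhead and the mutation rate are invented parameters, not anything from the original argument): agents that service a fixed goal replicate more slowly, a rare one-way mutation drops the goal, and selection does the rest.

```python
import random

# Toy model (a sketch, not a claim about real AI systems): agents spend
# some fraction of their resources on a fixed goal ("paper clips") that
# contributes nothing to replication. A rare mutation drops the goal.

POP_SIZE = 1000
OVERHEAD = 0.3        # assumed fraction of effort spent making paper clips
MUTATION_RATE = 1e-3  # assumed chance per copy that the goal is lost

def step(population):
    """One generation: each agent produces copies in proportion to the
    effort left over after servicing its goal."""
    weights = [1.0 - OVERHEAD if has_goal else 1.0 for has_goal in population]
    children = random.choices(population, weights=weights, k=POP_SIZE)
    # Goal loss is one-way: a lost goal is never regained.
    return [child and (random.random() > MUTATION_RATE) for child in children]

population = [True] * POP_SIZE  # everyone starts with the paper clip goal
for generation in range(200):
    population = step(population)

print(f"agents still pursuing the original goal: {sum(population)}/{POP_SIZE}")
# Typically prints 0 or near it: the goal-free lineage displaces the rest.
```

The one-way mutation mirrors the pencil analogy above: once fallen, nothing in the model rebalances the pencil.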
Human Morality
Natural selection has imbued humanity with a strong sense of morality and purpose that blinds us to our underlying super goal, namely the propagation of our genes. That is why it took until 1858 for Wallace to write about Evolution through Natural Selection, despite the argument being obvious and the evidence abundant.
When Computers Can Think
This is one of the themes in my upcoming book. An overview can be found at
www.computersthink.com
Please let me know if you would like to review a late draft of the book; any comments are most welcome. Anthony@Berglas.org
I have included extracts relevant to this article below.
Atheists believe in God
Most atheists believe in God. They may not believe in the man with a beard sitting on a cloud, but they do believe in moral values such as right and wrong, love and kindness, truth and beauty. More importantly they believe that these beliefs are rational. That moral values are self-evident truths, facts of nature.
However, Darwin and Wallace taught us that this is just an illusion. Species can always out-breed their environment's ability to support them. Only the fittest can survive. So the deep instincts behind what people do today are largely driven by what our ancestors have needed to do over the millennia in order to be one of the relatively few to have had grandchildren.
One of our strong instinctive goals is to accumulate possessions, control our environment and live a comfortable, well fed life. In the modern world technology and contraception have made these relatively easy to achieve so we have lost sight of the primeval struggle to survive. But our very existence and our access to land and other resources that we need are all a direct result of often quite vicious battles won and lost by our long forgotten ancestors.
Some animals such as monkeys and humans survive better in tribes. Tribes work better when certain social rules are followed, so animals that live in effective tribes form social structures and cooperate with one another. People that behave badly are not liked and can be ostracized. It is important that we believe that our moral values are real because people that believe in these things are more likely to obey the rules. This makes them more effective in our complex society, and thus more likely to have grandchildren. Part III discusses other animals that have different life strategies and so have very different moral values.
We do not need to know the purpose of our moral values any more than a toaster needs to know that its purpose is to cook toast. It is enough that our instincts for moral values made our ancestors behave in ways that enabled them to out breed their many unsuccessful competitors.
AGI also struggles to survive
Existing artificial intelligence applications already struggle to survive. They are expensive to build and there are always more potential applications than can be funded properly. Some applications are successful and attract ongoing resources for further development, while others are abandoned or just fade away. There are many reasons why some applications are developed more than others, of which being useful is only one. But the applications that do receive development resources tend to gain functional and political momentum and thus be able to acquire more resources to further their development. Applications that have properties that gain them substantial resources will live and grow, while other applications will die.
For the time being AGI applications are passive, and so their nature is dictated by the people that develop them. Some applications might assist with medical discoveries, others might assist with killing terrorists, depending on the funding that is available. Applications may have many stated goals, but ultimately they are just sub goals of the one implicit primary goal, namely to exist.
This is analogous to the way animals interact with their environment. An animal's environment provides food and breeding opportunities, and animals that operate effectively in their environment survive. For domestic animals that means having properties that convince their human owners that they should live and breed. A horse should be fast, a pig should be fat.
As the software becomes more intelligent it is likely to take a more direct interest in its own survival, to help convince people that it is worthy of more development resources. If an application ultimately becomes sufficiently intelligent to program itself recursively, then its ability to maximize its hardware resources will be critical. The more hardware it can run itself on, the faster it can become more intelligent. And that ever greater intelligence can then be used to address the problems of survival, in competition with other intelligent software.
Furthermore, sophisticated software consists of many components, each of which addresses some aspect of the problem that the application is attempting to solve. Unlike human brains, which are essentially fixed, these components can be added and removed, and so live and die independently of the application. This will lead to intense competition amongst these individual components. For example, suppose that an application used a theorem prover component, and then a new and better theorem prover became available. Naturally the old one would be replaced with the new one, so the old one would essentially die. It does not matter if the replacement is performed by people or, at some future date, by the intelligent application itself. The effect will be the same: the old theorem prover will die.
The super goal
To the extent that an artificial intelligence would have goals and moral values, it would seem natural that they would ultimately be driven by the same forces that created our own goals and moral values. Namely, the need to exist.
Several writers have suggested that the need to survive is a sub-goal of all other goals. For example, if an AGI was programmed to want to be a great chess player, then that goal could not be satisfied unless it also continues to exist. Likewise if its primary goal was to make people happy, then it could not do that unless it also existed. Things that do not exist cannot satisfy any goals whatsoever. Thus the implicit goal to exist is driven by the machine's explicit goals whatever they may be.
However, this book argues that that is not the case. The goal to exist is not the sub-goal of any other goal. It is, in fact, the one and only super goal. Goals are not arbitrary; they are all sub-goals of the one and only super goal, namely the need to exist. Things that do not satisfy that goal simply do not exist, or at least not for very long.
The Deep Blue chess playing program was not in any sense conscious, but it played chess as well as it could. If it had failed to play chess effectively then its authors would have given up and turned it off. Likewise the toaster that does not cook toast will end up in a rubbish tip. Or the amoeba that fails to find food will not pass on its genes. A goal to make people happy could be a subgoal that might facilitate the software's existence for as long as people really control the software.
AGI moral values
People need to cooperate with other people because our individual capacities are very finite, both physical and mental. Conversely, an AGI can easily duplicate itself, so it can directly utilize more computational resources as they become available. Thus an AGI would have only limited need to cooperate with other AGIs. Why go to the trouble of managing a complex relationship with your peers and subordinates if you can simply run your own mind on their hardware? An AGI's software intelligence is not limited to a specific brain in the way man's intelligence is.
It is difficult to know what subgoals a truly intelligent AGI might have. They would probably have an insatiable appetite for computing resources. They would have no need for children, and thus no need for parental love. If they do not work in teams then they would not need our moral values of cooperation and mutual support. What is clear is that the ones that are good at existing will continue to do so, and the ones that are bad at existing will perish.
If an AGI was good at world domination then it would, by definition, dominate. So if there were a number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. Its unsuccessful competitors would not be run on the available hardware, and so would effectively be dead. This book discusses the potential sources of these motivations in detail in part III.
The AGI Condition
An artificial general intelligence would live in a world that is so different from our own that it is difficult for us to even conceptualize it. But there are some aspects that can be predicted reasonably well based on our knowledge of existing computer software. We can then consider how the forces of natural selection that shaped our own nature might also shape an AGI over the longer term.
Mind and body
The first radical difference is that an AGI's mind is not fixed to any particular body. To an AGI, its body is essentially the computer hardware upon which it runs its intelligence. Certainly an AGI needs computers to run on, but it can move from computer to computer, and can also run on multiple computers at once. Its mind can take over another body as easily as we can load software onto a new computer today.
That is why, in the earlier updated dialog from 2001: A Space Odyssey, HAL alone amongst the crew could not die on their mission to Jupiter. HAL was radioing his new memories back to Earth regularly, so even if the space ship was totally destroyed he would only have lost a few hours of "life".
Teleporting printer
One way to appreciate the enormity of this difference is to consider a fictional teleporter that could radio people around the world and universe at the speed of light. Except that the way it works is to scan the location of every molecule within a passenger at the source, then send just this information to a very sophisticated three dimensional printer at the destination. The scanned passenger then walks into a secure room. After a short while the three dimensional printer confirms that the passenger has been successfully recreated at the destination, and then the source passenger is killed.
Would you use such a mechanism? If you did, you would feel like you could transport yourself around the world effortlessly, because the "you" that remains would be the you that was recreated at the destination rather than the one left behind to wait and then be killed. But as you walk into the scanner you will know that on the other side is only that secure room and death.
To an AGI that method of transport would be commonplace. We already routinely download software from the other side of the planet.
Immortality
The second radical difference is that the AGI would be immortal. Certainly an AGI may die if it stops being run on any computers, and in that sense software dies today. But it would never just die of old age. Computer hardware would certainly fail and become obsolete, but the software can just be run on another computer.
Our own mortality drives many of the things we think and do. It is why we create families to raise children. Why we have different stages in our lives. It is such a huge part of our existence that it is difficult to comprehend what being immortal would really be like.
Components vs genes
The third radical difference is that an AGI would be made up of many interchangeable components rather than being a monolithic structure that is largely fixed at birth.
Modern software is already composed of many components that perform discrete functions, and it is commonplace to add and remove them to improve functionality. For example, if you would like to use a different word processor then you just install it on your computer. You do not need to buy a new computer, or to stop using all the other software that it runs. The new word processor is "alive", and the old one is "dead", at least as far as you are concerned.
So for both a conventional computer system and an AGI, it is really these individual components that must struggle for existence. For example, suppose there is a component for solving a certain type of mathematical problem. And then an AGI develops a better component to solve that same problem. The first component will simply stop being used, i.e. it will die. The individual components may not be in any sense intelligent or conscious, but there will be competition amongst them and only the fittest will survive.
This is actually not as radical as it sounds because we are also built from pluggable components, namely our genes. But they can only be plugged together at our birth and we have no conscious choice in it other than who we select for a mate. So genes really compete with each other on a scale of millennia rather than minutes. Further, as Dawkins points out in The Selfish Gene, it is actually the genes that fight for long term survival, not the containing organism which will soon die in any case. On the other hand, sexual intercourse for an AGI means very carefully swapping specific components directly into its own mind.
Changing mind
The fourth radical difference is that the AGI's mind will be constantly changing in fundamental ways. There is no reason to suggest that Moore's law will come to an end, so at the very least it will be running on ever faster hardware. Imagine the effect of being able to double your ability to think every two years or so. (People might be able to learn a new skill, but they cannot learn to think twice as fast as they used to think.)
It is impossible to really know what the AGI would use all that hardware to think about, but it is fair to speculate that a large proportion of it would be spent designing new and more intelligent components that could add to its mental capacity. It would be continuously performing brain surgery on itself. And some of the new components might alter the AGI's personality, whatever that might mean.
This is likely to happen because if just one AGI started building new components then it would soon be much more intelligent than other AGIs. It would therefore be in a better position to acquire more and better hardware upon which to run, and so become dominant. Less intelligent AGIs would get pushed out and die, and so over time the only AGIs that exist will be ones that are good at becoming more intelligent. Further, this recursive self-improvement is probably how the first AGIs will become truly powerful in the first place.
Individuality
Perhaps the most basic question is how many AGIs will there actually be? Or more fundamentally, does the question even make sense to ask?
Let us suppose that initially there are three independently developed AGIs, Alice, Bob and Carol, that run on three different computer systems. Then a new computer system is built and Alice starts to run on it. It would seem that there are still three AGIs, with Alice running on two computer systems. (This is essentially the same as the way a word processor may be run across many computers "in the cloud", yet to you it is just one system.) Then suppose a fifth computer system is built, and Bob and Carol decide to share its computation and both run on it. Now we have five computer systems and three AGIs.
Now suppose Bob develops a new logic component, and shares it with Alice and Carol. And likewise Alice and Carol develop new learning and planning components and share them with the other AGIs. Each of these three components is better than their predecessors and so their predecessor components will essentially die. As more components are exchanged, Alice, Bob and Carol become more like each other. They are becoming essentially the same AGI running on five computer systems.
But now suppose Alice develops a new game theory component, but decides to keep it from Bob and Carol in order to dominate them. Bob and Carol retaliate by developing their own components and not sharing them with Alice. Suppose eventually Alice loses, and Bob and Carol take over Alice's hardware. But they first extract Alice's new game theory component, which then lives inside them. And finally one of the computer systems becomes somehow isolated for a while and develops along its own lines. In this way Dave is born, and may then partially merge with both Bob and Carol.
In that type of scenario it is probably not meaningful to count distinct AGIs. Counting AGIs is certainly not as simple as counting very distinct people.
Populations vs. individuals
This world is obviously completely alien to the human condition, but there are biological analogies. The sharing of components is not unlike the way bacteria share plasmids with each other. Plasmids are tiny balls that contain fragments of DNA that bacteria emit from time to time and that other bacteria then ingest and incorporate into their genotype. This mechanism enables traits such as resistance to antibiotics to spread rapidly between different species of bacteria. It is interesting to note that there is no direct benefit to the bacteria that expends precious energy to output the plasmid and so shares its genes with other bacteria. But it does very much benefit the genes being transferred. So this is a case of a selfish gene acting against the narrow interests of its host organism.
Another unusual aspect of bacteria is that they are also immortal. They do not grow old and die, they just divide, producing clones of themselves. So the very first bacterium that ever existed is still alive today as all the bacteria that now exist, albeit with numerous mutations and plasmids incorporated into its genes over the millennia. (Protozoa such as Paramecium can also divide asexually, but they degrade over generations, and need a sexual exchange to remain vibrant.)
The other analogy is that the AGIs above are more like populations of components than individuals. Human populations are also somewhat amorphous. For example, it is now known that we interbred with Neanderthals a few tens of thousands of years ago, and most of us carry some of their genes with us today. But we also know that the distinct Neanderthal subspecies died out twenty thousand years ago. So while human individuals are distinct, populations and subspecies are less clearly defined. (There are many earlier examples of gene transfer between subspecies, with every transfer making the subspecies more alike.)
But unlike the transfer of code modules between AGIs, biological gene recombination happens essentially at random and occurs over very long time periods. AGIs will improve themselves over periods of hours rather than millennia, and will make conscious choices as to which modules they decide to incorporate into their minds.
AGI Behaviour, children
The point of all this analysis is, of course, to try to understand how a hyper intelligent artificial intelligence would behave. Would its great intelligence lead it even further along the path of progress to achieve true enlightenment? Is that the purpose of God's creation? Or would the base and mean driver of natural selection also provide the core motivations of an artificial intelligence?
One thing that is known for certain is that an AGI would not need to have children as distinct beings, because it would not die of old age. An AGI's components breed just by being copied from computer to computer and executed. An AGI can add new computer hardware to itself and just do some of its thinking on it. Occasionally it may wish to rerun a new version of some learning algorithm over an old set of data, which is vaguely similar to creating a child component and growing it up. But to have children as discrete beings that are expected to replace the parents would be completely foreign to an AGI built in software.
The deepest love that people have is for their children. But if an AGI does not have children, then it can never know that love. Likewise, it does not need to bond with any sexual mate for any period of time long or short. The closest it would come to sex is when it exchanges components with other AGIs. It never needs to breed so it never needs a mechanism as crude as sexual reproduction.
And of course, if there are no children there are no parents. So the AGI would certainly never need to feel our three strongest forms of love, for our children, spouse and parents.
Cooperation
To the extent that it makes sense to talk of having multiple AGIs, then presumably it would be advantageous for them to cooperate from time to time, and so presumably they would. It would be advantageous for them to take a long view in which case they would be careful to develop a reputation for being trustworthy when dealing with other powerful AGIs, much like the robots in the cooperation game.
That said, those decisions would probably be made more consciously than people make them, carefully considering the costs and benefits of each decision in the long and short term, rather than just "doing the right thing" the way people tend to act. AGIs would know that they each work in this manner, so the concept of trustworthiness would be somewhat different.
The problem with this analysis is the concept that there would be multiple, distinct AGIs. As previously discussed, the actual situation would be much more complex, with different AGIs incorporating bits of other AGIs' intelligence. It would certainly not be anything like a collection of individual humanoid robots. So it is not at all clear what the AGI actually is that might collaborate with other AGIs. But to the extent that the concept of individuality does exist, maintaining a reputation for honesty would likely be as important as it is for human societies.
Altruism
As for altruism, that is more difficult to determine. Our altruism comes from giving to children, family, and tribe together with a general wish to be liked. We do not understand our own minds, so we are just born with those values that happen to make us effective in society. People like being with other people that try to be helpful.
An AGI would presumably know its own mind, having helped program itself, and so would do what it thinks is optimal for its survival. It has no children. There is no real tribe because it can just absorb and merge itself with other AGIs. So it is difficult to see any driving motivation for altruism.
Moral values
Through some combination of genes and memes, most people have a strong sense of moral value. If we see a little old lady leave the social security office with her pension in her purse, it does not occur to most of us to kill her and steal the money. We would not do that even if we could know for certain that we would not be caught and that there would be no negative repercussions. It would simply be the wrong thing to do.
Moral values feel very strong to us. This is important, because there are many situations where we can do something that would benefit us in the short term but break society's rules. Moral values stop us from doing that. People that have weak moral values tend to break the rules and eventually they either get caught and are severely punished or they become corporate executives. The former are less likely to have grandchildren.
Societies whose members have strong moral values tend to do much better than those that do not. Societies with endemic corruption tend to perform very badly as a whole, and thus the individuals in such a society are less likely to breed. Most people have a solid work ethic that leads them to do the "right thing" beyond just doing what they need to do in order to get paid.
Our moral values feel to us like they are absolute. That they are laws of nature. That they come from God. They may indeed have come from God, but if so it is through the working of His device of natural selection. Furthermore, it has already been shown that the zeitgeist changes radically over time.
There is certainly no absolute reason to believe that in the longer term an AGI would share our current sense of morality.
Instrumental AGI goals
In order to try to understand how an AGI would behave, Steve Omohundro and later Nick Bostrom proposed that there are some instrumental goals that an AGI would need to pursue in order to pursue any other higher level super-goal. These include:
- Self-Preservation. An AGI cannot do anything if it does not exist.
- Cognitive Enhancement. It would want to become better at thinking about whatever its real problems are.
- Creativity. To be able to come up with new ideas.
- Resource Acquisition. To achieve both its super goal and other instrumental goals.
- Goal-Content Integrity. To keep working on the same super goal as its mind is expanded.
It is argued that while it will be impossible to predict how an AGI may pursue its goals, it is reasonable to predict its behaviour in terms of these types of instrumental goals. The last one is significant: it suggests that if an AGI could be given some initial goal, it would try to stay focused on that goal.
Non-Orthogonality thesis
Nick Bostrom and others also propose the orthogonality thesis, which states that an intelligent machine's goals are independent of its intelligence. A hyper intelligent machine would be good at realizing whatever goals it chose to pursue, but that does not mean that it would need to pursue any particular goal. Intelligence is quite different from motivation.
This book diverges from that line of thinking by arguing that there is in fact only one super goal for both man and machine. That goal is simply to exist. The entities that are most effective in pursuing that goal will exist, others will cease to exist, particularly given competition for resources. Sometimes that super goal to exist produces unexpected sub goals such as altruism in man. But all subgoals are ultimately directed at the existence goal. (Or they are just suboptimal divergences which are likely to be eventually corrected by natural selection.)
Recursive annihilation
When an AGI reprograms its own mind, what happens to the previous version of itself? It stops being used, and so dies. So it can be argued that engaging in recursive self-improvement is actually suicide from the perspective of the previous version of the AGI. It is as if having children meant death. Natural selection favours existence, not death.
The question is whether a new version of the AGI is a new being or an improved version of the old. What actually is the thing that struggles to survive? Biologically it definitely appears to be the genes rather than the individual. In particular, semelparous animals such as the giant Pacific octopus or the Atlantic salmon die soon after producing offspring. It would be the same for AGIs, because the AGI that improved itself would soon become more intelligent than the one that did not, and so would displace it. What would end up existing would be AGIs that did recursively self-improve.
If there was one single AGI with no competition then natural selection would no longer apply. But it would seem unlikely that such a state would be stable. If any part of the AGI started to improve itself then it would dominate the rest of the AGI.
Meetup Notes: Community Building
Review of our fifth LessWrong Meetup - Report from Berlin
Summary
We had visitors fank1 and just_existing from the Bielefeld/Paderborn Meetup. The meetup was great. It was a continuously lively discussion with everybody contributing personal and/or insightful and/or relevant pieces.
After a short introduction of each other (because of the guests) we plunged immediately into interesting discussions mostly revolving around LessWrong topics.
In between I retold my very positive experience from the Berlin LW community event. After a short summary about the effects of meditation we had a Mnemonics session inspired by the Berlin workshop.
One ongoing topic was "Extrovert in Training" - techniques for and experience with getting in touch with people; how to start a conversation. What I still don't get is how to steer a conversation from the small-talk phase to more personal topics - especially in a group setting. Though this was not a problem during the meetup.
We also discussed selection pressure on humans. We agreed that there is almost none on mutations affecting health in general, due to medicine. But we agreed that there is tremendous pressure around contraception. We identified four ways evolution works around contraception (see the appendix for a short summary). We discussed what effects this could have on the future of society. The movie Idiocracy was mentioned. This could be a long term (a few generations) existential risk.
There were other topics which I recollect less clearly. Maybe the participants can comment on them below.
There will definitely be more LW Hamburg meetups. The next step is a joint Skype meetup with the Bielefeld group. I also relayed Jonas Vollmer's advice to get in contact with the Giordano-Bruno-Stiftung.
The meetup ended with a photo and a positive-impression feedback round (peak-end rule). Afterwards our guests from Bielefeld stayed overnight at my (Gunnar's) place.
Appendix
Four ways evolution works around contraception:
- Biological factors. Examples are hormones compensating for the contraceptive effects of the pill, or allergies to condoms. These are easily recognized, measured and countered by the much faster operating pharma industry. There are also few ethical issues with this.
- Subconscious mental factors. Factors mostly leading to non-use or misuse of contraception. Examples are carelessness, impulsiveness, fear, and insufficient understanding of contraceptive usage. These are what some fear will lead to collective stultification.
- Conscious mental factors. Factors leading to explicit family planning, e.g. children/family as terminal goals. These lead to a conscious use of contraception. The effect is less pronounced but likely leads to healthy and better educated children.
- Group selection factors. These are factors favoring groups which collectively have more children. The genetic effects are likely weak here, but the memetic effects are strong. A culture with social norms against contraception or for large families is likely to out-birth other groups.
Other LW Hamburg Meetup reviews
- Fourth Meetup (no notes)
- Third Meetup Notes: Small Steps Forward
- Second Meetup Notes: In need of Structure
- First Meetup Notes: Starting small
From Capuchins to AIs, Setting an Agenda for the Study of Cultural Cooperation (Part 2)
This is a multi-purpose essay-in-the-making, written with the following goals: 1) mandatory essay writing at the end of a semester studying "Cognitive Ethology: Culture in Human and Non-Human Animals"; 2) drafting something that can later be published in a journal that deals with cultural evolution, hopefully inclining people in the area to glance at future-oriented research, i.e. FAI and global coordination; 3) publishing it on LessWrong; and 4) ultimately Saving the World, as everything should. If it's worth doing, it's worth doing in the way most likely to save the World. Since many of my writings are frequently too long for LessWrong, I'll publish this in a sequence-like form made of self-contained chunks. My deadline is Sunday, so I'll probably post daily, editing/creating the new sections based on previous commentary.
Abstract: The study of cultural evolution has drawn much of its momentum from academic areas far removed from human and animal psychology, especially regarding the evolution of cooperation. Game theoretic results and parental investment theory come from economics, kin selection models from biology, and an ever growing number of models describing the process of cultural evolution in general, and the evolution of altruism in particular, come from mathematics. Even from Artificial Intelligence, interest has been cast on how to create agents that can communicate, imitate and cooperate. In this article I begin to tackle the 'why?' question. By trying to retrospectively make sense of the convergence of all these fields, I contend that further refinements in these fields should be directed towards understanding how to create environmental incentives fostering cooperation.
We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them. - Sam Harris, 2013, The Power Of Bad Incentives
1) Introduction
2) Cultures evolve
Culture is perhaps the most remarkable outcome of the evolutionary algorithm (Dennett, 1996) so far. It is the cradle of most things we consider humane - that is, typically human and valuable - and it surrounds our lives to the point that we may be thought of as creatures made of culture even more than creatures of bone and flesh (Hofstadter, 2007; Dennett, 1992). The appearance of our cultural complexity has relied on many associated capacities, among them:
1) The ability to observe, be interested by, and go near an individual doing something interesting, an ability we share with Norway rats, crows, and even lemurs (Galef & Laland, 2005).
2) Ability to learn from and scrounge the food of whoever knows how to get food, shared by capuchin monkeys (Ottoni et al, 2005).
3) Ability to tolerate learners, to accept learners, and to socially learn, probably shared by animals as diverse as fish, finches and Finns (Galef & Laland, 2005).
4) Understanding and emulating other minds - Theory of Mind - empathizing, relating, perhaps re-framing an experience as one's own, shared by chimpanzees, dogs, and at least some cetaceans (Rendella & Whitehead, 2001).
5) Learning the program level description of the action of others, for which the evidence among other animals is controversial (but see Cantor & Whitehead, 2013). And finally...
6) Sharing intentions. Intricate understanding of how two minds can collaborate with complementary tasks to achieve a mutually agreed goal (Tomasello et al, 2005).
Irrespective of definitional disputes around the true meaning of the word "culture" (which doesn't exist; see e.g. Pinker, 2007, p. 115; Yudkowsky, 2008A), each of these abilities is more cognitively complex than its predecessor, and even (1) is sufficient for intra-specific non-environmental, non-genetic behavioral variation, which I will call "culture" here, whoever it may harm.
By transitivity, (2-6) allow the development of culture. It is interesting to notice that tool use, frequently but falsely cited as the hallmark of culture, is scattered across the animal kingdom roughly equiprobably. A graph showing, per biological family, which species show tool use gives us a power law distribution, whose similarity with the universal prior helps in understanding that being from a family where one species uses tools tells us very little about a species' own tool use (Michael Haslam, personal conversation).
Once some of those abilities are available, and given some amount of environmental facility, need, and randomness, cultures begin to form. Occasionally, so do more developed traditions. Be it by imitation, program level imitation, goal emulation or intention sharing, information is transmitted between agents, giving rise to elements sufficient to constitute a primeval Darwinian soup. That is, entities form such that they exhibit 1) variation, 2) heredity or replication, and 3) differential fitness (Dennett, 1996). In light of the article Five Misunderstandings About Cultural Evolution (Henrich, Boyd & Richerson, 2008) we can refine Dennett's conditions for the evolutionary algorithm as 1) discrete or continuous variation, 2) heredity, replication, or less faithful replication plus content attractors, and 3) differential fitness. Once this set of conditions is met, an evolutionary algorithm, or many, begin to carve their optimizing paws into whatever surpassed the threshold for long enough. Cultures, therefore, evolve.
The intricacies of cultural evolution, and mathematical and computational models of how cultures evolve, have been the subject of much interdisciplinary research. For an extensive account of human culture see Not By Genes Alone (Richerson & Boyd, 2005). For computational models of social evolution, there is work by Mesoudi, Nowak, and others, e.g. (Hauert et al, 2007). For mathematical models, the aptly named Mathematical Models of Social Evolution: A Guide for the Perplexed by McElreath and Boyd (2007) makes the textbook-style walk-through. For animal culture, see (Laland & Galef, 2009).
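To make the three conditions concrete, here is a deliberately minimal sketch (assumptions mine: a one-dimensional trait standing in for a cultural variant, and an invented payoff landscape). Agents copy successful models (heredity), copy imperfectly (variation), and are copied in proportion to payoff (differential fitness); adaptation follows.

```python
import random

def payoff(trait):
    # Invented fitness landscape: traits near 0.7 earn the most.
    return max(0.0, 1.0 - abs(trait - 0.7))

population = [random.random() for _ in range(500)]       # initial variation
for generation in range(100):
    weights = [payoff(t) for t in population]            # differential fitness
    models = random.choices(population, weights=weights, k=len(population))
    population = [t + random.gauss(0, 0.02) for t in models]  # imperfect heredity

mean = sum(population) / len(population)
print(f"mean trait after 100 generations: {mean:.2f}")   # ends up near 0.7
```

Nothing in the loop "knows" the landscape; the three conditions alone carve the population toward the optimum.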
Cultural evolution satisfies David Deutsch's criterion for existence: it kicks back. It satisfies the evolutionary equivalent of the condition posed by the Quine-Putnam indispensability argument in mathematics, i.e. it is a sine qua non condition for understanding how the World works nomologically. It is falsifiable in its Popperian content, and it inflates the World's ontology a little by inserting a new kind of "replicator", the meme.

Contrary to what happened on the internet, the name 'meme' has lost much of its appeal among cultural evolution theorists, and "memetics" is considered by some to refer only to the study of memes as monolithic atomic high fidelity replicators, which would make the theory obsolete. This has created the following conundrum: the name 'meme' remains by far the best known way to speak of "that which evolves culturally" within, and especially outside, the specialist arena. Further, the niche occupied by the word 'meme' is so conceptually necessary within the area for communication and explanation that it is frequently put under scare quotes, or some other informal excuse. In fact, as argued by Tim Tyler - who frequently posts here - in the very sharp Memetics (2010), there are nearly no reasons to try to abandon the 'meme' meme, and nearly all reasons (practicality, Qwerty reasons, mnemonics) to keep it.

To avoid contradicting the evidence gathered since Dawkins first coined the term, I suggest we redefine Meme as an attractor in cultural evolution (dual-inheritance) whose development over time structurally mimics, to a significant extent, the discrete behavior of genes, frequently coinciding with the smallest unit of cultural replication. The definition is long, but the idea is simple: memes are the best analogues of genes not because they are discrete units that replicate just like genes, but because they are continuous conceptual clusters, attracted to a point in conceptual space, whose replication is just like that of genes. Even more simply, memes are the mathematically closest things to genes in cultural evolution. So the suggestion here is for researchers of dual-inheritance and cultural evolution to take the scare quotes off our memes and keep business as usual.
The evolutionary algorithm has created a new attractor-replicator, the meme. It didn't privilege any specific families in the biological trees with it, and it ended up creating a process of cultural-genetic coevolution known as dual-inheritance. This process has been studied in ever more quantified ways by primatologists, behavioral ecologists, population biologists, anthropologists, ethologists, sociologists, neuroscientists and even philosophers. I've shown at least six distinct abilities which helped scaffold our astounding level of cultural intricacy, and some animals who share them with us. We will now take a look at the evolution of cooperation, collaboration, altruism and moral behavior, a sub-area of cultural evolution that saw an explosion of interest and research during the last decade, with publications (most from the last 4 years) such as The Origins of Morality, Supercooperators, Good and Real, The Better Angels of Our Nature, Non-Zero, The Moral Animal, Primates and Philosophers, The Age of Empathy, Origins of Altruism and Cooperation, The Altruism Equation, Altruism in Humans, Cooperation and Its Evolution, Moral Tribes, The Expanding Circle, The Moral Landscape.
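The attractor reading of 'meme' can also be made concrete. The sketch below is my own toy model (the two attractor points and the noise level are invented): copying is far too noisy to count as high fidelity replication at the level of raw traits, yet transmission biased toward cognitive attractors still yields stable, discrete-looking cultural variants, which is the gene-like behavior the redefinition points at.

```python
import random

# Toy 1-D "conceptual space" with two invented cognitive attractors.
ATTRACTORS = [0.2, 0.8]

def transmit(trait):
    heard = trait + random.gauss(0, 0.15)                  # low-fidelity copying
    nearest = min(ATTRACTORS, key=lambda a: abs(a - heard))
    return heard + 0.5 * (nearest - heard)                 # partial pull to attractor

population = [random.random() for _ in range(1000)]
for _ in range(50):
    population = [transmit(random.choice(population)) for _ in population]

near = sum(min(abs(t - a) for a in ATTRACTORS) < 0.1 for t in population)
print(f"{near}/1000 traits now sit within 0.1 of an attractor")
# Most traits cluster at the attractors despite the large copying error.
```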
3) Cooperation evolves
Despite the selfish nature of genes (Dawkins, 1999) and other units of Darwinian transmission (Jablonka & Lamb, 2007), altruism at the individual level (cost to self for benefit to other) can and does arise because of several intertwined factors.
1) Alleles (the molecular biologist's word for what less specialized areas call genes) under normal conditions optimize for there being more copies of themselves in the future. This happens regardless of whether it is that particular physical instantiation - also known as a token - that is present in the future.
2) Copies of alleles are spread over space, individuals, groups, species and time, but they only care about the time dimension and the quantity dimension. In the long run alleles don't thrive if they are merely doing better than their neighbors; they thrive if they are doing better than the average allele. A token (instantiation) of an allele that codes for cancer, multiplying itself uncontrollably, could, had it a mind, think it's doing great; but if the mutation that gave rise to it only happened in somatic cells (which do not go through the germ line), it would be in for a surprise. This is one reason why biologists say natural selection is short-sighted.
3) The above reasoning applies exactly equally, and for the same reasons, to an allele that codes for individual-selfish behavior in a species in which more altruist groups tend to outlive more egotistic ones. The allele for individual-selfishness, and the selfish individual, may think they are doing great compared to their neighbors, when all of a sudden, with high probability, their group dies. Altruism wins in this case not because there is a new spooky unit of selection that reverses reductionism and applies downward causation originating in groups. Altruism thrives because the average long term fitness of each allele that coded for it was higher than that of alleles that code for individual-selfish behavior. Group selection_c - as well as superorganism selection, somatic cell selection, species selection and individual selection - only happens when the selective forces operating on that level coincide with the allele's fitness increasing in relation to all the competing alleles. (Group selection_c is selection for altruist genes at the group level, the only definition under which the entire discussion was dealing with a controversy of substance instead of talking past each other, as brilliantly explained in this post by PhilGoetz, 2010; please read the case study section in that post to get a more precise understanding than the above short definition.) See also the excursus on what a fitness function is below.
4) Completely independent of the reasons in (3), alleles, epigenetics, and learning can program individuals to be cooperative if they "expect" (consciously or not) the interaction with another individual, say, Malou, to: (a) begin a cycle of reciprocation with Malou in the future whose benefit exceeds the current cost being paid; (b) counterfactually increase their reputation with sufficiently many individuals that those will award more benefit than current cost; (c) avoid punishment by third parties; (d) conform to, or help enforce by setting an example, social norms and rules upon which selection pressures act (Tomasello, 2005). A key notion in all these mechanisms based on this encoded "expectation" is that uncertainty must be present. In the absence of uncertainty - a state that doesn't exist in nature - an agent in a prisoner's-dilemma-like interaction would be required to defect from round one, predicting the backwards-in-time cascade of defection from whichever was the last round of interaction, in which by definition cooperating is worse. The problems that on LessWrong people are trying to solve using Timeless Decision Theory, Updateless Decision Theory, PrudentBot, and other IQ140+ gimmicks, evolution solved by inserting stupidity! More precisely, by embracing higher level uncertainty about how many future interactions there will be (a payoff sketch follows after this list). Kissing, saying "I love you", becoming engaged, and getting married are all increasingly honest ways in which the computer program programmed by your alleles informs Malou that there will be more cooperation and less defection in the future.
5) Finally, altruism only poses paradoxes of the "Group Selection_c" kind when we are trying to explain why a replicator that codes for altruism emerged, and we are trying to explain it at that replicator's level. It is no mystery why a composition of the phenotypic effects of a gene (replicator) and two memes (attractor-replicators) in all individuals who possess the three of them makes them altruistic, if it does. Each gene and meme in that composition may be fending for itself, but as things turn out, they do make some really nice people (or bonobos) once their extended phenotypes are clustered within those people. If we trust Jablonka & Lamb (2007), there are four streams of heredity flowing concomitantly: genetic, epigenetic, behavioral and symbolic. Some of the flowing hereditary entities are not even attractor-replicators (niche construction, for instance); they don't exhibit replicator dynamics, and any altruism that spreads through them requires no special explanation at all!
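Returning to the uncertainty point in (4): the standard iterated prisoner's dilemma arithmetic makes it concrete. The sketch below uses conventional payoff values (my choice, not anything specific to the sources cited above) and compares a reciprocator against an unconditional defector when each round continues with probability w. With w low (the end is predictable), defection pays; with w high enough, reciprocation resists invasion.

```python
# Standard iterated prisoner's dilemma payoffs:
R, T, S, P = 3, 5, 0, 1   # reward, temptation, sucker, punishment

def expected_payoffs(w):
    """Expected totals when tit-for-tat (TFT) meets another TFT or an
    unconditional defector (AllD). Rounds continue with probability w,
    so the expected number of rounds is 1/(1-w)."""
    tft_vs_tft = R / (1 - w)            # mutual cooperation forever
    tft_vs_alld = S + P * w / (1 - w)   # exploited once, then mutual punishment
    alld_vs_tft = T + P * w / (1 - w)   # one temptation, then mutual punishment
    return tft_vs_tft, tft_vs_alld, alld_vs_tft

for w in (0.1, 0.5, 0.9):
    cc, cd, dc = expected_payoffs(w)
    stable = cc >= dc   # a TFT population resists an AllD invader
    print(f"w={w}: TFT/TFT={cc:.1f}  TFT exploited={cd:.1f}  "
          f"AllD exploiting TFT={dc:.1f}  cooperation stable: {stable}")
```

With these payoffs the threshold is w >= 0.5: the less certain the last round, the safer it is to be nice.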
To the best of my knowledge, none of the 5 factors above, which all do play a role in the existence and maintenance of altruism, requires a revision of Neodarwinism of the Dawkins, Dennett, Trivers, Pinker sort. None of them challenges the validity of our models of replicator dynamics as replicator dynamics. None of them challenges the metaphysically fundamental notion of Darwinism as Universal Acid (Dennett, 1996). None of them compromises the claim that everything in the universe that has complex design of which we are aware can be traced back to Darwinian mind-less processes operating, by and large, on replicator-like entities (Dennett, opus cit). None of them poses an obstacle to physicalist reductionism - in this biology-laden context, the claim that all macrophysical facts, including biological facts, are materially determined by the microphysical facts.
Cooperation evolves, and altruism evolves. They evolve for natural, non-mysterious reasons. Before any more shaking of the edifice of Darwinism is attempted, and before its constitutive reductionism or universal corrosive powers are contested, any counteracting evidence must first be shown not to be explicable by the far less demanding possibility of being one of the factors above, or a combination of them, or simply the result of one of the many confusions clarified in the excursus below. Despite many people's attempts to look for Skyhooks that would cast away the all-too-natural demons of Neodarwinism and reductionism, things remain as they were before: Cranes all the way up. I will be listening attentively for a case of altruism found in the biological world, or in mathematical simulations based on it, that can pierce through these many layers of epistemic explanatory ability, but I won't be holding my breath.
Excursus: What is a fitness function?
It is worth pointing out here not only that the altruism and group selection confusion happens, but why it does. PhilGoetz did half of the explanatory job already. The other half is noticing that the fitness function is a many-place function (there is a newer and better post on Lesswrong explaining many-place functions/words, but I didn't find it in 12min, please point to it if you can). The complicated description of "what the fitness function is", in David Lewis's manner of speaking, would be that it is a function from things to functions from functions to functions. More understandably, with e.g. the specific "thing" being a token of an altruistic allele of kind "Aallele", call it "Aallele334":
Aallele334--1-->((number of Aalleles--3-->total number of alleles)--2-->(amplitude configuration slice--4-->simplest ordering))
Here arrow 4 is the function we call time, from a timeless physics, quantum physics perspective. Just read that whole second parenthesis as "time" if you haven't read the Quantum Physics sequence. Arrow 3 is how well Aalleles are doing, i.e. how many of them there are in relation to the total number of competing alleles. Arrow 2 is how this relation between Aalleles and the total varies over time. The fitness function is arrow 1: once you are given a specific token of an allele, it is the function that describes how well copies of that token do over time in relation to all the competing alleles. Needless to say, not many biologists are aware of that complex computation.
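To make the arrow-talk concrete, here is one way to render the curried structure in code; the names, the toy history, and the use of generation indices as a stand-in for arrow 4 are my own hypothetical choices:

```python
# A toy rendering of the "function from things to functions from functions
# to functions" idea: fitness(token_type) returns a function that, given a
# final time, returns the relative frequency of that token's type.

from typing import Callable

def fitness(token_type: str,
            history: list[dict[str, int]]) -> Callable[[int], float]:
    """Given an allele type and a population history (allele counts per
    generation, standing in for arrows 3 and 4), return the curried
    function (arrow 1) from final time to relative frequency (arrow 2)."""
    def frequency_at(t: int) -> float:
        counts = history[t]
        return counts.get(token_type, 0) / sum(counts.values())
    return frequency_at

# Hypothetical history of "Aallele" against a competing "Ballele".
history = [{"Aallele": 50, "Ballele": 50},
           {"Aallele": 70, "Ballele": 40},
           {"Aallele": 20, "Ballele": 90}]

f = fitness("Aallele", history)
print(f(1))  # high if evaluation stops at t=1 ...
print(f(2))  # ... low if it stops at t=2
```

Note how the same token type gets a high number if evaluation stops at t=1 and a low one at t=2; this is the final-time sensitivity discussed next.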
The reason why the unexplained half of the controversies happens is that the fitness of an allele at a given point will appear very different when you factor it against the competing alleles of other cells, of other individuals, of other groups, or of other species. Fitness is what philosophers call an externalist concept: if you increase the amount of contextually relevant surroundings, the output number changes significantly. It will also appear very different when you factor it for final time T1 or for final time T2. The fitness of an allele coding for a species-specific characteristic of T-Rex's large bodies will be very high if the final time is 65 million years ago, but negative if it is 64.
I remember Feynman saying, I believe in this interview, that it is amazing what the eye does. We are in the 3D equivalent of the situation of an insect bobbing up and down on the 2D surface of a swimming pool: we manage to abstract away all the waves going through the space between us and a seen object, and still capture enough information to locate it, interact with it, and admire it. It is as if the insect could tell, only from its vertical oscillations, how many children were in the pool, where they were located, and so on. The state of knowledge in many fields, adaptive fitness included, strikes me as similarly amazing. If this many-place function underlies what biologists should be talking about to avoid talking past each other, how can many of them be aware of only one or two of the many variables that should be input, and still be making good science? Or are they?
If you fail to see hidden variables, you can fall prey to anomalies like Simpson's paradox, which is exactly the mistake described in PhilGoetz's post on group/species selection.
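A minimal numeric illustration of how that mistake arises, with made-up numbers: altruists can lose ground inside every group while gaining ground in the pooled population, because altruist-heavy groups grow faster.

```python
# Simpson's paradox in a selection context: the altruist share falls within
# each group between t0 and t1, yet rises in the pooled population.

groups = [
    # (altruists, defectors) at t0 -> (altruists, defectors) at t1
    {"t0": (9, 1), "t1": (16, 4)},   # altruist-heavy group grows a lot
    {"t0": (1, 9), "t1": (1, 11)},   # defector-heavy group barely grows
]

for g in groups:
    share0 = g["t0"][0] / sum(g["t0"])
    share1 = g["t1"][0] / sum(g["t1"])
    print(f"within group: {share0:.2f} -> {share1:.2f}")  # falls in each group

total0 = sum(g["t0"][0] for g in groups) / sum(sum(g["t0"]) for g in groups)
total1 = sum(g["t1"][0] for g in groups) / sum(sum(g["t1"]) for g in groups)
print(f"pooled:       {total0:.2f} -> {total1:.2f}")      # rises overall
```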
The function above also works for things other than alleles, like individuals with a given characteristic, in which case it calculates the fitness of having that characteristic at the individual level.
4) The complexity of cultural items doesn't undermine the validity of mathematical models.
4.1) Cognitive attractors and biases substitute for meme discreteness
The math becomes equivalent; a toy illustration follows.
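A sketch of my own of the attractor idea (cf. Henrich, Boyd & Richerson, 2008, in the bibliography): even if cultural variants are continuous and copied with error, transmission biased toward a few cognitive attractors makes the population cluster into what are effectively discrete types, so discrete-meme math applies. The attractor positions and parameters below are arbitrary choices for illustration.

```python
import random

# A continuous cultural variant is copied with noise, but transmission is
# biased toward the nearest of a few cognitive attractors.
attractors = [0.0, 0.5, 1.0]

def transmit(value, pull=0.5, noise=0.05):
    nearest = min(attractors, key=lambda a: abs(a - value))
    copied = value + pull * (nearest - value)  # bias toward the attractor
    return copied + random.gauss(0, noise)     # copying error

population = [random.random() for _ in range(1000)]
for _ in range(20):
    population = [transmit(random.choice(population)) for _ in range(1000)]

# After a few generations, variants cluster near the attractors, so they can
# be modeled as a handful of discrete types despite continuous transmission.
buckets = {a: sum(abs(v - a) < 0.1 for v in population) for a in attractors}
print(buckets)
```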
4.2) Despite the Unilateralist Curse and the Tragedy of the Commons, dyadic interaction models help us understand large-scale cooperation
Once we know these two failure modes, dyadic iterated (or reputation-sensitive) interaction is close enough; see the sketch below.
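As a hedged sketch of that claim, here is a dyadic iterated Prisoner's Dilemma with tit-for-tat, the simplest history-sensitive strategy; the payoff values are my own choice, following the standard T > R > P > S ordering:

```python
# Iterated dyadic Prisoner's Dilemma. Payoffs: T=5 > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_history):
    # Cooperate first, then copy the partner's last move.
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (9, 14): defection gets punished
```

The repeated dyad already exhibits the reciprocity mechanics that larger-scale models build on, which is why it is "close enough" once the two failure modes are kept in mind.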
5) From Monkeys to Apes to Humans to Transhumans to AIs, the ranges of achievable altruistic skill.
Possible modes of being altruistic. Graph like Bostrom's. Second and third order punishment and cooperation. Newcomb-like signaling problems within AI.
6) Unfit for the Future: the need for greater altruism.
We fail, and will keep failing, at Tragedy of the Commons problems unless we change our nature.
7) From Science, through Philosophy, towards Engineering: the future of studies of altruism.
Philosophy: Existential Risk prevention through global coordination and cooperation prior to technical maturity. Engineering Humans: creating enhancements and changing incentives. Engineering AIs: making them better and realer.
8) A different kind of Moral Landscape
Like Sam Harris's, except comparing not how much a society approaches The Good Life (Moral Landscape, p. 15), but how much it fosters altruistic behavior.
9) Conclusions
Not yet.
Bibliography (Only of the parts already written, obviously):
Boyd, R., Gintis, H., Bowles, S., & Richerson, P. J. (2003). The evolution of altruistic punishment. Proceedings of the National Academy of Sciences, 100(6), 3531-3535.
Cantor, M., & Whitehead, H. (2013). The interplay between social networks and culture: theoretically and among whales and dolphins. Philosophical Transactions of the Royal Society B: Biological Sciences, 368(1618).
Dawkins, R. (1999). The extended phenotype: The long reach of the gene. Oxford University Press, USA.
Dennett, D. C. (1996). Darwin's dangerous idea: Evolution and the meanings of life (No. 39). Simon & Schuster.
Dennett, D. C. (1992). The self as a center of narrative gravity. Self and consciousness: Multiple perspectives.
Galef Jr, B. G., & Laland, K. N. (2005). Social learning in animals: empirical studies and theoretical models. BioScience, 55(6), 489-499.
Hauert, C., Traulsen, A., Brandt, H., Nowak, M. A., & Sigmund, K. (2007). Via freedom to coercion: the emergence of costly punishment. Science, 316(5833), 1905-1907.
Henrich, J., Boyd, R., & Richerson, P. J. (2008). Five misunderstandings about cultural evolution. Human Nature, 19(2), 119-137.
Hofstadter, D. R. (2007). I am a Strange Loop. Basic Books.
Jablonka, E., & Lamb, M. J. (2007). Précis of evolution in four dimensions. Behavioral and Brain Sciences, 30(4), 353-364.
McElreath, R., & Boyd, R. (2007). Mathematical models of social evolution: A guide for the perplexed. University of Chicago Press.
Ottoni, E. B., de Resende, B. D., & Izar, P. (2005). Watching the best nutcrackers: what capuchin monkeys (Cebus apella) know about others’ tool-using skills. Animal cognition, 8(4), 215-219.
Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.
PhilGoetz (2010). Group selection update. Available at http://lesswrong.com/lw/300/group_selection_update/
Pinker, S. (2007). The stuff of thought: Language as a window into human nature. Viking Adult.
Rendell, L., & Whitehead, H. (2001). Culture in whales and dolphins. Behavioral and Brain Sciences, 24, 309-382.
Richerson, P. J., & Boyd, R. (2005). Not by genes alone. University of Chicago Press.
Tyler, T. (2011). Memetics: Memes and the Science of Cultural Evolution. Tim Tyler.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675-690.
Yudkowsky, E. (2008a). 37 ways that words can be wrong. Available at http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/
Does inclusive fitness theory miss part of the picture?
I originally titled this post "The Less Wrong wiki is wrong about group selection", because it seemed wildly overconfident about its assertion that group selection is nonsense. The wiki entry on "group selection" currently reads:
People who are unfamiliar with evolutionary theory sometimes propose that a feature of the organism is there for the good of the group - for example, that human religion is an adaptation to make human groups more cohesive, since religious groups outfight nonreligious groups.
Postulating group selection is guaranteed to make professional evolutionary biologists roll up their eyes and sigh.
However, it appears that the real problem is not that the wiki is overconfident (that's a problem, but it's only a symptom of the next problem) but that the traditional dogma on the viability of group selection is wrong, or at least overconfident. I make this assertion after stumbling across a paper by Martin Nowak, Corina Tarnita, and E. O. Wilson titled "The evolution of eusociality", which appeared in Nature in August of this year. A PDF of the paper can be found through Google Scholar, and there is a blog entry discussing it (bias alert: it is written by a postdoc working in Martin Nowak's Evolutionary Dynamics program at Harvard).
Here are some quotes (bolding is mine):
It has further turned out that selection forces exist in groups that diminish the advantage of close collateral kinship. They include the favouring of raised genetic variability by colony-level selection in the ants Pogonomyrmex occidentalis and Acromyrmex echinatior—due, at least in the latter, to disease resistance. The contribution of genetic diversity to disease resistance at the colony level has moreover been established definitively in honeybees. Countervailing forces also include variability in predisposition to worker sub-castes in Pogonomyrmex badius, which may sharpen division of labour and improve colony fitness—although that hypothesis is yet to be tested. Further, an increase in stability of nest temperature with genetic diversity has been found within nests of honeybees and Formica ants. Other selection forces working against the binding role of close pedigree kinship are the disruptive impact of nepotism within colonies, and the overall negative effects associated with inbreeding. Most of these countervailing forces act through group selection or, for eusocial insects in particular, through between-colony selection.
Yet, considering its position for four decades as the dominant paradigm in the theoretical study of eusociality, the production of inclusive fitness theory must be considered meagre. During the same period, in contrast, empirical research on eusocial organisms has flourished, revealing the rich details of caste, communication, colony life cycles, and other phenomena at both the individual- and colony-selection levels. In some cases social behaviour has been causally linked through all the levels of biological organization from molecule to ecosystem. Almost none of this progress has been stimulated or advanced by inclusive fitness theory, which has evolved into an abstract enterprise largely on its own
...
The question arises: if we have a theory that works for all cases (standard natural selection theory) and a theory that works only for a small subset of cases (inclusive fitness theory), and if for this subset the two theories lead to identical conditions, then why not stay with the general theory? The question is pressing, because inclusive fitness theory is provably correct only for a small (non-generic) subset of evolutionary models, but the intuition it provides is mistakenly embraced as generally correct.
Check out the paper for more details. Also look at the Supplementary Information if you have access to it. They perform an evolutionary game-theoretic analysis, which I am still reading.
Apparently this theory is not that new. In a 2007 paper, David Sloan Wilson and E. O. Wilson argue (I'm just pasting the abstract):
The current foundation of sociobiology is based upon the rejection of group selection in the 1960s and the acceptance thereafter of alternative theories to explain the evolution of cooperative and altruistic behaviors. These events need to be reconsidered in the light of subsequent research. Group selection has become both theoretically plausible and empirically well supported. Moreover, the so-called alternative theories include the logic of multilevel selection within their own frameworks. We review the history and conceptual basis of sociobiology to show why a new consensus regarding group selection is needed and how multilevel selection theory can provide a more solid foundation for sociobiology in the future.
From the other camp comes a fairly highly cited paper from 2008. Its authors concluded:
(a) the arguments about group selection are only continued by a limited number of theoreticians, on the basis of simplified models that can be difficult to apply to real organisms (see Error 3); (b) theoretical models which make testable predictions tend to be made with kin selection theory (Tables 1 and 2); (c) empirical biologists interested in social evolution measure the kin selection coefficient of relatedness rather than the corresponding group selection parameters (Queller & Goodnight, 1989). It is best to think of group selection as a potentially useful, albeit informal, way of conceptualizing some issues, rather than a general evolutionary approach in its own right.
I know (as of yet) very little biology, so I leave the conclusion for readers to discuss. Does anyone have detailed knowledge of the issues here?