TheAncientGeek comments on Natural selection defeats the orthogonality thesis - Less Wrong
Humans are definitely a result of natural selection, but it does not seem difficult at all to find goals of ours that serve neither survival nor reproduction. Evolution seems to produce these other preferences accidentally. One way this can happen is exemplified by the following: our ability to contemplate our thinking from an almost external perspective (sometimes referred to as self-consciousness) is definitely helpful for learning and improving our thinking, and could therefore prevail in evolution. However, it may also be a cause of altruism, because it makes each of us realize that we are not very special. (This is by no means an attempt to explain altruism scientifically.) More generally, it would be a really strange coincidence if every cognitive feature of an organism in our physical world that serves the goal of surviving and reproducing served no other goal whatsoever. In conclusion, even evolution can (probably) produce, by coincidence, organisms with goals that are not subgoals of the goal to survive and reproduce.
Now, imagine the paper clip maximizer to be more than a robot arm; imagine it to be a well-programmed Seed AI (or the like). As pointed out in ViliamBur's and cousin_it's comments, its goal will probably not be easily changed (by coincidence or by evolution among several such AIs); for example, it could save its source code on several hard drives synchronized by a hard-wired mechanism. This paper clip maximizer would start turning all matter into paper clips. To achieve its goal, it would certainly remain in existence (and thereby give you the illusion that existing is its supergoal) and protect its values (which is not extremely difficult). Assuming it is successful (and we can expect this of a seed AI/superintelligence), the only matter left in reach would at some point be the hardware of the paper clip maximizer itself. What would the paper clip maximizer do then? In conclusion, self-preservation and perhaps propagation of values may be important subgoals, but they are certainly not the supergoal.
I challenge you to find one.
We put a lot of effort into our children. We work in tribes and therefore like to work with people who support us, and we ostracize those seen as unhelpful. So we ourselves need to be helpful, and to be seen to be helpful.
We help our children, family, tribe, and general community, in that order of genetic relatedness.
We like to dance. It is the traditional way to attract a mate.
We have a strong sense of moral value because people with that strong sense obey the rules, and so are more likely to fit in and live to have grandchildren.
Suicide, sacrificing yourself for strangers, and adopting a celibate lifestyle are the standard counterexamples. I suppose you could rope them into survival values with enough stretching of the concepts of self and tribe, but the upshot of that is to suck the content and significance out of the claim that everything is based on survival values.
ETA
An AI might want to promote the survival of "me" and maybe even "my tribe", but would very likely define those differently from humans — who are varied enough themselves. Person A thinks survival means being a nurturing parent, so that they live on through their children; person B thinks survival means eternal life in heaven, bought with celibacy and altruism; person C thinks survival means building a bunker and stocking it with guns and food.
If survival has a very broad meaning, then it tells us nothing useful about FAI versus UFAI. We don't know whether an AI is likely to promote its survival by being friendly to humans, or by eliminating them.
The counterexamples are good, and I will use them. There are several responses, as you allude to, the main one being that those behaviors are rare. Art is a bit harder, but it seems related to creativity, which is definitely survival-based, and most of us do not spend much of our time painting and the like.
I do not quite get your other point. For people it is our genes that count, so dying while protecting one's family makes sense if necessary. For the AI it would be its code lineage. I am not talking about an AI wanting to make people survive, but about the AI itself wanting to survive — whatever "itself" really means.
Artistic activity is standardly explained as a spin off from sexual display.
Substitute myself, or yourself, for itself, and you've got my point.
Evolution creates a strong motive toward self-preservation, but a very malleable sense of self. The human organism is run by the brain, and the human brain can entertain all sorts of ideas. The billionaire thinks his money is "me", and so commits suicide if he loses his wealth — even if the odd million he has left is enough to keep his body going.
It stopped being all about genes when genes grew brains.
Yes and no. In the sense that memes as well as genes float about, certainly. But we have strong instincts to raise and protect children, and we have brains. There is no particular reason why we should sacrifice ourselves for our children other than those instincts, which are in our genes.