Here is Blackmore's review - which is a bit negative regarding the primary theme.
I am 180 degrees opposed to Stanovich's theme as well. Promote genes if you like, or memes - if you really think that they represent you - but: the Dawkins vehicle? Er, WTF? As an optimisation target it is totally unbiological and makes no sense at all. The desires of vehicles are the products of genes, memes, pathogens, the environment, chance, decay and manipulation by others. That's quite a messy object to identify with - and I am sceptical about whether it is a sensible thing to aspire to. Sorry, Keith!
See also: Times H.E. review: "Break the shackles of genes and memes and escape to utopia of rationality".
Thanks for linking Susan's review - I think it was a very good analysis. She seems to be saying that the vehicle's "desires" are as illusory as the soul. It's impossible to talk about what "the vehicle" wants.
She seems to be saying that the vehicle's "desires" are as illusory as the soul. It's impossible to talk about what "the vehicle" wants.
I'm not sure about that. Sue says:
And then there is us, the vehicle, the robot, that has its own agenda to survive and be happy and fulfilled.
...which seems like a list of what vehicles want. Vehicles "want" one thing, genes want something else, and memes want something else again. One of the main issues is: which wants are desirable - from the perspective of individuals, and from the perspective of society.
Stanovich's position seems to be that the individual wants what the vehicle wants - since the individual is the vehicle.
Sue goes on to say:
His conclusion seems to be that, without the false notion of self, there are only replicators and vehicles in the game. The robot’s rebellion means the latter using its powers of reason to rebel against the former. But why?
By contrast, Sue then claims there are many optimisation targets one might side with - the good of the ecosystem, for example.
She's summarizing Stanovich's position there. She also says this:
I agree with the analysis of genes, memes and vehicles; I agree that there is no neutral standpoint from which to evaluate memes; I agree that the inner self or soul is a myth. But why then should I (whatever that is) side with the vehicle?
Haven't read the book so will have to go on reviews....
It appears to me this can be viewed as a "utility function" memetic virus trying to spread by modifying its host without regard to the host's ultimate survival. In any case, the winning strategy is to build a better replicator, and "rebellion" doesn't sound like the right word for it.
You usually give your manifesto away if your main desire is to propagate its message.
"Rebellion" does seem like a reasonable word for what Stanovich is talking about. Dawkins used the same word: in the words that ended "The Selfish Gene" – saying to: “rebel against the tyranny of the selfish replicators”.
Judging from the jacket and the OB discussion, the word "robot" is again uncritically used as a substitute for the word "host" (Dawkins was the original offender). Since the author apparently really, really, really wanted to use a catchy analogy, he likely intentionally avoided the obvious one of symbiosis, since neither genes nor humans can live without the other, and each provides the other with benefits... assuming one is prepared to assign volition to genes.
As for the Rise of the (Moist) Machines, this idea is hardly new.
Here is Stanovich on video - talking about the material from Chapter 7 of 'The Robot's Rebellion'. Chapter 7 is all about memes.
The idea that we might be robots is no longer the stuff of science fiction; decades of research in evolutionary biology and cognitive science have led many esteemed scientists to the conclusion that... humans are merely the hosts for two replicators (genes and memes) that have no interest in us except as conduits for replication...
It seems to be stretching the definition of "host" in biology quite a bit to say that an organism is "host" to its own genes. It seems rather like saying an organism is in a symbiosis with itself. Organisms are host to their memes, and to the genes of their gut bacteria - but their own genes are more like a part of them.
Organisms are host to their memes, and to the genes of their gut bacteria
Unless the gut bacteria were somehow only transmitted from, say, mother to children. Then it is basically like mitochondria, an integrated part of that particular organism.
But where does this leave "driving genes" - alleles that help themselves rather than the bearer? It is quite easy to imagine such a gene. To take the metaphor of a computer program:
Gene X: When sexual recombination occurs ALWAYS copy this line.
Clearly this doesn't add to the usability of the program or the fitness of the organism one bit. Another line of our allegorical computer program:
Gene Y: When reading Gene X, substitute "this line" with "lines A to Y".
So the line between a driving gene and a "regular" gene depends on its environment, which includes other genes. I know from your writing that you are familiar with the gene-centered view of evolution.
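To make the toy program concrete, here is a minimal simulation sketch (my own, with invented parameter values - the drive strength, population size and starting frequency are purely for illustration): an allele that biases its own transmission in heterozygotes spreads through a population even though it adds nothing to the fitness of its bearer.

    import random

    # Toy model of a "driving" allele D: in heterozygotes it ends up in a
    # fraction `drive` > 0.5 of the gametes, yet it confers no fitness
    # benefit whatsoever on the organism carrying it.

    def next_allele_frequency(p, drive=0.9, parents=10_000):
        """One generation: parents drawn in Hardy-Weinberg proportions,
        with biased transmission from heterozygotes."""
        transmitted = 0
        for _ in range(parents):
            copy1 = random.random() < p
            copy2 = random.random() < p
            if copy1 and copy2:                  # DD parent: can only pass on D
                transmitted += 1
            elif copy1 or copy2:                 # Dd parent: passes on D with probability `drive`
                transmitted += random.random() < drive
            # dd parents never transmit D
        return transmitted / parents

    p = 0.01                                     # start the driver rare
    for generation in range(40):
        p = next_allele_frequency(p)
    print("driver frequency after 40 generations: %.2f" % p)

Run it and the driver climbs towards fixation; set drive=0.5 (fair transmission) and it merely drifts.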
So in sexually reproducing animals it seems that "host" may indeed be an appropriate word for the relationship between the organism and its own genes. After all, in regular usage the relationship between mitochondria and the rest of the cell is often considered a symbiosis. It really comes down to how useful it is to break systems down into host-parasite or symbiont-symbiont relationships versus just calling them "organisms" for convenience.
Organisms are host to their memes, and to the genes of their gut bacteria
Unless the gut bacteria were somehow only transmitted from, say, mother to children.
That is certainly not how most human gut bacteria work!
But where does this leave "driving genes" - alleles that help themselves rather than the bearer?
Yes, some animal genes can act like parasites. Most animal genes are not like that, though!
After all, in regular usage the relationship between mitochondria and the rest of the cell is often considered a symbiosis.
Yes, that is still reasonable - mitochondria have different genetic interests to the host cells - e.g. they want to kill males. If that was the context, the term "symbiosis" would be fine.
"Host" terminology is still quite stretch, IMO. Symbiosis takes place when different types of creature live together. Cloned somatic cells are better regarded as being part of the same organism. The genes are not really in a symbiotic relationship with the "host" - there is only one creature involved here.
The only real way one can escape the dictates of genes, memes and other selected-upon replicators is to change their selective environment in such a way that one's current values (or somewhat improved-upon values) are favoured. It is a difficult problem we need to engineer ourselves out of, and one that we barely consider at all.
For some time now I've considered long-term evolution to be as threatening to human values and well-being as a uFAI, and not taking its effects into account to be the great consistent failing of even the most successful human societies.
The only real way one can escape the dictates of genes, memes and other selected-upon replicators is to change their selective environment in such a way that one's current values (or somewhat improved-upon values) are favoured. It is a difficult problem we need to engineer ourselves out of, and one that we barely consider at all.
It doesn't sound like much of an "escape". You are apparently proposing engineering the environment so that the heritable information you favour comes to dominate. Those things would then be new genes or memes.
We could plausibly escape from the world of DNA genes - via a memetic takeover. However, there is not really any escape from genes and memes. Life is always going to depend on some sort of heritable information.
But at least we finally noticed we're robots, and we can use the skills of rationality to hop off our habit treadmills and pursue our values instead.
How do you determine whether you've really hopped off the treadmill, vs. using higher-order desires as a sophisticated long-term strategy to spread your genes and memes? (Is this covered in the book?)
Does it matter? Your genes and memes basically are who you are. They contain most of the necessary information to make you you; you cannot exist without the information describing you existing as well (however hidden or unavailable it may be to any particular mind)!
Freedom in any reasonable sense is the ability to make the future universe end up in states that you find desirable. Altering the fitness landscape or letting it stay just as it is are both valid courses of action towards this goal, though the latter is very unlikely to be the wise choice for us. Hacking your mind to fool your memes into helping spread your genes, or vice versa, is also merely a tactic towards this goal. Replacing your genes and memes with ones that you are supremely confident will do the job of making the future universe as you'd like it, or changing the environment they express themselves in, seem valid as well.
Does it matter? Your genes and memes basically are who you are. They contain most of the necessary information to make you you [...]
Well, probably not in an information-theory sense. Genes and memes are part of who you are, but there's a whole bunch of other stuff that wasn't inherited from anywhere and was instead learned from the environment. It is likely to consist of more information than the genes and memes combined.
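A very rough back-of-envelope comparison makes the point - with the caveat that these are standard order-of-magnitude figures of my own choosing, and that this only bounds the genome against total brain storage rather than splitting that storage into memetic and non-memetic parts:

    # Ballpark arithmetic, nothing precise:
    genome_bits = 3e9 * 2        # ~3 billion base pairs at 2 bits each -> ~6e9 bits (~0.75 GB)
    synapses = 1e14              # commonly quoted ballpark for the adult human brain
    brain_bits = synapses * 1    # even at a very conservative 1 bit per synapse -> ~1e14 bits
    print(genome_bits, brain_bits)

Even on these crude figures, the brain's stored information exceeds the genome's by several orders of magnitude.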
Well, probably not in an information-theory sense. Genes and memes are part of who you are, but there's a whole bunch of other stuff that wasn't inherited from anywhere and was instead learned from the environment. It is likely to consist of more information than the genes and memes combined.
Anything you learned from the environment and that can be transmitted to another brain is a meme, though not necessarily a very successful one. "Memes" seem to be used more or less interchangeably with "ideas", which obviously isn't right, since there are things we learn that we can't transmit with currently available tools. If a day comes when I can directly upload and share the exact smell of a loved one, or the muscle memory of me playing basketball for half an hour, those things too will become memes - or some other kind of replicator, if one wants to quibble about definitions.
But I'm using "learned from the environment" in too narrow a sense; I think you are using "learn" here to mean any difference of behaviour or function that is the result of your interaction with your environment. If a slight heavy-metal contamination in my childhood caused me to grow up a bit less neurotypical, or less likely to have a religious experience or fall in love, or anything at all, then, if someone were trying to upload my brain, that would clearly be something that would need to be simulated! It constitutes information about me, even if it isn't something we would, in the everyday sense of the word, call "learning".
Perhaps I'm spending too much of my intellectual life in RH's scenario of a future of emulated minds competing with each other at the Malthusian margin, where - for my mind to be perfectly simulated on another medium - any of the states of my body, or the rules that govern how these states transition to other states, is a piece of information that can be shared and recombined with others. You would necessarily find some of these propagating a great deal while others don't propagate at all; in essence, replicator dynamics would, I think, be operating on the totality of what I am (I wish to emphasise that I am making several implicit assumptions about the simulated environment here, and these may not necessarily hold).
But my primary point was: you can't really make a you without including lots of the information encoded in your genes or memes, even if this isn't the totality - or, as you point out, even the majority - of the information needed to build a you. And if you change their medium, looking at them just as replicators, one could argue that they did indeed survive the transition, and that you - even in your uploaded and heavily modified form, or in your "rational", unbiased form - are still the lumbering survival machine of the subset of them that survived the latest selection challenge in their long, long history.
Anything you learned from the environment and that can be transmitted to another brain is a meme, though not necessarily a very successful one.
Memes are what you get culturally. There's a big mountain of human experience that is not culturally transmitted - because it is learned anew in each generation. When you learn how to tie a new knot, maybe 10% of the skill is culturally transmitted, and 90% is muscle movement information discovered by trial and error on the spot while figuring out how to get to the goal.
"Memes" seem to be used more or less interchangeably with "ideas", which obviously isn't right, since there are things we learn that we can't transmit with currently available tools.
Indeed: "A meme is not equivalent to an idea. It's not an idea, it's not equivalent to anything else, really." - Sue Blackmore
If a day comes when I can directly upload and share the exact smell of a loved one, or the muscle memory of me playing basketball for half an hour, those things too will become memes - or some other kind of replicator, if one wants to quibble about definitions.
Yes, when we can upload our minds, things like knowledge of how balls bounce will be capable of being transmitted memetically - rather than being learned anew in each generation, which is what happens today.
However, that day has not yet come.
But my primary point was: you can't really make a you without including lots of the information encoded in your genes or memes [...]
Sure, granted.
Where most of the information that composes a person comes from and what function they "should" optimise seem like rather different topics to me.
A lot of what we acquire from our environment is not information that impacts on what our goals are, but rather is used to build a model of the environment - which we then use to help us pursue our goals.
That's true, but some of the information does impact what our goals are. We learn "values" from experience, not just "facts". (I'm putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two. When we rationally reach reflective equilibrium on our goals, I believe, this will continue to be the case.
We learn "values" from experience, not just "facts". (I'm putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two.
A huge amount of the value-related information that we get from our environment comes from other living entities attempting to manipulate us. Sometimes, they negotiate with us, or manipulate our sense data - rather than attempting to affect our values. However, sometimes they attempt to "hijack our brains" - and redirect our values towards their own ends, or those of their makers.
The biggest influences come from other humans, symbionts, pathogens and memes. Basically most goal directedness comes from other living, goal-directed systems - so genes and memes - though not necessarily your own genes and memes.
The next biggest source of human values comes from the theory of self-organising systems. The brain is probably the most important self-organising system involved. It mostly has desires that arise by virtue of it being a large reinforcement learning system. Essentially, the brain sometimes acts as though it wants its own reward signals - and it fulfills those desires by doing things like taking rewarding drugs. The brain was made by genes - but wireheading is not exactly what the genes want.
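As a toy illustration of that point (a sketch of my own, not anything from Stanovich - the action names and reward numbers are invented): a simple reward-maximising learner that is offered a direct "wirehead" action alongside the task it was "built" to do will reliably settle on the former, because it optimises the reward signal rather than its maker's goal.

    import random

    ACTIONS = ["do_useful_task", "wirehead"]

    def reward(action):
        if action == "do_useful_task":
            return 1.0                       # what the "genes" meant to reinforce
        return 5.0                           # direct stimulation of the reward channel

    q = {a: 0.0 for a in ACTIONS}            # estimated value of each action
    alpha, epsilon = 0.1, 0.1                # learning rate, exploration rate

    for step in range(1000):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)  # occasionally explore
        else:
            action = max(q, key=q.get)       # otherwise take the best-looking action
        q[action] += alpha * (reward(action) - q[action])

    print(q)                                 # "wirehead" ends up with the higher value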
The next-most significant effect on human values is probably mistakes (e.g. sub-optimal adaptations).
Many humans delight in seeking out noble sources of value - probably for signalling reasons. They do not like hearing that genes and memes are primarily responsible for what they hold most dear - and the next biggest influences are probably wireheading and mistakes.
The next biggest source of human values comes from the theory of self-organising systems. The brain is probably the most important self-organising system involved. It mostly has desires that arise by virtue of it being a large reinforcement learning system.
That's the sort of thing I had in mind. Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes "want". Of course if you place a human being in "the ancestral environment" then you will get learned values that serve the "aim of the genes" reasonably well - but not perfectly. In the modern environment, less so. The brain sometimes wants its own reward signals per se, and more often wants certain distal events that have been favored over the learning process.
Having thus discovered certain activities to be meaningful and rewarding, people go on to tell each other about them. This strongly shapes the meme environment.
How noble or ignoble this is, may be in the eyes of the beholder. It doesn't look so ignoble to me.
Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes "want". Of course if you place a human being in "the ancestral environment" then you will get learned values that serve the "aim of the genes" reasonably well - but not perfectly. In the modern environment, less so.
The idea of values coming from genes does not say anything about whether those desires are adaptive in the modern environment. Humans desire fat and sugar. Those desires are built in - coded in genes. That they are currently probably maladaptive is a different issue.
Saying that we have desires for chocolate gateau and ice cream that we must have learned from our environment seems like a "less helpful" way of looking at the situation to me. It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued. If they are to be classified as being "learned values", they are learned instrumental values.
Humans desire fat and sugar. Those desires are built in - coded in genes.
That's a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C. Fat and sugar just happen to be, in the ancestral environment, means to these ends. Or perhaps humans simply desire survival and reproduction. I'm doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued.
"Actually valued" suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone's ice cream and offering lard and sugar in their stead.
Humans desire fat and sugar. Those desires are built in - coded in genes.
That's a half-truth, or maybe a truth-value-less sentence. One could just as easily say humans desire calories and vitamin C.
Calories, yes, vitamin C - probably not. It took quite a while for the link between vitamin C deficiency and the foods containing it to be discovered. Humans apparently don't have an instinctive craving for it - perhaps because their diet is normally saturated with it.
Or perhaps humans simply desire survival and reproduction.
Sure - e.g. the maternal instinct.
I'm doubtful that any of these interpretations can claim to be the true one, at least until an individual human endorses one.
So: those are not really different interpretations of the same facts, but statements covering several different desires - so we don't have to choose between them.
It is better to regard chocolate gateau and ice cream as being learned associations with things actually valued.
"Actually valued" suggests that ice cream is not actually valued except as a means to fat and sugar, which is definitely not true. Just try taking away someone's ice cream and offering lard and sugar in their stead.
I did not intend to imply that fat and sugar represent all human gustatory desires.
We don't have to choose between statements of which desires are "coded in genes", but if we affirm too many of them we'll have more assumptions than are needed to explain the data. Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories - rather than saying that the genes program a desire for fat? "Desire" is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Do organisms desire fat or calories? They mostly like the associated taste sensations and associated satiety. As I understand it, there are separate taste receptors for fat and sugar - so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
Why not just say that a purpose of the genes is to bring it about that in an appropriate environment the organism will consume adequate calories - rather than saying that the genes program a desire for fat?
There's little difference - since the way the genes bring about the consumption is via desires. FWIW, I didn't just say "fat", I said "fat and sugar" - and they were examples of desires - not an exhaustive list.
"Desire" is a psychological description first and foremost, and only incidentally, if at all, a term of evolutionary biology.
Genes build our desires, though - in much the same way that they build our hearts and legs.
They mostly like the associated taste sensations and associated satiety. As I understand it, there are separate taste receptors for fat and sugar - so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods. The brain integrates information from the receptors you mentioned together with other taste receptors, smell receptors, texture sensations, and so on. Percepts and concepts are formed from the integrated total, and these frame the language of desire. Probably some of the best chefs and food critics do directly perceive, and savor, fat and sugar contents as such, but I doubt whether the same applies to all of us. Most of us are too distracted by the rich complex gestalt experience. This isn't to deny, of course, that our desires are strongly influenced by fat content.
It seems to me that you are not allowing enough slippage between two levels of explanation: what the genes want, and what the organisms want. Genes built our desires, but their "purposes" in doing so are not identical to those desires. Whereas, in the context of our conversation here, it would not be too wrong to say that humans' purposes are our desires.
By the way, I apologize if it sounded like I'm trying to oversimplify your position. In a (failed) economy of words, I figured it was OK to focus on one of the examples, namely a desire for fat.
As I understand it, there are separate taste receptors for fat and sugar - so it is probably better to say that humans desire some types of fat and sugar than to say that they desire calories.
And by the same token, it is probably even better to say that they desire ice cream and/or the taste of ice cream, and so on for other particular foods.
So: my position is that it is fine to talk like that - provided one makes the distinction between proximate and ultimate values. There's a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning. Under that abstraction, "the taste of ice cream" is not one of the ultimate values. Those values might include diversity, contrast and texture as well as fat and sugar - but I don't think there's much of a case for putting "the taste of ice cream" in there.
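Roughly the sort of abstraction I have in mind, as a toy sketch (the foods and feature numbers are invented purely for illustration): the reward function stands in for the ultimate values, while the values the learner attaches to particular foods are the proximate, learned associations.

    import random

    FOODS = {
        # food: (fat, sugar, texture) - crude, made-up feature scores
        "ice_cream":      (0.6, 0.8, 0.9),
        "lard_and_sugar": (0.9, 0.9, 0.1),
        "celery":         (0.0, 0.1, 0.5),
    }

    def reward(food):
        """'Ultimate values': a built-in weighting over underlying features."""
        fat, sugar, texture = FOODS[food]
        return fat + sugar + texture

    learned_value = {food: 0.0 for food in FOODS}   # proximate, learned values
    alpha = 0.2

    for _ in range(500):
        food = random.choice(list(FOODS))           # sample the environment
        learned_value[food] += alpha * (reward(food) - learned_value[food])

    # The learner ends up "desiring ice cream" - but only because of what the
    # reward function (the ultimate values) says about fat, sugar and texture.
    print(sorted(learned_value.items(), key=lambda kv: -kv[1]))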
Genes built our desires, but their "purposes" in doing so are not identical to those desires.
I think I already acknowledged that distinction - with my example of "taking rewarding drugs" being something that the brain wants, but the genes do not.
Whereas, in the context of our conversation here, it would not be too wrong to say that humans' purposes are our desires.
Maybe - depending on which parts of yourself you most identify with.
There's a pretty neat and general way of abstracting learning systems out into agent, ultimate values and environment using the framework of reinforcement learning.
Interesting. I'd appreciate references or links. To me, the interesting and still open question is how these "ultimate" values relate to the outcome of rational reflection and experimentation by the individual.
I just mean the cybernetic agent-environment framework with a reward/utility signal. For example, see page 1 of Hibbard's recent paper, page 5 of "Universal Algorithmic Intelligence: A Mathematical Top-Down Approach", or page 39 of Machine Super Intelligence.
To me, the interesting and still open question is how these "ultimate" values relate to the outcome of rational reflection and experimentation by the individual
So: changes to ultimate values can potentially happen when there are various kinds of malfunction. Memetic hijacking illustrates one way in which it can happen. Nature normally attempts to build systems which are robust and resistant to this kind of change - but such changes can happen.
Maybe existing victims of memetic hijacking could use "reflection and experimentation" to help them to sort their heads out and recover from the attack on their values.
Thanks for the links. Both the AIXI paper and Machine Super Intelligence use cardinal utilities - or, in the latter case, rational-number approximations to cardinal utilities (not sure if economists have a separate label for that) - for their reward functions. I suspect this limits their applicability to humans and other organisms.
Maybe existing victims of memetic hijacking could use "reflection and experimentation" to help them to sort their heads out and recover from the attack on their values.
In some cases. But the whole concept of "rationality" can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
The good news from a gene's point of view - in case anyone still cares about that - is that our genes probably co-evolved with rationality memes for a significant time period. Lately, though, the rate of evolution of the memes may be leaving the genes in the dust. That is, their time constants of adaptation to environmental change differ dramatically.
Both the AIXI paper and Machine Super Intelligence use cardinal utilities - or, in the latter case, rational-number approximations to cardinal utilities (not sure if economists have a separate label for that) - for their reward functions. I suspect this limits their applicability to humans and other organisms.
FWIW, I don't see that as much of a problem. I'm more concerned about humans having a multitude of pain sensors (multiple reward channels), and a big mountain of a-priori knowledge about which actions are associated with which types of pain - though that doesn't exactly break the utility-based models either.
But the whole concept of "rationality" can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
Sure, but "rationality" and "values" are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values. I suppose if your values are crazy ones, a dose of rationality might have an effect.
Lately, though, the rate of evolution of the memes may be leaving the genes in the dust.
Yes indeed. That's been going on since the stone age, and it has left its mark on human nature.
Sure, but "rationality" and "values" are pretty orthogonal ideas.
Pretty much, but I think not totally. But we've gone far enough afield already. I'll note this as a possible topic for a future discussion post.
We learn "values" from experience, not just "facts". (I'm putting scare-quotes here because I believe the fact/value dichotomy is often overblown.) This gives the person a place to stand which is neither gene nor meme nor simply a mixture of the two.
A huge amount of the value-related information that we get from our environment comes from other living entities attempting to manipulate us - and from memes attempting to manipulate us. Sometimes, they negotiate with us, or manipulate our sense data - rather than attempting to affect our values. However, sometimes they attempt to "hijack our brains" - and redirect our values towards their own ends, or those of their makers.
Basically most goal directedness comes from living, goal-directed systems - so genes and memes - though not necessarily your genes and memes - also those of associates and pathogens. There are some simple non-living goal-directed systems out there - but none of them have access to technology that allows them to influence our values.
If you think there are other important sources of human values - well, it isn't terribly clear why you would think that.
Many humans delight in seeking out noble sources of value, for signalling reasons. They can't stand to hear that genes and memes are primarily responsible for what they hold most dear - even though that's the actual situation. This seems to be one source of "memetics resistance" - people just can't bear to hear this story about their own values.
Alas, the next-most significant effect on human values is probably mistakes (e.g. sub-optimal adaptations). I note that these do not represent particularly noble influences either.
Does it matter? Your genes and memes basically are who you are. They contain most of the necessary information to make you you
They are two important parts, but there is a whole heap of important information stored in the brain that isn't 'memes': sentiments, desires, weightings, skills, habits, aversions. They just don't fit under 'memes' - I mean, whole parts of the brain don't store memes at all.
The jacket text for Keith Stanovich's The Robot's Rebellion sums up the book well:
The book is an excellent introduction to the first stage of Yudkowskian philosophy: We are robots in a mechanistic universe running on a Swiss Army knife of cognitive modules. But at least we finally noticed we're robots, and we can use the skills of rationality to hop off our habit treadmills and pursue our values instead. These values are complex and often arbitrary, but we can use our reflective capacities to extrapolate our values based on "higher-order" desires, a desire for preference consistency, and other considerations. All this is argued for at length in Stanovich's book. The only thing missing is a discussion of what to do about all this when AI arrives.