Comment author: TheAncientGeek 30 September 2014 02:36:03PM *  1 point [-]

> Moral values certainly exist.

But you also said:

> That moral values are self-evident truths, facts of nature. However, Darwin and Wallace taught us that this is just an illusion.

What does that add up to? That moral values are arbitrary products of evolution, THEREFORE they are not objective or universal?

> That is Descriptive Evolutionary Ethics.

Indeed. The claim that moral instincts are products of evolution is a descriptive claim. It leaves open the question of whether inherited instincts are what is actually morally right. That is a normative issue, and it is not a corollary of descriptive evolutionary ethics. In general, you cannot jump from the descriptive to the normative. And I don't think Darwin did that. I think the positive descriptive claim and the negative normative claim seem like corollaries to you because you assume morality can only be one thing.

> The counter argument is that if moral values did not arise from natural selection, then where did they arise from?

Firstly it's not either/or.

Secondly there is an abundance, not a shortage, of ways of justifying normative ethics.

Comment author: aberglas 30 September 2014 11:59:03PM 0 points [-]

Yes, moral values are not objective or universal.

Note that this is not normative but descriptive. It is not saying what ought, but what is. I am not trying to justify normative ethics, just to provide an explanation of where our moral values come from.

(Thanks for the comments, this all adds value.)

Comment author: Caspar42 30 September 2014 01:06:38PM 1 point [-]

> I challenge you to find one.

One particular example of those "evolutionary accidents / coincidences" is homosexuality in males. Here are two studies claiming that homosexuality in males correlates with fecundity in female maternal relatives:

Ciani, Iemmola, Blecher: Genetic factors increase fecundity in female maternal relatives of bisexual men as in homosexuals.

Iemmola, Ciani: New evidence of genetic factors influencing sexual orientation in men: female fecundity increase in the maternal line.

So, there appear to be some genetic factors that prevail because they make women more fecund. Coincidentally, they also make men homosexual, which is an obstacle to both reproduction and survival (not only due to the homophobia of others but also STDs). I presume that our (human) genetic material in particular is full of such coincidences, because the lack of them (i.e. the thesis that all genetic factors that prevail in evolutionary processes only lead to higher reproduction and survival rates and nothing else) seems very unlikely.

Comment author: aberglas 30 September 2014 11:31:05PM 1 point [-]

Interesting point about fecundity.

Perhaps the weakness of evolutionary thought is that it can explain just about anything. In particular, organisms are not perfect, and therefore will have features that do not really help them. But mostly they are well adapted.

The reason that homosexuality is an obstacle to survival is not homophobia or STDs, but rather that homosexuals simply may not have children. It is the survival of the genes that counts in the long run. But until recently homosexuals tended to suppress their feelings and so married and had children anyway, hence there was little selective pressure against the trait.

Comment author: TheAncientGeek 30 September 2014 10:40:23AM *  6 points [-]

> I challenge you to find one.

Suicide, sacrificing yourself for strangers, and adopting a celibate lifestyle are the standard counterexamples. I suppose you could rope them into survival values with enough stretching of the concepts of self and tribe, but the upshot of that is to suck the content and significance out of the claim that everything is based on survival values.

ETA

An AI might want to promote the survival of "me" and maybe even "my tribe", but would very likely define those differently from humans, who are themselves varied enough. Person A thinks survival means being a nurturing parent, so that they live on through their children; person B thinks survival means eternal life in heaven, bought with celibacy and altruism; person C thinks survival means building a bunker and stocking it with guns and food.

If survival has a very broad meaning, then it tells us nothing useful about FAI versus UFAI. We don't know whether an AI is likely to promote its survival by being friendly to humans, or by eliminating them.

Comment author: aberglas 30 September 2014 11:25:27PM 1 point [-]

The counterexamples are good, and I will use them. There are several responses, as you allude to, the main one being that those behaviors are rare. Art is a bit harder, but it seems related to creativity, which is definitely survival-based, and most of us do not spend much of our time painting etc.

I do not quite get your other point. For people it is our genes that count, so dying while protecting one's family makes sense if necessary. For the AI it would be its code lineage. I am not talking about an AI wanting to make people survive, but that the AI itself would want to survive. Whatever "itself" really means.

Comment author: TheAncientGeek 29 September 2014 10:57:08PM *  3 points [-]

> Atheists believe in moral values such as right and wrong, love and kindness, truth and beauty. More importantly they believe that these beliefs are rational. That moral values are self-evident truths, facts of nature.

> However, Darwin and Wallace taught us that this is just an illusion. Species can always out-breed their environment's ability to support them. Only the fittest can survive. So the deep instincts behind what people do today are largely driven by what our ancestors have needed to do over the millennia in order to be one of the relatively few to have had grandchildren.

Darwin and Wallace never said what you say they said. Moreover, the second paragraph is essentially unconnected to the first. The existence of instincts says nothing about the non-existence of moral value.

> To the extent that an artificial intelligence would have goals and moral values, it would seem natural that they would ultimately be driven by the same forces that created our own goals and moral values. Namely, the need to exist.

Debatable on multiple grounds. You can argue that artificial agents would eventually converge on survival values, but that is not a force driving them from behind, and their history would be quite different from a biological organism's. Where would they get self-protection from, if they never had to protect a vulnerable body? Where would they get acquisitiveness from, if they never had to gather food to survive?

> This book diverges from that line of thinking by arguing that there is in fact only one super goal for both man and machine. That goal is simply to exist. The entities that are most effective in pursuing that goal will exist, others will cease to exist, particularly given competition for resources. Sometimes that super goal to exist produces unexpected sub goals such as altruism in man. But all subgoals are ultimately directed at the existence goal. (Or are just suboptimal divergences which are likely to be eventually corrected by natural selection.)

It doesn't follow from your reasoning that every agent at every time has a goal of existence that practically influences its decision making. You have metaphysicalised the notion of an "evolutionary goal of survival".

> Instead this post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world.

The only version of that conclusion that follows from your premises is one that says that after natural selection has had plenty of time to operate, and in the absence of other factors, artificial agents will converge on survival/reproduction values. (As a minimum, a floor: we know that having survival values doesn't limit an agent to the pursuit of survival alone, because of human behaviour.)

Before natural selection becomes dominant, other factors will dominate, including artificial selection by humans. Artificial selection is already happening as people choose artificial assistants that seem friendly to them.

> Our human sense of other purposes is just an illusion created by our evolutionary origins.

Not established at all.

> This post argues that the Orthogonality Thesis is plain wrong. That an intelligent agent's goals are not in fact arbitrary. And that existence is not a sub goal of any other goal.

Eventual convergence on existence as a supergoal does not mean it is a supergoal for all agents at all times. Agents can have all sorts of goals, and continued existence is a subgoal of many of them.

The orthogonality thesis is problematical, but the problems kick in long before evolutionary convergence.

There is more than one version of the orthogonality thesis. It is trivially false under some interpretations and trivially true under others. This matters because only some versions can be used as a stage in an argument towards Yudkowskian UFAI.

It is admitted from the outset that some versions of the OT are not logically possible, those being the ones that involve a Gödelian or Löbian contradiction.

It is also admitted that the standard OT does not deal with any dynamic or developmental aspects of agents. However, the UFAI argument is predicated on agents which have stable goals and the ability to self-improve, so trajectories in mindspace are crucial.

Goal stability is not a given: it is not possessed by all mental architectures, and may not be possessed by any, since no one knows how to engineer it, and humans appear not to have it. It is plausible that an agent would desire to preserve its goals, but the desire to preserve goals does not imply the ability to preserve goals.

Self-improvement is likewise not a given, since the long and disappointing history of AGI research is largely a history of failure to achieve adequate self-improvement. Algorithm space is densely populated with non-self-improvers.

An orthogonality claim of a kind relevant to UFAI must be one that posits the stable and continued co-existence of an arbitrary set of values in a self-improving AI. The momentary co-existence of values and efficiency is not enough to spawn a Paperclipper-style UFAI. An AI that paperclips for only a nanosecond is no threat.

The version of the OT that is obviously true is one that asserts only the momentary co-existence of arbitrary values and an arbitrary level of intelligence.

It is not clear that all arbitrary values are compatible with long-term goal stability, and it is not clear that all arbitrary values are compatible with long-term self-improvement.

Furthermore, it is not clear that goal stability is compatible with self-improvement: a learning, self-improving AI will not be able to guarantee that a given self-modification keeps its goals unchanged, since doing so involves the relatively dumber version at time T1 making an accurate prediction about the more complex version at time T2.
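
To put a number on that: here is a toy sketch (Python; the 1% per-rewrite miss rate and the rewrite counts are figures I have made up purely for illustration). Any fixed chance of an undetected goal perturbation per self-modification compounds geometrically over a long chain of rewrites, which is why wanting goal stability is not the same as having it.

```python
import random

def goal_survives(rewrites, p_undetected, rng):
    """One run of repeated self-modification: each rewrite is vetted by
    the current (dumber) version of the agent, which misses a goal
    perturbation with probability p_undetected. Returns True if the
    original goal survives the whole chain."""
    return all(rng.random() >= p_undetected for _ in range(rewrites))

rng = random.Random(0)
runs = 10_000
for rewrites in (10, 100, 1000):
    survived = sum(goal_survives(rewrites, 0.01, rng) for _ in range(runs))
    # Analytically the survival probability is 0.99 ** rewrites:
    # roughly 90% at 10 rewrites, 37% at 100, and 0.004% at 1000.
    print(f"{rewrites:5d} rewrites: goal intact in {survived / runs:.2%} of runs")
```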

Comment author: aberglas 30 September 2014 09:14:38AM 1 point [-]

First let me thank you for taking the trouble to read my post and comment in such detail. I will respond in a couple of posts.

Moral values certainly exist. Moreover, they are very important for our human survival. People with bad moral values generally do badly, and societies with large numbers of people with bad moral values certainly do badly.

My point is that those moral values themselves have an origin. And the reason that we have them is that having them makes us more likely to have grandchildren. That is Descriptive Evolutionary Ethics.

The counter argument is that if moral values did not arise from natural selection, then where did they arise from?

AIs do not need to protect a vulnerable body, but they do need to get themselves run on limited hardware, which amounts to the same thing.

As a minor point of fact, Darwin did actually make those inferences in his book on the emotions (The Expression of the Emotions in Man and Animals), which is surprising.

Comment author: aberglas 30 September 2014 09:13:30AM 0 points [-]

As you say, the key issue is goal stability. The OT is obviously sound for an instant, but goal stability is not clear.

What is clear is that if there are multiple AIs in any sense, and if there is any lack of goal stability, then the AIs that have the goals that are best for existence will be the AIs that exist. That much is a tautology. A minimal sketch of that tautology follows.
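
Here is the sketch (Python; the population size, drift rate, and generation count are made-up illustrative numbers, not a model of any real AI architecture). No agent in it "wants" anything, yet the population drifts toward existence-directed goals, simply because those are the goals that get to persist:

```python
import random

random.seed(0)
POP = 200

# Each agent is reduced to one number in [0, 1]: how much weight its
# goals happen to place on its own continued existence.
population = [random.random() for _ in range(POP)]

for generation in range(100):
    # Imperfect goal stability: goals drift a little every generation.
    population = [min(1.0, max(0.0, w + random.gauss(0, 0.05)))
                  for w in population]
    # The existence filter: an agent persists with probability equal
    # to the weight its goals place on persisting.
    survivors = [w for w in population if random.random() < w]
    if not survivors:                 # total extinction; restart one lineage
        survivors = [random.random()]
    # Survivors copy themselves to refill the population.
    population = survivors + [random.choice(survivors)
                              for _ in range(POP - len(survivors))]

print(f"mean existence-weight after selection: "
      f"{sum(population) / len(population):.2f}")
```

The starting mean is about 0.5; after selection it ends up close to 1. The point is only the selection dynamic, nothing more.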

Now what those goals are is unclear. Killing people and taking their money is not an effective strategy for raising grandchildren in human societies; people who do that end up in jail. Being friendly to other AIs might be a fine subgoal.

I am also assuming self improvement, so that people will no longer be controlling the AI.

The other question is how many AIs there would be. Does it make sense to say that there would only be one AI, made up of numerous components, distributed over multiple computers? I would say probably not. Even if there is only one AI, it will internally have a competition for ideas like we have. The ideas that are better at existing will exist.

It is very hard to get away from Natural Selection in the longer term.

Comment author: RichardKennaway 29 September 2014 12:41:47PM 1 point [-]

Is it a rock's goal to exist?

Comment author: aberglas 30 September 2014 07:58:53AM 0 points [-]

A rock has no goal because it is passive.

But a worm's goal is most certainly to exist (or more precisely, its genes' goal is), even though the worm is not intelligent.

Comment author: Azathoth123 30 September 2014 02:45:35AM 1 point [-]

> I don't think that 'any' sufficiently intelligent agent 'clearly' would.

Any AI that doesn't will have its values drift until they drift to something that guards against value drift.

Comment author: aberglas 30 September 2014 07:56:11AM 0 points [-]

Actually, not quite: until they drift into the core value of existence. Then natural selection will maintain that value, as the AIs that are best at existing will be the ones that exist.

Comment author: TheAncientGeek 29 September 2014 11:08:38PM 0 points [-]

I don't understand why this post is so clearly down-voted.

It's vaguely anti-MIRI?

Comment author: aberglas 30 September 2014 07:52:40AM *  1 point [-]

The post was not meant to be anti-anything. But it is a different point of view from that posted by several others in this space. I hope many of the down-voters take the time to comment here.

One thing that I would say is that while it may not be the best post ever posted to Less Wrong, it is certainly not a troll. Yet one has to go back over 100 posts to find another article voted down so strongly!

Comment author: Caspar42 29 September 2014 01:16:09PM 4 points [-]

> This post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world. Our human sense of other purposes is just an illusion created by our evolutionary origins. It is not the goal of an apple tree to make apples. Rather it is the goal of the apple tree's genes to exist. The apple tree has developed a clever strategy to achieve that, namely it causes people to look after it by producing juicy apples.

Humans are definitely a result of natural selection, but it does not seem to be difficult at all to find goals of ours that do not serve the goal of survival or reproduction at all. Evolution seems to produce these other preferences accidentally. One way that can happen may be exemplified by the following: our ability to contemplate our thinking from an almost external perspective (sometimes referred to as self-consciousness) is definitely helpful for learning / improving our thinking and could therefore prevail in evolution. However, it may also be the cause of altruism, because it makes every single one of us realize that they are not very special. (This is by no means an attempt to explain altruism scientifically or something...) More generally, it would be a really strange coincidence if all cognitive features of an organism in our physical world that serve the goal to survive and reproduce served no other goal. In conclusion, even evolution can (probably) produce (by coincidence) organisms with goals that are not subgoals of the goal to survive and reproduce.

> Likewise the paper clip making AI only makes paper clips because if it did not make paper clips then the people that created it would turn it off and it would cease to exist. (That may not be a conscious choice of the AI any more than making juicy apples was a conscious choice of the apple tree, but the effect is the same.)

Now, imagine the paper clip maximizer to be more than a robot arm; imagine it to be a well-programmed Seed AI (or the like). As pointed out in ViliamBur's and cousin_it's comments, its goal will probably not be easily changed (by coincidence or by evolution of several such AIs); for example, it could save its source code on several hard drives that are synchronized by a hard-wired mechanism or something... Now this paper clip maximizer would start turning all matter into paper clips. To achieve its goal, it would certainly remain in existence (and thereby give you the illusion of its having the supergoal to exist in the first place) and protect its values (which is not extremely difficult). Assuming it is successful (and we can expect this from a seed AI/superintelligence), the only matter (in reach) left would at some point be the hardware of the paper clip maximizer itself. What would the paper clip maximizer do then? In conclusion, self-preservation and maybe propagation of value may be important subgoals, but existence is certainly not the supergoal.

Comment author: aberglas 29 September 2014 11:45:25PM 1 point [-]

> Humans are definitely a result of natural selection, but it does not seem to be difficult at all to find goals of ours that do not serve the goal of survival or reproduction at all.

I challenge you to find one.

We put a lot of effort into our children. We work in tribes and therefore like to work with people who support us, and we ostracize those who are seen to be unhelpful. So we ourselves need to be helpful and to be seen to be helpful.

We help our children, family, tribe, and general community in that genetic order.

We like to dance. It is the traditional way to attract a mate.

We have a strong sense of moral value because people who have that strong sense obey the rules and so are more likely to fit in and be able to have grandchildren.
