timtyler comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong 15 May 2012 10:23AM

Comment author: Stuart_Armstrong 17 May 2012 12:18:10PM 2 points

> How about morality as an attractor

Why do we have any reason to think this is the case?

Comment author: timtyler 17 May 2012 11:22:49PM 0 points

So: game theory - reciprocity, kin selection/tag-based cooperation, and virtue signalling.

As J. Storrs Hall puts it in "Intelligence Is Good":

> There is but one good, namely, knowledge; and but one evil, namely ignorance.
>
> —Socrates, from Diogenes Laertius's Life of Socrates
>
> As a matter of practical fact, criminality is strongly and negatively correlated with IQ in humans. The popular image of the tuxedo-wearing, suave jet-setter jewel thief to the contrary notwithstanding, almost all career criminals are of poor means as well as of lesser intelligence.

Defecting typically ostracises you - and doesn't make much sense in a smart society which can track reputations.
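
A minimal toy model of that reputation dynamic (a sketch with made-up payoffs and agent names, not anything from Storrs Hall): agents repeatedly play a prisoner's dilemma, everyone shuns partners known to have cheated a cooperator, and defection quickly stops paying.

```python
import random

COOPERATE, DEFECT = "C", "D"
# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(population, rounds=2000, seed=0):
    rng = random.Random(seed)
    score = {name: 0 for name in population}
    in_good_standing = {name: True for name in population}
    for _ in range(rounds):
        a, b = rng.sample(list(population), 2)
        # Agents follow their own strategy, but defect on partners
        # with bad reputations (ostracism).
        move_a = population[a] if in_good_standing[b] else DEFECT
        move_b = population[b] if in_good_standing[a] else DEFECT
        payoff_a, payoff_b = PAYOFF[(move_a, move_b)]
        score[a] += payoff_a
        score[b] += payoff_b
        # Cheating a cooperator is observed and remembered by everyone.
        if move_a == DEFECT and move_b == COOPERATE:
            in_good_standing[a] = False
        if move_b == DEFECT and move_a == COOPERATE:
            in_good_standing[b] = False
    return score

population = {f"coop{i}": COOPERATE for i in range(8)}
population.update({f"cheat{i}": DEFECT for i in range(2)})
print(play(population))  # the two habitual defectors end up with far lower scores
```

Each defector wins at most one exploitation payoff before spending the rest of the run ostracised on the mutual-defection payoff, while the cooperators keep earning the mutual-cooperation payoff with each other.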

We already know about universal instrumental values. They illustrate what moral attractors look like.

I discussed this issue some more in Handicapped Superintelligence.

Comment author: JoshuaZ 17 May 2012 11:28:58PM 1 point

Doesn't most of this amount to morality as an attractor for evolved social species?

Comment author: timtyler 17 May 2012 11:37:00PM 0 points

Evolution creates social species, though. Machines will be social too - their memetic relatedness might well be very high, an enormous win for kin selection based on shared memes. Of course they are evolving already, and will continue to evolve - cultural evolution is still evolution.

Comment author: JoshuaZ 17 May 2012 11:42:17PM 1 point

So this presumes that the machines in question will evolve in social settings? That's a pretty big assumption. Moreover, empirically speaking, having in-group loyalty of that sort isn't nearly enough to ensure that you are friendly with nearby entities - look at how many hunter-gatherer groups are in a state of almost constant war with their neighbors. The attitude towards other sentients (such as humans) isn't going to be great even if there is some approximate moral attractor of that sort.

Comment author: timtyler 17 May 2012 11:50:00PM 0 points

> So this presumes that the machines in question will evolve in social settings? That's a pretty big assumption.

I'm not sure what you mean. It presumes that there will be more than one machine. The 'lumpiness' of the universe is likely to produce natural boundaries. It seems to be a small assumption.

> Moreover, empirically speaking, having in-group loyalty of that sort isn't nearly enough to ensure that you are friendly with nearby entities - look at how many hunter-gatherer groups are in a state of almost constant war with their neighbors.

Sure, but cultural evolution produces cooperation on a massive scale.

> The attitude towards other sentients (such as humans) isn't going to be great even if there is some approximate moral attractor of that sort.

Right - so: high morality seems to be reasonably compatible with some ant-squishing. The point here is about moral attractors - not the fate of humans.

Comment author: JoshuaZ 17 May 2012 11:59:02PM 4 points

> I'm not sure what you mean. It presumes that there will be more than one machine. The 'lumpiness' of the universe is likely to produce natural boundaries. It seems to be a small assumption.

It is a major assumption. To take the most obvious issue: if someone is starting up an attempted AGI on a single computer (say it is the only machine that has enough power), then this won't happen. It also won't happen if there isn't a large variety of machines actually engaging in generational copying. That means that if, say, one starts with ten slightly different machines and the population doesn't grow in distinct entities, this isn't going to do what you want. And if the entities lack a distinction between genotype and phenotype (as computer programs, unlike biological entities, actually do), then this is also off, because one will not be subject to a Darwinian system but rather a pseudo-Lamarckian one, which doesn't act the same way.

> The point here is about moral attractors - not the fate of humans.

So your point seems to come down purely to the fact that evolved entities will do this, and a vague hope that people will deliberately put entities into this situation. This is both unhelpful for the fundamental philosophical claim (which doesn't care about what is empirically likely to happen) and not practically helpful, since there's no good reason to think that any machine entities will actually be put into such a situation.

Comment author: timtyler 18 May 2012 12:22:47AM 1 point

> > I'm not sure what you mean. It presumes that there will be more than one machine. The 'lumpiness' of the universe is likely to produce natural boundaries. It seems to be a small assumption.

> It is a major assumption. To take the most obvious issue: if someone is starting up an attempted AGI on a single computer (say it is the only machine that has enough power), then this won't happen.

A multi-planetary living system is best described as multiple agents, IMHO. The unity you suggest would represent relatedness approaching 1 - the ultimate win in terms of altruism and cooperation.
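
As a worked illustration of why relatedness approaching 1 matters (the numbers here are purely illustrative): Hamilton's rule says an altruistic act is favoured when r * b > c, so as r approaches 1, any act whose benefit exceeds its cost is selected for.

```python
# Hamilton's rule: altruism is favoured when r * b > c, where r is
# relatedness, b the benefit to the recipient, c the cost to the actor.
def altruism_favoured(r: float, b: float, c: float) -> bool:
    return r * b > c

print(altruism_favoured(r=0.5, b=4.0, c=3.0))  # False: siblings, costly act
print(altruism_favoured(r=1.0, b=4.0, c=3.0))  # True: clones or shared memes, r ~ 1
```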

> It also won't happen if there isn't a large variety of machines actually engaging in generational copying.

Without copying there's no life. Copying is unavoidable. Variation is practically inevitable too - for instance, through local adaptation.

> And if the entities lack a distinction between genotype and phenotype (as computer programs, unlike biological entities, actually do), then this is also off, because one will not be subject to a Darwinian system but rather a pseudo-Lamarckian one, which doesn't act the same way.

Computer programs do have a split between heritable and non-heritable elements - which is the basic idea here, or it should be.
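
A toy sketch of the distinction at issue (my framing, with arbitrary numbers): in a Darwinian step only the heritable genotype mutates and selection sees the phenotype built from it, while in a pseudo-Lamarckian step changes acquired by the phenotype during its "lifetime" are written straight back into heredity.

```python
import random

rng = random.Random(0)

def develop(genotype):
    # The phenotype is built from the genotype, plus developmental noise.
    return [gene + rng.gauss(0, 0.1) for gene in genotype]

def darwinian_step(genotype):
    # Only heredity mutates; selection then acts on the resulting phenotype.
    child_genotype = [gene + rng.gauss(0, 0.5) for gene in genotype]
    return child_genotype, develop(child_genotype)

def lamarckian_step(genotype):
    # Traits acquired during life are copied back into heredity, so the
    # genotype/phenotype split collapses.
    phenotype = develop(genotype)
    improved = [trait + 1.0 for trait in phenotype]  # "learning" during life
    return improved, improved
```

Which of these two update rules machine entities would actually follow is exactly the point under dispute here.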

Darwin believed in cultural evolution: "The survival or preservation of certain favoured words in the struggle for existence is natural selection" - so surely cultural evolution is Darwinian.

Most of the game theory that underlies cooperation applies to both cultural and organic evolution. In particular, reciprocity, kin selection, and reputations apply in both domains.

> So your point seems to come down purely to the fact that evolved entities will do this, and a vague hope that people will deliberately do so. This is both unhelpful for the fundamental philosophical claim (which doesn't care about what is empirically likely to happen) and not practically helpful, since there's no good reason to think that any machine entities will actually be put into such a situation.

I didn't follow that bit - though I can see that it sounds a bit negative.

Evolution has led to social, technological, intellectual and moral progress. It's conservative to expect these trends to continue.