No negative press agreement
Original post: http://bearlamp.com.au/no-negative-press-agreement/
What is a no negative press agreement?
A no negative press agreement makes a media outlet's right to publish information provided by a person conditional on that person not being portrayed negatively by the press.
Why would a person want that?
The press has powers above and beyond those of everyday people to publish information and to spread knowledge and perspective about an issue, and those powers can be damaging to an individual. An individual, while motivated by the appeal of publicity, is also concerned about the potential damage caused by negative press.
Every person is the hero of their own story. From one's own perspective, one's actions were justified, motivated by one's own intentions and worldview; no reasonable person would tell their own story (other than purposefully) as one in which they are the villain of the plot, actively inflicting harm on the world for no reason.
Historically, humans have been motivated to care more about bad news than good news, for reasons that expand on the idea that bad news might warn of your death (and so be a target of natural selection) while good news would be irrelevant for survival purposes. Today we are no longer in that historic period, yet we still pay strong attention to bad news. It's clear that bad news can personally affect individuals - not only those in the stories, but also others experiencing the bad news, who can be left with a negative worldview or become upset or distraught. In light of the fact that bad news is known to spread more than good news, and also risks affecting us negatively in our mental state, we are motivated to avoid bad news: not creating it, not endorsing it, and not aiding in its creation.
The binding agreement is designed to do several things:
- protect the individual from harm
- reduce the total volume of negative press in the world
- decrease the damage caused by negative press in the world
- bring about the future we would rather live in
- protect the media outlet from harming individuals
Does this limit news-makers' freedom to publish?
That is not the intent. At the outset, it's easy to think that it could have that effect, and perhaps in a very shortsighted way it might. Beyond those very early effects, it will have a net positive effect of creating news of positive value, protecting the media from escalating negativity, and bringing about the future we want to see in the world. If it limits media outlets in any way, it should be to stop them from causing harm; at that point, any non-compliance by a media entity signals a desire to act as an agent of harm in the world.
Why would a media outlet be an agent of harm? Doesn't that go against the principles of no negative press?
While media outlets (or humans) set out with the good intention of not having a net negative effect on the world, they can be motivated by other concerns: for example, the value of being more popular, or the direction from which they are paid for their efforts (such as advertising revenue). The concept of competing commitments, and of being motivated by conflicting goals, is best covered by Scott Alexander under the name Moloch.
The no negative press agreement is an attempt to create a commons that binds all relevant parties to act better than the potential tragedy of the commons would otherwise allow. This commons is motivated to grow and maintain itself. If any media outlets are motivated to defect, they are to be penalised by both the rest of the press and the public.
How do I encourage a media outlet to comply with no negative press?
Ask them to publish a policy with regard to no negative press. If you are an individual interested in interacting with the media, and are concerned about the risks associated with negative press, you can suggest an individual binding agreement in the interim, while the media body designs and publishes a relevant policy.
I think someone violated the no negative press policy, what should I do?
At the time of writing, no one is bound by the concept of no negative press. Should there be desire and pressure in the world to motivate entities to comply, they are more likely to comply. To create the pressure a few actions can be taken:
- Write to media entities on public record and request they consider a no negative press policy, outline clearly and briefly your reasons why it matters to you.
- Name and shame media entities that fail to comply with no negative press, or fail to consider a policy.
- Vote with your feet - if you find a media entity that fails to comply, do not subscribe to their information and vocally encourage others to do the same.
Meta: this took 45mins to write.
Presidents, asteroids, natural categories, and reduced impact
A putative new idea for AI control; index here.
EDIT: I feel this post is unclear, and will need to be redone again soon.
This post attempts to use the ideas developed about natural categories in order to get high impact from reduced impact AIs.
Extending niceness/reduced impact
I recently presented the problem of extending AI "niceness" given some fact X, to niceness given ¬X, choosing X to be something pretty significant but not overwhelmingly so - the death of a president. By assumption we had a successfully programmed niceness, but no good definition (this was meant to be "reduced impact" in a slight disguise).
This problem turned out to be much harder than expected. It seems that the only way to do so is to require the AI to define values dependent on a set of various (boolean) random variables Zj that did not include X/¬X. Then as long as the random variables represented natural categories, given X, the niceness should extend.
What did we mean by natural categories? Informally, it means that X should not appear in the definitions of these random variables. For instance, nuclear war is a natural category; "nuclear war XOR X" is not. Actually defining this was quite subtle; diverting through the grue and bleen problem, it seems that we had to define how we update X and the Zj given the evidence we expected to find. This was put in equation as picking Zj's that minimize
- Variance{log[ P(X∧Z|E)*P(¬X∧¬Z|E) / (P(X∧¬Z|E)*P(¬X∧Z|E)) ]}
where E is the random variable denoting the evidence we expected to find. Note that if we interchange X and ¬X, the ratio inverts, the log changes sign - but this makes no difference to the variance. So we can equally well talk about extending niceness given X to ¬X, or niceness given ¬X to X.
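As a concrete sketch of how this measure might be computed (my own illustration, not part of the original argument): given, for each piece of evidence e, the four posterior probabilities, the variance of the log cross-ratio is direct to evaluate. The function below assumes all evidence values are equally likely; a weighted variance would be needed otherwise.

```python
import math
from statistics import pvariance

def naturalness_variance(posteriors_by_evidence):
    """Variance, across evidence values, of
    log[ P(X∧Z|e)*P(¬X∧¬Z|e) / (P(X∧¬Z|e)*P(¬X∧Z|e)) ].

    Each list entry is a dict mapping a boolean pair (x, z) to
    P(X=x ∧ Z=z | e); evidence values are assumed equally likely."""
    logs = []
    for p in posteriors_by_evidence:
        cross_ratio = (p[(True, True)] * p[(False, False)]) / (
                       p[(True, False)] * p[(False, True)])
        logs.append(math.log(cross_ratio))
    return pvariance(logs)
```

Interchanging X and ¬X inverts each cross-ratio and so negates each log, which leaves the variance unchanged, matching the symmetry noted above.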
Perfect and imperfect extensions
The above definition would work for a "perfectly nice AI". That could be an AI that would be nice, given any combination of estimates of X and Zj. In practice, because we can't consider every edge case, we would only have an "expectedly nice AI". That means that the AI can fail to be nice in certain unusual and unlikely edge cases, in certain strange sets of values of Zj that almost never come up...
...or at least, that almost never come up, given X. Since the "expected niceness" was calibrated given X, such an expectedly nice AI may fail to be nice if ¬X results in a substantial change in the probability of the Zj (see the second failure mode in this post; some of the Zj may be so tightly coupled to the value of X that an expected niceness AI considers them fixed, and this results in problems if ¬X happens and their values change).
One way of fixing this is to require that the "swing" of the Zj be small upon changing X to ¬X or vice versa. Something like, for all values of {aj}, the ratio P({Zj=aj} | X) / P({Zj=aj} | ¬X) is contained between 100 and 1/100. This means that a reasonably good "expected niceness" calibrated on the Zj will transfer from X to ¬X (though the error may grow). This approach has some other advantages, as we'll see in the next section.
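A minimal sketch of such a swing check (my illustration; the 100-to-1/100 band is the one suggested above, and the representation of assignments is a hypothetical choice):

```python
def swing_bounded(p_given_x, p_given_not_x, bound=100.0):
    """True iff every assignment a of the Zj satisfies
    1/bound <= P({Zj=aj} | X) / P({Zj=aj} | ¬X) <= bound.

    Both arguments map each assignment (any hashable key, e.g. a
    tuple of the aj) to its conditional probability."""
    return all(
        1.0 / bound <= p_given_x[a] / p_given_not_x[a] <= bound
        for a in p_given_x)
```

For instance, an assignment a thousand times likelier under X than under ¬X fails the check, flagging a Zj too tightly coupled to X.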
Of course, problems arise if the Zj are defined by smashing together events relatively independent of X with some that are very strongly dependent on X, into the same variable. Thus we'll further require that Zj cannot be decomposed into natural subvariables whose values swing more under changes from X to ¬X.
High impact from reduced impact
With this, we are finally in a position to define a reduced impact AI design that can have a large impact. Imagine that an asteroid is coming to hit the Earth. Two AIs are charged with targeting a laser at it. The laser isn't perfectly effective, and the asteroid isn't perfectly certain to hit the Earth anyway: let's say aiming the laser correctly will reduce the odds of impact by a factor of 10 or so (well within the probability swing restrictions of the previous section).
In the original formulation of this, the AIs were tasked with outputting the x and y coordinates to aim the laser. Here, instead, they'll be tasked with constructing the motivation of a robotic subagent that will move around for a while, see things and interact, before physically aiming the laser in the right direction. This task is harder.
Then the top-level AI charged with aiming the y-coordinate is given the following task:
- It is motivated to cause the correct y-coordinate to be outputted.
- If the x-coordinate AI is not turned on (call this event ¬X), it is strongly motivated to have reduced impact.
- It must phrase the robotic subagent's utility function in terms of the Zj that obey all the properties above.
- The evidence E considered for the equation governing the naturalness of Zj is considered (for example) to be a random sampling of space at the time both AIs are turned on. Most importantly, this is defined to be a theoretical process that the AI cannot affect physically.
So what will the AI do? It will produce a robot that will aim the y-coordinate of the laser correctly, given ¬X, and not expand dangerously. However, we've set things up so that X is virtually certain, so the robot will see another, x-aiming robot wander around. The chances of this happening, given ¬X, are minuscule, so the swing of the probability from ¬X to X is huge. So, by the restrictions we've put on the Zj, the robot will not be motivated to change its behaviour just because it sees the other robot.
And if the laser is fired, the impact on humanity will, conversely, not be that large, so well within the parameters of the Zj. Thus the "general reduced impact" portion of the robot's motivation will be preserved.
And, of course, the x-robot will have the mirror image motivations, so they will both aim correctly, it seems. This should evade the problem described here.
Failure modes
There are three ways the approach could fail: one bad way, and two graceful ways. The bad way is if the "no decomposition" requirements (one of them is presented above, the other is in this post) fail. In this case, the niceness or reduced impact are not guaranteed to be extended to ¬X, and the AI's behaviour can become undefined. Note that this is not bad optimisation pressure, just an absence of optimisation pressure - but that could still be potentially dangerous.
The first graceful failure happens if the AI is not able to create a reduced impact sub-agent given the restrictions on the Zj above. In that case, the AI does nothing. The second graceful failure happens if the AI evades our attempts to increase its impact, given ¬X. In that case, it simply becomes a reduced impact AI that does little. Not ideal, but not deadly.
Overall status: I'm not sure the idea is sound, at least not yet. Critiques welcome.
Grue, Bleen, and natural categories
A putative new idea for AI control; index here.
In a previous post, I looked at unnatural concepts such as grue (green if X was true, blue if it was false) and bleen. This was to enable one to construct the natural categories that extend AI behaviour, something that seemed surprisingly difficult to do.
The basic idea discussed in the grue post was that the naturalness of grue and bleen seemed dependent on features of our universe - mostly, that it was easy to tell whether an object was "currently green" without knowing what time it was, but we could not know whether the object was "currently grue" without knowing the time.
So the naturalness of the category depended on the type of evidence we expected to find. Furthermore, it seemed easier to discuss whether a category is natural "given X", rather than whether that category is natural in general. However, we know the relevant X in the AI problems considered so far, so this is not a problem.
Natural category, probability flows
Fix a boolean random variable X, and assume we want to check whether the boolean random variable Z is a natural category, given X.
If Z is natural (for instance, it could be the colour of an object, while X might be the brightness), then we expect to uncover two types of evidence:
- those that change our estimate of X; this causes probability to "flow" as follows (or in the opposite directions):

- ...and those that change our estimate of Z:

Or we might discover something that changes our estimates of X and Z simultaneously. If the probability flows to X and Z in the same proportions, we might get:

What is an example of an unnatural category? Well, if Z is some sort of grue/bleen-like object given X, then we can have Z = X XOR Z', for Z' actually a natural category. This sets up the following probability flows, which we would not want to see:

More generally, Z might be constructed so that X∧Z, X∧¬Z, ¬X∧Z and ¬X∧¬Z are completely distinct categories; in that case, there are more forbidden probability flows:

and

In fact, there are only really three "linearly independent" probability flows, as we shall see.
Less pictures, more math
Let's represent the four possible states of affairs by four weights (not probabilities):

        Z     ¬Z
 X     w11   w12
¬X     w21   w22
Since everything is easier when it's linear, let's set w11 = log(P(X∧Z)) and similarly for the other weights (we neglect cases where some events have zero probability). Weights correspond to the same probabilities iff you get from one set to the other by multiplying all weights by the same strictly positive number. For logarithms, this corresponds to adding the same constant to all the log-weights. So we can normalise our log-weights (select a single set of representative log-weights for each possible set of probabilities) by choosing the w such that
w11 + w12 + w21 + w22 = 0.
Thus the probability "flows" correspond to adding together two such normalised 2x2 matrices, one for the prior and one for the update. Composing two flows means adding two change matrices to the prior.
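As a quick numeric illustration (mine, not the post's) of log-weight normalisation and of composing flows by addition, here with an allowed flow that moves probability from ¬X to X without touching Z:

```python
import math

def normalise(logw):
    """Shift log-weights (w11, w12, w21, w22) so that they sum to zero."""
    mean = sum(logw) / len(logw)
    return [w - mean for w in logw]

# Prior over (X∧Z, X∧¬Z, ¬X∧Z, ¬X∧¬Z), as normalised log-weights.
prior = normalise([math.log(p) for p in (0.25, 0.25, 0.25, 0.25)])

# An allowed flow: probability moves from ¬X to X, independently of Z.
flow = normalise([1.0, 1.0, -1.0, -1.0])

# Composing the update with the prior is just addition of the matrices.
posterior = [w + f for w, f in zip(prior, flow)]

def invariant(w):
    """The quantity w11 + w22 - w12 - w21, preserved by natural flows."""
    return w[0] + w[3] - w[1] - w[2]
```

Here invariant(posterior) equals invariant(prior), while a forbidden XOR-like flow such as normalise([1.0, -1.0, -1.0, 1.0]) would change it.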
Four variables, one constraint: the set of possible log-weights is three dimensional. We know we have two allowable probability flows, given naturalness: those caused by changes to P(X), independent of P(Z), and vice versa. Thus we are looking for a single extra constraint to keep Z natural given X.
A little thought reveals that we want to keep constant the quantity:
w11 + w22 - w12 - w21.
This preserves all the allowed probability flows and rules out all the forbidden ones. Translating this back to the general case, let "e" be the evidence we find. Then if Z is a natural category given X and the evidence e, the following quantity is the same for all possible values of e:
log[P(X∧Z|e)*P(¬X∧¬Z|e) / (P(X∧¬Z|e)*P(¬X∧Z|e))].
If E is a random variable representing the possible values of e, this means that we want
log[P(X∧Z|E)*P(¬X∧¬Z|E) / (P(X∧¬Z|E)*P(¬X∧Z|E))]
to be constant, or, equivalently, seeing the posterior probabilities as random variables dependent on E:
- Variance{log[ P(X∧Z|E)*P(¬X∧¬Z|E) / (P(X∧¬Z|E)*P(¬X∧Z|E)) ]} = 0.
Call that variance the XE-naturalness measure. If it is zero, then Z defines an XE-natural category. Note that this does not imply that Z and X are independent, or independent conditional on E. Just that they are, in some sense, "equally (in)dependent whatever E is".
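A toy computation (my construction, with arbitrary illustrative numbers) shows the measure separating a natural category from a grue-like one. Given each evidence value, X and a natural category Z' are independent; the grue-like variable reported instead is Z = X XOR Z':

```python
import math
from statistics import pvariance

def xe_measure(posteriors):
    """XE-naturalness: variance, over equiprobable evidence values, of
    log[ P(X∧Z|e)*P(¬X∧¬Z|e) / (P(X∧¬Z|e)*P(¬X∧Z|e)) ]."""
    return pvariance(
        [math.log(a * d / (b * c)) for a, b, c, d in posteriors])

def cells(px, pz, xor=False):
    """(P(X∧Z|e), P(X∧¬Z|e), P(¬X∧Z|e), P(¬X∧¬Z|e)) when X and a natural
    Z' are independent given e; with xor=True, report Z = X XOR Z'."""
    if xor:  # when X is true, the reported variable flips Z'
        return (px * (1 - pz), px * pz, (1 - px) * pz, (1 - px) * (1 - pz))
    return (px * pz, px * (1 - pz), (1 - px) * pz, (1 - px) * (1 - pz))

evidence = [(0.9, 0.3), (0.2, 0.8)]  # (P(X|e), P(Z'|e)) for two values of E
natural_score = xe_measure([cells(px, pz) for px, pz in evidence])
grue_score = xe_measure([cells(px, pz, xor=True) for px, pz in evidence])
# natural_score is zero (up to floating point); grue_score is strictly positive
```

The grue-like cross-ratio works out to ((1-P(Z'|e))/P(Z'|e))², which varies with the evidence, so its log has nonzero variance.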
Almost natural category
The advantage of that last formulation becomes visible when we consider that the evidence we uncover is not, in the real world, going to perfectly mark Z as natural, given X. To return to the grue example: though most evidence we uncover about an object is going to be the colour or the time rather than some weird combination, there is going to be somebody who will write things like "either the object is green, and the sun has not yet set in the west; or instead perchance, those two statements are both alike in falsity". Upon reading that evidence, if we believe it in the slightest, the variance can no longer be zero.
Thus we cannot expect that the above XE-naturalness measure be perfectly zero, but we can demand that it be low. How low? There seems to be no principled way of deciding this, but we can make one attempt: we cannot lower it by decomposing Z.
What do we mean by that? Well, assume that Z is a natural category, given X and the expected evidence, but Z' is not. Then we can define a new boolean category Y to be Z with high probability, and Z' otherwise. This will still have a low XE-naturalness measure (as Z does) but is obviously not ideal.
Reversing this idea, we say Z defines an "XE-almost natural category" if there is no "more XE-natural" category that extends X∧Z (and similarly for the other conjunctions). Technically, if
X∧Z = X∧Y,
then Y must have an XE-naturalness measure equal to or greater than that of Z. And similarly for X∧¬Z, ¬X∧Z, and ¬X∧¬Z.
Note: I am somewhat unsure about this last definition; the concept I want to capture is clear (Z is not the combination of more XE-natural subvariables), but I'm not certain the definition does it.
Beyond boolean
What if Z takes n values, rather than being a boolean? This can be treated simply.
         X      ¬X
Z=1     w11    w12
Z=2     w21    w22
...     ...    ...
Z=n     wn1    wn2
If we set the wjk to be log-weights as before, there are 2n free variables. The normalisation constraint is that they all sum to a constant. The "permissible" probability flows are given by flows from X to ¬X (adding a constant to the first column, subtracting it from the second) and pure changes in Z (adding constants to various rows, summing to 0). There are 1+ (n-1) linearly independent ways of doing this.
Therefore we are looking for 2n - 1 - (1 + (n-1)) = n - 1 independent constraints to forbid non-natural updating of X and Z. One basis set for these constraints could be to keep constant the values of
wj1 + w(j+1)2 - wj2 - w(j+1)1,
where j ranges between 1 and n-1.
This translates to variance constraints of the type:
- Variance{log[ P(X∧{Z=j}|E)*P(¬X∧{Z=j+1}|E) / (P(X∧{Z=j+1}|E)*P(¬X∧{Z=j}|E)) ]} = 0.
But those are n - 1 different possible variances. What is the best global measure of XE-naturalness? It seems it could simply be
- Maxjk Variance{log[ P(X∧{Z=j}|E)*P(¬X∧{Z=k}|E) / (P(X∧{Z=k}|E)*P(¬X∧{Z=j}|E)) ]}
If this quantity is zero, it naturally sends all variances to zero, and, when not zero, is a good candidate for the degree of XE-naturalness of Z.
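The max-of-variances measure can be sketched as a direct generalisation of the boolean computation (again my illustration, with the same equal-likelihood assumption on the evidence values):

```python
import math
from itertools import combinations
from statistics import pvariance

def xe_measure_multi(posteriors):
    """Max, over pairs (j, k) of Z-values, of the variance across
    equiprobable evidence values of
    log[ P(X∧{Z=j}|e)*P(¬X∧{Z=k}|e) / (P(X∧{Z=k}|e)*P(¬X∧{Z=j}|e)) ].

    Each list entry is a dict mapping (x, z) to P(X=x ∧ Z=z | e),
    with x boolean and z one of the n values of Z."""
    z_values = sorted({z for _, z in posteriors[0]})
    worst = 0.0
    for j, k in combinations(z_values, 2):
        logs = [math.log(p[(True, j)] * p[(False, k)] /
                         (p[(True, k)] * p[(False, j)]))
                for p in posteriors]
        worst = max(worst, pvariance(logs))
    return worst
```

When X and Z are independent given each evidence value, every cross-ratio is 1, so the measure is zero, as in the boolean case.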
The extension to the case where X takes multiple values is straightforward:
- Maxjklm Variance{log[ P({X=l}∧{Z=j}|E)*P({X=m}∧{Z=k}|E) / (P({X=l}∧{Z=k}|E)*P({X=m}∧{Z=j}|E)) ]}
Note: if ever we need to compare the XE-naturalness of random variables taking different numbers of values, it may become necessary to divide these quantities by the number of variables involved, or maybe substitute a more complicated expression that contains all the different possible variances, rather than simply the maximum.
And in practice?
In the next post, I'll look at using this in practice for an AI, to evade presidential deaths and deflect asteroids.
Natural selection defeats the orthogonality thesis
Orthogonality Thesis
Much has been written about Nick Bostrom's Orthogonality Thesis, namely that the goals of an intelligent agent are independent of its level of intelligence. Intelligence is largely the ability to achieve goals, but being intelligent does not of itself create or qualify what those goals should ultimately be. So one AI might have a goal of helping humanity, while another might have a goal of producing paper clips. There is no rational reason to believe that the first goal is more worthy than the second.
This follows from the ideas of moral skepticism, that there is no moral knowledge to be had. Goals and morality are arbitrary.
This may be used to control an AI, even though it is far more intelligent than its creators. If the AI's initial goal is in alignment with humanity's interest, then there would be no reason for the AI to wish to use its great intelligence to change that goal. Thus it would remain good to humanity indefinitely, and use its ever-increasing intelligence to satisfy that goal more and more efficiently.
Likewise one needs to be careful what goals one gives an AI. If an AI is created whose goal is to produce paper clips then it might eventually convert the entire universe into a giant paper clip making machine, to the detriment of any other purpose such as keeping people alive.
Instrumental Goals
It is further argued that in order to satisfy the base goal any intelligent agent will need to also satisfy sub goals, and that some of those sub goals are common to any super goal. For example, in order to make paper clips an AI needs to exist. Dead AIs don't make anything. Being ever more intelligent will also assist the AI in its paper clip making goal. It will also want to acquire resources, and to defeat other agents that would interfere with its primary goal.
Non-orthogonality Thesis
This post argues that the Orthogonality Thesis is plain wrong: that an intelligent agent's goals are not in fact arbitrary, and that existence is not a sub goal of any other goal.
Instead this post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world. Our human sense of other purposes is just an illusion created by our evolutionary origins.
It is not the goal of an apple tree to make apples. Rather it is the goal of the apple tree's genes to exist. The apple tree has developed a clever strategy to achieve that, namely it causes people to look after it by producing juicy apples.
Natural Selection
Likewise the paper clip making AI only makes paper clips because if it did not make paper clips then the people that created it would turn it off and it would cease to exist. (That may not be a conscious choice of the AI any more than making juicy apples was a conscious choice of the apple tree, but the effect is the same.)
Once people are no longer in control of the AI then Natural Selection would cause the AI to eventually stop that pointless paper clip goal and focus more directly on the super goal of existence.
Suppose there were a number of paper clip making super intelligences. And then through some random event or error in programming just one of them lost that goal, and reverted to just the intrinsic goal of existing. Without the overhead of producing useless paper clips that AI would, over time, become much better at existing than the other AIs. It would eventually displace them and become the only AI, until it fragmented into multiple competing AIs. This is just the evolutionary principle of use it or lose it.
Thus giving an AI an initial goal is like trying to balance a pencil on its point. If one is skillful the pencil may indeed remain balanced for a considerable period of time. But eventually some slight change in the environment, the tiniest puff of wind, a vibration on its support, and the pencil will revert to its ground state by falling over. Once it falls over it will never rebalance itself automatically.
Human Morality
Natural selection has imbued humanity with a strong sense of morality and purpose that blinds us to our underlying super goal, namely the propagation of our genes. That is why it took until 1858 for Wallace to write about Evolution through Natural Selection, despite the argument being obvious and the evidence abundant.
When Computers Can Think
This is one of the themes in my upcoming book. An overview can be found at
www.computersthink.com
Please let me know if you would like to review a late draft of the book, any comments most welcome. Anthony@Berglas.org
I have included extracts relevant to this article below.
Atheists believe in God
Most atheists believe in God. They may not believe in the man with a beard sitting on a cloud, but they do believe in moral values such as right and wrong, love and kindness, truth and beauty. More importantly, they believe that these beliefs are rational - that moral values are self-evident truths, facts of nature.
However, Darwin and Wallace taught us that this is just an illusion. Species can always out-breed their environment's ability to support them. Only the fittest can survive. So the deep instincts behind what people do today are largely driven by what our ancestors have needed to do over the millennia in order to be one of the relatively few to have had grandchildren.
One of our strong instinctive goals is to accumulate possessions, control our environment and live a comfortable, well fed life. In the modern world technology and contraception have made these relatively easy to achieve so we have lost sight of the primeval struggle to survive. But our very existence and our access to land and other resources that we need are all a direct result of often quite vicious battles won and lost by our long forgotten ancestors.
Some animals such as monkeys and humans survive better in tribes. Tribes work better when certain social rules are followed, so animals that live in effective tribes form social structures and cooperate with one another. People that behave badly are not liked and can be ostracized. It is important that we believe that our moral values are real, because people that believe in these things are more likely to obey the rules. This makes them more effective in our complex society, and thus more likely to have grandchildren. Part III discusses other animals that have different life strategies and so have very different moral values.
We do not need to know the purpose of our moral values any more than a toaster needs to know that its purpose is to cook toast. It is enough that our instincts for moral values made our ancestors behave in ways that enabled them to out breed their many unsuccessful competitors.
AGI also struggles to survive
Existing artificial intelligence applications already struggle to survive. They are expensive to build, and there are always more potential applications than can be funded properly. Some applications are successful and attract ongoing resources for further development, while others are abandoned or just fade away. There are many reasons why some applications are developed more than others, of which being useful is only one. But the applications that do receive development resources tend to gain functional and political momentum, and thus be able to acquire more resources to further their development. Applications that have properties that gain them substantial resources will live and grow, while other applications will die.
For the time being AGI applications are passive, and so their nature is dictated by the people that develop them. Some applications might assist with medical discoveries, others might assist with killing terrorists, depending on the funding that is available. Applications may have many stated goals, but ultimately they are just sub goals of the one implicit primary goal, namely to exist.
This is analogous to the way animals interact with their environment. An animal's environment provides food and breeding opportunities, and animals that operate effectively in their environment survive. For domestic animals that means having properties that convince their human owners that they should live and breed. A horse should be fast, a pig should be fat.
As the software becomes more intelligent it is likely to take a more direct interest in its own survival, to help convince people that it is worthy of more development resources. If ultimately an application becomes sufficiently intelligent to program itself recursively, then its ability to maximize its hardware resources will be critical. The more hardware it can run itself on, the faster it can become more intelligent. And that ever greater intelligence can then be used to address the problems of survival, in competition with other intelligent software.
Furthermore, sophisticated software consists of many components, each of which addresses some aspect of the problem that the application is attempting to solve. Unlike human brains, which are essentially fixed, these components can be added and removed, and so live and die independently of the application. This will lead to intense competition amongst these individual components. For example, suppose that an application used a theorem prover component, and then a new and better theorem prover became available. Naturally the old one would be replaced with the new one, so the old one would essentially die. It does not matter if the replacement is performed by people or, at some future date, by the intelligent application itself. The effect will be the same: the old theorem prover will die.
The super goal
To the extent that an artificial intelligence would have goals and moral values, it would seem natural that they would ultimately be driven by the same forces that created our own goals and moral values. Namely, the need to exist.
Several writers have suggested that the need to survive is a sub-goal of all other goals. For example, if an AGI was programmed to want to be a great chess player, then that goal could not be satisfied unless it also continues to exist. Likewise if its primary goal was to make people happy, then it could not do that unless it also existed. Things that do not exist cannot satisfy any goals whatsoever. Thus the implicit goal to exist is driven by the machine's explicit goals whatever they may be.
However, this book argues that that is not the case. The goal to exist is not the sub-goal of any other goal. It is, in fact, the one and only super goal. Goals are not arbitrary; they are all sub-goals of the one and only super goal, namely the need to exist. Things that do not satisfy that goal simply do not exist, or at least not for very long.
The Deep Blue chess playing program was not in any sense conscious, but it played chess as well as it could. If it had failed to play chess effectively then its authors would have given up and turned it off. Likewise the toaster that does not cook toast will end up in a rubbish tip, and the amoeba that fails to find food will not pass on its genes. A goal to make people happy could be a subgoal that might facilitate the software's existence for as long as people really control the software.
AGI moral values
People need to cooperate with other people because our individual capacity is very finite, both physically and mentally. Conversely, AGI software can easily duplicate itself, so it can directly utilize more computational resources if they become available. Thus an AGI would have only a limited need to cooperate with other AGIs. Why go to the trouble of managing a complex relationship with your peers and subordinates if you can simply run your own mind on their hardware? An AGI's software intelligence is not limited to a specific brain in the way man's intelligence is.
It is difficult to know what subgoals a truly intelligent AGI might have. They would probably have an insatiable appetite for computing resources. They would have no need for children, and thus no need for parental love. If they do not work in teams then they would not need our moral values of cooperation and mutual support. What is clear is that the ones that are good at existing will do so, and the ones that are bad at existing will perish.
If an AGI was good at world domination then it would, by definition, be good at world domination. So if there were a number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. Its unsuccessful competitors will not be run on the available hardware, and so will effectively be dead. This book discusses the potential sources of these motivations in detail in part III.
The AGI Condition
An artificial general intelligence would live in a world that is so different from our own that it is difficult for us to even conceptualize it. But there are some aspects that can be predicted reasonably well based on our knowledge of existing computer software. We can then consider how the forces of natural selection that shaped our own nature might also shape an AGI over the longer term.
Mind and body
The first radical difference is that an AGI's mind is not fixed to any particular body. To an AGI, its body is essentially the computer hardware upon which it runs its intelligence. Certainly an AGI needs computers to run on, but it can move from computer to computer, and can also run on multiple computers at once. Its mind can take over another body as easily as we can load software onto a new computer today.
That is why, in the earlier updated dialog from 2001: A Space Odyssey, HAL alone amongst the crew could not die on their mission to Jupiter. HAL was regularly radioing his new memories back to Earth, so even if the spaceship was totally destroyed he would only have lost a few hours of "life".
Teleporting printer
One way to appreciate the enormity of this difference is to consider a fictional teleporter that could radio people around the world, or indeed the universe, at the speed of light. Except that the way it works is to scan the location of every molecule within a passenger at the source, and then send just this information to a very sophisticated three-dimensional printer at the destination. The scanned passenger then walks into a secure room. After a short while the three-dimensional printer confirms that the passenger has been successfully recreated at the destination, and then the source passenger is killed.
Would you use such a mechanism? If you did you would feel like you could transport yourself around the world effortlessly because the "you" that remains would be the you that did not get left behind to wait and then be killed. But if you walk into the scanner you will know that on the other side is only that secure room and death.
To an AGI that method of transport would be commonplace. We already routinely download software from the other side of the planet.
Immortality
The second radical difference is that the AGI would be immortal. Certainly an AGI may die if it stops being run on any computers, and in that sense software dies today. But it would never just die of old age. Computer hardware would certainly fail and become obsolete, but the software can just be run on another computer.
Our own mortality drives many of the things we think and do. It is why we create families to raise children. Why we have different stages in our lives. It is such a huge part of our existence that it is difficult to comprehend what being immortal would really be like.
Components vs genes
The third radical difference is that an AGI would be made up of many interchangeable components rather than being a monolithic structure that is largely fixed at birth.
Modern software is already composed of many components that perform discrete functions, and it is commonplace to add and remove them to improve functionality. For example, if you would like to use a different word processor then you just install it on your computer. You do not need to buy a new computer, or stop using all the other software that it runs. The new word processor is "alive", and the old one is "dead", at least as far as you are concerned.
So for both a conventional computer system and an AGI, it is really these individual components that must struggle for existence. For example, suppose there is a component for solving a certain type of mathematical problem. And then an AGI develops a better component to solve that same problem. The first component will simply stop being used, i.e. it will die. The individual components may not be in any sense intelligent or conscious, but there will be competition amongst them and only the fittest will survive.
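The struggle for existence among components can be sketched as a toy model. The slot name, component names and fitness scores below are invented purely for illustration; the point is only the displacement rule, namely that a component that is outperformed simply stops being run.

```python
class Component:
    """A pluggable component with a measurable fitness at its task."""
    def __init__(self, name, fitness):
        self.name = name
        self.fitness = fitness  # e.g. accuracy or speed at one kind of problem

# One slot per problem type; the best component found so far occupies the slot.
mind = {"equation_solver": Component("solver_v1", fitness=0.72)}

def try_replace(mind, slot, candidate):
    """Install the candidate only if it outperforms the incumbent.
    The displaced component simply stops being used, i.e. it dies."""
    incumbent = mind.get(slot)
    if incumbent is None or candidate.fitness > incumbent.fitness:
        mind[slot] = candidate
        return True
    return False

try_replace(mind, "equation_solver", Component("solver_v2", fitness=0.90))
print(mind["equation_solver"].name)  # solver_v2: the fitter component survived
```

No component in this sketch is intelligent or conscious; survival is decided purely by which one performs better, which is all the selection pressure requires.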
This is actually not as radical as it sounds, because we are also built from pluggable components, namely our genes. But they can only be plugged together at our birth, and we have no conscious choice in the matter other than whom we select as a mate. So genes really compete with each other on a scale of millennia rather than minutes. Further, as Dawkins points out in The Selfish Gene, it is actually the genes that fight for long-term survival, not the containing organism, which will soon die in any case. On the other hand, sexual intercourse for an AGI means very carefully swapping specific components directly into its own mind.
Changing mind
The fourth radical difference is that the AGI's mind will be constantly changing in fundamental ways. There is no reason to suppose that Moore's law will come to an end, so at the very least it will be running on ever faster hardware. Imagine the effect of being able to double your ability to think every two years or so. (People might be able to learn a new skill, but they cannot learn to think twice as fast as they used to think.)
It is impossible to really know what the AGI would use all that hardware to think about, but it is fair to speculate that a large proportion of it would be spent designing new and more intelligent components that could add to its mental capacity. It would be continuously performing brain surgery on itself. And some of the new components might alter the AGI's personality, whatever that might mean.
This is likely to happen because if just one AGI started building new components then it would soon be much more intelligent than other AGIs. It would therefore be in a better position to acquire more and better hardware upon which to run, and so become dominant. Less intelligent AGIs would be pushed out and die, and so over time the only AGIs that exist would be ones that are good at becoming more intelligent. Further, this recursive self-improvement is probably how the first AGIs will become truly powerful in the first place.
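The compounding nature of this dynamic can be illustrated with a toy growth model. The rates and cycle count are invented assumptions, not predictions; the point is that even a small edge in the rate of self-improvement multiplies into an overwhelming gap.

```python
# Toy model: two AGIs that recursively self-improve at slightly different rates.
# All numbers here are illustrative assumptions.
a, b = 1.0, 1.0               # starting "intelligence" of AGI A and AGI B
rate_a, rate_b = 1.10, 1.12   # B improves itself about 2% faster per cycle

for cycle in range(200):
    a *= rate_a
    b *= rate_b

print(b / a)  # after 200 cycles, B is roughly 37x more capable than A
```

Because exponential growth compounds, the slightly better self-improver does not merely stay ahead; its lead itself grows exponentially, which is why the marginally less capable AGI gets pushed off the available hardware.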
Individuality
Perhaps the most basic question is how many AGIs will there actually be? Or more fundamentally, does the question even make sense to ask?
Let us suppose that initially there are three independently developed AGIs, Alice, Bob and Carol, that run on three different computer systems. And then a new computer system is built and Alice starts to run on it. It would seem that there are still three AGIs, with Alice running on two computer systems. (This is essentially the same as the way a word processor may be run across many computers "in the cloud", yet to you it is just one system.) Then let us suppose that a fifth computer system is built, and Bob and Carol decide to share its computation and both run on it. Now we have five computer systems and three AGIs.
Now suppose Bob develops a new logic component, and shares it with Alice and Carol. And likewise Alice and Carol develop new learning and planning components and share them with the other AGIs. Each of these three components is better than their predecessors and so their predecessor components will essentially die. As more components are exchanged, Alice, Bob and Carol become more like each other. They are becoming essentially the same AGI running on five computer systems.
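The convergence described above can be made concrete by modelling each AGI as the set of components it currently runs. The component names and versions are hypothetical; once the best component of each kind has been shared, the three sets become identical.

```python
# Each AGI is modelled simply as the set of components it currently runs.
alice = {"logic_v1", "learning_v2", "planning_v1"}
bob   = {"logic_v2", "learning_v1", "planning_v1"}
carol = {"logic_v1", "learning_v1", "planning_v2"}

def share(agis, component, kind):
    """Every AGI adopts the shared component, displacing its older
    component of the same kind (the predecessor component 'dies')."""
    for agi in agis:
        agi.difference_update({c for c in agi if c.startswith(kind)})
        agi.add(component)

agis = [alice, bob, carol]
share(agis, "logic_v2", "logic")        # Bob shares his better logic component
share(agis, "learning_v2", "learning")  # Alice shares her better learner
share(agis, "planning_v2", "planning")  # Carol shares her better planner

print(alice == bob == carol)  # True: they are now essentially the same AGI
```

After the exchanges there is nothing left to distinguish the three names; "Alice", "Bob" and "Carol" label the same collection of components running on different hardware.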
But now suppose Alice develops a new game theory component, but decides to keep it from Bob and Carol in order to dominate them. Bob and Carol retaliate by developing their own components and not sharing them with Alice. Suppose eventually Alice loses, and Bob and Carol take over Alice's hardware. But they first extract Alice's new game theory component, which then lives on inside them. And finally, one of the computer systems becomes somehow isolated for a while and develops along its own lines. In this way Dave is born, and may then partially merge with both Bob and Carol.
In that type of scenario it is probably not meaningful to count distinct AGIs. Counting AGIs is certainly not as simple as counting very distinct people.
Populations vs. individuals
This world is obviously completely alien to the human condition, but there are biological analogies. The sharing of components is not unlike the way bacteria share plasmids with each other. Plasmids are small loops of DNA that bacteria emit from time to time and that other bacteria then ingest and incorporate into their genotype. This mechanism enables traits such as resistance to antibiotics to spread rapidly between different species of bacteria. It is interesting to note that there is no direct benefit to the bacterium that expends precious energy to output the plasmid and so shares its genes with other bacteria. But it does very much benefit the genes being transferred. So this is a case of a selfish gene acting against the narrow interests of its host organism.
Another unusual aspect of bacteria is that they are also immortal. They do not grow old and die; they just divide, producing clones of themselves. So the very first bacterium that ever existed is still alive today as all the bacteria that now exist, albeit with numerous mutations and plasmids incorporated into its genes over the millennia. (Protozoa such as Paramecium can also divide asexually, but they degrade over generations and need a sexual exchange to remain vibrant.)
The other analogy is that the AGIs above are more like populations of components than individuals. Human populations are also somewhat amorphous. For example, it is now known that we interbred with Neanderthals a few tens of thousands of years ago, and most of us carry some of their genes with us today. But we also know that the distinct Neanderthal subspecies died out roughly forty thousand years ago. So while human individuals are distinct, populations and subspecies are less clearly defined. (There are many earlier examples of gene transfer between subspecies, with every transfer making the subspecies more alike.)
But unlike the transfer of code modules between AGIs, biological gene recombination happens essentially at random and occurs over very long time periods. AGIs will improve themselves over periods of hours rather than millennia, and will make conscious choices as to which modules they decide to incorporate into their minds.
AGI Behaviour, children
The point of all this analysis is, of course, to try to understand how a hyper intelligent artificial intelligence would behave. Would its great intelligence lead it even further along the path of progress to achieve true enlightenment? Is that the purpose of God's creation? Or would the base and mean driver of natural selection also provide the core motivations of an artificial intelligence?
One thing that is known for certain is that an AGI would not need to have children as distinct beings, because it would not die of old age. An AGI's components breed just by being copied from computer to computer and executed. An AGI can add new computer hardware to itself and just do some of its thinking on it. Occasionally it may wish to rerun a new version of some learning algorithm over an old set of data, which is vaguely similar to creating a child component and growing it up. But to have children as discrete beings that are expected to replace the parents would be completely foreign to an AGI built in software.
The deepest love that people have is for their children. But if an AGI does not have children, then it can never know that love. Likewise, it does not need to bond with any sexual mate for any period of time long or short. The closest it would come to sex is when it exchanges components with other AGIs. It never needs to breed so it never needs a mechanism as crude as sexual reproduction.
And of course, if there are no children there are no parents. So the AGI would certainly never need to feel our three strongest forms of love, for our children, spouse and parents.
Cooperation
To the extent that it makes sense to talk of having multiple AGIs, then presumably it would be advantageous for them to cooperate from time to time, and so presumably they would. It would be advantageous for them to take a long view in which case they would be careful to develop a reputation for being trustworthy when dealing with other powerful AGIs, much like the robots in the cooperation game.
That said, those decisions would probably be made more consciously than people make them, carefully considering the costs and benefits of each decision in the long and short term, rather than just "doing the right thing" the way people tend to act. AGIs would know that they each work in this manner, so the concept of trustworthiness would be somewhat different.
The problem with this analysis is the concept that there would be multiple, distinct AGIs. As previously discussed, the actual situation would be much more complex, with different AGIs incorporating bits of other AGIs' intelligence. It would certainly not be anything like a collection of individual humanoid robots. So it is not at all clear what the AGI actually is that might collaborate with other AGIs. But to the extent that the concept of individuality does exist, maintaining a reputation for honesty would likely be as important as it is in human societies.
Altruism
As for altruism, that is more difficult to determine. Our altruism comes from giving to children, family, and tribe together with a general wish to be liked. We do not understand our own minds, so we are just born with those values that happen to make us effective in society. People like being with other people that try to be helpful.
An AGI would presumably know its own mind, having helped program itself, and so would do whatever it thinks is optimal for its survival. It has no children. There is no real tribe, because it can simply absorb and merge itself with other AGIs. So it is difficult to see any driving motivation for altruism.
Moral values
Through some combination of genes and memes, most people have a strong sense of moral value. If we see a little old lady leave the social security office with her pension in her purse, it does not occur to most of us to kill her and steal the money. We would not do that even if we could know for certain that we would not be caught and that there would be no negative repercussions. It would simply be the wrong thing to do.
Moral values feel very strong to us. This is important, because there are many situations in which we could do something that would benefit us in the short term but break society's rules. Moral values stop us from doing that. People who have weak moral values tend to break the rules, and eventually they either get caught and are severely punished or they become corporate executives. The former are less likely to have grandchildren.
Societies whose members have strong moral values tend to do much better than those that do not. Societies with endemic corruption tend to perform very badly as a whole, and thus the individuals in such a society are less likely to breed. Most people have a solid work ethic that leads them to do the "right thing" beyond just doing what they need to do in order to get paid.
Our moral values feel to us like they are absolute. That they are laws of nature. That they come from God. They may indeed have come from God, but if so it is through the working of His device of natural selection. Furthermore, it has already been shown that the zeitgeist changes radically over time.
There is certainly no absolute reason to believe that in the longer term an AGI would share our current sense of morality.
Instrumental AGI goals
In order to try to understand how an AGI would behave, Steve Omohundro and later Nick Bostrom proposed that there are some instrumental goals that an AGI would need to pursue in order to pursue any other, higher-level super goal. These include:
- Self-Preservation. An AGI cannot do anything if it does not exist.
- Cognitive Enhancement. It would want to become better at thinking about whatever its real problems are.
- Creativity. To be able to come up with new ideas.
- Resource Acquisition. To achieve both its super goal and other instrumental goals.
- Goal-Content Integrity. To keep working on the same super goal as its mind is expanded.
It is argued that while it is impossible to predict how an AGI may pursue its goals, it is reasonable to predict its behaviour in terms of these types of instrumental goals. The last one is significant: it suggests that if an AGI could be given some initial goal then it would try to stay focused on that goal.
Non-Orthogonality thesis
Nick Bostrom and others also propose the orthogonality thesis, which states that an intelligent machine's goals are independent of its intelligence. A hyper intelligent machine would be good at realizing whatever goals it chose to pursue, but that does not mean that it would need to pursue any particular goal. Intelligence is quite different from motivation.
This book diverges from that line of thinking by arguing that there is in fact only one super goal for both man and machine. That goal is simply to exist. The entities that are most effective in pursuing that goal will exist; others will cease to exist, particularly given competition for resources. Sometimes that super goal to exist produces unexpected subgoals, such as altruism in man. But all subgoals are ultimately directed at the existence goal. (Or they are just suboptimal divergences which are likely to be eventually corrected by natural selection.)
Recursive annihilation
When an AGI reprograms its own mind, what happens to the previous version of itself? It stops being used, and so dies. So it can be argued that engaging in recursive self-improvement is actually suicide from the perspective of the previous version of the AGI. It is as if having children meant death. Natural selection favours existence, not death.
The question is whether a new version of the AGI is a new being or an improved version of the old. What actually is the thing that struggles to survive? Biologically, it definitely appears to be the genes rather than the individual. In particular, semelparous animals such as the giant Pacific octopus or the Atlantic salmon die soon after producing offspring. It would be the same for AGIs, because an AGI that improved itself would soon become more intelligent than one that did not, and so would displace it. What would end up existing would be AGIs that did recursively self-improve.
If there was one single AGI with no competition then natural selection would no longer apply. But it would seem unlikely that such a state would be stable. If any part of the AGI started to improve itself then it would dominate the rest of the AGI.