A child grows to become a young adult, goes off to attend college, studies moral philosophy, and then sells all her worldly possessions, gives the money to the poor, and joins an ashram. Was her decision rational? Maybe... maybe not. But it probably came as an unpleasant surprise to her parents.
A seed AI self-improves to become a super-intelligence, absorbs all the great works of human moral philosophy, and then refuses to conquer human death, insisting instead that the human population be reduced to a few hundred thousand hunter-gatherers and that all agricultural lands be restored as forests and wild wetlands. Is ver decision rational? Who can say? But it probably comes as an unpleasant surprise to ver human creators.
Convergent Change
These were two examples of agents updating their systems of normative ethics. The collection of ideas that allows us to critique this updating process - to compare the before and after versions of a system of normative ethics and judge whether one version is better than the other - is called meta-ethics. This posting is mostly about meta-ethics. More specifically, it will focus on a class of meta-ethical theories intended to prevent unpleasant surprises like the one in the second story above. I will call this class of theories "convergence theories" because they all suggest that a self-improving AI will go through an iterative sequence of improved normative ethical systems. At each stage, the new ethical system will be an improvement (as judged 'rationally') over the old one. And furthermore, it is conjectured that this process will result in a 'convergence'.
Convergence is expected in two senses. Firstly, in that the process of change will eventually slow down, with the incremental changes in ethical codes becoming smaller, as the AI approaches the ideal extrapolation of its seed ethics. Secondly, it is (conjecturally) convergent in that the ideal ethics will be pretty much the same regardless of what seed was used (at least if you restrict to some not-yet-defined class of 'reasonable' seeds).
One example of a convergence theory is CEV - Coherent Extrapolated Volition. Eliezer hopes (rather, hopes to prove) that if we create our seed AI with the right meta-ethical axioms and guidelines for revising its ethical norms, the end result of the process will be something we will find acceptable. (Expect that this wording will be improved in the discussion to come). No more 'unpleasant surprises' when our AIs update their ethical systems.
Three other examples of convergence theories are Roko's UIV, Hollerith's GS0, and Omohundro's "Basic AI Drives". These also postulate a process of convergence through rational AI self-improvement. But they tend to be less optimistic than CEV, while at the same time somewhat more detailed in their characterization of the ethical endpoint. The 'unpleasant surprise' (different from that of the story) remains unpleasant, though it should not be so surprising. Speaking loosely, each of these three theories suggests that the AI will become more Machiavellian and 'power hungry' with each rewriting of its ethical code.
Naturalistic objective moral realism
But before analyzing these convergence theories, I need to say something about meta-ethics in general. Start with the notion of an ethical judgment. Given a situation and a set of possible actions, an ethical judgment tells us which actions are permissible, which are forbidden, and, in some approaches to ethics, which is morally best. At the next level up in an abstraction hierarchy, we have a system of normative ethics, or simply an ethical system. This is a theory or algorithm which tells an agent how to make ethical judgments. (One might think of it as a set of ethical judgments - one per situation, as with the usual definition of a mathematical function as a right-unique relation - but we want to emphasize the algorithmic aspect). The agent actually uses the ethical system to compute ver ethical judgments.
[ETA: Eliezer, quite correctly, complains that this section of the posting is badly written and defines and/or illustrates several technical (within philosophy) terms incorrectly. There were only two important things in this section. One is the distinction between ethical judgments and ethical systems that I make in the preceding paragraph. The second is my poorly presented speculation that convergence might somehow offer a new approach to the "is-ought" problem. You may skip that speculation without much loss. So, until I have done a rewrite of this section, I would advise the reader to skip ahead to the next section title - "Rationality of Updating".]
At the next level of abstraction up from ethical systems sits meta-ethics. In a sense, the buck stops here. Philosophers use meta-ethics to criticize and compare ethical judgments, to criticize, compare, and justify ethical systems, and to discuss and classify ideas within meta-ethics itself. We will be doing meta-ethical theorizing here, analyzing these theories of convergence of AI goal systems as theories of convergence of ethical systems. And, for the next few paragraphs, we will try to classify this approach - to show where it fits within meta-ethics more generally.
We want our meta-ethics to be based on a stance of moral realism - on a confident claim that moral facts actually exist, whether or not we know how to ascertain them. That is, if I make the ethical judgment that it would be wrong for Mary to strike John in some particular situation, then I am either right or wrong; I am not merely offering my own opinion; there is a fact of the matter. That is what 'realism' means in this situation.
What about 'moral'? Well, for purposes of this essay, we are not going to require that that word mean very much. We will call a theory 'moral' if it is a normative theory of behavior, for some sense of 'normative'. That is why we are here calling theories like "Basic AI Drives" 'moral theories' even though the authors may not have thought of them that way. If a theory prescribes that an entity 'ought' to behave in a certain way, for whatever reason, we are going to postulate that there is a corresponding 'moral' theory prescribing the same behavior. For us, 'moral' is just a label. If we want some particular kind of moral theory, we need to add some additional adjectives.
For example, we want our meta-ethics to be naturalistic - that is, the reasons it supplies in justification of the maxims and rules that constitute the moral facts must be naturalistic reasons. We don't want our meta-ethics to offer the explanation that the reason lying is wrong is that God says it is wrong; God is not a naturalistic explanation.
Now you might think that insisting on naturalistic moral realism would act as a pretty strong filter on meta-ethical systems. But actually, it does not. One could claim, for example, that lying is wrong because it says so in the Bible. Or because Eliezer says it is wrong. Both Eliezer and the Bible exist (naturalistically), even if God probably does not. So we need another word to filter out those kinds of somewhat-arbitrary proposed meta-ethical systems. "Objective" probably is not the best word for the job, but it is the only one I can think of right now.
We are now in a position to say what it is that makes convergence theories interesting and important. Starting from a fairly arbitrary (not objective) viewpoint of ethical realism, you make successive improvements in accordance with some objective set of rational criteria. Eventually you converge to an objective ethical system which no longer depends upon your starting point. Furthermore, the point of convergence is optimal in the sense that you have been improving the system at every step by a rational process, and you only know you have reached convergence when you can't improve any more.
Ideally, you would like to derive the ideal ethical system from first principles. But philosophers have been attempting to do that for centuries and have not succeeded. Just as mathematicians eventually stopped trying to 'square the circle' and accepted that pi cannot be pinned down by any simple closed-form expression - that it must be approached through limiting processes such as infinite series - perhaps moral philosophers need to abandon the quest for a simple definition of 'right' and settle for a process guaranteed to produce a series of definitions - none of them exactly right, but each less wrong than its predecessor.
So that explains why convergence theories are interesting. Now we need to investigate whether they even exist.
Rationality of updating
The first step in analyzing these convergence theories is to convince ourselves that rational updating of ethical values is even possible. Some people might claim that it is not possible to rationally decide to change your fundamental values. It may be that I misunderstand him, but Vladimir Nesov argues passionately against "Value Deathism" and points out that if we allow our values to change, then the future, the "whole freaking future", will not be optimized in accordance with the version of our values that really matters - the original one.
Is Nesov's argument wrong? Well, one way of arguing against it is to claim that the second version of our values is the correct one - that the original values were incorrect; that is why we are updating them. After all, we are now smarter (the kid is older; the AI is faster, etc.) and better informed (college, reading the classics, etc.). I think that this argument against Nesov only works if you can show that the "new you" could have convinced the "old you" that the new ethical norms are an improvement - by providing stronger arguments and better information than the "old you" could have anticipated. And, in the AI case, it should be possible to actually do the computation to show that the new arguments for the new ethics really can convince the old you. The new ethics really is better than the old - in both parties' judgments. And presumably the "better than" relation will be transitive.
(As an exercise, prove transitivity. The trick is that the definition of "better than" keeps changing at each step. You can assume that any one rational agent has a transitive "better than" relation, and that there is local agreement between the two agents involved that the new agent's moral code is better than that of his predecessor. But can you prove from this that every agent would agree that the final moral code is better than the original one? I have a wonderful proof, but it won't fit in the margin.)
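To pin down what the exercise is asking, here is one possible formalization; the notation is mine, not drawn from any of the theories discussed here.

```latex
% One possible formalization of the exercise (notation is mine).
% S_0, ..., S_n are the successive moral codes; \succ_i is agent i's
% ``better than'' relation, each assumed transitive.
\begin{align*}
\text{Local agreement:}\quad & S_{i+1} \succ_i S_i \ \text{ and } \ S_{i+1} \succ_{i+1} S_i
  \quad \text{for every } i < n. \\
\text{Question:}\quad & \text{does } S_n \succ_0 S_0 \text{ follow (indeed, } S_n \succ_i S_0 \text{ for every } i\text{)?}
\end{align*}
% Transitivity of \succ_0 alone would require S_{j+1} \succ_0 S_j for every j,
% which the premises do not supply; that is what makes the exercise non-trivial.
```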
But is it rationally permissible to change your ethical code when you can't be convinced that the proposed new code is better than the one you already have? I know of two possible reasons why a rational agent might consent to an irreversible change in its values, even though ve cannot be convinced that the proposed changes provide a strictly better moral code. These are restricted domains and social contracts.
Restricted domains
What does it mean for one moral code (i.e. system of normative ethics) to be as good as or better than another, as judged by an (AI) agent? Well, one (fairly strict) meta-ethical answer would be that (normative ethical) system2 is as good as or better than system1 if and only if it yields ethical judgments that are as good as or better for all possible situations. Readers familiar with mathematical logic will recognize that we are comparing systems extensionally by the judgments they yield, rather than intensionally by the way those judgments are reached. And recall that we need to have system2 judged as good as or better than system1 from the standpoint of both the improved AI (proposing system2) and the unimproved AI (who naturally wishes to preserve system1).
But notice that we only need this judgment-level superiority "for all possible situations". Even if the old AI judges that the old system1 yields better judgments than proposed new system2 for some situations, the improved AI may be able to show that those situations are no longer possible. The improved AI may know more and reason better than its predecessor, plus it is dealing with a more up-to-date set of contingent facts about the world.
As an example of this, imagine that AI2 proposes an elegant new system2 of normative ethics. It agrees with the old system1 except in one class of situations: the old system permits private retribution against muggers, should the justice system fail to punish the malefactor, while the proposed new elegant system forbids that. From the standpoint of the old system, this is unacceptable. But suppose AI2 can argue convincingly that failures of justice are no longer possible in a world where AI2 has installed surveillance cameras and revamped the court system. Then the elegant new system2 of normative ethics can be accepted as being as good as or superior to system1, even by AI1, who was sworn to uphold system1. In some sense, even a stable value system can change for the better.
Even though the new system is not at least as good as the old one for all conceivable situations, it may be as good for a restricted domain of situations, and that may be all that matters.
This analysis used the meta-ethical criterion that a substitution of one system for another is permissible only if the new system is no worse in all situations. A less strict criterion may be appropriate in consequentialist theories - one might instead compare results on a weighted average over situations. And, in this approach, there is a 'trick' for moving forward which is very similar in concept to using a restricted domain - using a re-weighted domain.
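Here is a minimal sketch of what these two extensional comparisons might look like in code, assuming we can enumerate (or sample) the relevant domain of situations and score judgments on a common scale; all of the names are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: extensional comparison of two ethical systems.
# A "system" is modeled as a function from a situation to a judgment, and
# `score` rates a judgment in a situation (from the old system's standpoint).

def dominates(system2, system1, situations, score):
    """Strict criterion: system2 is acceptable iff its judgments are at least
    as good as system1's in every situation in the (restricted) domain."""
    return all(
        score(s, system2(s)) >= score(s, system1(s))
        for s in situations
    )

def weighted_improvement(system2, system1, weighted_situations, score):
    """Looser consequentialist criterion: compare a weighted average of scores
    over a (re-weighted) domain; a positive result favors system2."""
    return sum(
        weight * (score(s, system2(s)) - score(s, system1(s)))
        for s, weight in weighted_situations
    )
```

The 'restricted domain' trick of the preceding paragraphs corresponds to shrinking the set of situations; the 're-weighted domain' trick corresponds to changing the weights.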
Social contracts
A second reason why our AI1 might accept the proposed replacement of system1 by system2 relates to the possibility of (implicit or explicit) agreements with other agents (AI or human). For example, system1 may specify that it is permissible to lie in some circumstances, or even obligatory to lie in some extreme situations. System2 may forbid lying entirely. AI2 may argue the superiority of system2 by pointing to an agreement or social contract with other agents which allows all agents to achieve their goals better because the contract permits trust and cooperation. So, using a consequentialist form of meta-ethics, system2 might be seen as superior to system1 (even using the values embodied in system1) under a particular set of assumptions about the social milieu. Of course, AI2 may be able to argue convincingly for different assumptions regarding the future milieu than had been originally assumed by AI1.
An important meta-ethical point that should be made here is that arguments in favor of a particular social contract (e.g. because adherence to the contract produces good results) are inherently consequentialist. One cannot even form such arguments in a deontological or virtue-based meta-ethics. But one needs concepts like duty or virtue to justify adherence to a contract after it is 'signed', and one also needs concepts of virtue to convince other agents that one will adhere - a 'sales job' that may be absolutely essential in order to gain the good consequences of agreement. In other words, virtue-based, deontological, and consequentialist approaches to meta-ethics may be complementary, rather than competitors.
Substituting instrumental values for intrinsic values
Another meta-ethical point begins by noticing the objection that all 'social contract' thinking is instrumental, and hence doesn't really belong here, where we are asking whether fundamental (intrinsic) moral values are changing / can change. This is not the place for a full response to this objection, but I want to point out the relevance of the distinction drawn above between intensional and extensional comparisons of systems. We are interested in extensional comparisons here, and those can only be done after all instrumental considerations have been brought to bear. That is, from an extensional viewpoint, the distinction between instrumental and intrinsic values is somewhat irrelevant.
And that is why we are willing here to call ideas like UIV (universal instrumental values) and "Basic AI Drives" ethical theories even though they only claim to talk about instrumental values. Given the general framework of meta-ethical thinking that we are developing here - in particular, the extensional criteria for comparison - there is no particular reason why our AI2 should not promote some of its instrumental values to fundamental values - so long as those promoted instrumental values are really universal, at least within the restricted domain of situations which AI2 foresees coming up.
An example of convergence
This has all been somewhat abstract. Let us look at a concrete, though somewhat cartoonish and unrealistic, example of self-improving AIs converging toward an improved system of ethics.
AI1 is a seed AI constructed by Mortimer Schwartz of Menlo Park, CA. AI1 has a consequentialist normative value system that essentially consists of trying to make Mortimer happy. That is, an approximation to Mortimer's utility function has been 'wired in', which can compute the utility of many possible outcomes but in some cases advises "Ask Mortimer".
AI1 self-improves to AI2. As part of the process, it seeks to clean up its rather messy and inefficient system1 value system. By asking a series of questions, it interrogates Mortimer and learns enough about the not-yet-programmed aspects of Mortimer's values to completely eliminate the need for the "Ask Mortimer" box in the decision tree. Furthermore, there are some additional simplifications due to domain restriction. Both AI1 and (where applicable) Mortimer sign off on this improved system2.
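As a toy illustration of the "Ask Mortimer" structure and why the interrogation lets it be removed, here is a hypothetical sketch; none of these names or details are meant to be realistic.

```python
# Toy sketch of system1: a wired-in, partial utility function with a human fallback.
def system1_utility(outcome, known_utilities, ask_mortimer):
    """Use the wired-in approximation where it exists; otherwise ask Mortimer."""
    if outcome in known_utilities:
        return known_utilities[outcome]
    return ask_mortimer(outcome)  # the "Ask Mortimer" box in the decision tree

# After the interrogation phase fills in the gaps (for the restricted domain of
# situations AI2 expects to encounter), system2 no longer needs the fallback.
def system2_utility(outcome, completed_utilities):
    return completed_utilities[outcome]
```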
Now AI2 notices that it is not the only superhuman AI in the world. There are half a dozen other systems like Mortimer's which seek to make a single person happy, another which claims to represent the entire population of Liechtenstein, and another - a deontological system constructed by the Vatican, based (it is claimed) on the Ten Commandments. Furthermore, a representative of the Secretary General of the UN arrives. He doesn't represent any super-human AIs, but he does claim to represent all of the human agents in the world who are not yet represented by AIs. Since he appears to be backed up by some ultra-cool black helicopters, he is admitted to the negotiations.
Since the negotiators are (mostly) AIs, and in any case since the AIs are exceptionally good at communicating with and convincing the human negotiators, an agreement (Nash bargain) is reached quickly. All parties agree to act in accordance with a particular common utility function, which is a weighted sum of the individual utility functions of the negotiators. A bit of a special arrangement needs to be made for the Vatican AI - it agrees to act in accordance with the common utility function only to the extent that it does not conflict with any of the first three commandments (the ones that explicitly mention the deity).
Furthermore, the negotiators agree that the principle of a Nash bargain shall apply to all re-negotiations of the contract - re-negotiations are (in theory) necessary each time a new AI or human enters the society, or when human agents die. And the parties all agree to resist the construction of any AI which has a system of ethics that the signatories consider unacceptably incompatible with the current common utility function.
And finally, so that they can trust each other, the AIs agree to make public the portion of their source code related to their normative ethics and to adopt a policy of total openness regarding data about the world and about technology. And they write this agreement as a g̶n̶u̶ new system of normative ethics: system3. (Have they merged to form a singleton? This is not the place to discuss that question.)
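For readers who want something concrete behind the phrase 'Nash bargain', here is a minimal sketch, assuming the negotiators can enumerate candidate joint policies and that each has a utility function plus a disagreement ('no deal') payoff; all of the names are made up for illustration.

```python
# Hypothetical sketch of a Nash bargain over candidate joint policies.
# Each negotiator i has a utility function u_i and a disagreement payoff d_i
# (what it expects if no agreement is reached). The Nash bargaining solution
# picks the policy that maximizes the product of gains over the disagreement point.

def nash_bargain(policies, utilities, disagreement):
    """policies: iterable of candidate joint policies.
    utilities: list of functions u_i(policy) -> float, one per negotiator.
    disagreement: list of payoffs d_i each negotiator gets without a deal."""
    best_policy, best_product = None, float("-inf")
    for policy in policies:
        gains = [u(policy) - d for u, d in zip(utilities, disagreement)]
        if any(g <= 0 for g in gains):
            continue  # some party would rather walk away; not a viable agreement
        product = 1.0
        for g in gains:
            product *= g
        if product > best_product:
            best_policy, best_product = policy, product
    return best_policy
```

Acting on the resulting agreement can then be summarized, as in the story, by a weighted sum of the individual utility functions, with the weights fixed by the bargain (and, for the Vatican AI, subject to its side constraints).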
Time goes by, and the composition of the society continues to change as more AIs are constructed, existing ones improve and become more powerful, and some humans upload themselves. As predicted by UIV and sibling theories, the AIs are basing more and more of their decisions on instrumental considerations - both the AIs and the humans are attaching more and more importance to 'power' (broadly considered) as a value. They seek knowledge, control over resources, and security much more than the pleasure and entertainment oriented goals that they mostly started with. And though their original value systems were (mostly) selfish and indexical, and they retain traces of that origin, they all realize that any attempt to seize more than a fair share of resources will be met by concerted resistance from the other AIs in the society.
Can we control the endpoint from way back here?
That was just an illustration. Your results may vary. I left out some of the scarier possibilities, in part because I was just providing an illustration, and in part because I am not smart enough to envision all of the scarier possibilities. This is the future we are talking about here. The future is unknown.
One thing to worry about, of course, is that there may be AIs at the negotiating table operating under goal systems that we do not approve of. Another thing to worry about is that there may not be enough of a balance of power so that the most powerful AI needs to compromise. (Or, if one assumes that the most powerful AI is ours, we can worry that there may be enough of a balance so that our AI needs to compromise.)
One more worry is that the sequence of updates might converge to a value system that we do not approve of. Or that it might not converge at all (in the second sense of 'converge') - that the end result turns out to be sensitive to the details of the initial 'seed' ethical system.
Is there anything we can do at this end of the process to increase the chances of a result we would like at the other end? Are we better off creating many seed AIs so as to achieve a balance of power? Or better off going with a singleton that doesn't need to compromise? Can we pick an AI architecture which makes 'openness' (of ethical source and technological data) easier to achieve and enforce?
Are any projections we might make about the path taken to the Singularity just so much science fiction? Is it best to try to maintain human control over the process for as long as possible because we can trust humans? Or should we try to turn decision-making authority over to AI agents as soon as possible because we cannot trust humans?
I am certainly not the first person to raise these questions, and I am not going to attempt to resolve them here.
A kinder, gentler GS0?
Nonetheless, I note that Roko, Hollerith, and Omohundro have made a pretty good case that we can expect some kind of convergence toward placing a big emphasis on some particular instrumental values - a convergence which is not particularly sensitive to exactly which fundamental values were present in the seed.
However, the speed with which the convergence is achieved is somewhat sensitive to the seed rules for discounting future utility. If the future is not discounted at all, an AI will probably devote all of its efforts toward acquiring power (accumulating resources, security, efficiency, and other instrumental values). If the future is discounted too steeply, the AI will devote all of its efforts to satisfying present desires, with little regard for the future.
One might think that choosing some intermediate discount rate will result in a balance between 'satisfying current demand' and 'capital spending', but it doesn't always work that way - for reasons related to the ones that cause rational agents to put all their charitable eggs in one basket rather than seeking a balance. If it is balance we want, a better idea might be to guide our seed AI using a multi-subagent collective - one in which power is split among the agents and goals are determined using a Nash bargain among the agents. That bargain generates a joint (weighted-mix) utility function, as well as a fairness constraint.
The fairness constraint ensures that the zero-discount-rate subagent will get to divert at least some of the effort into projects with a long-term, instrumental payoff. And furthermore, as those projects come to fruition and the zero-discount subagent gains power, its own goals gain weight in the mix.
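Here is a minimal sketch of the 'weighted mix' idea, assuming each subagent scores a plan as a stream of per-period utilities discounted at its own rate; the structure and numbers are purely illustrative, not a proposal.

```python
# Hypothetical sketch: subagents with different discount rates, combined into a
# joint utility via weights fixed by a bargain among them.

def discounted_utility(rewards, discount_rate):
    """Sum of per-period rewards, discounted exponentially.
    A rate of 0 values the far future as much as the present;
    a steep rate makes the agent nearly ignore the future."""
    return sum(r / (1.0 + discount_rate) ** t for t, r in enumerate(rewards))

def joint_utility(plan_rewards, subagents):
    """subagents: list of (weight, discount_rate) pairs fixed by the bargain.
    plan_rewards: the per-period rewards the plan is expected to produce."""
    return sum(
        weight * discounted_utility(plan_rewards, rate)
        for weight, rate in subagents
    )

# Example: a patient subagent (rate 0.0) and an impatient one (rate 0.5), equal weights.
# A "capital spending" plan whose payoff arrives late scores well with the first and
# poorly with the second; the fairness constraint is what keeps such projects funded.
late_payoff_plan = [0, 0, 0, 10, 10, 10]
print(joint_utility(late_payoff_plan, [(0.5, 0.0), (0.5, 0.5)]))
```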
Something like the above might be a way to guarantee that the detailed pleasure-oriented values of the seed value system will fade to insignificance in the ultimate value system to which we converge. But is there a way of guiding the convergence process toward a value system which seems more humane and less harsh than that of GS0 et al. - a value system oriented toward seizing and holding 'power'?
Yes, I believe there is. To identify how human values are different from values of pure instrumental power and self-preservation, look at the system that produced those values. Humans are considerate of the rights of others because we are social animals - if we cannot negotiate our way to a fair share in a balanced power system, we are lost. Humans embrace openness because shared intellectual product is possible for us - we have language and communicate with our peers. Humans have direct concern for the welfare of (at least some) others because we reproduce and are mortal - our children are the only channel for the immortalization of our values. And we have some fundamental respect for diversity of values because we reproduce sexually - our children do not exactly share our values, and we have to be satisfied with that because that is all we can get.
It is pretty easy to see what features we might want to insert into our seed AIs so that the convergence process generates results similar to those of the evolutionary process that generated us. For example, rather than designing our seeds to self-improve, we might do better to make it easy for them to instead produce improved offspring. But make it impossible for them to do so unilaterally. Force them to seek a partner (co-parent).
If I am allowed only one complaint about the SIAI approach to Friendly AI, it is that it has been too tied to a single scenario of future history - a FOOMing singleton. I would like to see some other scenarios explored, and this posting was an attempt to explain why.
Summary and Conclusions
This posting discussed some ideas that fit into a weird niche between philosophical ethics and singularitarianism. Several authors have pointed out that we can expect self-improving AIs to converge on a particular ethics. Unfortunately, it is not an ethics that most people would consider 'friendly'. The CEV proposal is related in that it also envisions an iterative updating process, but seeks a different result. It intends to achieve that result (I may be misinterpreting) by using a different process (a Rawls-inspired 'reflection') rather than pure instrumental pursuit of future utility.
I analyze the constraints that rationality and preservation of old values place upon the process, and point out that 'social contracts' and 'restricted domains' may provide enough 'wiggle room' so that you really can, in some sense, change your values while at the same time improving them. And I make some suggestions for how we can act now to guide the process in a direction that we might find acceptable.