The text is very general in its analysis, so some examples would be helpful. Why not start by talking about some goals that people who build an optimizing AI system might actually install in it, and see how the AI would pursue them?
To avoid going too broad, here's one: "AI genie, abolish hunger in India!"
So the first thing people will complain about is that the easiest thing for the system to do is to abolish hunger by killing all the Indians. Let's add:
"Without violating Indian national laws."
Now lesswrongers will complain that the AI will:
- Set about changing Indian national law in unpredictable ways.
- Violate other countries' laws.
- Brilliantly seek loopholes in any system of laws.
- Try to become rich too fast.
- Escape being useful by finding issues with the definition of "hunger," such as whether it is OK for someone to briefly become hungry 15 minutes before dinner.
- Build an army and a police force to stop people from getting in the way of this goal.
So, we continue to debug:
"AI, insure that everyone in India is provisioned with at least 1600 calories of healthy (insert elaborate definition of healthy HERE) food each day. Allow them to decline the food in exchange for a small sum of money. Do not break either Indian law or a complex set of other norms and standards for interacting with people and the environment (insert complex set of norms and standards HERE)."
So, we continue to simulate what the AI might do wrong, and patch the problems with more and more specific rules and clauses.
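To make the flavor of this "debugging" concrete, here is a minimal sketch of the patching loop, assuming purely for illustration that plans and constraints could be represented as simple Python objects; the names Plan, acceptable, and CONSTRAINTS are hypothetical, not any real system.

```python
# A minimal, purely illustrative sketch of the "debugging" loop described above.
# All names here (Plan, acceptable, CONSTRAINTS) are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Plan:
    description: str
    properties: set = field(default_factory=set)  # e.g. {"breaks_indian_law"}

# Each constraint is a predicate that returns True when the plan is unacceptable.
CONSTRAINTS = [
    lambda p: "kills_people" in p.properties,        # patch 1: no perverse "solutions"
    lambda p: "breaks_indian_law" in p.properties,   # patch 2: respect national law
    lambda p: "breaks_foreign_law" in p.properties,  # patch 3: ...and other countries' laws
]

def acceptable(plan: Plan) -> bool:
    """A plan is allowed only if it violates none of the accumulated constraints."""
    return not any(check(plan) for check in CONSTRAINTS)

# When simulation or experience reveals a new failure mode, we "debug" by
# appending yet another clause -- the open-ended process described above.
CONSTRAINTS.append(lambda p: "exploits_legal_loophole" in p.properties)

if __name__ == "__main__":
    plan = Plan("redefine 'hunger' so the goal is trivially met",
                properties={"exploits_legal_loophole"})
    print(acceptable(plan))  # False: the newest patch catches it
```

Each newly discovered failure mode becomes one more clause appended to the list, which is exactly what makes the process open-ended.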
It feels like people will still want to add "stop using resource X/performing behavior Y to pursue this goal if we tell you to," because people may have other plans for resource X or see problems with behavior Y which are not specified in laws and rules.
People may also want to add something like, "ask us (or this designated committee) first before you take steps that are too dramatic (insert definition of dramatic HERE)."
Then, I suppose, the AI may brilliantly anticipate the committee's decisions, outsmart it, offer legal quid pro quos, or trick it. Some of these activities we would endorse (because the committee is sometimes doing the wrong thing), others we would not.
Thus, we continue to evolve a set of checks and balances on what the AI can and cannot do. Respect for the diverse goals and opinions of people seems to be at the core of this debugging process. However, this respect is not limitless, since people's goals are often contradictory and sometimes mistaken.
The AI is constantly probing these checks and balances for loopholes and alternative means, just as a set of well-meaning food security NGO workers would do. Unlike them, however, if it finds a loophole it can go through it very quickly and with tremendous power.
Note that if we substitute the word "NGO," "government," or "corporation" for "AI," we end up with the same set of issues the AI system has. We deal with this by limiting these organizations' resource levels.
We could designate precisely what resources the AI may use to meet its goal. That approach might work to a great extent, but the optimizing AI will still continue to try to find loopholes.
We could limit the amount of time the AI has to achieve its goal, or limit the processing power and other hardware it can use.
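As a purely hypothetical illustration of such designated limits, one could imagine the operators writing down an explicit resource budget; every key and number below is made up for illustration.

```python
# A hypothetical "resource budget" an operator might designate, as suggested in
# the two paragraphs above. All keys and limits here are invented placeholders.
RESOURCE_BUDGET = {
    "max_spend_usd": 50_000_000,
    "deadline_days": 365,
    "max_cpu_core_hours": 1_000_000,
    "allowed_datacenters": {"designated-cluster-1"},
}

def within_budget(projected_usage: dict) -> bool:
    """Reject any plan whose projected usage exceeds a designated limit."""
    return (projected_usage.get("spend_usd", 0) <= RESOURCE_BUDGET["max_spend_usd"]
            and projected_usage.get("days", 0) <= RESOURCE_BUDGET["deadline_days"]
            and projected_usage.get("cpu_core_hours", 0) <= RESOURCE_BUDGET["max_cpu_core_hours"]
            and set(projected_usage.get("datacenters", set())) <= RESOURCE_BUDGET["allowed_datacenters"])

print(within_budget({"spend_usd": 10_000_000, "days": 200,
                     "cpu_core_hours": 500_000,
                     "datacenters": {"designated-cluster-1"}}))  # True
```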
We could develop computer simulations of what would happen if an AI were given a particular set of rules and goals, and disallow many options based on those simulations. This is a kind of advanced consequentialism.
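Here is a toy sketch of that idea, under the very strong assumption that we had a trustworthy simulator of consequences; simulate() and the outcome labels are invented placeholders, not a real system.

```python
# An illustrative sketch of "advanced consequentialism": simulate each candidate
# action and rule out those whose predicted consequences are forbidden.

def simulate(action: str) -> set:
    """Pretend oracle: returns the set of predicted side effects of an action."""
    toy_model = {
        "buy_grain_on_open_market": {"prices_rise_slightly"},
        "seize_farmland": {"breaks_indian_law", "people_harmed"},
        "build_private_army": {"people_harmed"},
    }
    return toy_model.get(action, set())

FORBIDDEN = {"breaks_indian_law", "people_harmed"}

def allowed_actions(candidates):
    """Disallow any action whose simulated consequences include a forbidden outcome."""
    return [a for a in candidates if not (simulate(a) & FORBIDDEN)]

print(allowed_actions(["buy_grain_on_open_market",
                       "seize_farmland",
                       "build_private_army"]))
# ['buy_grain_on_open_market']
```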
Even if the rules and the simulations work very well, each time we give the AI a set of rules for its behavior, there is a non-zero probability of unintended consequences.
Looking on the bright side, however, as we continue this hypothetical debugging process, the probability (perhaps it is better to call it a hazard rate) seems to be falling.
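A toy calculation shows the intuition, under the strong and purely illustrative assumption that each new patch independently removes a fixed fraction of the remaining failure modes:

```python
# Toy model of the falling hazard rate: assume (for illustration only) that each
# new rule independently eliminates half of the remaining failure modes.

initial_hazard = 1.0               # normalized hazard before any rules are added
fraction_removed_per_patch = 0.5   # assumed effectiveness of each patch

hazard = initial_hazard
for patch in range(1, 6):
    hazard *= (1 - fraction_removed_per_patch)
    print(f"after patch {patch}: residual hazard = {hazard:.3f}")
```

The residual hazard shrinks geometrically but never reaches zero, matching the non-zero probability of unintended consequences noted above.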
Note that we get the same problems with governments, NGOs, and corporations as well. Perhaps what we are seeking is not perfection but some advantage over existing approaches to organizing groups of people to solve their problems.
Existing structures are themselves hazardous. The threshold for supplementing them with AI is not zero hazard. It is hazard reduction.
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the tenth section in the reading guide: Instrumentally convergent goals. This corresponds to the second part of Chapter 7.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. And if you are behind on the book, don't let it put you off discussing. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Instrumental convergence from Chapter 7 (p109-114)
Summary
Notes
1. Why do we care about CIVs?
CIVs to acquire resources and to preserve oneself and one's values play important roles in the argument for AI risk. The desired conclusions are that we can already predict that an AI would compete strongly with humans for resources, and also that an AI, once turned on, will go to great lengths to stay on and intact.
2. Related work
Steve Omohundro wrote the seminal paper on this topic. The LessWrong wiki links to all of the related papers I know of. Omohundro's list of CIVs (or as he calls them, 'basic AI drives') is a bit different from Bostrom's:
3. Convergence for values and situations
It seems potentially helpful to distinguish convergence over situations and convergence over values. That is, to think of instrumental goals on two axes - one of how universally agents with different values would want the thing, and one of how large a range of situations it is useful in. A warehouse full of corn is useful for almost any goal, but only in the narrow range of situations where you are a corn-eating organism who fears an apocalypse (or where you can trade it). A world of resources converted into computing hardware is extremely valuable in a wide range of scenarios, but much more so if you don't especially value preserving the natural environment. Many things that are CIVs for humans don't make it onto Bostrom's list, I presume because he expects the scenario for AI to be different enough. For instance, procuring social status is useful for all kinds of human goals. For an AI in the situation of a human, it would appear to also be useful. For an AI more powerful than the rest of the world combined, social status is less helpful.
4. What sort of things are CIVs?
Arguably all CIVs mentioned above could be clustered under 'cause your goals to control more resources'. This implies causing more agents to have your values (e.g. protecting your values in yourself), causing those agents to have resources (e.g. getting resources and transforming them into better resources) and getting the agents to control the resources effectively as well as nominally (e.g. cognitive enhancement, rationality). It also suggests convergent values we haven't mentioned. To cause more agents to have one's values, one might create or protect other agents with your values, or spread your values to existing other agents. To improve the resources held by those with one's values, a very convergent goal in human society is to trade. This leads to a convergent goal of creating or acquiring resources which are highly valued by others, even if not by you. Money and social influence are particularly widely redeemable 'resources'. Trade also causes others to act like they have your values when they don't, which is a way of spreading one's values.
As I mentioned above, my guess is that these are left out of Superintelligence because they involve social interactions. I think Bostrom expects a powerful singleton, to whom other agents will be irrelevant. If you are not confident of the singleton scenario, these CIVs might be more interesting.
5. Another discussion
John Danaher discusses this section of Superintelligence, but not disagreeably enough to read as 'another view'.
Another view
I don't know of any strong criticism of the instrumental convergence thesis, so I will play devil's advocate.
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about the treacherous turn. To prepare, read “Existential catastrophe…” and “The treacherous turn” from Chapter 8. The discussion will go live at 6pm Pacific time next Monday 24th November. Sign up to be notified here.