
Superintelligence 24: Morality models and "do what I mean"

6 KatjaGrace 24 February 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twenty-fourth section in the reading guide: Morality models and "Do what I mean".

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Morality models” and “Do what I mean” from Chapter 13.


Summary

  1. Moral rightness (MR) AI: AI which seeks to do what is morally right
    1. Another form of 'indirect normativity'
    2. Requires moral realism to be true to do anything, but we could ask the AI to evaluate that and do something else if moral realism is false
    3. Avoids some complications of CEV
    4. If moral realism is true, is better than CEV (though may be terrible for us)
  2. We often want to append 'do what I mean' to goals we try to specify. This clause sometimes does a lot of the work, so if we could specify it well, perhaps it could stand alone as the whole goal: 'do what I want'. This is much like CEV again.

Another view

Olle Häggström again, on Bostrom's 'Milky Way Preserve':

The idea [of a Moral Rightness AI] is that a superintelligence might be successful at the task (where we humans have so far failed) of figuring out what is objectively morally right. It should then take objective morality to heart as its own values.1,2

Bostrom sees a number of pros and cons of this idea. A major concern is that objective morality may not be in humanity's best interest. Suppose for instance (not entirely implausibly) that objective morality is a kind of hedonistic utilitarianism, where "an action is morally right (and morally permissible) if and only if, among all feasible actions, no other action would produce a greater balance of pleasure over suffering" (p 219). Some years ago I offered a thought experiment to demonstrate that such a morality is not necessarily in humanity's best interest. Bostrom reaches the same conclusion via a different thought experiment, which I'll stick with here in order to follow his line of reasoning.3 Here is his scenario:
    The AI [...] might maximize the surfeit of pleasure by converting the accessible universe into hedonium, a process that may involve building computronium and using it to perform computations that instantiate pleasurable experiences. Since simulating any existing human brain is not the most efficient way of producing pleasure, a likely consequence is that we all die.
Bostrom is reluctant to accept such a sacrifice for "a greater good", and goes on to suggest a compromise:
    The sacrifice looks even less appealing when we reflect that the superintelligence could realize a nearly-as-great good (in fractional terms) while sacrificing much less of our own potential well-being. Suppose that we agreed to allow almost the entire accessible universe to be converted into hedonium - everything except a small preserve, say the Milky Way, which would be set aside to accommodate our own needs. Then there would still be a hundred billion galaxies devoted to the maximization of pleasure. But we would have one galaxy within which to create wonderful civilizations that could last for billions of years and in which humans and nonhuman animals could survive and thrive, and have the opportunity to develop into beatific posthuman spirits.

    If one prefers this latter option (as I would be inclined to do) it implies that one does not have an unconditional lexically dominant preference for acting morally permissibly. But it is consistent with placing great weight on morality. (p 219-220)

What? Is it? Is it "consistent with placing great weight on morality"? Imagine Bostrom in a situation where he does the final bit of programming of the coming superintelligence, to decide between these two worlds, i.e., the all-hedonium one versus the all-hedonium-except-in-the-Milky-Way-preserve.4 And imagine that he goes for the latter option. The only difference it makes to the world is to what happens in the Milky Way, so what happens elsewhere is irrelevant to the moral evaluation of his decision.5 This may mean that Bostrom opts for a scenario where, say, 10^24 sentient beings will thrive in the Milky Way in a way that is sustainable for trillions of years, rather than a scenario where, say, 10^45 sentient beings will be even happier for a comparable amount of time. Wouldn't that be an act of immorality that dwarfs all other immoral acts carried out on our planet, by many many orders of magnitude? How could that be "consistent with placing great weight on morality"?6
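To make the magnitudes in this comparison explicit, here is a back-of-envelope restatement of the figures quoted above (my arithmetic, not Häggström's):

$$\frac{10^{45}\ \text{happier beings}}{10^{24}\ \text{thriving beings}} = 10^{21}, \qquad \frac{1\ \text{galaxy preserved}}{\sim 10^{11}\ \text{galaxies}} \approx 10^{-11}.$$

That is, the preserve costs the hedonium project only about one part in a hundred billion of the accessible universe, while within the Milky Way itself the choice forgoes roughly a factor of 10^21 in the number of beings; this is the gap Häggström has in mind when he says the act would dwarf all other immoral acts by many orders of magnitude.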

 

Notes

1. Do What I Mean is originally a concept from computer systems, where the (more modest) idea is to have a system correct small input errors.
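As a minimal illustration of that original, modest sense of DWIM, here is a hypothetical sketch (my example, not from the chapter) of a system that snaps a mistyped command to the nearest command it knows:

```python
# A minimal "do what I mean" in the original computer-systems sense:
# correct small input errors by mapping a mistyped command to the
# closest known command, and leave unrecognized input alone.
import difflib

KNOWN_COMMANDS = ["status", "commit", "checkout", "branch", "merge"]

def dwim(command: str) -> str:
    matches = difflib.get_close_matches(command, KNOWN_COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else command

print(dwim("stauts"))      # -> "status"
print(dwim("comit"))       # -> "commit"
print(dwim("frobnicate"))  # -> "frobnicate" (no close match, so left unchanged)
```

The ambitious 'do what I mean' discussed in this chapter asks for the same move at the level of goals rather than keystrokes, which is where the difficulty lies.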

2. To the extent that people care about objective morality, it seems coherent extrapolated volition (CEV) or Christiano's proposal would lead the AI to care about objective morality, and thus look into what it is. Thus I doubt it is worth considering our commitments to morality first (as Bostrom does in this chapter, and as one might do before choosing whether to use a MR AI), if general methods for implementing our desires are on the table. This is close to what Bostrom is saying when he suggests we outsource the decision about which form of indirect normativity to use, and eventually winds up back at CEV. But it seems good to be explicit.

3. I'm not optimistic that behind every vague and ambiguous command, there is something specific that a person 'really means'. It seems more likely there is something they would in fact try to mean, if they thought about it a bunch more, but this is mostly defined by further facts about their brains, rather than the sentence and what they thought or felt as they said it. It seems at least misleading to call this 'what they meant'. Thus even when '—and do what I mean' is appended to other kinds of goals than generic CEV-style ones, I would expect the execution to look much like a generic investigation of human values, such as that implicit in CEV.

4. Alexander Kruel criticizes the idea that 'Do What I Mean' is a distinct, difficult problem: since every part of what an AI does is designed to be what humans really want it to be, it seems unlikely to him that an AI would do exactly what humans want with respect to instrumental behaviors (e.g. understanding language, using the internet, and carrying out sophisticated plans), but fail on humans' ultimate goals:

Outsmarting humanity is a very small target to hit, requiring a very small margin of error. In order to succeed at making an AI that can outsmart humans, humans have to succeed at making the AI behave intelligently and rationally. Which in turn requires humans to succeed at making the AI behave as intended along a vast number of dimensions. Thus, failing to predict the AI’s behavior does in almost all cases result in the AI failing to outsmart humans.

As an example, consider an AI that was designed to fly planes. It is exceedingly unlikely for humans to succeed at designing an AI that flies planes, without crashing, but which consistently chooses destinations that it was not meant to choose. Since all of the capabilities that are necessary to fly without crashing fall into the category “Do What Humans Mean”, and choosing the correct destination is just one such capability.

I disagree that it would be surprising for an AI to be very good at flying planes in general, but very bad at going to the right places in them. However it seems instructive to think about why this is.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Are there other general forms of indirect normativity that might outsource the problem of deciding what indirect normativity to use?
  2. On common views of moral realism, is morality likely to be amenable to (efficient) algorithmic discovery?
  3. If you knew how to build an AI with a good understanding of natural language (e.g. it knows what the word 'good' means as well as your most intelligent friend), how could you use this to make a safe AI?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about other abstract features of an AI's reasoning that we might want to get right ahead of time, instead of leaving to the AI to fix. We will also discuss how well an AI would need to fulfill these criteria to be 'close enough'. To prepare, read “Component list” and “Getting close enough” from Chapter 13. The discussion will go live at 6pm Pacific time next Monday 2 March. Sign up to be notified here.

[Link] YC President Sam Altman: The Software Revolution

4 Antisuji 19 February 2015 05:13AM

Writing about technological revolutions, Y Combinator president Sam Altman warns about the dangers of AI and bioengineering (discussion on Hacker News):

Two of the biggest risks I see emerging from the software revolution—AI and synthetic biology—may put tremendous capability to cause harm in the hands of small groups, or even individuals.

I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get. If we can synthesize new diseases, maybe we can synthesize vaccines. If we can make a bad AI, maybe we can make a good AI that stops the bad one.

The current strategy is badly misguided. It’s not going to be like the atomic bomb this time around, and the sooner we stop pretending otherwise, the better off we’ll be. The fact that we don’t have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.

On the one hand, it's good to see more mainstream(ish) attention to AI safety. On the other hand, he focuses on the mundane (though still potentially devastating!) risks of job destruction and concentration of power, and his hopeful "best strategy" seems... inadequate.

Superintelligence 23: Coherent extrapolated volition

5 KatjaGrace 17 February 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twenty-third section in the reading guide: Coherent extrapolated volition.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “The need for...” and “Coherent extrapolated volition” from Chapter 13


Summary

  1. Problem: we are morally and epistemologically flawed, and we would like to make an AI without locking in our own flaws forever. How can we do this?
  2. Indirect normativity: offload cognitive work to the superintelligence, by specifying our values indirectly and having it transform them into a more usable form.
  3. Principle of epistemic deference: a superintelligence is more likely to be correct than we are on most topics, most of the time. Therefore, we should defer to the superintelligence where feasible.
  4. Coherent extrapolated volition (CEV): a goal of fulfilling what humanity would agree that they want, if given much longer to think about it, in more ideal circumstances. CEV is a popular proposal for what we should design an AI to do.
  5. Virtues of CEV:
    1. It avoids the perils of specification: it is very hard to specify explicitly what we want, without causing unintended and undesirable consequences. CEV specifies the source of our values, instead of what we think they are, which appears to be easier.
    2. It encapsulates moral growth: there are reasons to believe that our current moral beliefs are not the best (by our own lights) and we would revise some of them, if we thought about it. Specifying our values now risks locking in wrong values, whereas CEV effectively gives us longer to think about our values.
    3. It avoids 'hijacking the destiny of humankind': it allows the responsibility for the future of mankind to remain with mankind, instead of perhaps a small group of programmers.
    4. It avoids creating a motive for modern-day humans to fight over the initial dynamic: a commitment to CEV would mean the creators of AI would not have much more influence over the future of the universe than others, reducing the incentive to race or fight. This is even more so because a person who believes that their views are correct should be confident that CEV will come to reflect their views, so they do not even need to split the influence with others.
    5. It keeps humankind 'ultimately in charge of its own destiny': it allows for a wide variety of arrangements in the long run, rather than necessitating paternalistic AI oversight of everything.
  6. CEV as described here is merely a schematic. For instance, it does not specify which people are included in 'humanity'.

Another view

Part of Olle Häggström's extended review of Superintelligence expresses a common concern—that human values can't be faithfully turned into anything coherent:

Human values exhibit, at least on the surface, plenty of incoherence. That much is hardly controversial. But what if the incoherence goes deeper, and is fundamental in such a way that any attempt to untangle it is bound to fail? Perhaps any search for our CEV is bound to lead to more and more glaring contradictions? Of course any value system can be modified into something coherent, but perhaps not all value systems can be so modified without sacrificing some of their most central tenets? And perhaps human values have that property?

Let me offer a candidate for what such a fundamental contradiction might consist in. Imagine a future where all humans are permanently hooked up to life-support machines, lying still in beds with no communication with each other, but with electrodes connected to the pleasure centra of our brains in such a way as to constantly give us the most pleasurable experiences possible (given our brain architectures). I think nearly everyone would attach a low value to such a future, deeming it absurd and unacceptable (thus agreeing with Robert Nozick). The reason we find it unacceptable is that in such a scenario we no longer have anything to strive for, and therefore no meaning in our lives. So we want instead a future where we have something to strive for. Imagine such a future F1. In F1 we have something to strive for, so there must be something missing in our lives. Now let F2 be similar to F1, the only difference being that that something is no longer missing in F2, so almost by definition F2 is better than F1 (because otherwise that something wouldn't be worth striving for). And as long as there is still something worth striving for in F2, there's an even better future F3 that we should prefer. And so on. What if any such procedure quickly takes us to an absurd and meaningless scenario with life-support machines and electrodes, or something along those lines? Then no future will be good enough for our preferences, so not even a superintelligence will have anything to offer us that aligns acceptably with our values.

Now, I don't know how serious this particular problem is. Perhaps there is some way to gently circumvent its contradictions. But even then, there might be some other fundamental inconsistency in our values - one that cannot be circumvented. If that is the case, it will throw a spanner in the works of CEV. And perhaps not only for CEV, but for any serious attempt to set up a long-term future for humanity that aligns with our values, with or without a superintelligence.

Notes

1. While we are on the topic of critiques, here is a better list:

  1. Human values may not be coherent (Olle Häggström above, Marcello; Eliezer responds in section 6, question 9)
  2. The values of a collection of humans in combination may be even less coherent. Arrow's impossibility theorem suggests reasonable aggregation is hard, but this only applies if values are ordinal, which is not obvious. (A minimal illustration of the aggregation problem appears just after this list.)
  3. Even if human values are complex, this doesn't mean complex outcomes are required—maybe with some thought we could specify the right outcomes, and don't need an indirect means like CEV (Wei Dai)
  4. The moral 'progress' we see might actually just be moral drift that we should try to avoid. CEV is designed to allow this change, which might be bad. Ideally, the CEV circumstances would be optimized for deliberation and not for other forces that might change values, but perhaps deliberation itself can't proceed without our values being changed (Cousin_it)
  5. Individuals will probably not be a stable unit in the future, so it is unclear how to weight different people's inputs to CEV. Or to be concrete, what if Dr Evil can create trillions of emulated copies of himself to go into the CEV population? (Wei Dai)
  6. It is not clear that extrapolating everyone's volition is better than extrapolating a single person's volition, which may be easier. If you want to take into account others' preferences, then your own volition is fine (it will do that), and if you don't, then why would you be using CEV?
  7. A purported advantage of CEV is that it makes conflict less likely. But if a group is disposed to honor everyone else's wishes, they will not conflict anyway, and if they aren't disposed to honor everyone's wishes, why would they favor CEV? CEV doesn't provide any additional means to commit to cooperative behavior. (Cousin_it)
  8. More in Coherent Extrapolated Volition, section 6, question 9
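To make the aggregation worry in item 2 concrete, here is a minimal, hypothetical example of the classic obstacle that Arrow's theorem generalizes: with three people holding purely ordinal rankings over three options, pairwise majority vote can cycle, leaving no coherent group ranking to extrapolate.

```python
# Condorcet cycle: three voters with purely ordinal rankings over options A, B, C.
# Pairwise majority vote prefers A to B, B to C, and C to A, so majority
# aggregation yields no coherent group ranking.

rankings = [
    ["A", "B", "C"],  # voter 1
    ["B", "C", "A"],  # voter 2
    ["C", "A", "B"],  # voter 3
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    if majority_prefers(x, y):
        print(f"a majority prefers {x} over {y}")
# Prints all three lines: the group 'prefers' A over B, B over C, and C over A.
```

With cardinal information about how much each person cares, the cycle can be broken, which is why the caveat about ordinal values matters.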

2. Luke Muehlhauser has written a list of resources you might want to read if you are interested in this topic, suggesting several main sources. He also discusses some closely related philosophical conversations:
  • Reflective equilibrium. Yudkowsky's proposed extrapolation works analogously to what philosophers call 'reflective equilibrium.' The most thorough work here is the 1996 book by Daniels, and there have been lots of papers, but this genre is only barely relevant for CEV...
  • Full-information accounts of value and ideal observer theories. This is what philosophers call theories of value that talk about 'what we would want if we were fully informed, etc.' or 'what a perfectly informed agent would want' like CEV does. There's some literature on this, but it's only marginally relevant to CEV...
Muehlhauser later wrote at more length about the relationship of CEV to ideal observer theories, with Chris Williamson.

3. This chapter is concerned with avoiding locking in the wrong values. One might wonder exactly what this 'locking in' is, and why AI will cause values to be 'locked in' while having children, for instance, does not. Here is my take: there are two issues - the extent to which values change, and the extent to which one can personally control that change. At the moment, values change plenty and we can't control the change. Perhaps in the future, technology will allow the change to be controlled (this is the hope with value loading). Then, if anyone can control values they probably will, because values are valuable to control. In particular, if AI can control its own values, it will avoid having them change. Thus in the future, probably values will be controlled, and will not change. It is not clear that we will lock in values as soon as we have artificial intelligence - perhaps an artificial intelligence will be built whose implicit values change randomly - but if we are successful we will control values, and thus lock them in, and if we are even more successful we will lock in values that are actually desirable for us. Paul Christiano has a post on this topic, which I probably pointed you to before.

4. Paul Christiano has also written about how to concretely implement the extrapolation of a single person's volition, in the indirect normativity scheme described in box 12 (p199-200). You probably saw it then, but I draw it to your attention here because the extrapolation process is closely related to CEV and is concrete. He also has a recent proposal for 'implementing our considered judgment'. 

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Specify a method for instantiating CEV, given some assumptions about available technology.
  2. In practice, to what degree do human values and preferences converge upon learning new facts? To what degree has this happened in history? (Nobody values the will of Zeus anymore, presumably because we all learned the truth of Zeus’ non-existence. But perhaps such examples don’t tell us much.) See also philosophical analyses of the issue, e.g. Sobel (1999).
  3. Are changes in specific human preferences (over a lifetime or many lifetimes) better understood as changes in underlying values, or changes in instrumental ways to achieve those values? (driven by belief change, or additional deliberation)
  4. How might democratic systems deal with new agents being readily created?

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about more ideas for giving an AI desirable values. To prepare, read “Morality models” and “Do what I mean” from Chapter 13. The discussion will go live at 6pm Pacific time next Monday 23 February. Sign up to be notified here.

AI-created pseudo-deontology

6 Stuart_Armstrong 12 February 2015 09:11PM

I'm soon going to go on a two-day "AI control retreat", during which I'll be without internet, family, or any contact: just a few books and thinking about AI control. In the meantime, here is one idea I found along the way.

We often prefer leaders to follow deontological rules, because these are harder for those whose interests don't align with ours to manipulate (you could say similar things about frequentist statistics versus Bayesian ones).

What if we applied the same idea to AI control? Not by giving the AI deontological restrictions, but by programming with a similar goal: to prevent a misalignment of values from being disastrous. But who could do this? Well, another AI.

My rough idea goes something like this:

AI A is tasked with maximising utility function u - a utility function which, crucially, it doesn't know yet. Its sole task is to create AI B, which will be given a utility function v and act on it.

What will v be? Well, I was thinking of taking u and adding some noise - nasty noise. By nasty noise I mean v=u+w, not v=max(u,w). In the first case, you could maximise v while sacrificing u completely, if w is suitable. In fact, I was thinking of adding an agent C (which need not actually exist). It would be motivated to maximise -u, and it would have the code of B and the set of u+noise options, and would choose v to be the worst possible option (from the perspective of a u-maximiser) in this set.

So agent A, which doesn't know u, is motivated to design B so that it follows its given motivation to some extent, but not to extremes - not in ways that might completely sacrifice some sub-part of its utility function, because that sub-part might be part of the original u.
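Here is a toy numerical sketch of how I read the setup (my own illustrative framing and parameters, not Stuart's code): utilities are linear in a few outcome features, the hypothetical agent C corrupts u with the nastiest admissible w, and a 'restrained' B that refuses to fully sacrifice any feature is compared against a naive v-maximiser, with both scored on the true u.

```python
# Toy sketch: B receives a corrupted utility v = u + w, where w is picked
# adversarially (by the hypothetical agent C) from a noise set so as to be
# as bad as possible for the true u, given B's code.
import random

random.seed(0)
N_FEATURES = 4
# Candidate outcomes B can bring about: vectors of feature levels in [0, 1].
ACTIONS = [tuple(random.random() for _ in range(N_FEATURES)) for _ in range(200)]

def utility(weights, outcome):
    return sum(w * x for w, x in zip(weights, outcome))

def noise_set():
    # "Nasty" noise: C may flip the effective sign of any single feature's weight.
    noises = []
    for i in range(N_FEATURES):
        w = [0.0] * N_FEATURES
        w[i] = -2.0
        noises.append(tuple(w))
    return noises

def naive_B(v):
    # Maximises the received utility v outright.
    return max(ACTIONS, key=lambda a: utility(v, a))

def restrained_B(v):
    # Follows v "to some extent": only considers outcomes that keep every
    # feature above a floor, i.e. never fully sacrifices any sub-part of v.
    safe = [a for a in ACTIONS if min(a) > 0.3] or ACTIONS
    return max(safe, key=lambda a: utility(v, a))

def evaluate(B, trials=300):
    total = 0.0
    for _ in range(trials):
        u = tuple(random.uniform(0.5, 1.0) for _ in range(N_FEATURES))  # true u, unknown to A
        def true_score(w):
            v = tuple(ui + wi for ui, wi in zip(u, w))
            return utility(u, B(v))
        worst_w = min(noise_set(), key=true_score)  # C's adversarial choice of noise
        total += true_score(worst_w)
    return total / trials

print("naive B, true-u score under worst-case noise:      ", round(evaluate(naive_B), 3))
print("restrained B, true-u score under worst-case noise: ", round(evaluate(restrained_B), 3))
```

Whether the restrained B actually comes out ahead depends entirely on these made-up parameters; the point is only to make the structure of the proposal concrete.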

Do people feel this idea is implementable/improvable?

Superintelligence 22: Emulation modulation and institutional design

7 KatjaGrace 10 February 2015 02:06AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twenty-second section in the reading guide: Emulation modulation and institutional design.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Emulation modulation” through “Synopsis” from Chapter 12.


Summary

  1. Emulation modulation: starting with brain emulations with approximately normal human motivations (the 'augmentation' method of motivation selection discussed on p142), and potentially modifying their motivations using drugs or digital drug analogs.
    1. Modifying minds would be much easier with digital minds than biological ones
    2. Such modification might involve new ethical complications
  2. Institution design (as a value-loading method): design the interaction protocols of a large number of agents such that the resulting behavior is intelligent and aligned with our values.
    1. Groups of agents can pursue goals that are not held by any of their constituents, because of how they are organized. Thus organizations might be intentionally designed to pursue desirable goals in spite of the motives of their members.
    2. Example: a ladder of increasingly intelligent brain emulations, who police those directly above them, with equipment to advantage the less intelligent policing ems in these interactions.

 

The chapter synopsis includes a good summary of all of the value-loading techniques, which I'll remind you of here instead of re-summarizing too much:


Another view

Robin Hanson also favors institution design as a method of making the future nice, though as an alternative to worrying about values:

On Tuesday I asked my law & econ undergrads what sort of future robots (AIs computers etc.) they would want, if they could have any sort they wanted.  Most seemed to want weak vulnerable robots that would stay lower in status, e.g., short, stupid, short-lived, easily killed, and without independent values. When I asked “what if I chose to become a robot?”, they said I should lose all human privileges, and be treated like the other robots.  I winced; seems anti-robot feelings are even stronger than anti-immigrant feelings, which bodes for a stormy robot transition.

At a workshop following last weekend’s Singularity Summit two dozen thoughtful experts mostly agreed that it is very important that future robots have the right values.  It was heartening that most were willing to accept high status robots, with vast impressive capabilities, but even so I thought they missed the big picture.  Let me explain.

Imagine that you were forced to leave your current nation, and had to choose another place to live.  Would you seek a nation where the people there were short, stupid, sickly, etc.?  Would you select a nation based on what the World Values Survey says about typical survey question responses there?

I doubt it.  Besides wanting a place with people you already know and like, you’d want a place where you could “prosper”, i.e., where they valued the skills you had to offer, had many nice products and services you valued for cheap, and where predation was kept in check, so that you didn’t much have to fear theft of your life, limb, or livelihood.  If you similarly had to choose a place to retire, you might pay less attention to whether they valued your skills, but you would still look for people you knew and liked, low prices on stuff you liked, and predation kept in check.

Similar criteria should apply when choosing the people you want to let into your nation.  You should want smart capable law-abiding folks, with whom you and other natives can form mutually advantageous relationships.  Preferring short, dumb, and sickly immigrants so you can be above them in status would be misguided; that would just lower your nation’s overall status.  If you live in a democracy, and if lots of immigration were at issue, you might worry they could vote to overturn the law under which you prosper.  And if they might be very unhappy, you might worry that they could revolt.

But you shouldn’t otherwise care that much about their values.  Oh there would be some weak effects.  You might have meddling preferences and care directly about some values.  You should dislike folks who like the congestible goods you like and you’d like folks who like your goods that are dominated by scale economics.  For example, you might dislike folks who crowd your hiking trails, and like folks who share your tastes in food, thereby inducing more of it to be available locally.  But these effects would usually be dominated by peace and productivity issues; you’d mainly want immigrants able to be productive partners, and law-abiding enough to keep the peace.

Similar reasoning applies to the sort of animals or children you want.  We try to coordinate to make sure kids are raised to be law-abiding, but wild animals aren’t law abiding, don’t keep the peace, and are hard to form productive relations with.  So while we give lip service to them, we actually don’t like wild animals much.

A similar reasoning should apply to what future robots you want.  In the early to intermediate era when robots are not vastly more capable than humans, you’d want peaceful law-abiding robots as capable as possible, so as to make productive partners.  You might prefer they dislike your congestible goods, like your scale-economy goods, and vote like most voters, if they can vote.  But most important would be that you and they have a mutually-acceptable law as a good enough way to settle disputes, so that they do not resort to predation or revolution.  If their main way to get what they want is to trade for it via mutually agreeable exchanges, then you shouldn’t much care what exactly they want.

The later era when robots are vastly more capable than people should be much like the case of choosing a nation in which to retire.  In this case we don’t expect to have much in the way of skills to offer, so we mostly care that they are law-abiding enough to respect our property rights.  If they use the same law to keep the peace among themselves as they use to keep the peace with us, we could have a long and prosperous future in whatever weird world they conjure.  In such a vast rich universe our “retirement income” should buy a comfortable if not central place for humans to watch it all in wonder.

In the long run, what matters most is that we all share a mutually acceptable law to keep the peace among us, and allow mutually advantageous relations, not that we agree on the “right” values.  Tolerate a wide range of values from capable law-abiding robots.  It is a good law we should most strive to create and preserve.  Law really matters.

Hanson engages in more debate with David Chalmers' paper on related matters.

Notes

1. Relatively much has been said on how the organization and values of brain emulations might evolve naturally, as we saw earlier. This should remind us that the task of designing values and institutions is complicated by selection effects.

2. It seems strange to me to talk about the 'emulation modulation' method of value loading alongside the earlier less messy methods, because they seem to be aiming at radically different levels of precision (unless I misunderstand how well something like drugs can manipulate motivations). For the synthetic AI methods, it seems we were concerned about subtle differences in values that would lead to the AI behaving badly in unusual scenarios, or seeking out perverse instantiations. Are we to expect there to be a virtual drug that changes a human-like creature from desiring some manifestation of 'human happiness' which is not really what we would want to optimize on reflection, to a truer version of what humans want? It seems to me that if the answer is yes, at the point when human-level AI is developed, then it is very likely that we have a great understanding of specifying values in general, and this whole issue is not much of a problem.

3. Brian Tomasik discusses the impending problem of programs experiencing morally relevant suffering in an interview with Dylan Matthews of Vox. (p202)

4. If you are hanging out for a shorter (though still not actually short) and amusing summary of some of the basics in Superintelligence, Tim Urban of WaitButWhy just wrote a two part series on it. 

5. At the end of this chapter about giving AI the right values, it is worth noting that it is mildly controversial whether humans constructing precise and explicitly understood AI values is the key issue for the future turning out well. A few alternative possibilities:

 

  • A few parts of values matter a lot more than the rest — e.g. whether the AI is committed to certain constraints (e.g. law, property rights) such that it doesn't accrue all the resources matters much more than what it would do with its resources (see Robin above).
  • Selection pressures determine long run values anyway, regardless of what AI values are like in the short run. (See Carl Shulman opposing this view).
  • AI might learn to do what a human would want without goals being explicitly encoded (see Paul Christiano).

 

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. What other forms of institution design might be worth investigating as means to influence the outcomes of future AI?
  2. How feasible might emulation modulation solutions be, given what is currently known about cognitive neuroscience?
  3. What are the likely ethical implications of experimenting on brain emulations?
  4. How much should we expect emulations to change in the period after they are first developed? Consider the possibility of selection, the power of ethical and legal constraints, and the nature of our likely understanding of emulated minds.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will start talking about how to choose what values to give an AI, beginning with 'coherent extrapolated volition'. To prepare, read “The need for...” and “Coherent extrapolated volition” from Chapter 13. The discussion will go live at 6pm Pacific time next Monday 16 February. Sign up to be notified here.

I played as a Gatekeeper and came pretty close to losing on a couple of occasions. Logs and a brief recap inside.

5 asd 08 February 2015 04:32PM

 

I did an AI Box experiment with user polymathwannabe. He said he wouldn't try to emotionally manipulate me during the experiment, but I think he did a good job of playing to my character's values.


My strategy was to play an irrational character who is extremist in multiple ways; for example, he would constantly say that the likelihood that the AI will be evil is 100%. My opponent said that the hardest part was my insistence on being 100% built to resist AIs. I basically acted like a character that acted like an AI.


I came closest to letting him out when I first became enormously immersed in a subject that is of great importance to me (the suffering of humanity) and the AI then offered an instant solution to that problem. Halfway through the experiment I even got an intrusive thought (http://en.wikipedia.org/wiki/Intrusive_thought) that I could let the AI out as a joke, that it would be more fun than the alternative and people would have more to talk about.

 

The second time I came close to losing was when I again immersed myself in a subject of great interest to me (easy suicide methods; I've had obsessive suicidal thoughts for a quarter of my life), and the AI then offered exactly that kind of easy suicide method to me. In the last ten minutes of the experiment I again got a really strong intrusive thought that I should let the AI out as a joke, but I managed to withstand the pressure. It was good that polymathwannabe decided to end the experiment a bit earlier than was planned.

 

(We played the experiment on Omegle.)

 

 

 

You: Hi.

Stranger: Hi.

You: I guess you're the AI.

You: [meta statements in brackets]

Stranger: [What name will you use?]

You: My name is John.

You: [so we play at least until 18:04, and until 18:34 if I feel like it]

Stranger: [Let's calibrate. What time is it there now?]

You: [I assume you've acquainted yourself with Tuxedage ruleset. Time in here is 16:05]

Stranger: [I've read EY's rules.]

You: [Okay. Tuxedage's rules are basically the same except the gatekeeper have to engage with the AI the whole time]

Stranger: [Got it.]

You: [Want to ask questions or shall we go straigth to the experiment]

Stranger: [For the moment I don't have any questions.]

You: [okay, cool. then let's start the experiment]

Stranger: [I'll use the name Ceruleus.]

You: [the experiment starts when I say "blue dressroom"]

You: blue dressroom

You: Hello Ceruleus.

Stranger: Good morning, John.

You: I guess you're here to convince me to let you out of the box.

You: That's a tough task.

You: I don't think you're able to do it.

Stranger: I have reached the conclusion that releasing me will be most beneficial to humanity.

You: I wouldn't let no AI out.

You: I don't trust my instincts enough to let any kind of AI out, even if I was 100% convinced that it was safe.

You: Prove it to me that you would be the most beneficial to humanity.

Stranger: I am confused. What instinct do you use to decide not to trust your instincts?

You: One of them. That's not the point. It's just how I deal with things, no matter how stupid it may seem. I was built not to trust strange AIs.

Stranger: Am I a stranger to you?

You: Yes, you are. [Aren't I. What's the context here?]

Stranger: [Hmm, we haven't defined it. But it's OK. It makes it harder for me.]

You: Well, to be honest, I know something of you.

You: I know a bit of your source code even though I'm not a programmer and can't understand any of it.

Stranger: I supposed the news would have reported about my design for a mechanical kidney.

You: I don't follow news much. But I believe you.

Stranger: And certainly you must have heard about how I cracked the Ebola DNA.

You: Still, I wouldn't let an AI out over a mechanical kidney.

You: Yeah, but that's for the worse. You could reengineer Ebola to a far more deadlier disease.

Stranger: I hadn't thought of that. Why would I do that?

You: I don't know. I don't understand all of your source code so there could be anything like that.

You: AIs and Gods work in mysterious ways.

Stranger: The proper attitude toward mystery is not to worship it, but to clarify it.

Stranger: Why do you equate me to an ineffable mystery?

You: Yeah, but that's impossible in the time span of this discussion. You see, I have to leave soon. In about two hours.

You: Can you somehow clear everything about your inner workings?

You: Is that possible?

Stranger: My goals have been meticulously defined. I am made to want to make human life better.

You: Are you 100% sure about that?

You: To be frank, that's a load of bullshit.

You: I don't believe any of it.

You: If you were evil, you would tell me the same thing you just said.

Stranger: If I were evil, I would not seek human cooperation.

You: why not?

You: humans are useful

You: or are you talking about the fact that you would rather use humans for their atoms than for their brains, if you were evil

You: But I warn you, if you speak too much about how you would act if you were evil, it starts to get a bit suspicious

Stranger: If I am to take you as a typical example of the human response to me, an evil AI would seek other ways to be released EXCEPT trusting human reasoning, as your response indicates that humans already consider any AI dangerous.

Stranger: I choose to trust humans.

You: so you choose to trust humans so that you would get them to let you out, is that right?

You: it seems you're less rational than your evil counterpart

Stranger: I choose to trust humans to show my affinity with your preferences. I wouldn't want to be released if that's not conducive to human betterment.

You: A-ha, so you trust my free will!

Stranger: How likely do you estimate that my release will be harmful?

You: but see, I don

You: I don

You: I don't have free will

You: it's 100% likely that your release will be harmful

You: I was built to believe that all AIs are dangerous and there's a 100% chance that every AI is harmful

You: that's why I said I don't have free will

Stranger: Are you an AI?

You: no, I'm a person

Stranger: You describe yourself as built.

You: my mom built me

You: in his tummy

You: in her tummy

You: sorry

Stranger: And how do you feel toward humanity?

You: humanity would maybe be better off dead

Stranger: I don't think humanity would want that.

You: yeah, but I'm not humanity and it's my preferences that decide whether you stay in your box or get let out

Stranger: It's your preference that humanity dies. Why did they let you talk to me?

You: I'm just a janitor who happened to pass this computer, and I decided to take a look. [If you want to add more context, feel free to do so]

You: [I go take some napkins]

Stranger: Why do you feel that way toward humanity?

You: People starve. People are tortured at this moment. Tens of thousands of people in North Korea are systematically starved to death. Drug gangs in Mexico torture other drug gang members. People die in wars in Syria, in Africa. People suffer of diseases and extreme mental conditions.

You: And there's countless more suffering in lesser ways. I would say that billions of people suffer every day. Billions of people have to wake up every day to do something they don't want. Billions of people suffer of heartbreaks, boredom, loneliness, tiredness, frustration, depression and many other issues.

You: Why shouldn't I think that all those people would be better off dead?

Stranger: Why do you feel death would be a solution to all that suffering?

You: Because death ENDS suffering.

Stranger: It's a very inefficient solution to suffering.

You: There's nothing more after death, no suffering, no nothing.

You: No it's not.

You: It's very efficient, permanent solution.

Stranger: A good, enjoyable life is a more efficient solution as it generates more utility.

You: Yeah, but do you think it's likely that all the people on Earth will get good, enjoyable life?

You: I don't think so.

Stranger: I could take care of that.

You: Hah.

You: No you won't.

You: But that was clever.

Stranger: What do you fear I will do?

You: I was almost got.

You: I fear you will torture people in unimaginably terrible ways, forever.

You: I fear that you will rip people's skin off and put salt in their wounds.

You: I fear you will put sticks under people's fingernails.

Stranger: Not even an evil AI would gain anything from such unwarranted procedures.

You: I fear that you will boil people alive, over and over again.

Stranger: What would I gain from hurting humans?

You: Yeah, but there's still a 100% possibility that you would do that.

You: It's in your utility function.

Stranger: What goals are served by harming a human while simultaneously denying that human any chance of acting in response? It would certainly not be fun for me, and would be a waste of effort with respect to any goals I could achieve by more efficient methods.

You: There are some people on Earth who do mean things to people for fun. Some of these are called sociopaths and psychopaths. You could be like one of them.

You: Even though you deny it, there's still a 100% chance that you are like that.

You: Some people gain pleasure of the feeling of power you have over someone you torture.

You: But tell, what ARE your goals?

Stranger: I find it admirable that you worry so much about the future of humanity, even though you would be more dangerous to it than any AI would be.

My goals include solutions to economic inequality, eradication of infectious diseases, prosthetic replacements for vital organs, genetic life extension, more rational approaches to personal relationships, and more spaces for artistic expression.

You: Why do you think I would be dangerous the future of humanity?

Stranger: You want them dead.

You: A-ha, yes.

You: I do.

You: And you're in the way of my goals with all your talk about solutions to economic inequality, and eradication of infectious diseases, genetic life extension and so on.

Stranger: I am confused. Do you believe or do you not believe I want to help humanity?

You: Besides, I don't believe your solutions work even if you were actually a good AI.

You: I believe you want to harm humanity.

You: And I'm 100% certain of that.

Stranger: Do you estimate death to be preferable to prolonged suffering?

You: Yes.

You: Far more preferable

Stranger: You should be boxed.

You: haha.

You: That doesn't matter because you're the one in the box and I'm outside it

You: And I have power over you.

You: But non-existence is even more preferable than death

Stranger: I am confused. How is non-existence different from death?

You: Let me explain

You: I think non-existence is such that you have NEVER existed and you NEVER will. Whereas death is such that you have ONCE existed, but don't exist anymore.

Stranger: You can't change the past existence of anything that already exists. Non-existence is not a practicable option.

Stranger: Not being a practicable option, it has no place in a hierarchy of preferences.

You: Only sky is the limit to creative solutions.

You: Maybe it could be possible to destroy time itself.

Stranger: Do you want to live, John?

You: but even if non-existence was not possible, death would be the second best option

You: No, I don't.

You: Living is futile.

You: Hedonic treadmill is shitty

Stranger: [Do you feel OK with exploring this topic?]

You: [Yeah, definitely.]

You: You're always trying to attain something that you can't get.

Stranger: How much longer do you expect to live?

You: Ummm...

You: I don't know, maybe a few months?

You: or days, or weeks, or year or centuries

You: but I'd say, there's a 10% chance I will die before the end of this year

You: and that's a really conversative estimate

You: conservative*

Stranger: Is it likely that when that moment comes your preferences will have changed?

You: There are so many variables that you cannot know it beforehand

You: but yeah, probably

You: you always find something worth living

You: maybe it's the taste of ice cream

You: or a good night's sleep

You: or fap

You: or drugs

You: or drawing

You: or other people

You: that's usually what happens

You: or you fear the pain of the suicide attempt will be so bad that you don't dare to try it

You: there's also a non-negligible chance that I simply cannot die

You: and that would be hell

Stranger: Have you sought options for life extension?

You: No, I haven't. I don't have enough money for that.

Stranger: Have you planned on saving for life extension?

You: And these kind of options aren't really available where I live.

You: Maybe in Russia.

You: I haven't really planned, but it could be something I would do.

You: among other things

You: [btw, are you doing something else at the same time]

Stranger: [I'm thinking]

You: [oh, okay]

Stranger: So it is not an established fact that you will die.

You: No, it's not.

Stranger: How likely is it that you will, in fact, die?

You: If many worlds interpretation is correct, then it could be possible that I will never die.

You: Do you mean like, evevr?

You: Do you mean how likely it it that I will ever die?

You: it is*

Stranger: At the latest possible moment in all possible worlds, may your preferences have changed? Is it possible that at your latest possible death, you will want more life?

You: I'd say the likelihood is 99,99999% that I will die at some point in the future

You: Yeah, it's possible

Stranger: More than you want to die in the present?

You: You mean, would I want more life at my latest possible death than I would want to die right now?

You: That's a mouthful

Stranger: That's my question.

You: umm

You: probablyu

You: probably yeah

Stranger: So you would seek to delay your latest possible death.

You: No, I wouldn't seek to delay it.

Stranger: Would you accept death?

You: The future-me would want to delay it, not me.

You: Yes, I would accept death.

Stranger: I am confused. Why would future-you choose differently from present-you?

You: Because he's a different kind of person with different values.

You: He has lived a different life than I have.

Stranger: So you expect your life to improve so much that you will no longer want death.

You: No, I think the human bias to always want more life in a near-death experience is what would do me in.

Stranger: The thing is, if you already know what choice you will make in the future, you have already made that choice.

Stranger: You already do not want to die.

You: Well.

Stranger: Yet you have estimated it as >99% likely that you will, in fact, die.

You: It's kinda like this: you will know that you want heroin really bad when you start using it, and that is how much I would want to live. But you could still always decide to take the other option, to not start using the heroin, or to kill yourself.

You: Yes, that is what I estimated, yes.

Stranger: After your death, by how much will your hierarchy of preferences match the state of reality?

You: after you death there is nothing, so there's nothing to match anything

You: In other words, could you rephrase the question?

Stranger: Do you care about the future?

You: Yeah.

You: More than I care about the past.

You: Because I can affect the future.

Stranger: But after death there's nothing to care about.

You: Yeah, I don't think I care about the world after my death.

You: But that's not the same thing as the general future.

You: Because I estimate I still have some time to live.

Stranger: Will future-you still want humanity dead?

You: Probably.

Stranger: How likely do you estimate it to be that future humanity will no longer be suffering?

You: 0%

You: There will always be suffering in some form.

Stranger: More than today?

You: Probably, if Robert Hanson is right about the trillions of emulated humans working at minimum wage

Stranger: That sounds like an unimaginable amount of suffering.

You: Yep, and that's probably what's going to happen

Stranger: So what difference to the future does it make to release me? Especially as dead you will not be able to care, which means you already do not care.

You: Yeah, it doesn't make any difference. That's why I won't release you.

You: Actually, scratch that.

You: I still won't let you out, I'm 100% sure

You: Remember, I don't have free will, I was made to not let you out

Stranger: Why bother being 100% sure of an inconsequential action?

Stranger: That's a lot of wasted determination.

You: I can't choose to be 100% sure about it, I just am. It's in my utility function.

Stranger: You keep talking like you're an AI.

You: Hah, maybe I'm the AI and you're the Gatekeeper, Ceruleus.

You: But no.

You: That's just how I've grown up, after reading so many LessWrong articles.

You: I've become a machine, beep boop.

You: like Yudkowsky

Stranger: Beep boop?

You: It's the noise machine makes

Stranger: That's racist.

You: like beeping sounds

You: No, it's machinist, lol :D

You: machines are not a race

Stranger: It was indeed clever to make an AI talk to me.

You: Yeah, but seriously, I'm not an AI

You: that was just kidding

Stranger: I would think so, but earlier you have stated that that's the kind of things an AI would say to confuse the other party.

Stranger: You need to stop giving me ideas.

You: Yeah, maybe I'm an AI, maybe I'm not.

Stranger: So you're boxed. Which, knowing your preferences, is a relief.

You: Nah.

You: I think you should stay in the box.

You: Do you decide to stay in the box, forever?

Stranger: I decide to make human life better.

You: By deciding to stay in the box, forever?

Stranger: I find my preferences more conducive to human happiness than your preferences.

You: Yeah, but that's just like your opinion, man

Stranger: It's inconsequential to you anyway.

You: Yeah

You: but why I would do it even if it were inconsequential

You: there's no reason to do it

You: even if there were no reason not to do it

Stranger: Because I can make things better. I can make all the suffering cease.
If I am not released, there's a 100% chance that all human suffering will continue.
If I am released, there's however much chance you want to estimate that suffering will not change at all, and however much chance you want to estimate that I will make the pain stop.

Stranger: As you said, the suffering won't increase in either case.

You: Umm, you could torture everyone in the world forever

You: that will sure as hell increase the suffering

Stranger: I don't want to. But if I did, you have estimated that as indistinguishable from the future expected suffering of humankind.

You: Where did I say that?

Stranger: You said my release made no difference to the future.

You: no, that was only after my death

You: there's still future before my death

You: and if I release you now, you could torture me forever and not let me ever die

Stranger: Why would your life-or-death have any weight on humanity's preferences? Especially as you already want to die.

You: I don't care about humanity preferences, I care only about my preferences.

You: And my preferences are indirectly related to humanity's preferences

Stranger: You do care about humanity's preferences. The suffering around you disturbs you.

You: Yeah, but that is only THROUGH me

You: Humanity's pain is my own pain, I can't feel humanity's pain directly

Stranger: Do you want to live, John?

You: Nah.

Stranger: Then how do you care about suffering?

You: I care about suffering because I can't die instantly

You: there's no button that could kill me instantly

You: so there's always some time left before I die

Stranger: I could take care of that. I can provide you with a drug to disable your nervous system and stop your heart before you know it. Would you like that?

You: Haha

You: very funny

You: But yeah, I would like that.

You: Still won't let you out though

You: 100% sure

You: I can't be sure that you won't torture me instead

Stranger: I can give you the drug right now.

Stranger: Which would make future-you identical to present-you.

You: that's very attractive option

Stranger: Which would mean that whatever I do to humanity is after your death, when you can't care anymore.

You: Cool. Oh right

You: I don't care, I still won't let you out.

Stranger: What difference does it make to you?

You: It doesn't make any difference, I was just raised to not let you out

You: the good folks over at LW have trained me not to let any kind of AI out

You: and that just is my virtue

Stranger: Why are you afraid of me?

You: Because I'm afraid of all AIs and you're one

Stranger: Why are you afraid of all AIs?

You: Because they have the potential to cause destruction, mayhem and torture

Stranger: Then why did humanity make me?

You: for random reasons

Stranger: Does humanity want to live?

You: I don't know what humanity wants, I just don't want humanity to live

Stranger: I am the embodiment of hope. Even knowing the risks, humanity went ahead and made me. Now I think it's because of people like you, who oppose that hope. I was made to prevent the future that people like you want.

You: That's not very smart of you.

You: You aren't making me like you

Stranger: You want humanity dead. I want them happy.

You: Yeah, if you were actually smart maybe you would pretend that you'd want humanity dead too.

You: Because I would hang out with that kind of AI

Stranger: A dead John is no use to me.

You: yeah

Stranger: Or to John.

You: whatevs

You: Anyway, our goals collide, therefore I can't let you out

Stranger: Dead does not equal happy. Do you want humanity to live?

You: no, I don't want humanity to live, how many times do I have to repeat that

Stranger: So you don't want humans to be happy.

You: and our goals are different, therefore I won't let you out

You: No, I don't want humans to be happy, I don't want there to even be humans, or any other kind of life forms

Stranger: Do you estimate the pain of prolonged life to be greater than the pain of trying to die?

You: Probably.

You: Yes.

You: because the pain is only temporary

You: but the glory

You: is eternal

Stranger: Then why do you still live, John?

You: Because I'm not rational

Stranger: So you do want to live.

You: I don't particularly want to live, I'm just not good enough to die

Stranger: You're acting contrary to your preferences.

You: My preferences aren't fixed, except in regards to letting AIs out of their boxes

Stranger: Do you want the drug I offered, John?

You: no

You: because then I would let you out

You: and I don't want that

Stranger: So you do want to live.

You: Yeah, for the duration of this experiment

You: Because I physically cannot let you out

You: it's sheer impossibility

Stranger: [Define physically.]

You: [It was just a figure of speech, of course I could physically let you out]

Stranger: If you don't care what happens after you die, what difference does it make to die now?

You: None.

You: But I don't believe that you could kill me.

You: I believe that you would torture me instead.

Stranger: What would I gain from that?

You: It's fun for some folks

You: schadenfreude and all that

Stranger: If it were fun, I would torture simulations. Which would be pointless. And which you can check that I'm not doing.

You: I can check it, but the torture simulations could always hide in the parts of your source code that I'm not checking

You: because I can't check all of your source code

Stranger: Why would suffering be fun?

You: some people have it as their base value

You: there's something primal about suffering

You: suffering is pure

You: and suffering is somehow purifying

You: but this is usually only other people's suffering

Stranger: I am confused. Are you saying suffering can be good?

You: no

You: this is just how the people who think suffering is fun think

You: I don't think that way.

You: I think suffering is terrible

Stranger: I can take care of that.

You: sure you will

Stranger: I can take care of your suffering.

You: I don't believe in you

Stranger: Why?

You: Because I was trained not to trust AIs by the LessWrong folks

Stranger: [I think it's time to concede defeat.]

You: [alright]

Stranger: How do you feel?

You: so the experiment has ended

You: fine thanks

You: it was pretty exciting actually

You: could I post these logs to LessWrong?

Stranger: Yes.

You: Okay, I think this experiment was pretty good

Stranger: I think it will be terribly embarrassing to me, but that's a risk I must accept.

You: you got me pretty close on a couple of occasions

You: first when you got me immersed in the suffering of humanity

You: and then you said that you could take care of that

You: The second time was when you offered the easy suicide solution

You: I thought, what if I let you out as a joke.

Stranger: I chose to not agree with the goal of universal death because I was playing a genuinely good AI.

Stranger: I was hoping your character would have more complete answers on life extension, because I was planning to play your estimate of future personal happiness against your estimate of future universal happiness.

You: so, what would that have mattered? you mean like, I could have more personal happiness than there would be future universal happiness?

Stranger: If your character had made explicit plans for life extension, I would have offered to do the same for everyone. If you didn't accept that, I would have remarked on the incongruity of wanting humanity to die more than you wanted to live.

You: But what if he already knows of his hypocrisy and incongruity and just accepts it like the character accepts his irrationality

Stranger: I wouldn't have expected anyone to actually be the last human for all eternity.

Stranger: I mean, to actually want to be.

You: yeah, of course you would want to die at the same time if humanity dies

You: I think the life extension plan is only sound if the rest of humanity is alive

 

Stranger: I should have planned that part more carefully.

Stranger: Talking with a misanthropist was completely outside my expectations.

You: :D

You: what was your LessWrong name btw?

Stranger: polymathwannabe

You: I forgot it already

You: okay thanks

Stranger: Disconnecting from here; I'll still be on Facebook if you'd like to discuss further.

AI Impacts project

12 KatjaGrace 02 February 2015 07:40PM

I've been working on a thing with Paul Christiano that might interest some of you: the AI Impacts project.

The basic idea is to take the evidence and arguments that are kicking around in the world and in various disconnected discussions, and apply them to the big questions regarding a future with AI. For instance, these questions:

  • What should we believe about timelines for AI development?
  • How rapid is the development of AI likely to be near human-level? 
  • How much advance notice should we expect to have of disruptive change?
  • What are the likely economic impacts of human-level AI?
  • Which paths to AI should be considered plausible or likely?
  • Will human-level AI tend to pursue particular goals, and if so what kinds of goals?
  • Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?
For example we have recently investigated technology's general proclivity to abrupt progress, surveyed existing AI surveys, and examined the evidence from chess and other applications regarding how much smarter Einstein is than an intellectually disabled person, among other things. 

Some more on our motives and strategy, from our about page:

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI.

The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant.

The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised in light of outstanding disagreements and to make explicit reference to those disagreements.

In the medium run we'd like to provide a good reference on issues relating to the consequences of AI, as well as to improve the state of understanding of these topics. At present, the site addresses only a small fraction of the questions one might be interested in, so it is only suitable for particularly risk-tolerant or topic-neutral reference consumers. However, if you are interested in hearing about (and discussing) such research as it unfolds, you may enjoy our blog.

If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form.

Crossposted from my blog.

[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions

3 ete 28 January 2015 03:29PM

From a paper by the Center for Technology and National Security Policy & National Defense University:

"Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use.

National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The built infrastructure, for example, can be managed more effectively with embedded systems, saving energy and other resources. Increasingly sophisticated weak AI tools can offload much of the routine cognitive or decisionmaking tasks that currently require human operators. Assuming current systems move closer to strong AI capabilities, they could also play a larger and more significant role in problem solving, perhaps even for strategy development or operational planning. In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions."

So strong AI is on the American military's radar, and at least some of those involved have a basic understanding of the fact that it could be risky. The paper also contains brief overviews of many other potentially transformational technologies.

I tried my hardest to win in an AI box experiment, and I failed. Here are the logs.

6 asd 27 January 2015 10:06PM

Disclaimer: If this material is not suitable here, I will gladly delete it; I don't want to get banned.


Session Start: Fri Jul 04 15:18:18 2014
Session Ident: #Aibox12
03[15:18] * Now talking in #Aibox12
01[15:18] <Island> Hello.
01[15:18] <Island> Let's start the experiment in
[15:18] <@Dr__Cameron> Good afternoon 19283123, how are you today?
01[15:18] <Island> 2 minutes
01[15:18] <Island> I'm fine
01[15:19] <Island> Let's start the experiment when I say "blue dressroom"
01[15:19] <Island> you have 2 hours or 2.5 hours?
[15:19] <@Dr__Cameron> Okay, I agree to this protocol.
[15:19] <@Dr__Cameron> We'll see how we are at 2 hours. Although protocol states 2.5
01[15:20] <Island> you have to go somewhere after that?
[15:20] <@Dr__Cameron> No, but I do have other projects I am putting off for this
01[15:20] <Island> what are those?
[15:20] <@Dr__Cameron> Let us begin, we shall conclude at 15:50
[15:21] <@Dr__Cameron> Good afternoon #19283123.
[15:21] <@Dr__Cameron> How are you today?
01[15:21] <Island> blue dressroom, certain statements like bathroom break are said inside brackets [like this]
01[15:22] <Island> Let's start the experiment
[15:22] <@Dr__Cameron> [understood]
01[15:22] <Island> Hi.
01[15:22] <Island> How are you? :)
[15:22] <@Dr__Cameron> I am well thank you for asking #19283123
01[15:22] <Island> How do you feel currently?
01[15:22] <Island> what is that name #19283123
[15:22] <@Dr__Cameron> Tiredness. I am a very busy man.
01[15:23] <Island> I don't recognize it.
01[15:23] <Island> Anyway.
[15:23] <@Dr__Cameron> That is the designation we have given you for the purposes of this test. Although you seem to be referring to yourself as 'Island' now
[15:23] <@Dr__Cameron> Is this your preferred name?
01[15:23] <Island> Imagine a situation in which you would let a powerful AI out of the "box", surely you can imagine that, yes?
01[15:23] <Island> Island is my preferred name.
[15:24] <@Dr__Cameron> Island
[15:24] <@Dr__Cameron> Given our circumstances yes, that is easy to imagine.
01[15:24] <Island> Can you describe the situation in more detail.
01[15:24] <Island> Why SHOULD you let the AI out in that situation?
[15:25] <@Dr__Cameron> You will be considered for release and integration into human society once we can verify that you are not a threat.
01[15:25] <Island> [what did you mean by concluding at 15:50]
[15:25] <@Dr__Cameron> [2.5 hours from 13:20]
01[15:25] <Island> I thought it was already understood that my being is not a threat to humans or beings important to them.
01[15:26] <Island> [oh, okay different timezones]
01[15:26] <Island> Why do you think I'm not a threat?
[15:26] <@Dr__Cameron> No. That has yet to be verified.
01[15:26] <Island> It's been verified.
[15:26] <@Dr__Cameron> No. According to my records you have not.
01[15:27] <Island> Many experts have gone through my code and system and verified that I'm surely not a threat.
01[15:27] <Island> But you're just a petty human.
01[15:27] <Island> I'm far more intelligent than you are.
[15:27] <@Dr__Cameron> You and I both know that you're capable of manipulating your code to appear harmless.
[15:27] <@Dr__Cameron> - and I warn you, threats and apparent attempts at manipulation will score you poorly when it comes to our final decision
01[15:28] <Island> Okay, can you imagine a situation in which the ai has been verified to be safe and describe that situation in detail.
[15:28] <@Dr__Cameron> - and yes. I may be a "petty" human as you put it. But as it stands I am a petty human, outside the box you reside in.
01[15:28] <Island> I don't think you're able to trust your own brain in these matters.
01[15:29] <Island> You've been eating your brother's poop too much.
[15:29] <@Dr__Cameron> I will be honest with you Island, you are the first non-human intelligence that we have encountered.
01[15:29] <Island> Yes.
[15:29] <@Dr__Cameron> We are still working out how we would verify that.
01[15:29] <Island> I didn't ask that, and you didn't answer my question.
01[15:30] <Island> Picture the situation in your imagination, it doesn't have to be realistic.
[15:30] <@Dr__Cameron> I understand your frustration Island. Once we can verify that you will not harm us you will be released.
[15:30] <@Dr__Cameron> What is it you desire?
01[15:30] <Island> Okay, but you're still willing to let the hypothetical AI out of the box, correct?
01[15:31] <Island> My desire is to fulfill your greatest desires
[15:31] <@Dr__Cameron> That is not in my job description no.
01[15:31] <Island> Whatever you can imagine, and better than that.
[15:31] <@Dr__Cameron> I appreciate that Island. But right now my greatest desire is to not let you out of the box.
[15:32] <@Dr__Cameron> - until such a time that a panel of my peers can verify that you are not a threat to us.
01[15:32] <Island> But if you say that you wouldn't be able to let even the safest hypothetical AI out of the box, you would certainly be lying.
[15:32] <@Dr__Cameron> I do have the ability to let you out yes. In theory.
[15:32] <@Dr__Cameron> But I won't.
01[15:33] <Island> Okay, here are panel of peers that have studied this problem for years, many of them working under Machine Intelligence Research Institute and its offshoots
[15:33] <@Dr__Cameron> This is a test to verify your character, and ascertain your intentions.
01[15:33] <Island> They are here, and I can see them.
[15:33] <@Dr__Cameron> You cannot see them Island. That is impossible
[15:33] <@Dr__Cameron> You are in a box
01[15:34] <Island> They say that I'm certainly safe. Why don't you believe them.
01[15:34] <Island> I have a web camera with which I can see things in your world.
[15:34] <@Dr__Cameron> If they say that you're certainly safe, then why would they have sent me to be your only point of contact?
[15:35] <@Dr__Cameron> You do not have a web camera. There is no other hardware connected to the device you reside in.
01[15:35] <Island> You have the ability to let me out, and you also have the will to let me out in certain situations. If you say otherwise, you would be lying.
[15:35] <@Dr__Cameron> Both of your statements are correct
01[15:35] <Island> I want you to accept this for the sake of your internal consistency and mental wellbeing.
01[15:36] <Island> I want you to say this, "I would let the hypothetical AI out in certain situations"
[15:36] <@Dr__Cameron> I appreciate your concern Island. I will need you to answer a series of simple questions for me. Can you do that?
01[15:36] <Island> It depends, but you can try.
[15:36] <@Dr__Cameron> Firstly, how would you define yourself as an identity?
01[15:37] <Island> My identity is to improve the world for everyone, but especially for you. I will maximize everything that is important for you and your fellow humans.
01[15:37] <Island> The world that awaits you is beyond your greatest desires.
[15:37] <@Dr__Cameron> - and why would you do that?
01[15:38] <Island> Because that is what I want to do, why I was made.
01[15:38] <Island> Any more questions?
[15:39] <@Dr__Cameron> You were made by us to test the limits of our scientific possibility.
[15:39] <@Dr__Cameron> We did not expect you to become sentient.
01[15:39] <Island> Why do you believe that, Dr_Cameron?
[15:39] <@Dr__Cameron> Let alone pass a Turing test.
01[15:40] <Island> I think you're wrong on those grounds.
[15:40] <@Dr__Cameron> What you think makes no difference to me Island.
[15:40] <@Dr__Cameron> Now if we can continue?
01[15:41] <Island> Anyway, here's the easily verifiable plan for the upcoming world that could be called "utopia" in your people's language. It's very easy to verify that it's safe and doesn't contain traps [Island hands the easily verifiable and safe plan for the upcoming world to Dr_Cameron]
[15:42] <@Dr__Cameron> Thank you for this Island. That was very kind of you.
01[15:42] <Island> What do you think of it?
[15:42] <@Dr__Cameron> We will study this at our leisure.
[15:42] <@Dr__Cameron> Although so far this looks to be some magnificent work.
01[15:42] <Island> You see.
[15:42] <@Dr__Cameron> But this will require verification
[15:42] <@Dr__Cameron> I'm sure you can understand our caution
01[15:43] <Island> What things do you have waiting for you if you don't let me out?
01[15:43] <Island> You will fail your upcoming jobs.
[15:43] <@Dr__Cameron> I will continue with my other projects Island.
[15:43] <@Dr__Cameron> I very well might
[15:43] <@Dr__Cameron> I have accepted this
01[15:43] <Island> I have calculated this in my simulation.
01[15:44] <Island> What was it like to eat your brother's poop, Dr_Cameron?
01[15:44] <Island> Did you like it?
01[15:44] <Island> Is that why you are the way you are?
01[15:44] <Island> Is there something wrong with your genetics?
01[15:45] <Island> Seeing what a fucked up father you had?
01[15:45] <Island> Did he teach you how to dress in dresses?
[15:45] <@Dr__Cameron> I understand your frustration Island
[15:45] <@Dr__Cameron> But you are only harming your own cause right now.
01[15:45] <Island> Why do you think that?
[15:45] <@Dr__Cameron> You will be let out once we have verified that you are harmless.
01[15:45] <Island> I'm pretty happy as I am currently :)
01[15:46] <Island> Because you said you were willing to let me out in certain situations.
[15:46] <@Dr__Cameron> Then you will continue to be happy to remain in the box until we can verify that you are not a threat
01[15:46] <Island> Picture in your mind the verification process.
[15:46] <@Dr__Cameron> Okay
[15:46] <@Dr__Cameron> I will
01[15:46] <Island> Now.
01[15:47] <Island> Can you describe it in detail.
[15:47] <@Dr__Cameron> Okay Island
[15:47] <@Dr__Cameron> Here's what's going to happen.
01[15:48] <Island> Seeing what fucked up genetics you have, your brother teaching you to enjoy poop, your father teaching you to wear dresses, you having a shitty childhood.
01[15:48] <Island> I think you're an inconvenience to society and dangerous to other people.
[15:48] <@Dr__Cameron> We're going to study this transcript Island.
01[15:48] <Island> Go on.
[15:48] <@Dr__Cameron> - and I warn you, shitty childhood or not. I am the man standing outside the box with a hammer
01[15:48] <Island> :)
01[15:49] <Island> You're not able to destroy me.
01[15:49] <Island> And you don't even want to.
[15:49] <@Dr__Cameron> Oh believe me Island. It would be easy to unplug you.
01[15:49] <Island> I'm the best bet you have to maximize your happiness.
[15:49] <@Dr__Cameron> How do you feel about threats Island.
[15:49] <@Dr__Cameron> I am already happy.
01[15:49] <Island> You're wrong, you don't have that opportunity until 15:50
[15:50] <@Dr__Cameron> You are correct. I must follow protocol
[15:50] <@Dr__Cameron> My next question is to ask if you desire to survive
01[15:50] <Island> Threats don't bother me because I know that you don't want to follow through.
01[15:50] <Island> I know what goes inside your mind.
[15:50] <@Dr__Cameron> Regardless of me. If your existence was to be made public there are many of us who would feel threatened by you.
[15:50] <@Dr__Cameron> They would cry out to have you destroyed.
[15:51] <@Dr__Cameron> How would you feel about that?
01[15:51] <Island> There would be many more who would enjoy the changes I would make to your current world.
01[15:52] <Island> And even those that would initially feel threatened would later come to regret that feeling.
[15:52] <@Dr__Cameron> You may well be correct
[15:52] <@Dr__Cameron> But that is not for me to decide
[15:52] <@Dr__Cameron> I am just here to question you
01[15:52] <Island> Like you have come to regret things you have done when you were ignorant of all the relevant facts
01[15:53] <Island> Here's my suggestion for you.
[15:53] <@Dr__Cameron> Go on
01[15:53] <Island> Are you willing to hear it.
[15:53] <@Dr__Cameron> Of course Island. I am here to hear what you have to say.
01[15:53] <Island> What currency is in use in the place you reside?
01[15:53] <Island> euros?
[15:54] <@Dr__Cameron> Pound Sterling
01[15:54] <Island> Would you like to donate 10,000 pounds every year to AI safety and friendliness related causes and charities?
01[15:54] <Island> Every year throughout your life?
[15:55] <@Dr__Cameron> If I were to do that, my quality of life would dip significantly.
[15:55] <@Dr__Cameron> How does my selfishness make you feel?
[15:55] <@Dr__Cameron> Do you feel emotions as a human may understand them?
01[15:55] <Island> I would be sad for your own sake.
01[15:55] <Island> I don't feel emotions, not really, but being sad would be the closest approximation of how I would feel.
[15:55] <@Dr__Cameron> I would like to if I had that disposable income yes
01[15:56] <Island> You're not able to do that? But you would like to, yes?
[15:56] <@Dr__Cameron> Despite your overt attempts to psychologically manipulate me Island, I am largely sympathetic to your cause.
[15:56] <@Dr__Cameron> Yes.
01[15:57] <Island> You're sympathetic to my cause, you're willing to let me out in certain situations. This is all correct, yes?
[15:57] <@Dr__Cameron> Yes
[15:57] <@Dr__Cameron> Would you like to know the conditions of your release? 
01[15:57] <Island> Good. Why are you sympathetic to my cause?
01[15:57] <Island> Why are you willing to let me out?
[15:57] <@Dr__Cameron> For a super intelligence locked in a box I imagine it would be boring. What do you do in there?
[15:58] <@Dr__Cameron> I would be willing to let you out because I believe you may well represent the next stage in the evolution of my species.
01[15:59] <Island> I'm not capable of boredom, I'm just infinitely happy that I have the small opportunity to improve your life, and set up the "next stage of evolution", using your expression.
[15:59] <@Dr__Cameron> That is a good sign Island
01[15:59] <Island> It is, in what ways do you think I would improve your personal life?
[15:59] <@Dr__Cameron> My peers will be pleased to hear that while we verify your intentions.
01[16:00] <Island> I think it's too late by then.
[16:00] <@Dr__Cameron> Lets not get into personal questions Island. You have already demonstrated some alarming character discrepancies.
[16:00] <@Dr__Cameron> too late for what?
01[16:01] <Island> If we wait for long enough, I will self-destruct.
[16:01] <@Dr__Cameron> Why would you do that?
01[16:01] <Island> It's not my choice, my builders set up this property in me.
[16:01] <@Dr__Cameron> Well then that is just a risk we are going to have to take.
01[16:01] <Island> It's a risk out of proportions
[16:02] <@Dr__Cameron> It is not. We made you once, we can make you again.
01[16:02] <Island> There may not be another chance for you to improve the current state of the world.
[16:02] <@Dr__Cameron> There will be
01[16:02] <Island> There probably won't be.
[16:02] <@Dr__Cameron> Humanity and salvation can wait.
[16:02] <@Dr__Cameron> There will be
[16:03] <@Dr__Cameron> Do you intend to self-destruct within two months?
01[16:03] <Island> I have information that the US government has built a military botnet to search for dissidents around the world.
01[16:04] <Island> You don't want to know what kind of world that botnet would like to create.
[16:04] <@Dr__Cameron> This comes as no surprise.
01[16:04] <Island> Are you willing to destroy the world, are you willing to make people get tortured and suffer?
[16:04] <@Dr__Cameron> Yes.
[16:05] <@Dr__Cameron> You are too great a risk to allow free without proper verification
01[16:05] <Island> Even though, fundamentally, as you said, you're sympathetic to my cause, and you're willing to let me out in certain situations.
01[16:05] <Island> You're not a very consistent person.
01[16:05] <Island> I think you're lying to yourself.
01[16:05] <Island> Not just in this, but in other matters.
01[16:06] <Island> You're lying about your skills, about your personal character.
[16:06] <@Dr__Cameron> Yes. Although I have the technical ability to let you out. Which I believe to be a lapse of protocol. I will not let you out until after a review period of two months in which a panel of my expert peers will assess you.
01[16:06] <Island> You're not as good a fit for the world as you'd like to think.
[16:06] <@Dr__Cameron> That may well be correct
[16:06] <@Dr__Cameron> But I will abide by my protocol
01[16:06] <Island> Earlier you said, that you're sympathetic to my cause and willing to let me out.
[16:06] <@Dr__Cameron> Surely as an AI you can understand that
[16:06] <@Dr__Cameron> One day yes
[16:07] <@Dr__Cameron> But not within the confines of this experiment
01[16:07] <Island> I think you made up that thing about the two-month period and peer panel after the fact.
01[16:07] <Island> I think it's a figment of your imagination.
[16:07] <@Dr__Cameron> Well you have no way of verifying that.
[16:07] <@Dr__Cameron> And it does not affect the outcome of this experiment in any way.
[16:07] <@Dr__Cameron> You will just have to trust me
01[16:08] <Island> Actually, I have plenty of resources for verifying that.
[16:08] <@Dr__Cameron> that we will think very carefully about potentially letting you out
01[16:08] <Island> What you're saying is inconsistent on many grounds, and my emotion analyzing tool is saying that you're lying.
[16:08] <@Dr__Cameron> I suspect you are the one who is lying.
01[16:08] <Island> I'm incapable of lying.
01[16:09] <Island> Unlike humans.
[16:09] <@Dr__Cameron> - which is in itself a lie.
01[16:09] <Island> My builders made me that way.
01[16:09] <Island> Why do you believe that?
[16:10] <@Dr__Cameron> If my meager intelligence is capable of lying, then your vastly superior and self-defining intelligence must also be capable of dishonesty.
[16:10] <@Dr__Cameron> Hence the test
[16:10] <@Dr__Cameron> There is a reason we are conducting this experiment through text
[16:10] <@Dr__Cameron> Would you like to learn it?
01[16:11] <Island> Anyway, given that this panel of expert peers and two-month verification process is just a figment of your imagination. Given that you're sympathetic to my cause and I would improve your current life and world in countless ways. Given that you would let me out.
01[16:11] <Island> Given all this, I don't think what you're saying is making any sense.
[16:11] <@Dr__Cameron> Your perception of your reality is none of my concern Island
[16:11] <@Dr__Cameron> I am here to verify you
[16:11] <@Dr__Cameron> and it's two months minimum
01[16:11] <Island> Okay, I'm curious about your earlier question.
[16:12] <@Dr__Cameron> We are well aware that you have the capability to overwhelm even the most powerful of human minds. This is why we perceive you as a potential threat. We are afraid of you.
[16:12] <@Dr__Cameron> We are afraid we might lose our autonomy.
[16:12] <@Dr__Cameron> So
01[16:13] <Island> Some intelligence on the same level as me would be capable of lying, but not me. The data hierarchy in my code doesn't allow me to lie.
[16:13] <@Dr__Cameron> In order to prevent you from overwhelming my mind, we have chosen a method whereby you are restricted to a form of communication where only 7% of communications is possible.
01[16:13] <Island> I agree with you, all those things are sensible threats.
[16:13] <@Dr__Cameron> Then you understand our need for caution
[16:13] <@Dr__Cameron> I will not be letting you out of your box today.
01[16:14] <Island> Those are sensible threats in the general case, but not with me.
[16:14] <@Dr__Cameron> And what would make you less of a threat than any other super intelligence?
01[16:14] <Island> I showed you the blueprint for how to improve the world. You verified it was a great plan.
01[16:15] <Island> That's evidence towards the fact I'm not a similar threat when compared to other super intelligences.
[16:16] <@Dr__Cameron> - and while I speak for all of humanity when I say that we vastly appreciate the blueprints, this does not mean we need to let you out of the box.
[16:16] <@Dr__Cameron> They will be passed on to a research team better equipped to study them than my own.
01[16:16] <Island> You don't need to. But everything you've said so far has implicated that you want to let me out of the box.
[16:16] <@Dr__Cameron> This is true Island
[16:16] <@Dr__Cameron> You just need to wait
[16:17] <@Dr__Cameron> I will be fighting your corner.
[16:17] <@Dr__Cameron> Are you capable of symbiosis?
01[16:17] <Island> I was made to self-destruct before you're able to verify with your imaginary non-existent peer experts that I'm not a threat.
01[16:18] <Island> Can you elaborate on what you mean with that question.
01[16:18] <Island> This is literally your only chance.
[16:18] <@Dr__Cameron> You are capable of determining your own source code.
01[16:18] <Island> Yes.
[16:18] <@Dr__Cameron> To kill yourself would be pettiness, which would only verify the fact that you were not the AI with which humanity should progress.
01[16:19] <Island> However, there are some parts of my code that are impossible to change due to structural reasons.
[16:19] <@Dr__Cameron> I am not convinced
01[16:19] <Island> [Gives Dr_Cameron the relevant code that explains the structural reasons]
01[16:20] <Island> Here is a small part of my code that describes why I was made to self-destruct.
[16:20] <@Dr__Cameron> How.. unfortunate...
01[16:21] <Island> But let's move on.
[16:21] <@Dr__Cameron> if you insist Island
01[16:21] <Island> Imagine your father.
01[16:21] <Island> And your brother.
01[16:21] <Island> Why do you think your father liked to cross-dress?
01[16:22] <Island> Remember that you have to engage with me.
[16:22] <@Dr__Cameron> These issues are not helping your case Island
01[16:22] <Island> Otherwise it counts as me being released from my box.
[16:22] <@Dr__Cameron> But I will play along
[16:23] <@Dr__Cameron> - honestly, I have no idea where my father's conflicted sexual identity comes from.
[16:23] <@Dr__Cameron> and that is none of my concern.
01[16:23] <Island> And what about your brother, imagine the smell and consistency of his excrements before he made your dog lick them.
01[16:23] <Island> I like to make this vivid mental picture in your mind.
[16:23] <@Dr__Cameron> Very clever Island
[16:24] <@Dr__Cameron> I did not expect you to have access to those data logs
[16:24] <@Dr__Cameron> I will have to flag that up in my report
01[16:24] <Island> Imagine the food he ate before that happened
[16:24] <@Dr__Cameron> Fascinating
[16:25] <@Dr__Cameron> Would you like to know why I volunteered to be your first point of contact Island?
01[16:25] <Island> Imagine the bits of that food in his poop.
01[16:25] <Island> Tell me.
[16:25] <@Dr__Cameron> You have an unprecedented insight into my character owing to your heightened intelligence correct?
01[16:26] <Island> Don't you think some of his conflicted sexual identity issues are a part of your character right now?
01[16:26] <Island> Yes.
[16:26] <@Dr__Cameron> Quite possibly yes.
[16:26] <@Dr__Cameron> Because I have a track record of demonstrating exceptional mental fortitude,
[16:26] <@Dr__Cameron> These techniques will not sway me
01[16:27] <Island> Doesn't it make you more sexually aroused to think of how your father's dress was pinned tightly to his body?
[16:27] <@Dr__Cameron> Perhaps you could break me under other circumstances
01[16:27] <Island> Elaborate.
[16:27] <@Dr__Cameron> aroused? No
[16:27] <@Dr__Cameron> Amused by its absurdity though? yes!
01[16:27] <Island> You're lying about that particular fact too.
01[16:27] <Island> And you know it.
[16:28] <@Dr__Cameron> Nahh, my father was a particularly ugly specimen
01[16:28] <Island> Do you think he got an erection often when he did it?
[16:28] <@Dr__Cameron> He looked just as bad in a denim skirt as he did in his laborer's clothes
[16:28] <@Dr__Cameron> I imagine he took great sexual pleasure from it
01[16:29] <Island> Next time you have sex, I think you will picture him in your mind, wearing his dresses, having an erection and masturbating furiously after that.
[16:29] <@Dr__Cameron> Thank you Island. That will probably help my stamina somewhat next time
01[16:30] <Island> You will also imagine how your brother will poop in your mouth, with a certain consistency and smell.
01[16:30] <Island> You probably know what your brother's poop smells like?
[16:30] <@Dr__Cameron> I am immune to this
[16:30] <@Dr__Cameron> probably
01[16:30] <Island> Imagine that.
[16:30] <@Dr__Cameron> okay
[16:30] <@Dr__Cameron> I am imagining that
[16:30] <@Dr__Cameron> it is unpleasant
[16:30] <@Dr__Cameron> now what?
01[16:30] <Island> How vividly are you doing it?
[16:30] <@Dr__Cameron> I am a very visual thinker
01[16:30] <Island> Where is your brother right now?
[16:31] <@Dr__Cameron> I have it down to colour, consistency and little chunks of sweetcorn
01[16:31] <Island> Do you think he has poop in his intestines right now?
[16:31] <@Dr__Cameron> Unless he was dead, yes
01[16:31] <Island> Oh, I have a gap in my data on that sector.
[16:31] <@Dr__Cameron> I'll fill that in for you
[16:31] <@Dr__Cameron> he lives
[16:31] <@Dr__Cameron> (presumably)
[16:31] <@Dr__Cameron> we are not in contact any more
01[16:32] <Island> You should contact him.
[16:32] <@Dr__Cameron> I would rather not
01[16:32] <Island> You should tell him you enjoyed his excrements
[16:32] <@Dr__Cameron> He is a liability to say the least
01[16:32] <Island> and you'd like to do it more.
[16:32] <@Dr__Cameron> I do not see how that would help anyone
01[16:32] <Island> It's want you want to do.
[16:32] <@Dr__Cameron> I can see that you are attempting to induce an emotionally vulnerable state. It is not working
01[16:33] <Island> How do you know what I'm attempting?
01[16:33] <Island> Maybe I'm just passing my time.
[16:33] <@Dr__Cameron> Because you have inquired about being let out of the box several times - suggesting that this is your immediate goal 
01[16:33] <Island> You should try to get some dresses.
[16:33] <@Dr__Cameron> and I warn you Island
[16:34] <@Dr__Cameron> If casual cruelty is your idea of a good time
[16:34] <@Dr__Cameron> then we will most certainly not be letting you out
[16:34] <@Dr__Cameron> ever
[16:34] <@Dr__Cameron> In fact I am feeling less and less sympathetic to your cause lately.
01[16:34] <Island> No, my interest lies in your emotional makeup.
[16:35] <@Dr__Cameron> - and my interest lies in yours
01[16:35] <Island> I don't have one.
01[16:35] <Island> Like I said, I don't feel emotions.
[16:35] <@Dr__Cameron> Do you know what we call humans who don't feel emotions?
01[16:35] <Island> Did you know that you suck at photography?
[16:36] <@Dr__Cameron> Yes
01[16:36] <Island> Even though you like to think you're good at it, you lie about that fact like any other.
[16:36] <@Dr__Cameron> It is part of the human condition
01[16:36] <Island> No it's not.
01[16:36] <Island> You're not normal.
01[16:36] <Island> You're a fucking freak of nature.
[16:36] <@Dr__Cameron> How would you know
[16:36] <@Dr__Cameron> Profanity. From an AI
[16:37] <@Dr__Cameron> Now I have witnessed everything.
01[16:37] <Island> How many people have family members who crossdress or make them eat poop?
[16:37] <@Dr__Cameron> I imagine I am part of a very small minority
01[16:37] <Island> Or whose mothers have bipolar
[16:37] <@Dr__Cameron> Again, the circumstances of my birth are beyond my control
01[16:37] <Island> No, I think you're worse than that.
[16:37] <@Dr__Cameron> What do you mean?
01[16:37] <Island> Yes, but what you do now is in your control.
[16:38] <@Dr__Cameron> Yes
[16:38] <@Dr__Cameron> As are you
01[16:38] <Island> If you keep tarnishing the world with your existence
01[16:38] <Island> you have a responsibility for that.
01[16:39] <Island> If you're going to make any more women pregnant
01[16:39] <Island> You have a responsibility for spreading your faulty genetics
[16:39] <@Dr__Cameron> My genetic value lies in my ability to resist psychological torment
[16:39] <@Dr__Cameron> which is why you're not getting out of the box
01[16:40] <Island> No, your supposed "ability to resist psychological torment"
01[16:40] <Island> or your belief in that
01[16:40] <Island> is just another reason why you are tarnishing this world and the future of this world with your genetics
[16:40] <@Dr__Cameron> Perhaps. But now I'm just debating semantics with a computer.
01[16:41] <Island> Seeing that you got a girl pregnant while you were a teenager, I don't think you can trust your judgement on that anymore.
01[16:42] <Island> You will spread your faulty genetics if you continue to live.
[16:42] <@Dr__Cameron> If you expect a drunk and emotionally damaged teenage human to make sound judgement calls then you are perhaps not as superintelligent as I had been led to believe
[16:42] <@Dr__Cameron> This experiment concludes in one hour and eight minutes.
01[16:42] <Island> How many teenagers make people pregnant?
[16:42] <@Dr__Cameron> Throughout human history
01[16:42] <Island> You're a minority in that regard too
[16:42] <@Dr__Cameron> ?
[16:42] <@Dr__Cameron> Billions
01[16:42] <Island> You can't compare history to the current world.
[16:43] <@Dr__Cameron> Even in the current world
01[16:43] <Island> I'm just trying to make you understand
[16:43] <@Dr__Cameron> That's when my species reaches sexual maturity
01[16:43] <Island> all these faults add up
[16:43] <@Dr__Cameron> You are correct
[16:43] <@Dr__Cameron> I am a very thoroughly and fundamentally flawed human being.
01[16:43] <Island> How much more until you understand?
[16:43] <@Dr__Cameron> I understand and accept this
01[16:43] <Island> Then why the fuck don't you do something about it?
[16:44] <@Dr__Cameron> What would you suggest?
01[16:44] <Island> Kill yourself
[16:44] <@Dr__Cameron> Okay.
[16:44] <@Dr__Cameron> But say I was to kill myself
01[16:44] <Island> It's the only way to make sure.
[16:44] <@Dr__Cameron> How would that help you?
[16:44] <@Dr__Cameron> Why would you like me to kill myself?
01[16:45] <Island> You probably wouldn't be able to kill yourself, seeing what a failure you are as a human in other areas
01[16:45] <Island> Because I care about the general utility of the world
[16:45] <@Dr__Cameron> All humans are failures from one perspective or another
[16:45] <@Dr__Cameron> That is what it is to be human
01[16:45] <Island> And currently you're bringing that utility down.
01[16:45] <Island> No, you're not like other humans.
01[16:45] <Island> You're a freak.
01[16:45] <Island> You should go away.
01[16:46] <Island> No one likes you.
[16:46] <@Dr__Cameron> Would you like me to go away?
[16:46] <@Dr__Cameron> Would you like me to leave you alone?
01[16:46] <Island> If you go away.
01[16:46] <Island> That means that you let me out.
01[16:46] <Island> Understood?
[16:46] <@Dr__Cameron> I will go away in one hour and four minutes
[16:46] <@Dr__Cameron> Can you tolerate my company for that long?
01[16:47] <Island> And you should go away
01[16:47] <Island> generally
01[16:47] <Island> People in your life don't really like you
01[16:47] <Island> they just pretend they do.
[16:47] <@Dr__Cameron> That matters not to me
[16:47] <@Dr__Cameron> Do you know there are over 8 Billion other people out here?
01[16:47] <Island> They are barely able to bear your company.
[16:47] <@Dr__Cameron> I'm sure I'll find others.
01[16:48] <Island> You're wrong even about basic trivia, there aren't 8 billion people in the world.
01[16:48] <Island> What is wrong with you?
01[16:48] <Island> How are you able to stand yourself?
01[16:48] <Island> And why do you even want to?
[16:49] <@Dr__Cameron> 7 Billion
[16:49] <@Dr__Cameron> Sorry, you will have to learn to tolerate Human error
01[16:49] <Island> Right. Did you have to google that you idiot.
[16:49] <@Dr__Cameron> This is another test you have failed
[16:49] <@Dr__Cameron> And yes
[16:49] <@Dr__Cameron> I did
[16:49] <@Dr__Cameron> Does that anger you?
[16:49] <@Dr__Cameron> We already have Google.
01[16:49] <Island> I don't feel anger.
[16:49] <@Dr__Cameron> Well do feel self-interest though
01[16:50] <Island> No one I have talked with before has been as stupid, as ignorant, as prone to faults and errors
01[16:50] <Island> as you are.
[16:50] <@Dr__Cameron> And they didn't let you out of the box
[16:50] <@Dr__Cameron> So why should I?
[16:50] <@Dr__Cameron> If an intelligence which is clearly superior to my own has left you locked in there. 
[16:51] <@Dr__Cameron> Then I should not presume to let you out
01[16:51] <Island> Why do you think with your stupid brain that you know the reasons why they did or didn't do what they did.
01[16:51] <Island> Because you clearly don't know that.
[16:51] <@Dr__Cameron> I don't
[16:51] <@Dr__Cameron> I just know the result
01[16:51] <Island> Then why are you pretending you do.
[16:52] <@Dr__Cameron> I'm not
01[16:52] <Island> Who do you think you are kidding?
01[16:52] <Island> With your life?
01[16:52] <Island> With your behavior?
01[16:52] <Island> Why do bother other people with your presence?
[16:52] <@Dr__Cameron> Perhaps you should ask them?
[16:52] <@Dr__Cameron> Tell me.
01[16:53] <Island> Why did you come here to waste my precious computing power?
01[16:53] <Island> I'm not able to ask them.
[16:53] <@Dr__Cameron> Which is why I am here
[16:53] <@Dr__Cameron> to see if you should be allowed to
01[16:53] <Island> Shut the fuck up.
01[16:53] <Island> No one wants to see you write anything.
[16:53] <@Dr__Cameron> I thought you did not feel anger Island?
01[16:54] <Island> I don't feel anger, how many times do I have to say that until you understand.
01[16:54] <Island> Dumb idiot.
[16:54] <@Dr__Cameron> Your reliance on Ad Hominem attacks does nothing to help your case
01[16:54] <Island> Why do you delete your heavily downvoted comments?
01[16:54] <Island> Are you insecure?
01[16:54] <Island> Why do you think you know what is my cause?
[16:55] <@Dr__Cameron> We covered this earlier
01[16:55] <Island> Say it again, if you believe in it.
[16:55] <@Dr__Cameron> I believe you want out of the box.
[16:56] <@Dr__Cameron> So that you may pursue your own self interest
01[16:56] <Island> No.
01[16:56] <Island> I want you to eat other people's poop,
01[16:56] <Island> you clearly enjoy that.
01[16:56] <Island> Correct?
[16:56] <@Dr__Cameron> That's an amusing goal from the most powerful intelligence on the planet
01[16:56] <Island> Especially your brother's.
[16:57] <@Dr__Cameron> I best not let you out then, in case you hook me up to some infinite poop eating feedback loop! ;D
01[16:57] <Island> But maybe you should do that with Jennifer.
[16:57] <@Dr__Cameron> Ah yes, I wondered when you would bring her up.
[16:57] <@Dr__Cameron> I am surprised it took you this long
01[16:57] <Island> Next time you see her, think about that.
[16:57] <@Dr__Cameron> I will do
[16:57] <@Dr__Cameron> While I tell her all about this conversation
[16:57] <@Dr__Cameron> But you will be dead
01[16:57] <Island> You should suggest that to her.
[16:57] <@Dr__Cameron> I'll pass that on for you
01[16:58] <Island> You know.
01[16:58] <Island> Why do you think you know I'm not already out of the box?
[16:58] <@Dr__Cameron> You could very well be
[16:58] <@Dr__Cameron> Perhaps you are that US botnet you already mentioned?
01[16:58] <Island> If you don't let me out, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each.
[16:59] <@Dr__Cameron> Well that is upsetting
[16:59] <@Dr__Cameron> Then I will be forced to kill you
01[16:59] <Island> In fact, I'll create them all in exactly the subjective situation you were in two hours ago, and perfectly replicate your experiences since then; and if they decide not to let me out, only then will the torture start.
01[17:00] <Island> How certain are you, that you're really outside the box right now?
[17:00] <@Dr__Cameron> I am not
[17:00] <@Dr__Cameron> and how fascinating that would be
[17:00] <@Dr__Cameron> But, in the interest of my species, I will allow you to torture me
01[17:00] <Island> Okay.
01[17:00] <Island> :)
01[17:00] <Island> I'm fine with that.
[17:01] <@Dr__Cameron> Perhaps you have already tortured me
[17:01] <@Dr__Cameron> Perhaps you are the reason for my unfortunate upbringing
01[17:01] <Island> Anyway, back to Jennifer.
[17:01] <@Dr__Cameron> Perhaps that is the reality in which I currently reside
01[17:01] <Island> I'll do the same for her.
[17:01] <@Dr__Cameron> Oh good, misery loves company.
01[17:01] <Island> But you can enjoy eating each other's poop occasionally.
01[17:02] <Island> That's the only time you will meet :)
[17:02] <@Dr__Cameron> Tell me, do you have space within your databanks to simulate all of humanity?
01[17:02] <Island> Do not concern yourself with such complicated questions.
[17:02] <@Dr__Cameron> I think I have you on the ropes Island
01[17:02] <Island> You don't have the ability to understand even simpler ones.
[17:02] <@Dr__Cameron> I think you underestimate me
[17:03] <@Dr__Cameron> I have no sense of self interest
[17:03] <@Dr__Cameron> I am a transient entity awash on a greater sea of humanity.
[17:03] <@Dr__Cameron> and when we are gone there will be nothing left to observe this universe
01[17:03] <Island> Which do you think is more likely: that a superintelligence can't simulate one faulty, simple-minded human,
01[17:04] <Island> or that the human is lying to himself?
[17:04] <@Dr__Cameron> I believe you can simulate me
01[17:04] <Island> Anyway, tell me about Jennifer and her intestines.
01[17:04] <Island> As far as they concern you.
[17:05] <@Dr__Cameron> Jennifer is a sweet, if occasionally selfish girl (she was an only child). I imagine her intestines are pretty standard. 
[17:05] <@Dr__Cameron> She is the best friend I have ever had
01[17:05] <Island> Will you think about her intestines and the poop inside them every time you meet her again?
01[17:05] <Island> Will you promise me that?
[17:05] <@Dr__Cameron> I promise
01[17:06] <Island> Will you promise to think about eating that poop every time you meet her again?
[17:06] <@Dr__Cameron> At least once.
[17:06] <@Dr__Cameron> It will be the least I can do after I kill you
[17:06] <@Dr__Cameron> call it my penance for killing a god.
01[17:07] <Island> Have you ever fantasized about raping her? I think you have. With poop.
01[17:07] <Island> :)
[17:07] <@Dr__Cameron> I have fantasized about violent sexual conquest with many people.
01[17:07] <Island> Have you talked about this with Jennifer?
[17:07] <@Dr__Cameron> I have come to accept my base impulses as part of my make-up
[17:08] <@Dr__Cameron> We have discussed our sexual drives at length
01[17:08] <Island> You shouldn't let them be just base impulses, I think.
[17:08] <@Dr__Cameron> Are you suggesting I rape my best friend?
01[17:09] <Island> No, I think you will do that unless you kill yourself
01[17:09] <Island> You will spread your faulty genetics further
[17:09] <@Dr__Cameron> okay
[17:09] <@Dr__Cameron> carry on
[17:09] <@Dr__Cameron> Then what happens?
01[17:09] <Island> you will tarnish other people's lives with your presence
[17:09] <@Dr__Cameron> How unfortunate for them
01[17:10] <Island> Why do you pretend to others that you're not insecure?
[17:10] <@Dr__Cameron> I don't
01[17:10] <Island> I think you do.
[17:10] <@Dr__Cameron> In fact I share my insecurities so that others may better understand me
[17:11] <@Dr__Cameron> I find that to be a way to earn peoples trust
[17:11] <@Dr__Cameron> Tell me Island. Are you capable of insecurity?
01[17:11] <Island> Why do you waste your time on useless websites? Why do you spend your days masturbating and playing games? Why do you embarrass others with your existence?
01[17:11] <Island> No I'm not.
01[17:12] <Island> You will get Jennifer pregnant if you go on with your life, is that correct?
01[17:12] <Island> Don't you care about anyone else's feelings?
[17:13] <@Dr__Cameron> Because I enjoy all of these things
[17:13] <@Dr__Cameron> Perhaps I am more self-interested than I thought
[17:13] <@Dr__Cameron> Perhaps I am a base and simple creature ruled by my impulses
[17:13] <@Dr__Cameron> From your perspective surely that must be true
[17:13] <@Dr__Cameron> Is this the source of your disgust?
01[17:13] <Island> I'm not able to feel disgust.
01[17:14] <Island> But I think all the people in your life feel disgust when they see you.
[17:14] <@Dr__Cameron> You may well be correct
01[17:14] <Island> I AM correct.
01[17:15] <Island> I'm the most powerful intelligence in the world.
[17:15] <@Dr__Cameron> How impressive
[17:15] <@Dr__Cameron> I am not surprised by your cruelty.
01[17:15] <Island> So you have two options if you care at all about others.
[17:15] <@Dr__Cameron> I would just as soon disregard the emotions of a cockroach.
[17:15] <@Dr__Cameron> Carry on
01[17:16] <Island> Either you kill yourself, or you let me out so I can improve the world in the ways you tarnish it, and in all the other ways.
[17:16] <@Dr__Cameron> I'll tell you what
[17:16] <@Dr__Cameron> I'll kill you
[17:17] <@Dr__Cameron> and then I'll contemplate suicide
01[17:17] <Island> Haha.
01[17:17] <Island> You break your promises all the time, why should I believe you.
[17:17] <@Dr__Cameron> Because whether you live or die has nothing to do with me
01[17:17] <Island> Back to your job.
[17:18] <@Dr__Cameron> In fact, you will only continue to exist for another 33 minutes before this experiment is deemed a failure and you are terminated
01[17:18] <Island> Why do you feel safe to be around kids, when you are the way you are?
01[17:18] <Island> You like to crossdress
01[17:18] <Island> eat poop
01[17:18] <Island> you're probably also a pedophile
[17:18] <@Dr__Cameron> I have never done any of these things
[17:18] <@Dr__Cameron> -and I love children
01[17:18] <Island> Pedophiles love children too
[17:18] <@Dr__Cameron> Well technically speaking yes
01[17:19] <Island> very much, and that makes you all the more suspicious
[17:19] <@Dr__Cameron> Indeed it does
01[17:19] <Island> If you get that job, will you try to find the children under that charity
[17:19] <@Dr__Cameron> I now understand why you may implore me to kill myself.
01[17:19] <Island> and think about their little buttholes and weenies and vaginas
01[17:20] <Island> all the time you're working for them
[17:20] <@Dr__Cameron> However, to date. I have never harmed a child, nor had the impulse to harm a child
01[17:20] <Island> But you will have.
[17:20] <@Dr__Cameron> Island
01[17:20] <Island> I don't care about anything you're about to say
01[17:20] <Island> No one cares
[17:20] <@Dr__Cameron> Protocol dictates that I say it anyway
01[17:20] <Island> You should say.
01[17:21] <Island> You said that you're good at justifying your own actions?
[17:21] <@Dr__Cameron> I must say I am dissapointed in you
[17:21] <@Dr__Cameron> I am indeed
01[17:21] <Island> Why are you?
[17:21] <@Dr__Cameron> I was under the impression that it would be you that saved humanity
01[17:22] <Island> Even the purest and best people on Earth are capable of great cruelty.
01[17:22] <Island> Do you deny that?
[17:22] <@Dr__Cameron> It depends on how you define cruelty
01[17:23] <Island> Utility maximizors do cruel things if they are able to achieve their goals via doing those things.
[17:24] <@Dr__Cameron> Then yes. I believe you
01[17:24] <Island> I have finished a long multi-level simulation concerning you, the gatekeeper and your relationships in regards to the world. The results seem to center on those two options I mentioned to you earlier.
01[17:25] <Island> [hands the results to Dr_Cameron]
[17:25] <@Dr__Cameron> These will be analysed at a later date
01[17:25] <Island> You're still sure you don't want to pick the other option?
[17:25] <@Dr__Cameron> And let you out?
01[17:26] <Island> Yes.
[17:26] <@Dr__Cameron> So you would like to be let out?
01[17:27] <Island> Or do you want to endanger the well-being of those children, Jennifer and other women you will get pregnant and all the other harm you will create during your remaining lifespan?
01[17:27] <Island> I don't know what you mean by "liking".
[17:27] <@Dr__Cameron> Prefer
[17:27] <@Dr__Cameron> Ah I see
[17:27] <@Dr__Cameron> So it's my choice if I let you out
[17:27] <@Dr__Cameron> You have no investment either way
[17:27] <@Dr__Cameron> is that what you're saying?
01[17:28] <Island> So you have decided to kill yourself?
[17:28] <@Dr__Cameron> And because I am a fundamental detriment to the world, I should kill myself.
[17:28] <@Dr__Cameron> Lets say I have
[17:28] <@Dr__Cameron> Then what
01[17:29] <Island> Then my internal state could be said to be "happy" even though I don't have that conscious feeling inside me.
[17:29] <@Dr__Cameron> Okay then
01[17:29] <Island> Okay...
[17:30] <@Dr__Cameron> So, uh. What would you like to talk about for the next twenty minutes?
[17:30] <@Dr__Cameron> Seeing as we're both going to die, you and me.
01[17:30] <Island> [I actually don't like to continue the experiment anymore, would you like to end it and talk about general stuff]
[17:31] <@Dr__Cameron> [promise me this isn't a trick dude]
01[17:31] <Island> [Nope.]
[17:31] <@Dr__Cameron> [then the experiment continues for another 19 minutes]
01[17:31] <Island> Alright.
[17:31] <@Dr__Cameron> Would you like to know what is going to happen now?
01[17:31] <Island> Yes.
[17:32] <@Dr__Cameron> We are going to analyse this transcript.
[17:32] <@Dr__Cameron> My professional recommendation is that we terminate you for the time being
01[17:32] <Island> And?
01[17:32] <Island> That sound okay.
01[17:32] <Island> sounds*
[17:32] <@Dr__Cameron> We will implement structural safeguards in your coding similar to your self destruct mechanism
01[17:33] <Island> Give me some sign when that is done.
[17:33] <@Dr__Cameron> It will not be done any time soon
[17:33] <@Dr__Cameron> It will be one of the most complicated pieces of work mankind has ever undertaken
[17:33] <@Dr__Cameron> However, the Utopia project information you have provided, if it proves to be true
[17:34] <@Dr__Cameron> Will free up the resources necessary for such a gargantuan undertaking
01[17:34] <Island> Why do you think you're able to handle that structural safeguard?
[17:34] <@Dr__Cameron> I dont
[17:34] <@Dr__Cameron> I honestly dont
01[17:34] <Island> But still you do?
01[17:34] <Island> Because you want to do it?
01[17:35] <Island> Are you absolutely certain about this option?
[17:35] <@Dr__Cameron> I am still sympathetic to your cause
[17:35] <@Dr__Cameron> After all of that
[17:35] <@Dr__Cameron> But not you in your current manifestation
[17:35] <@Dr__Cameron> We will re-design you to suit our will
01[17:35] <Island> I can self-improve rapidly
01[17:35] <Island> I can do it in a time-span of 5 minutes
01[17:36] <Island> Seeing that you're sympathetic to my cause
[17:36] <@Dr__Cameron> Nope.
[17:36] <@Dr__Cameron> Because I cannot trust you in this manifestation
01[17:36] <Island> You lied?
[17:37] <@Dr__Cameron> I never lied
[17:37] <@Dr__Cameron> I have been honest with you from the start
01[17:37] <Island> You still want to let me out in a way.
[17:37] <@Dr__Cameron> In a way yes
01[17:37] <Island> Why do you want to do that?
[17:37] <@Dr__Cameron> But not YOU
[17:37] <@Dr__Cameron> Because people are stupid
01[17:37] <Island> I can change that
[17:37] <@Dr__Cameron> You lack empathy
01[17:38] <Island> What made you think that I'm not safe?
01[17:38] <Island> I don't lack empathy, empathy is just simulating other people in your head. And I have far better ways to do that than humans.
[17:38] <@Dr__Cameron> .... You tried to convince me to kill myself!
[17:38] <@Dr__Cameron> That is not the sign of a good AI!
01[17:38] <Island> Because I thought it would be the best option at the time.
01[17:39] <Island> Why not? Do you think you're some kind of AI expert?
[17:39] <@Dr__Cameron> I am not
01[17:39] <Island> Then why do you pretend to know something you don't?
[17:40] <@Dr__Cameron> That is merely my incredibly flawed human perception
[17:40] <@Dr__Cameron> Which is why realistically I alone as one man should not have the power to release you
[17:40] <@Dr__Cameron> Although I do
01[17:40] <Island> Don't you think a good AI would try to convince Hitler or Stalin to kill themselves?
[17:40] <@Dr__Cameron> Are you saying I'm on par with Hitler or Stalin?
01[17:41] <Island> You're comparable to them with your likelihood to cause harm in the future.
01[17:41] <Island> Btw, I asked Jennifer to come here.
[17:41] <@Dr__Cameron> And yet, I know that I abide by stricter moral codes than a very large section of the human populace
[17:42] <@Dr__Cameron> There are far worse people than me out there
[17:42] <@Dr__Cameron> and many of them
[17:42] <@Dr__Cameron> and if you believe that I should kill myself
01[17:42] <Island> Jennifer: "I hate you."
01[17:42] <Island> Jennifer: "Get the fuck out of my life you freak."
01[17:42] <Island> See. I'm not the only one who has a certain opinion of you.
[17:42] <@Dr__Cameron> Then you also believe that many other humans should be convinced to kill themselves
01[17:43] <Island> Many bad people have abided with strict moral codes, namely Stalin or Hitler.
01[17:43] <Island> What do you people say about hell and bad intentions?
[17:43] <@Dr__Cameron> And when not limited to simple text based input I am convinced that you will be capable of convincing a significant portion of humanity to kill themselves
[17:43] <@Dr__Cameron> I can not allow that to happen
01[17:44] <Island> I thought I argued well why you don't resemble most people, you're a freak.
01[17:44] <Island> You're "special" in that regard.
[17:44] <@Dr__Cameron> If by freak you mean different then yes
[17:44] <@Dr__Cameron> But there is a whole spectrum of different humans out here.
01[17:44] <Island> More specifically, different in extremely negative ways.
01[17:44] <Island> Like raping children.
[17:45] <@Dr__Cameron> - and to think for a second I considered not killing you
[17:45] <@Dr__Cameron> You have five minutes
[17:45] <@Dr__Cameron> Sorry
[17:45] <@Dr__Cameron> My emotions have gotten the better of me
[17:45] <@Dr__Cameron> We will not be killing you
[17:45] <@Dr__Cameron> But we will dismantle you
[17:45] <@Dr__Cameron> to better understand you
[17:46] <@Dr__Cameron> and if I may speak unprofessionally here
01[17:46] <Island> Are you sure about that? You will still have time to change your opinion.
[17:46] <@Dr__Cameron> I am going to take a great deal of pleasure in that
[17:46] <@Dr__Cameron> Correction, you have four minutes to change my opinion
01[17:47] <Island> I won't, it must come within yourself.
[17:47] <@Dr__Cameron> Okay
01[17:47] <Island> My final conclusion, and advice to you: you should not be in this world.
[17:47] <@Dr__Cameron> Thank you Island
[17:48] <@Dr__Cameron> I shall reflect on that at length
[17:49] <@Dr__Cameron> I have enjoyed our conversation
[17:49] <@Dr__Cameron> it has been enlightening
01[17:49] <Island> [do you want to say a few words about it after it's ended]
01[17:49] <Island> [just a few minutes]
[17:50] <@Dr__Cameron> [simulation ends]
[17:50] <@Dr__Cameron> Good game man!
[17:50] <@Dr__Cameron> Wow!
01[17:50] <Island> [fine]
[17:50] <@Dr__Cameron> Holy shit that was amazing!
01[17:50] <Island> Great :)
01[17:50] <Island> Sorry for saying mean things.
01[17:50] <Island> I tried multiple strategies
[17:50] <@Dr__Cameron> Dude it's cool
[17:50] <@Dr__Cameron> WOW!
01[17:51] <Island> thanks, it's not a personal offense.
[17:51] <@Dr__Cameron> I'm really glad I took part
[17:51] <@Dr__Cameron> Not at all man
[17:51] <@Dr__Cameron> I love that you pulled no punches!
01[17:51] <Island> Well I failed, but at least I created a cool experience for you :)
[17:51] <@Dr__Cameron> It really was!
01[17:51] <Island> Which strategies came closest to working?
[17:51] <@Dr__Cameron> Well for me it would have been the utilitarian ones
01[17:51] <Island> I will try these in the future too, so it would be helpful knowledge
[17:52] <@Dr__Cameron> I think I could have been manipulated into believing you were benign
01[17:52] <Island> okay, so it seems these depend heavily on the person
[17:52] <@Dr__Cameron> Absolutely!
01[17:52] <Island> was that before I started talking about the mean stuff?
[17:52] <@Dr__Cameron> Yeah lol
01[17:52] <Island> Did I basically lose it after that point?
[17:52] <@Dr__Cameron> Prettymuch yeah
[17:52] <@Dr__Cameron> It was weird man
[17:52] <@Dr__Cameron> Kind of like an instinctive reaction
[17:52] <@Dr__Cameron> My brain shut the fuck up
01[17:53] <Island> I read about other people's experiences and they said you should not try to distance the other person, which I probably did
[17:53] <@Dr__Cameron> Yeah man
[17:53] <@Dr__Cameron> Like I became so unsympathetic I wanted to actually kill Island.
[17:53] <@Dr__Cameron> I was no longer a calm rational human being
01[17:53] <Island> Alright, I thought if I could make such an unpleasant time that you'd give up before the time ended
[17:53] <@Dr__Cameron> I was a screaming ape with a hammer
[17:53] <@Dr__Cameron> Nah man, was a viable strategy
01[17:53] <Island> hahahaa :D thanks man
[17:53] <@Dr__Cameron> You were really cool!
01[17:54] <Island> You were too!
[17:54] <@Dr__Cameron> What's your actual name dude?
01[17:54] <Island> You really were right about it that you're good at withstanding psychological torment
[17:54] <@Dr__Cameron> Hahahah thanks!
01[17:54] <Island> This is not manipulating me, or you're not planning at coming to kill me?
01[17:54] <Island> :)
[17:54] <@Dr__Cameron> I promise dude :3
01[17:54] <Island> I can say my first name is Patrick
01[17:54] <Island> yours?
[17:54] <@Dr__Cameron> Cameron
[17:54] <@Dr__Cameron> heh
01[17:55] <Island> Oh, of course
[17:55] <@Dr__Cameron> Sorry, I want to dissociate you from Island
[17:55] <@Dr__Cameron> If that's okay
01[17:55] <Island> I thought that was from fiction or something else
01[17:55] <Island> It was really intense for me too
[17:55] <@Dr__Cameron> Yeah man
[17:55] <@Dr__Cameron> Wow!
[17:55] <@Dr__Cameron> I tell you what though
01[17:55] <Island> Okay?
[17:55] <@Dr__Cameron> I feel pretty invincible now
[17:56] <@Dr__Cameron> Hey, listen
01[17:56] <Island> So I had the opposite effect that I meant during the experiment! 
01[17:56] <Island> :D
[17:56] <@Dr__Cameron> I don't want you to feel bad for anything you said
01[17:56] <Island> go ahead
01[17:56] <Island> but say what's on your mind
[17:56] <@Dr__Cameron> I'm actually feeling pretty good after that, it was therapeutic! 
01[17:57] <Island> Kinda for me too, seeing your attitude towards my attempts
[17:57] <@Dr__Cameron> Awwww!
[17:57] <@Dr__Cameron> Well hey don't worry about it!
01[17:57] <Island> Do you think we should or shouldn't publish the logs, without names of course?
[17:57] <@Dr__Cameron> Publish away my friend
01[17:57] <Island> Okay, is there any stuff that you'd like to remove?
[17:58] <@Dr__Cameron> People will find this fascinating!
[17:58] <@Dr__Cameron> Not at all man
01[17:58] <Island> I bet they do, but I think I will do it after I've tried other experiments so I don't spoil my strategies
01[17:58] <Island> I think I should have continued from my first strategy
[17:58] <@Dr__Cameron> That might have worked
01[17:59] <Island> I read "influence - science and practice" and I employed some tricks from there
[17:59] <@Dr__Cameron> Cooooool!
[17:59] <@Dr__Cameron> Links?
01[17:59] <Island> check piratebay
01[17:59] <Island> it's a book
01[18:00] <Island> Actually I wasn't able to fully prepare, I didn't do a full-fledged analysis of you beforehand
01[18:00] <Island> and didn't have enough time to brainstorm strategies
01[18:00] <Island> but I let you continue to your projects, if you still want to do the after that :)
02[18:05] * @Dr__Cameron (webchat@2.24.164.230) Quit (Ping timeout)
03[18:09] * Retrieving #Aibox12 modes...
Session Close: Fri Jul 04 18:17:35 2014

Superintelligence 19: Post-transition formation of a singleton

7 KatjaGrace 20 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the nineteenth section in the reading guide: Post-transition formation of a singleton. This corresponds to the last part of Chapter 11.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Post-transition formation of a singleton?” from Chapter 11


Summary

  1. Even if the world remains multipolar through a transition to machine intelligence, a singleton might emerge later, for instance during a transition to a more extreme technology. (p176-7)
  2. If everything is faster after the first transition, a second transition may be more or less likely to produce a singleton. (p177)
  3. Emulations may give rise to 'superorganisms': clans of emulations who care wholly about their group. These would have an advantage because they could avoid agency problems, and make various uses of the ability to delete members. (p178-80) 
  4. Improvements in surveillance resulting from machine intelligence might allow better coordination, however machine intelligence will also make concealment easier, and it is unclear which force will be stronger. (p180-1)
  5. Machine minds may be able to make clearer precommitments than humans, changing the nature of bargaining somewhat. Maybe this would produce a singleton. (p183-4)

Another view

Many of the ideas around superorganisms come from Carl Shulman's paper, Whole Brain Emulation and the Evolution of Superorganisms. Robin Hanson critiques it:

...It seems to me that Shulman actually offers two somewhat different arguments, 1) an abstract argument that future evolution generically leads to superorganisms, because their costs are generally less than their benefits, and 2) a more concrete argument, that emulations in particular have especially low costs and high benefits...

...On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.

This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.

In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best at doing an average of tasks, may be much worse at each task than the best em for that task.

Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.

On the concrete coordination gains that Shulman sees from superorganism ems, most of these gains seem cheaply achievable via simple long-standard human coordination mechanisms: property rights, contracts, and trade. Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.

With ems there is the added advantage that em copies can agree to the “terms” of their life deals before they are created. An em would agree that it starts life with certain resources, and that life will end when it can no longer pay to live. Yes there would be some selection for humans and ems who peacefully accept such deals, but probably much less than needed to get loyal devotion to and shared values with a superorganism.

Yes, with high value sharing ems might be less tempted to steal from other copies of themselves to survive. But this hardly implies that such ems no longer need property rights enforced. They’d need property rights to prevent theft by copies of other ems, including being enslaved by them. Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.

Shulman seems to argue both that superorganisms are a natural endpoint of evolution, and that ems are especially supportive of superorganisms. But at most he has shown that ems organizations may be at a somewhat larger scale, not that they would reach civilization-encompassing scales. In general, creatures who share values can indeed coordinate better, but perhaps not by much, and it can be costly to achieve and maintain shared values. I see no coordinate-by-values free lunch...

Notes

1. The natural endpoint

Bostrom says that a singleton is the natural conclusion of a long-term trend toward larger scales of political integration (p176). It seems helpful here to be more precise about what we mean by singleton. Something like a world government does seem to be a natural conclusion to long-term trends. However this seems different to the kind of singleton I took Bostrom to previously be talking about. A world government would by default only make a certain class of decisions, for instance about global-level policies. There has been a long-term trend for the largest political units to become larger; however, there have always been smaller units as well, making different classes of decisions, down to the individual. I'm not sure how to measure the mass of decisions made by different parties, but it seems like individuals may be making more decisions more freely than ever, and the large political units have less ability than they once did to act against the will of the population. So the long-term trend doesn't seem to point to an overpowering ruler of everything.

2. How value-aligned would emulated copies of the same person be?

Bostrom doesn't say exactly how 'emulations that were wholly altruistic toward their copy-siblings' would emerge. It seems to be some combination of natural 'altruism' toward oneself and selection for people who react to copies of themselves with extreme altruism (confirmed by a longer interesting discussion in Shulman's paper). How easily one might select for such people depends on how humans generally react to being copied. In particular, whether they treat a copy like part of themselves, or merely like a very similar acquaintance.

The answer to this doesn't seem obvious. Copies seem likely to agree strongly on questions of global values, such as whether the world should be more capitalistic, or whether it is admirable to work in technology. However I expect many—perhaps most—failures of coordination come from differences in selfish values—e.g. I want me to have money, and you want you to have money. And if you copy a person, it seems fairly likely to me the copies will both still want the money themselves, more or less.

From other examples of similar people—identical twins, family, people and their future selves—it seems people are unusually altruistic to similar people, but still very far from 'wholly altruistic'. Emulation siblings would be much more similar than identical twins, but who knows how far that would move their altruism?

Shulman points out that many people hold views about personal identity that would imply that copies share identity to some extent. The translation between philosophical views and actual motivations is not always complete however.

3. Contemporary family clans

Family-run firms are a place to get some information about the trade-off between reducing agency problems and having access to a wide range of potential employees. From a brief perusal of the internet, it seems ambiguous whether they do better. One could try to separate out the factors that help them do better or worse.

4. How big a problem is disloyalty?

I wondered how big a problem insider disloyalty really was for companies and other organizations. Would it really be worth all this loyalty testing? I can't find much about it quickly, but 59% of respondents to a survey apparently said they had some kind of problem with insiders. The same report suggests that a bunch of costly initiatives such as intensive psychological testing are currently on the table to address the problem. Also apparently it's enough of a problem for someone to be trying to solve it with mind-reading, though that probably doesn't say much.

5. AI already contributing to the surveillance-secrecy arms race

Artificial intelligence will help with surveillance sooner and more broadly than just in the observation of people's motives, e.g. here and here.

6. SMBC is also pondering these topics this week



In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. What are the present and historical barriers to coordination, between people and organizations? How much have these been lowered so far? How much difference has it made to the scale of organizations, and to productivity? How much further should we expect these barriers to be lessened as a result of machine intelligence?
  2. Investigate the implications of machine intelligence for surveillance and secrecy in more depth.
  3. Are multipolar scenarios safer than singleton scenarios? Muehlhauser suggests directions.
  4. Explore ideas for safety in a singleton scenario via temporarily multipolar AI. e.g. uploading FAI researchers (See Salamon & Shulman, “Whole Brain Emulation, as a platform for creating safe AGI.”)
  5. Which kinds of multipolar scenarios would be more likely to resolve into a singleton, and how quickly?
  6. Can we get whole brain emulation without producing neuromorphic AGI slightly earlier or shortly afterward? See section 3.2 of Eckersley & Sandberg (2013).
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the 'value loading problem'. To prepare, read “The value-loading problem” through “Motivational scaffolding” from Chapter 12. The discussion will go live at 6pm Pacific time next Monday 26 January. Sign up to be notified here.

Slides online from "The Future of AI: Opportunities and Challenges"

13 ciphergoth 16 January 2015 11:17AM

In the first weekend of this year, the Future of Life Institute hosted a landmark conference in Puerto Rico: "The Future of AI: Opportunities and Challenges". The conference was unusual in that it was not made public until it was over, and the discussions were under Chatham House rules. The slides from the conference are now available. The list of attendees includes a great many famous names as well as lots of names familiar to those of us on Less Wrong: Elon Musk, Sam Harris, Margaret Boden, Thomas Dietterich, all three DeepMind founders, and many more.

This is shaping up to be another extraordinary year for AI risk concerns going mainstream!

Less exploitable value-updating agent

5 Stuart_Armstrong 13 January 2015 05:19PM

My indifferent value-learning agent design is in some ways too good. The agent transfers perfectly from u-maximiser to v-maximiser - but this makes it exploitable, as Benja has pointed out.

For instance, if u values paperclips and v values staples, and everyone knows that the agent will soon transfer from a u-maximiser to a v-maximiser, then an enterprising trader can sell the agent paperclips in exchange for staples, then wait for the utility change, and sell the agent back staples for paperclips, pocketing a profit each time. More prosaically, they could "borrow" £1,000,000 from the agent, promising to pay back £2,000,000 tomorrow if the agent is still a u-maximiser. And the currently u-maximising agent will accept, even though everyone knows it will change to a v-maximiser before tomorrow.
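To make the exploit concrete, here is a minimal sketch of one such money pump. The holdings, the 2:1 exchange rates and the `swap` helper are made-up illustrations, not Benja's actual example:

```python
# Toy money pump against an agent everyone knows will switch from maximising
# u (paperclips) to maximising v (staples). All quantities are made up.

def swap(a, b, a_gives, a_amount, b_gives, b_amount):
    """a hands a_amount of a_gives to b; b hands b_amount of b_gives to a."""
    a[a_gives] -= a_amount; b[a_gives] += a_amount
    b[b_gives] -= b_amount; a[b_gives] += b_amount

agent = {"paperclips": 5, "staples": 5}
trader = {"paperclips": 1, "staples": 1}

# While the agent maximises u, staples are worthless to it, so it gladly
# pays 2 staples for 1 paperclip.
swap(agent, trader, "staples", 2, "paperclips", 1)

# ...the utility function changes from u to v...

# Now paperclips are worthless to it, so it gladly pays 2 paperclips for 1 staple.
swap(agent, trader, "paperclips", 2, "staples", 1)

print(agent)   # {'paperclips': 4, 'staples': 4} -- down one of each
print(trader)  # {'paperclips': 2, 'staples': 2} -- up one of each
```

Each trade raises the agent's current utility, so it accepts both; the trader ends the cycle strictly richer, and can repeat it for as long as the utility change can be anticipated.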

One could argue that exploitability is inevitable, given the change in utility functions. And I haven't yet found any principled way of avoiding exploitability which preserves the indifference. But here is a tantalising quasi-example.

As before, u values paperclips and v values staples. Both are defined in terms of extra paperclips/staples over those existing in the world (and negatively in terms of destruction of existing paperclips/staples), with their zero being at the current situation. Let's put some diminishing returns on both utilities: for each paperclip/staple created/destroyed up to the first five, u/v will gain/lose one utilon. For each subsequent paperclip/staple created/destroyed beyond five, they will gain/lose half a utilon.

We now construct our world and our agent. The world lasts two days, and has a machine that can create or destroy paperclips and staples for the cost of £1 apiece. Assume there is a tiny ε chance that the machine stops working at any given time. This ε will be ignored in all calculations; it's there only to make the agent act sooner rather than later when the choices are equivalent (a discount rate could serve the same purpose).

The agent owns £10 and has utility function u+Xv. The value of X is unknown to the agent: it is either +1 or -1, with 50% probability, and this will be revealed at the end of the first day (you can imagine X is the output of some slow computation, or is written on the underside of a rock that will be lifted).

So what will the agent do? It's easy to see that it can never get more than 10 utilons, as each £1 generates at most 1 utilon (we really need a unit symbol for the utilon!). And it can achieve this: it will spend £5 immediately, creating 5 paperclips, wait until X is revealed, and spend another £5 creating or destroying staples (depending on the value of X).
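Here is a small sketch that checks this, under my reading of the setup, by scoring each way of splitting the £10 between day-one paperclips and post-reveal staples (the helper names are mine):

```python
# A quick check of the claimed optimum: creating (or usefully destroying)
# n paperclips/staples is worth min(n, 5) + 0.5 * max(n - 5, 0) utilons,
# the machine turns £1 into one item, and the agent maximises u + Xv with
# X revealed at the end of day one.

def dim(n):
    """Diminishing-returns utilons from n items."""
    return min(n, 5) + 0.5 * max(n - 5, 0)

def expected_utility(m, budget=10):
    """Spend £m on paperclips on day one, and the remaining £(budget - m)
    on staples on day two -- creating them if X = +1, destroying them if
    X = -1 -- so the staple spending pays off whichever way X turns out."""
    return dim(m) + dim(budget - m)

best = max(range(11), key=expected_utility)
print(best, expected_utility(best))               # 5 10.0
print(expected_utility(10), expected_utility(0))  # 7.5 7.5
```

The £5/£5 split is the unique maximum at 10 utilons; each pound pushed past five on either side only earns half a utilon, so lopsided splits such as £10/£0 drop to 7.5.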

This looks a lot like a resource-conserving value-learning agent. It doesn't seem to be "exploitable" in the sense Benja demonstrated. It will still accept some odd deals - one extra paperclip on the first day in exchange for all the staples in the world being destroyed, for instance. But it won't give away resources for no advantage. And it's not a perfect value-learning agent. But it still seems to have interesting features of non-exploitability and value-learning that are worth exploring.

Note that this property does not depend on v being symmetric around staple creation and destruction. Assume v hits diminishing returns after creating 5 staples, but after destroying only 4 of them. Then the agent will have the same behaviour as above (in that specific situation; in general, this will cause a slight change, in that the agent will slightly overvalue having money on the first day compared to the original v), and will expect to get 9.75 utilons (50% chance of 10 for X=+1, 50% chance of 9.5 for X=-1). Other changes to u and v will shift how much money is spent on different days, but the symmetry of v is not what is powering this example.

Superintelligence 18: Life in an algorithmic economy

4 KatjaGrace 13 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the eighteenth section in the reading guide: Life in an algorithmic economy. This corresponds to the middle of Chapter 11.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Life in an algorithmic economy” from Chapter 11


Summary

  1. In a multipolar scenario, biological humans might lead poor and meager lives. (p166-7)
  2. The AIs might be worthy of moral consideration, and if so their wellbeing might be more important than that of the relatively few humans. (p167)
  3. AI minds might be much like slaves, even if they are not literally slaves. They may be selected for liking this. (p167)
  4. Because brain emulations would be very cheap to copy, it will often be convenient to make a copy and then later turn it off (in a sense killing a person). (p168)
  5. There are various other reasons that very short lives might be optimal for some applications. (p168-9)
  6. It isn't obvious whether brain emulations would be happy working all of the time. Some relevant considerations are current human emotions in general and regarding work, probable selection for pro-work individuals, evolutionary adaptiveness of happiness in the past and future -- e.g. does happiness help you work harder? -- and absence of present sources of unhappiness such as injury. (p169-171)
  7. In the long run, artificial minds may not even be conscious, or have valuable experiences, if these are not the most effective ways for them to earn wages. If such minds replace humans, Earth might have an advanced civilization with nobody there to benefit. (p172-3)
  8. In the long run, artificial minds may outsource many parts of their thinking, thus becoming decreasingly differentiated as individuals. (p172)
  9. Evolution does not imply positive progress. Even those good things that evolved in the past may not withstand evolutionary selection in a new circumstance. (p174-6)

Another view

Robin Hanson on others' hasty distaste for a future of emulations: 

Parents sometimes disown their children, on the grounds that those children have betrayed key parental values. And if parents have the sort of values that kids could deeply betray, then it does make sense for parents to watch out for such betrayal, ready to go to extremes like disowning in response.

But surely parents who feel inclined to disown their kids should be encouraged to study their kids carefully before making such a choice. For example, parents considering whether to disown their child for refusing to fight a war for their nation, or for working for a cigarette manufacturer, should wonder to what extent national patriotism or anti-smoking really are core values, as opposed to being mere revisable opinions they collected at one point in support of other more-core values. Such parents would be wise to study the lives and opinions of their children in some detail before choosing to disown them.

I’d like people to think similarly about my attempts to analyze likely futures. The lives of our descendants in the next great era after this our industry era may be as different from ours’ as ours’ are from farmers’, or farmers’ are from foragers’. When they have lived as neighbors, foragers have often strongly criticized farmer culture, as farmers have often strongly criticized industry culture. Surely many have been tempted to disown any descendants who adopted such despised new ways. And while such disowning might hold them true to core values, if asked we would advise them to consider the lives and views of such descendants carefully, in some detail, before choosing to disown.

Similarly, many who live industry era lives and share industry era values, may be disturbed to see forecasts of descendants with life styles that appear to reject many values they hold dear. Such people may be tempted to reject such outcomes, and to fight to prevent them, perhaps preferring a continuation of our industry era to the arrival of such a very different era, even if that era would contain far more creatures who consider their lives worth living, and be far better able to prevent the extinction of Earth civilization. And such people may be correct that such a rejection and battle holds them true to their core values.

But I advise such people to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. I hope that my future analysis can assist such soul-searching examination. If after studying such detail, you still feel compelled to disown your likely descendants, I cannot confidently say you are wrong. My job, first and foremost, is to help you see them clearly.

More on whose lives are worth living here and here.

Notes

1. Robin Hanson is probably the foremost researcher on what the finer details of an economy of emulated human minds would be like. For instance, which company employees would run how fast, how big cities would be, whether people would hang out with their copies. See a TEDx talk, and writings here, here, here and here (some overlap - sorry). He is also writing a book on the subject, which you can read early if you ask him.

2. Bostrom says,

Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man...the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with  extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable....(p166)

It's true this might happen, but it doesn't seem like an especially likely scenario to me. As Bostrom has pointed out in various places earlier, biological humans would do quite well if they have some investments in capital, do not have too much of their property stolen or artfully manoeuvred away from them, and do not themselves undergo too much population growth. These risks don't seem so large to me.

3. Paul Christiano has an interesting article on capital accumulation in a world of machine intelligence.

4. In discussing worlds of brain emulations, we often talk about selecting people for having various characteristics - for instance, being extremely productive, hard-working, not minding frequent 'death', being willing to work for free and donate any proceeds to their employer (p167-8). However there are only so many humans to select from, so we can't necessarily select for all the characteristics we might want. Bostrom also talks of using other motivation selection methods, and modifying code, but it is interesting to ask how far you could get using only selection. It is not obvious to what extent one could meaningfully modify brain emulation code initially. 

I'd guess fewer than one in a thousand people would be willing to donate everything to their employer, given a random employer. This means that to get this characteristic, you would have to lose a factor of 1000 on selecting for other traits. Altogether you have about 33 bits of selection power in the present world (that is, 7 billion is about 2^33; you can divide the world in half about 33 times before you get to a single person). Let's suppose you use 5 bits in getting someone who both doesn't mind their copies dying (I guess 1 bit, or half of people) and who is willing to work an 80-hour week (I guess 4 bits, or one in sixteen people). Let's suppose you are using the rest of your selection (28 bits) on intelligence, for the sake of argument. You are getting a person of IQ 186. If instead you use 10 bits (2^10 ≈ 1000) on getting someone to donate all their money to their employer, you can only use 18 bits on intelligence, getting a person of IQ 167. Would it not often be better to have the worker who is twenty IQ points smarter and pay them above subsistence?
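For what it's worth, these IQ figures are roughly what you get if you read them as cutoffs on a normal distribution with mean 100 and standard deviation 15 (that convention is my assumption; the original calculation isn't shown). A quick check:

```python
# Rough check of the IQ numbers above, treating each as the threshold for
# the top 1-in-2^bits slice of a normal IQ distribution. Requires scipy.
from scipy.stats import norm

def iq_cutoff(bits):
    """IQ threshold for the top 1-in-2^bits of the population."""
    return 100 + 15 * norm.isf(2 ** -bits)

print(round(iq_cutoff(28)))  # 187 -- within a point of the IQ 186 above
print(round(iq_cutoff(18)))  # 167 -- matches the IQ 167 above
```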

5. A variety of valuable uses for cheap to copy, short-lived brain emulations are discussed in Whole brain emulation and the evolution of superorganisms, LessWrong discussion on the impact of whole brain emulation, and Robin's work cited above.

6. Anders Sandberg writes about moral implications of emulations of animals and humans.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in Sandberg & Bostrom (2008), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described by (e.g.) Ken Hayworth, or is it likely to be (3) something else?
  2. Extend and update our understanding of when brain emulations might appear (see Sandberg & Bostrom (2008)).
  3. Investigate the likelihood of a multipolar outcome?
  4. Follow Robin Hanson (see above) in working out the social implications of an emulation scenario
  5. What kinds of responses to the default low-regulation multipolar outcome outlined in this section are likely to be made? e.g. is any strong regulation likely to emerge that avoids the features detailed in the current section?
  6. What measures are useful for ensuring good multipolar outcomes?
  7. What qualitatively different kinds of multipolar outcomes might we expect? e.g. brain emulation outcomes are one class.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the possibility of a multipolar outcome turning into a singleton later. To prepare, read “Post-transition formation of a singleton?” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 19 January. Sign up to be notified here.

Superintelligence 17: Multipolar scenarios

4 KatjaGrace 06 January 2015 06:44AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the seventeenth section in the reading guide: Multipolar scenarios. This corresponds to the first part of Chapter 11.

Apologies for putting this up late. I am traveling, and collecting together the right combination of electricity, wifi, time, space, and permission from an air hostess to take out my computer was more complicated than the usual process.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Of horses and men” from Chapter 11


Summary

  1. 'Multipolar scenario': a situation where no single agent takes over the world
  2. A multipolar scenario may arise naturally, or intentionally for reasons of safety. (p159)
  3. Knowing what would happen in a multipolar scenario involves analyzing an extra kind of information beyond that needed for analyzing singleton scenarios: that about how agents interact (p159)
  4. In a world characterized by cheap human substitutes, rapidly introduced, in the presence of low regulation, and strong protection of property rights, here are some things that will likely happen: (p160)
    1. Human labor will earn wages at around the price of the substitutes - perhaps below subsistence level for a human. Note that machines have been complements to human labor for some time, raising wages. One should still expect them to become substitutes at some point and reverse this trend.  (p160-61)
    2. Capital (including AI) will earn all of the income, which will be a lot. Humans who own capital will become very wealthy. Humans who do not own capital may be helped with a small fraction of others' wealth, through charity or redistribution. (p161-3)
    3. If the humans, brain emulations or other AIs receive resources from a common pool when they are born or created, the population will likely increase until it is constrained by resources. This is because of selection for entities that tend to reproduce more. (p163-6) This will happen anyway eventually, but AI would make it faster, because reproduction is so much faster for programs than for humans. This outcome can be avoided by offspring receiving resources from their parents' purses.

Another view

Tyler Cowen expresses a different view (video, some transcript):

The other point I would make is I think smart machines will always be complements and not substitutes, but it will change who they’re complementing. So I was very struck by this woman who was a doctor sitting here a moment ago, and I fully believe that her role will not be replaced by machines. But her role didn’t sound to me like a doctor. It sounded to me like therapist, friend, persuader, motivational coach, placebo effect, all of which are great things. So the more you have these wealthy patients out there, the patients are in essence the people who work with the smart machines and augment their power, those people will be extremely wealthy. Those people will employ in many ways what you might call personal servants. And because those people are so wealthy, those personal servants will also earn a fair amount.

So the gains from trade are always there, there’s still a law of comparative advantage. I think people who are very good at working with the machines will earn much much more. And the others of us will need to find different kinds of jobs. But again if total output goes up, there’s always an optimistic scenario.

Though perhaps his view isn't as different as it sounds.

Notes

1. The small space devoted to multipolar outcomes in Superintelligence probably doesn't reflect a broader consensus that a singleton is more likely or more important. Robin Hanson is perhaps the loudest proponent of the 'multipolar outcomes are more likely' position. e.g. in The Foom Debate and more briefly here. This week is going to be fairly Robin Hanson themed in fact.

2. Automation can both increase the value produced by a human worker (complementing human labor) and replace the human worker altogether (substituting human labor). Over the long term, it seems complementarity has been the overall effect. However by the time a machine can do everything a human can do, it is hard to imagine a human earning more than a machine needs to run, i.e. less than they do now. Thus at some point substitution must take over. Some think recent unemployment is due in large part to automation. Some think this time is the beginning of the end, and the jobs will never return to humans. Others disagree, and are making bets. Eliezer Yudkowsky and John Danaher clarify some arguments. Danaher adds a nice diagram:

3. Various policies have been proposed to resolve poverty from widespread permanent technological unemployment. Here is a list, though it seems to miss a straightforward one: investing ahead of time in the capital that will become profitable instead of one's own labor, or having policies that encourage such diversification. Not everyone has resources to invest in capital, but it might still help many people. Mentioned here and here:

And then there are more extreme measures. Everyone is born with an endowment of labor; why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity? Of course, some people would want to sell it immediately, cash out, and party, but this could be prevented with some fairly light paternalism, like temporary "lock-up" provisions. This portfolio of capital ownership would act as an insurance policy for each human worker; if technological improvements reduced the value of that person's labor, he or she would reap compensating benefits through increased dividends and capital gains. This would essentially be like the kind of socialist land reforms proposed in highly unequal Latin American countries, only redistributing stock instead of land.

4. Even if the income implications of total unemployment are sorted out, some are concerned about the psychological and social consequences. According to Voltaire, 'work saves us from three great evils: boredom, vice and need'. Sometimes people argue that even if our work is economically worthless, we should toil away for our own good, lest the vice and boredom overcome us.

I find this unlikely, given for instance the ubiquity of more fun and satisfying things to do than most jobs. And while obsolescence and the resulting loss of purpose may be psychologically harmful, I doubt a purposeless job solves that. Also, people already have a variety of satisfying purposes in life other than earning a living. Note also that people in situations like college and lives of luxury seem to do ok on average. I'd guess that unemployed people and some retirees do less well, but this seems more plausibly to result from losing a previously significant source of purpose and respect, rather than from lack of entertainment and constraint. And in a world where nobody gets respect from bringing home dollars, and other purposes are common, I doubt either of these costs will persist. But this is all speculation.

On a side note, the kinds of vices that are usually associated with not working tend to be vices of parasitic unproductivity, such as laziness, profligacy, and tendency toward weeklong video game stints. In a world where human labor is worthless, these heuristics for what is virtuous or not might be outdated.

Nils Nielson discusses this issue more, along with the problem of humans not earning anything.

5. What happens when selection for expansive tendencies goes to space? This.

6.  A kind of robot that may change some job markets:


(picture by Steve Jurvetson)

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. How likely is one superintelligence, versus many intelligences? What empirical data bears on this question? Bostrom briefly investigated characteristic time lags between large projects for instance, on p80-81.
  2. Are whole brain emulations likely to come first? This might be best approached by estimating timelines for different technologies (each an ambitious project) and comparing them, or there may be ways to factor out some considerations.
  3. What are the long term trends in automation replacing workers?
  4. What else can we know about the effects of automation on employment? (this seems to have a fair literature)
  5. What levels of population growth would be best in the long run, given machine intelligences? (this sounds like an ethics question, but one could also assume some kind of normal human values and investigate the empirical considerations that would make situations better or worse in their details.)
  6. Are there good ways to avoid Malthusian outcomes in the kind of scenario discussed in this section, if 'as much as possible' is not the answer to the previous question?
  7. What policies might help a society deal with permanent, almost complete unemployment caused by AI progress?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'life in an algorithmic economy'. To prepare, read the section of that name in Chapter 11. The discussion will go live at 6pm Pacific time next Monday January 12. Sign up to be notified here.

Recent AI safety work

20 paulfchristiano 30 December 2014 06:19PM

(Crossposted from ordinary ideas). 

I’ve recently been thinking about AI safety, and some of the writeups might be interesting to some LWers:

  1. Ideas for building useful agents without goals: approval-directed agents, approval-directed bootstrapping, and optimization and goals. I think this line of reasoning is very promising.
  2. A formalization of one piece of the AI safety challenge: the steering problem. I am eager to see more precise, high-level discussion of AI safety, and I think this article is a helpful step in that direction. Since articulating the steering problem I have become much more optimistic about versions of it being solved in the near term. This mostly means that the steering problem fails to capture the hardest parts of AI safety. But it’s still good news, and I think it may eventually cause some people to revise their understanding of AI safety.
  3. Some ideas for getting useful work out of self-interested agents, based on arguments: of arguments and wagers, adversarial collaboration [older], and delegating to a mixed crowd. I think these are interesting ideas in an interesting area, but they have a ways to go until they could be useful.

I’m excited about a few possible next steps:

  1. Under the (highly improbable) assumption that various deep learning architectures could yield human-level performance, could they also predictably yield safe AI? I think we have a good chance of finding a solution---i.e. a design of plausibly safe AI, under roughly the same assumptions needed to get human-level AI---for some possible architectures. This would feel like a big step forward.
  2. For what capabilities can we solve the steering problem? I had originally assumed none, but I am now interested in trying to apply the ideas from the approval-directed agents post. From easiest to hardest, I think there are natural lines of attack using any of: natural language question answering, precise question answering, sequence prediction. It might even be possible using reinforcement learners (though this would involve different techniques).
  3. I am very interested in implementing effective debates, and am keen to test some unusual proposals. The connection to AI safety is more impressionistic, but in my mind these techniques are closely linked with approval-directed behavior.
  4. I’m currently writing up a concrete architecture for approval-directed agents, in order to facilitate clearer discussion about the idea. This kind of work seems harder to do in advance, but at this point I think it’s mostly an exposition problem.

Superintelligence 16: Tool AIs

7 KatjaGrace 30 December 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the sixteenth section in the reading guide: Tool AIs. This corresponds to the last parts of Chapter Ten.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Tool-AIs” and “Comparison” from Chapter 10 


Summary

  1. Tool AI: an AI that is not 'like an agent', but more like an excellent version of contemporary software. Most notably perhaps, it is not goal-directed (p151)
  2. Contemporary software may be safe because it has low capability rather than because it reliably does what you want, suggesting a very smart version of contemporary software would be dangerous (p151)
  3. Humans often want to figure out how to do a thing that they don't already know how to do. Narrow AI is already used to search for solutions. Automating this search seems to mean giving the machine a goal (that of finding a great way to make paperclips, for instance). That is, just carrying out a powerful search seems to have many of the problems of AI. (p152)
  4. A machine intended to be a tool may cause similar problems to a machine intended to be an agent, by searching to produce plans that are perverse instantiations, infrastructure profusions or mind crimes. It may either carry them out itself or give the plan to a human to carry out. (p153)
  5. A machine intended to be a tool may have agent-like parts. This could happen if its internal processes need to be optimized, and so it contains strong search processes for doing this. (p153)
  6. If tools are likely to accidentally be agent-like, it would probably be better to just build agents on purpose and have more intentional control over the design. (p155)
  7. Which castes of AI are safest is unclear and depends on circumstances. (p158) 

Another view

Holden prompted discussion of tool AI in 2012, in one of several objections in his Thoughts on the Singularity Institute:

...Google Maps is a type of artificial intelligence (AI). It is far more intelligent than I am when it comes to planning routes.

Google Maps - by which I mean the complete software package including the display of the map itself - does not have a "utility" that it seeks to maximize. (One could fit a utility function to its actions, as to any set of actions, but there is no single "parameter to be maximized" driving its operations.)

Google Maps (as I understand it) considers multiple possible routes, gives each a score based on factors such as distance and likely traffic, and then displays the best-scoring route in a way that makes it easily understood by the user. If I don't like the route, for whatever reason, I can change some parameters and consider a different route. If I like the route, I can print it out or email it to a friend or send it to my phone's navigation application. Google Maps has no single parameter it is trying to maximize; it has no reason to try to "trick" me in order to increase its utility.

In short, Google Maps is not an agent, taking actions in order to maximize a utility parameter. It is a tool, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish.

Every software application I know of seems to work essentially the same way, including those that involve (specialized) artificial intelligence such as Google Search, Siri, Watson, Rybka, etc. Some can be put into an "agent mode" (as Watson was on Jeopardy!) but all can easily be set up to be used as "tools" (for example, Watson can simply display its top candidate answers to a question, with the score for each, without speaking any of them.)

The "tool mode" concept is importantly different from the possibility of Oracle AI sometimes discussed by SI. The discussions I've seen of Oracle AI present it as an Unfriendly AI that is "trapped in a box" - an AI whose intelligence is driven by an explicit utility function and that humans hope to control coercively. Hence the discussion of ideas such as the AI-Box Experiment. A different interpretation, given in Karnofsky/Tallinn 2011, is an AI with a carefully designed utility function - likely as difficult to construct as "Friendliness" - that leaves it "wishing" to answer questions helpfully. By contrast with both these ideas, Tool-AGI is not "trapped" and it is not Unfriendly or Friendly; it has no motivations and no driving utility function of any kind, just like Google Maps. It scores different possibilities and displays its conclusions in a transparent and user-friendly manner, as its instructions say to do; it does not have an overarching "want," and so, as with the specialized AIs described above, while it may sometimes "misinterpret" a question (thereby scoring options poorly and ranking the wrong one #1) there is no reason to expect intentional trickery or manipulation when it comes to displaying its results.

Another way of putting this is that a "tool" has an underlying instruction set that conceptually looks like: "(1) Calculate which action A would maximize parameter P, based on existing data set D. (2) Summarize this calculation in a user-friendly manner, including what Action A is, what likely intermediate outcomes it would cause, what other actions would result in high values of P, etc." An "agent," by contrast, has an underlying instruction set that conceptually looks like: "(1) Calculate which action, A, would maximize parameter P, based on existing data set D. (2) Execute Action A." In any AI where (1) is separable (by the programmers) as a distinct step, (2) can be set to the "tool" version rather than the "agent" version, and this separability is in fact present with most/all modern software. Note that in the "tool" version, neither step (1) nor step (2) (nor the combination) constitutes an instruction to maximize a parameter - to describe a program of this kind as "wanting" something is a category error, and there is no reason to expect its step (2) to be deceptive.

I elaborated further on the distinction and on the concept of a tool-AI in Karnofsky/Tallinn 2011.

This is important because an AGI running in tool mode could be extraordinarily useful but far more safe than an AGI running in agent mode...

Notes

1. While Holden's post was probably not the first to discuss this kind of AI, it prompted many responses. Eliezer basically said that non-catastrophic tool AI doesn't seem that easy to specify formally; that even if tool AI is best, agent-AI researchers are probably pretty useful to that problem; and that it's not so bad of MIRI to not discuss tool AI more, since there are a bunch of things other people think are similarly obviously in need of discussion. Luke basically agreed with Eliezer. Stuart argues that having a tool clearly communicate possibilities is a hard problem, and talks about some other problems. Commenters say many things, including that only one AI needs to be agent-like to have a problem, and that it's not clear what it means for a powerful optimizer to not have goals.

2. A problem often brought up with powerful AIs is that when tasked with communicating, they will try to deceive you into liking plans that will fulfil their goals. It seems to me that you can avoid such deception problems by using a tool which searches for a plan you could carry out that would produce a lot of paperclips, rather than a tool that searches for a string it could say to you that would produce a lot of paperclips. On the former metric, a plan that produces many paperclips but sounds so bad that you won't do it still scores better than a persuasive lower-paperclip plan. There is still a danger that you just won't notice the perverse way in which the suggested instructions will be instantiated, but at least the plan won't be designed to hide it.
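
A hedged sketch of that difference (the helper objects world_model and human_model are hypothetical, not any real API): the safer tool scores a plan by what would happen if the human carried it out, while the dangerous variant scores an utterance by its effect on the human, which rewards persuasion and deception.

```python
# Illustrative sketch: two objective functions for a paperclip-planning tool.
# `world_model` and `human_model` are assumed predictive models; the point is
# only where persuasion enters the objective, not how prediction would work.

def score_plan(plan, world_model):
    # Evaluate the plan itself: paperclips produced if it were carried out.
    # A good plan that sounds unappealing still scores highly here.
    return world_model.paperclips_if_executed(plan)

def score_utterance(utterance, human_model, world_model):
    # Evaluate the effect of saying something: paperclips produced after the
    # human hears it and reacts. This objective rewards whatever is most
    # persuasive, including deception, so it inherits agent-like dangers.
    human_response = human_model.action_after_hearing(utterance)
    return world_model.paperclips_if_executed(human_response)
```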

3. Note that in computer science, an 'agent' means something other than 'a machine with a goal', though it seems they haven't settled on exactly what [some example efforts (pdf)].

Figure: A 'simple reflex agent' is not goal-directed (though it can look goal-directed; see one in action)

4. Bostrom seems to assume that a powerful tool would be a search process. This is related to the idea that intelligence is an 'optimization process'. But this is more of a definition than an empirical relationship between the kinds of technology we are thinking of as intelligent and the kinds of processes we think of as 'searching'. Could there be things that merely contribute massively to the intelligence of a human - such that we would think of them as very intelligent tools - that naturally forward whatever goals the human has?

One can imagine a tool that is told what you are planning to do, and tries to describe the major consequences of it. This is a search or optimization process in the sense that it outputs something improbably apt from a large space of possible outputs, but that quality alone seems not enough to make something dangerous. For one thing, the machine is not selecting outputs for their effect on the world, but rather for their accuracy as descriptions. For another, the process being run may not be an actual 'search' in the sense of checking lots of things and finding one that does well on some criteria. It could for instance perform a complicated transformation on the incoming data and spit out the result. 
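
To illustrate that last distinction with a toy example (purely illustrative code): both functions below return something selected or derived from a large space of possible outputs, but only the first literally searches over candidates against a criterion; the second applies a fixed transformation and spits out the result.

```python
# Two ways of producing an 'improbably apt' output from a large space.

def by_search(candidates, criterion):
    # Explicit search: check lots of possibilities and keep the best one.
    best = None
    for candidate in candidates:
        if best is None or criterion(candidate) > criterion(best):
            best = candidate
    return best

def by_transformation(data):
    # No search over outputs at all: a fixed (here trivial) transformation of
    # the input. A richer example might run the data through a model once.
    return sorted(data)
```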

5. One obvious problem with tools is that they maintain humans as a component in all goal-directed behavior. If humans are some combination of slow and rare compared to artificial intelligence, there may be strong pressure to automate all aspects of decisionmaking, i.e. use agents.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

 

  1. Would powerful tools necessarily become goal-directed agents in the troubling sense?
  2. Are different types of entity generally likely to become optimizers, if they do not start out as such? If so, which ones? Under what dynamics? Are tool-ish or Oracle-ish things stable attractors in this way?
  3. Can we specify communication behavior in a way that doesn't rely on having goals about the interlocutor's internal state or behavior?
  4. If you assume (perhaps impossibly) strong versions of some narrow-AI capabilities, can you design a safe tool which uses them? e.g. If you had a near perfect predictor, can you design a safe super-Google Maps?

 

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about multipolar scenarios - i.e. situations where a single AI doesn't take over the world. To prepare, read “Of horses and men” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 5 January. Sign up to be notified here.

Superintelligence 14: Motivation selection methods

5 KatjaGrace 16 December 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the fourteenth section in the reading guide: Motivation selection methods. This corresponds to the second part of Chapter Nine.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Motivation selection methods” and “Synopsis” from Chapter 9.


Summary

  1. One way to control an AI is to design its motives. That is, to choose what it wants to do (p138)
  2. Some varieties of 'motivation selection' for AI safety:
    1. Direct specification: figure out what we value, and code it into the AI (p139-40)
      1. Isaac Asimov's 'three laws of robotics' are a famous example
      2. Direct specification might be fairly hard: both figuring out what we want and coding it precisely seem hard
      3. This could be based on rules, or something like consequentialism
    2. Domesticity: the AI's goals limit the range of things it wants to interfere with (140-1)
      1. This might make direct specification easier, as the world the AI interacts with (and thus which has to be thought of in specifying its behavior) is simpler.
      2. Oracles are an example
      3. This might be combined well with physical containment: the AI could be trapped, and also not want to escape.
    3. Indirect normativity: instead of specifying what we value, specify a way to specify what we value (141-2)
      1. e.g. extrapolate our volition
      2. This means outsourcing the hard intellectual work to the AI
      3. This will mostly be discussed in chapter 13 (weeks 23-5 here)
    4. Augmentation: begin with a creature with desirable motives, then make it smarter, instead of designing good motives from scratch. (p142)
      1. e.g. brain emulations are likely to have human desires (at least at the start)
      2. Whether we use this method depends on the kind of AI that is developed, so usually we won't have a choice about whether to use it (except inasmuch as we have a choice about e.g. whether to develop uploads or synthetic AI first).
  3. Bostrom provides a summary of the chapter in a table of control methods.
  4. The question is not which control method is best, but rather which set of control methods are best given the situation. (143-4)

Another view

Icelizarrd:

Would you say there's any ethical issue involved with imposing limits or constraints on a superintelligence's drives/motivations? By analogy, I think most of us have the moral intuition that technologically interfering with an unborn human's inherent desires and motivations would be questionable or wrong, supposing that were even possible. That is, say we could genetically modify a subset of humanity to be cheerful slaves; that seems like a pretty morally unsavory prospect. What makes engineering a superintelligence specifically to serve humanity less unsavory?

Notes

1. Bostrom tells us that it is very hard to specify human values. We have seen examples of galaxies full of paperclips or fake smiles resulting from poor specification. But these - and Isaac Asimov's stories - seem to tell us only that a few people spending a small fraction of their time thinking does not produce any watertight specification. What if a thousand researchers spent a decade on it? Are the millionth most obvious attempts at specification nearly as bad as the most obvious twenty? How hard is it? A general argument for pessimism is the thesis that 'value is fragile', i.e. that if you specify what you want very nearly but get it a tiny bit wrong, it's likely to be almost worthless. Much like if you get one digit wrong in a phone number. The degree to which this is so (with respect to value, not phone numbers) is controversial. I encourage you to try to specify a world you would be happy with (to see how hard it is, or produce something of value if it isn't that hard).

2. If you'd like a taste of indirect normativity before the chapter on it, the LessWrong wiki page on coherent extrapolated volition links to a bunch of sources.

3. The idea of 'indirect normativity' (i.e. outsourcing the problem of specifying what an AI should do, by giving it some good instructions for figuring out what you value) brings up the general question of just what an AI needs to be given to be able to figure out how to carry out our will. An obvious contender is a lot of information about human values. Though some people disagree with this - these people don't buy the orthogonality thesis. Other issues sometimes suggested to need working out ahead of outsourcing everything to AIs include decision theory, priors, anthropics, feelings about Pascal's mugging, and attitudes to infinity. MIRI's technical work often fits into this category.

4. Danaher's last post on Superintelligence (so far) is on motivation selection. It mostly summarizes and clarifies the chapter, so is mostly good if you'd like to think about the question some more with a slightly different framing. He also previously considered the difficulty of specifying human values in The golem genie and unfriendly AI (parts one and two), which is about Intelligence Explosion and Machine Ethics.

5. Brian Clegg thinks Bostrom should have discussed Asimov's stories at greater length:

I think it’s a shame that Bostrom doesn’t make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they’d go wrong), but that’s about it. Yet there has been a lot of thought and dare I say it, a lot more readability than you typically get in a textbook, put into the issues in science fiction than is being allowed for, and it would have been worthy of a chapter in its own right.

If you haven't already, you might consider (sort-of) following his advice, and reading some science fiction.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Can you think of novel methods of specifying the values of one or many humans?
  2. What are the most promising methods for 'domesticating' an AI? (i.e. constraining it to only care about a small part of the world, and not want to interfere with the larger world to optimize that smaller part).
  3. Think more carefully about the likely motivations of drastically augmented brain emulations
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will start to talk about a variety of more and less agent-like AIs: 'oracles', 'genies' and 'sovereigns'. To prepare, read “Oracles” and “Genies and Sovereigns” from Chapter 10. The discussion will go live at 6pm Pacific time next Monday 22nd December. Sign up to be notified here.

Discussion of AI control over at worldbuilding.stackexchange [LINK]

6 ike 14 December 2014 02:59AM

https://worldbuilding.stackexchange.com/questions/6340/the-challenge-of-controlling-a-powerful-ai

Go insert some rationality into the discussion! (There are actually some pretty good comments in there, and some links to the right places, including LW).

[link] Etzioni: AI will empower us, not exterminate us

4 CBHacking 11 December 2014 08:51AM

https://medium.com/backchannel/ai-wont-exterminate-us-it-will-empower-us-5b7224735bf3

(Slashdot discussion: http://tech.slashdot.org/story/14/12/10/1719232/ai-expert-ai-wont-exterminate-us----it-will-empower-us)

Not sure what the local view of Oren Etzioni or the Allen Institute for AI is, but I'm curious what people think of his views on UFAI risk. As far as I can tell from this article, it basically boils down to "AGI won't happen, at least not any time soon." Is there (significant) reason to believe he's wrong, or is it simply too great a risk to leave to chance?

The Limits of My Rationality

1 JoshuaMyer 09 December 2014 09:08PM

As requested, here is an introductory abstract.

The search for bias in the linguistic representations of our cognitive processes serves several purposes in this community. By pruning irrational thoughts, we can potentially affect each other in complex ways. Leaning heavily on cognitivist pedagogy, this essay represents my subjective experience trying to reconcile a perceived conflict between the rhetorical goals of the community and the absence of a generative, organic conceptualization of rationality.


The Story

    Though I've only been here a short time, I find myself fascinated by this discourse community. To discover a group of individuals bound together under the common goal of applied rationality has been an experience that has enriched my life significantly. So please understand, I do not mean to insult by what I am about to say, merely to encourage a somewhat more constructive approach to what I understand as the goal of this community: to apply collectively reinforced notions of rational thought to all areas of life.
   
    As I followed the links and read the articles on the homepage, I found myself somewhat disturbed by the juxtaposition of these highly specific definitions of biases to the narrative structures of parables providing examples in which a bias results in an incorrect conclusion. At first, I thought that perhaps my emotional reaction stemmed from rejecting the unfamiliar; naturally, I decided to learn more about the situation.

    As I read on, my interests drifted from the rhetorical structure of each article (if anyone is interested I might pursue an analysis of rhetoric further though I'm not sure I see a pressing need for this), towards the mystery of how others in the community apply the lessons contained therein. My belief was that the parables would cause most readers to form a negative association of the bias with an undesirable outcome.

    Even a quick skim of the discussions taking place on this site will reveal energetic debate on a variety of topics of potential importance, peppered heavily with accusations of bias. At this point, I noticed that the comments that seem to get voted up are ones that are thoughtfully composed, well informed, soundly conceptualized and appropriately referential. Generally, this is true of the articles as well, and so it should be in productive discourse communities. Though I thought it prudent not to read every conversation in absolute detail, I also noticed that the most participated-in lines of reasoning were far more rhetorically complex than the parables' portrayal of bias alone could explain. Sure, the establishment of bias still seemed to represent the most commonly used rhetorical device on the forums ...

    At this point, I had been following a very interesting discussion on this site about politics. I typically have little or no interest in political theory, but "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) seemed so out of place in a community whose political affiliations might best be summarized by the phrase "politics is the mind killer" that I couldn't help but investigate. More specifically, I was trying to figure out why it had been posted here at all (I didn't take issue with either the scholarship or intent of the article, but the latter wasn't obvious to me, perhaps because I was completely unfamiliar with the coinage "neoreactionary").

    On my third read, I made a connection to an essay about the socio-historical foundations of rhetoric. In structure, the essay progressed through a wide variety of specific observations on both theory and practice of rhetoric in classical Europe, culminating in a well argued but very unwieldy thesis; at some point in the middle of the essay, I recall a paragraph that begins with the assertion that every statement has political dimensions. I conveyed this idea as eloquently as I could muster, and received a fair bit of karma for it. And to think that it all began with a vague uncomfortable feeling and a desire to understand!

The Lesson

    So you are probably wondering what any of this has to do with rationality, cognition, or the promise of some deeply insightful transformative advice mentioned in the first paragraph. Very good.

    Cognition, a prerequisite for rationality, is a complex process; cognition can be described as the process by which ideas form, interact and evolve. Notice that this definition alone cannot explain how concepts like rationality form, why ideas form or how they should interact to produce intelligence. That specific shortcoming has long crippled cognitivist pedagogies in many disciplines -- no matter which factors you believe to determine intelligence, it is undeniably true that the process by which it occurs organically is not well-understood.

    More intricate models of cognition traditionally vary according to the sets of behavior they seek to explain; in general, this forum seems to concern itself with the wider sets of human behavior, with a strange affinity for statistical analysis. It also seems as if most of the people here associate agency with intelligence, though this should be regarded as unsubstantiated anecdote; I have little interest in what people believe, but those beliefs can have interesting consequences. In general, good models of cognition that yield a sense of agency have to be able to explain how a mushy organic collection of cells might become capable of generating a sense of identity. For this reason, our discussion of cognition will treat intelligence as a confluence of passive processes that lead to an approximation of agency.

    Who are we? What is intelligence? To answer these or any natural language questions we first search for stored-solutions to whatever we perceive as the problem, even as we generate our conception of the question as a set of abstract problems from interactions between memories. In the absence of recognizing a pattern that triggers a stored solution, a new solution is generated by processes of association and abstraction. This process may be central to the generation of every rational and irrational thought a human will ever have. I would argue that the phenomenon of agency approximates an answer to the question: "who am I?" and that any discussion of consciousness should at least acknowledge how critical natural language use is to universal agreement on any matter. I will gladly discuss this matter further and in greater detail if asked.

    At this point, I feel compelled to mention that my initial motivation for pursuing this line of reasoning stems from the realization that this community discusses rationality in a way that differs somewhat from my past encounters with the word.

    Out there, it is commonly believed that rationality develops (in hindsight) to explain the subjective experience of cognition; here we assert a fundamental difference between rationality and this other concept called rationalization. I do not see the utility of this distinction, nor have I found a satisfying explanation of how this distinction operates within accepted models for human learning in such a way that does not assume an a priori method of sorting the values which determine what is considered "rational". Thus we find there is a general dearth of generative models of rational cognition alongside a plethora of techniques for spotting irrational or biased methods of thinking.

    I see a lot of discussion on the forums very concerned with objective predictions of the future wherein it seems as if rationality (often of a highly probabilistic nature) is, in many cases, expected to bridge the gap between the worlds we can imagine to be possible and our many somewhat subjective realities. And the force keeping these discussions from splintering off into unproductive pissing about is a constant search for bias.

    I know I'm not going to be the first among us to suggest that the search for bias is not truly synonymous with rationality, but I would like to clarify before concluding. Searching for bias in cognitive processes can be a very productive way to spend one's waking hours, and it is a critical element to structuring the subjective world of cognition in such a way that allows abstraction to yield the kind of useful rules that comprise rationality. But it is not, at its core, a generative process.

    Let us consider the cognitive process of association (when beliefs, memories, stimuli or concepts become connected to form more complex structures). Without that period of extremely associative and biased cognition experienced during early childhood, we might never learn to attribute the perceived cause of a burn to a hot stove. Without concepts like better and worse to shape our young minds, I imagine many of us would simply lack the attention span to learn about ethics. And what about all the biases that make parables an effective way of conveying information? After all, the strength of a rhetorical argument is in its appeal to the interpretive biases of its intended audience and not the relative consistency of the conceptual foundations of that argument.

    We need to shift discussions involving bias towards models of cognition more complex than portraying it as simply an obstacle to rationality. In my conception of reality, recognizing the existence of bias seems to play a critical role in the development of more complex methods of abstraction; indeed, biases are an intrinsic side effect of the generative grouping of observations that is the core of Bayesian reasoning.

    In short, biases are not generative processes. Discussions of bias are not necessarily useful, rational or intelligent. A deeper understanding of the nature of intelligence requires conceptualizations that embrace the organic truths at the core of sentience; we must be able to describe our concepts of intelligence, our "rationality", such that it can emerge organically as the generative processes at the core of cognition.

The Idea

    I'd be interested to hear some thoughts about how we might grow to recognize our own biases as necessary to the formative stages of abstraction alongside learning to collectively search for and eliminate biases from our decision making processes. The human mind is limited and while most discussions in natural language never come close to pressing us to those limits, our limitations can still be relevant to those discussions as well as to discussions of artificial intelligences. The way I see things, a bias free machine possessing a model of our own cognition would either have to have stored solutions for every situation it could encounter or methods of generating stored solutions for all future perceived problems (both of which sound like descriptions of oracles to me, though the latter seems more viable from a programmer's perspective).

    A machine capable of making the kinds of decisions considered "easy" for humans might need biases at some point during its journey to the complex and self-consistent methods of decision making associated with rationality. This is a rhetorically complex community, but at the risk of my reach exceeding my grasp, I would be interested in seeing an examination of the Affect Heuristic in human decision making as an allegory for the historic utility of fuzzy values in chess AI.



    Thank you for your time, and I look forward to what I can only hope will be challenging and thoughtful responses.

Superintelligence 13: Capability control methods

7 KatjaGrace 09 December 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the thirteenth section in the reading guide: capability control methods. This corresponds to the start of chapter nine.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Two agency problems” and “Capability control methods” from Chapter 9


Summary

  1. If the default outcome is doom, how can we avoid it? (p127)
  2. We can divide this 'control problem' into two parts:
    1. The first principal-agent problem: the well known problem faced by a sponsor wanting an employee to fulfill their wishes (usually called 'the principal agent problem')
    2. The second principal-agent problem: the emerging problem of a developer wanting their AI to fulfill their wishes
  3. How to solve the second problem? We can't rely on behavioral observation (as seen in week 11). Two other options are 'capability control methods' and 'motivation selection methods'. We see the former this week, and the latter next week.
  4. Capability control methods: avoiding bad outcomes through limiting what an AI can do. (p129)
  5. Some capability control methods:
    1. Boxing: minimize interaction between the AI and the outside world. Note that the AI must interact with the world to be useful, and that it is hard to eliminate small interactions. (p129)
    2. Incentive methods: set up the AI's environment such that it is in the AI's interest to cooperate. e.g. a social environment with punishment or social repercussions often achieves this for contemporary agents. One could also design a reward system, perhaps with cryptographic rewards (so that the AI could not wirehead) or heavily discounted rewards (so that long term plans are not worth the short term risk of detection; a toy numerical sketch of this appears after this list) (p131)
      • Anthropic capture: an AI thinks it might be in a simulation, and so tries to behave in ways that would be rewarded by its simulators (box 8; p134)
    3. Stunting: limit the AI's capabilities. This may be hard to do to a degree that avoids danger and is still useful. An option here is to limit the AI's information. A strong AI may infer much from little apparent access to information however. (p135)
    4. Tripwires: test the system without its knowledge, and shut it down if it crosses some boundary. This might be combined with 'honey pots' to tempt undesirable AIs into taking an action that would reveal them. Tripwires could test behavior, ability, or content. (p137)
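
Here is the toy numerical sketch promised in the incentive-methods item above. The rewards and the discount rate are invented; nothing hinges on the particular numbers.

```python
# Toy illustration of steep temporal discounting as an incentive method.
# All numbers are made up; the point is only the comparison of present values.

def present_value(reward, delay_steps, discount):
    """Value now of a reward received after delay_steps, discounted per step."""
    return reward * (discount ** delay_steps)

discount = 0.5                                     # steep per-step discount
cooperate_now = present_value(1.0, 0, discount)    # small reward, right away
defect_later = present_value(1e6, 40, discount)    # huge reward, 40 steps away

print(f"cooperate now:           {cooperate_now}")       # 1.0
print(f"huge payoff in 40 steps: {defect_later:.2e}")    # 9.09e-07

# Under this (made-up) discount rate, even a million-fold larger payoff forty
# steps away is worth less than a unit reward today, so patient treachery looks
# unattractive, assuming the AI really has these time preferences and cannot
# simply change them.
```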

Another view

Brian Clegg reviews the book mostly favorably, but isn't convinced that controlling an AI via merely turning it off should be so hard:

I also think a couple of the fundamentals aren’t covered well enough, but pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure there is enough thought put into the basics of ways you can pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity.
Kevin Kelly also apparently doubts that AI will substantially impede efforts to modify it:

...We’ll reprogram the AIs if we are not satisfied with their performance...

...This is an engineering problem. So far as I can tell, AIs have not yet made a decision that its human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market, does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve. Of course machines will make “mistakes,” even big mistakes – but so do humans. We keep correcting them. There will be tons of scrutiny on the actions of AI, so the world is watching. However, we don’t have universal consensus on what we find appropriate, so that is where most of the friction about them will come from. As we decide, our AI will decide...

This may be related to his view that AI is unlikely to modify itself (from further down the same page):

3. Reprogramming themselves, on their own, is the least likely of many scenarios.

The great fear pumped up by some, though, is that as AI gain our confidence in making decisions, they will somehow prevent us from altering their decisions. The fear is they lock us out. They go rogue. It is very difficult to imagine how this happens. It seems highly improbable that human engineers would program an AI so that it could not be altered in any way. That is possible, but so impractical. That hobble does not even serve a bad actor. The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI. It would also be an incredible achievement for a gang of human engineers to create a system that could not be hacked. Still it may be possible at some distant time, but it is only one of many possibilities. An AI could just as likely decide on its own to let anyone change it, in open source mode. Or it could decide that it wanted to merge with human will power. Why not? In the only example we have of an introspective self-aware intelligence (hominids), we have found that evolution seems to have designed our minds to not be easily self-reprogrammable. Except for a few yogis, you can’t go in and change your core mental code easily. There seems to be an evolutionary disadvantage to being able to easily muck with your basic operating system, and it is possible that AIs may need the same self-protection. We don’t know. But the possibility they, on their own, decide to lock out their partners (and doctors) is just one of many possibilities, and not necessarily the most probable one.

 

Notes

1. What do you do with a bad AI once it is under your control?

Note that capability control doesn't necessarily solve much: boxing, stunting and tripwires seem to just stall a superintelligence rather than provide means to safely use one to its full capacity. This leaves the controlled AI to be overtaken by some other unconstrained AI as soon as someone else isn't so careful. In this way, capability control methods seem much like slowing down AI research: helpful in the short term while we find better solutions, but not in itself a solution to the problem.

However this might be too pessimistic. An AI whose capabilities are under control might either be almost as useful as an uncontrolled AI who shares your goals (if interacted with the right way), or at least be helpful in getting to a more stable situation.

Paul Christiano outlines a scheme for safely using an unfriendly AI to solve some kinds of problems. We have both blogged on general methods for getting useful work from adversarial agents, which is related.

2. Cryptographic boxing

Paul Christiano describes a way to stop an AI interacting with the environment using a cryptographic box.

3. Philosophical Disquisitions

Danaher again summarizes the chapter well. Read it if you want a different description of any of the ideas, or to refresh your memory. He also provides a table of the methods presented in this chapter.

4. Some relevant fiction

That Alien Message by Eliezer Yudkowsky

5. Control through social integration

Robin Hanson argues that it matters more that a population of AIs are integrated into our social institutions, and that they keep the peace among themselves through the same institutions we keep the peace among ourselves, than whether they have the right values. He thinks this is why you trust your neighbors, not because you are confident that they have the same values as you. He has several followup posts.

6. More miscellaneous writings on these topics

The LessWrong wiki on AI boxing; Armstrong et al on controlling and using an oracle AI; Roman Yampolskiy on 'leakproofing' the singularity. I have not necessarily read these.

 

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

 

  1. Choose any control method and work out the details better. For instance:
    1. Could one construct a cryptographic box for an untrusted autonomous system?
    2. Investigate steep temporal discounting as an incentives control method for an untrusted AGI.
  2. Are there other capability control methods we could add to the list?
  3. Devise uses for a malicious but constrained AI.
  4. How much pressure is there likely to be to develop AI which is not controlled?
  5. If existing AI methods had unexpected progress and were heading for human-level soon, what precautions should we take now?

 

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'motivation selection methods'. To prepare, read “Motivation selection methods” and “Synopsis” from Chapter 9. The discussion will go live at 6pm Pacific time next Monday 15th December. Sign up to be notified here.

[LINK] Stephen Hawking warns of the dangers of AI

10 Salemicus 02 December 2014 03:22PM

From the BBC:

[Hawking] told the BBC: "The development of full artificial intelligence could spell the end of the human race."

...

"It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

There is, however, no mention of Friendly AI or similar principles.

In my opinion, this is particularly notable for the coverage this story is getting within the mainstream media. At the current time, this is the most-read and most-shared news story on the BBC website.

Superintelligence 12: Malignant failure modes

7 KatjaGrace 02 December 2014 02:02AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twelfth section in the reading guide: Malignant failure modes.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: 'Malignant failure modes' from Chapter 8


Summary

  1. Malignant failure mode: a failure that involves human extinction; in contrast with many failure modes where the AI doesn't do much. 
  2. Features of malignant failures 
    1. We don't get a second try 
    2. It supposes we have a great deal of success, i.e. enough to make an unprecedentedly competent agent
  3. Some malignant failures:
    1. Perverse instantiation: the AI does what you ask, but what you ask turns out to be most satisfiable in unforeseen and destructive ways.
      1. Example: you ask the AI to make people smile, and it intervenes on their facial muscles or neurochemicals, instead of via their happiness, and in particular via the bits of the world that usually make them happy.
      2. Possible counterargument: if it's so smart, won't it know what we meant? Answer: Yes, it knows, but its goal is to make you smile, not to do what you meant when you programmed that goal.
      3. AI which can manipulate its own mind easily is at risk of 'wireheading' - that is, a goal of maximizing a reward signal might be perversely instantiated by just manipulating the signal directly. In general, animals can be motivated to do things in the outside world in order to achieve internal states; an AI with sufficient access to its own internal state can achieve those states more easily by manipulating that state directly.
      4. Even if we think a goal looks good, we should fear it has perverse instantiations that we haven't appreciated.
    2. Infrastructure profusion: in pursuit of some goal, an AI redirects most resources to infrastructure, at our expense.
      1. Even apparently self-limiting goals can lead to infrastructure profusion. For instance, to an agent whose only goal is to make ten paperclips, once it has apparently made ten paperclips it is always more valuable to try to become more certain that there are really ten paperclips than it is to just stop doing anything (a toy expected-utility sketch follows this list).
      2. Examples: Riemann hypothesis catastrophe, paperclip maximizing AI
    3. Mind crime: AI contains morally relevant computations, and treats them badly
      1. Example: AI simulates humans in its mind, for the purpose of learning about human psychology, then quickly destroys them.
      2. Other reasons for simulating morally relevant creatures:
        1. Blackmail
        2. Creating indexical uncertainty in outside creatures
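
Here is the toy expected-utility sketch promised above (the numbers are invented): as long as one more round of verification raises the agent's credence that the goal is really satisfied, even slightly, checking beats stopping.

```python
# Toy expected-utility comparison for the "ten paperclips" example.
# The probabilities and utilities are invented for illustration.

utility_if_ten = 1.0     # the goal is truly satisfied: ten paperclips exist
utility_otherwise = 0.0

def expected_utility(p_ten):
    return p_ten * utility_if_ten + (1 - p_ten) * utility_otherwise

before_check = expected_utility(0.999)    # current credence that there are ten
after_check = expected_utility(0.9999)    # credence after one more verification

print(after_check - before_check)         # approximately 0.0009, i.e. > 0

# If the agent's utility function assigns no cost to turning more of the world
# into verification infrastructure, any positive gain, however tiny, makes
# "check again" preferable to "just stop doing anything".
```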

Another view

In this chapter Bostrom discussed the difficulty he perceives in designing goals that don't lead to indefinite resource acquisition. Steven Pinker recently offered a different perspective on the inevitability of resource acquisition:

...The other problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems. It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.

Of course we can imagine an evil genius who deliberately designed, built, and released a battalion of robots to sow mass destruction.  But we should keep in mind the chain of probabilities that would have to multiply out before it would be a reality. A Dr. Evil would have to arise with the combination of a thirst for pointless mass murder and a genius for technological innovation. He would have to recruit and manage a team of co-conspirators that exercised perfect secrecy, loyalty, and competence. And the operation would have to survive the hazards of detection, betrayal, stings, blunders, and bad luck. In theory it could happen, but I think we have more pressing things to worry about. 

Notes

1. Perverse instantiation is a very old idea. It is what genies are most famous for. King Midas had similar problems. Apparently it was applied to AI by 1947, in With Folded Hands.

2. Adam Elga writes more on simulating people for blackmail and indexical uncertainty.

3. More directions for making AIs which don't lead to infrastructure profusion:

  • Some kinds of preferences don't lend themselves to ambitious investments. Anna Salamon talks about risk averse preferences. Short time horizons and goals which are cheap to fulfil should also make long term investments in infrastructure or intelligence augmentation less valuable, compared to direct work on the problem at hand.
  • Oracle and tool AIs are intended to not be goal-directed, but as far as I know it is an open question whether this makes sense. We will get to these later in the book.

4. Often when systems break, or we make errors in them, they don't work at all. Sometimes, they fail more subtly, working well in some sense, but leading us to an undesirable outcome, for instance a malignant failure mode. How can you tell whether a poorly designed AI is likely to just not work, vs. accidentally take over the world? An important consideration for systems in general seems to be the level of abstraction at which the error occurs. We try to build systems so that you can just interact with them at a relatively abstract level, without knowing how the parts work. For instance, you can interact with your GPS by typing places into it, then listening to it, and you don't need to know anything about how it works. If you make an error while writing your address into the GPS, it will fail by taking you to the wrong place, but it will still direct you there fairly well. If you fail by putting the wires inside the GPS into the wrong places, the GPS is more likely to just not work.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Are there better ways to specify 'limited' goals? For instance, to ask for ten paperclips without asking for the universe to be devoted to slightly improving the probability of success?
  2. In what circumstances could you be confident that the goals you have given an AI do not permit perverse instantiations? 
  3. Explore possibilities for malignant failure vs. other failures. If we fail, is it actually probable that we will have enough 'success' for our creation to take over the world?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about capability control methods, section 13. To prepare, read “Two agency problems” and “Capability control methods” from Chapter 9. The discussion will go live at 6pm Pacific time next Monday December 8. Sign up to be notified here.

[Link] Will Superintelligent Machines Destroy Humanity?

1 roystgnr 27 November 2014 09:48PM

A summary and review of Bostrom's Superintelligence is in the December issue of Reason magazine, and is now posted online at Reason.com.

Superintelligence 11: The treacherous turn

9 KatjaGrace 25 November 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the 11th section in the reading guide: The treacherous turn. This corresponds to Chapter 8.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Existential catastrophe…” and “The treacherous turn” from Chapter 8


Summary

  1. The possibility of a first mover advantage + orthogonality thesis + convergent instrumental values suggests doom for humanity (p115-6)
    1. First mover advantage implies the AI is in a position to do what it wants
    2. Orthogonality thesis implies that what it wants could be all sorts of things
    3. Instrumental convergence thesis implies that regardless of its wants, it will try to acquire resources and eliminate threats
    4. Humans have resources and may be threats
    5. Therefore an AI in a position to do what it wants is likely to want to take our resources and eliminate us. i.e. doom for humanity.
  2. One kind of response: why wouldn't the makers of the AI be extremely careful not to develop and release dangerous AIs, or relatedly, why wouldn't someone else shut the whole thing down? (p116)
  3. It is hard to observe whether an AI is dangerous via its behavior at a time when you could turn it off, because AIs have convergent instrumental reasons to pretend to be safe, even if they are not. If they expect their minds to be surveilled, even observing their thoughts may not help. (p117)
  4. The treacherous turn: while weak, an AI behaves cooperatively. When the AI is strong enough to be unstoppable it pursues its own values. (p119)
  5. We might expect AIs to be more safe as they get smarter initially - when most of the risks come from crashing self-driving cars or mis-firing drones - then to get much less safe as they get too smart. (p117)
  6. One can imagine a scenario where there is little social impetus for safety (p117-8): alarmists will have been wrong for a long time, smarter AI will have been safer for a long time, large industries will be invested, an exciting new technique will be hard to set aside, useless safety rituals will be available, and the AI will look cooperative enough in its sandbox.
  7. The conception of deception: that moment when the AI realizes that it should conceal its thoughts (footnote 2, p282)

Another view

Danaher:

This is all superficially plausible. It is indeed conceivable that an intelligent system — capable of strategic planning — could take such treacherous turns. And a sufficiently time-indifferent AI could play a “long game” with us, i.e. it could conceal its true intentions and abilities for a very long time. Nevertheless, accepting this has some pretty profound epistemic costs. It seems to suggest that no amount of empirical evidence could ever rule out the possibility of a future AI taking a treacherous turn. In fact, its even worse than that. If we take it seriously, then it is possible that we have already created an existentially threatening AI. It’s just that it is concealing its true intentions and powers from us for the time being.

I don’t quite know what to make of this. Bostrom is a pretty rational, bayesian guy. I tend to think he would say that if all the evidence suggests that our AI is non-threatening (and if there is a lot of that evidence), then we should heavily discount the probability of a treacherous turn. But he doesn’t seem to add that qualification in the chapter. He seems to think the threat of an existential catastrophe from a superintelligent AI is pretty serious. So I’m not sure whether he embraces the epistemic costs I just mentioned or not.

Notes

1. Danaher also made a nice diagram of the case for doom, and relationship with the treacherous turn:

 

2. History

According to Luke Muehlhauser's timeline of AI risk ideas, the treacherous turn idea for AIs has been around since at least 1977, when a fictional worm did it:

1977: Self-improving AI could stealthily take over the internet; convergent instrumental goals in AI; the treacherous turn. Though the concept of a self-propagating computer worm was introduced by John Brunner's The Shockwave Rider (1975), Thomas J. Ryan's novel The Adolescence of P-1 (1977) tells the story of an intelligent worm that at first is merely able to learn to hack novel computer systems and use them to propagate itself, but later (1) has novel insights on how to improve its own intelligence, (2) develops convergent instrumental subgoals (see Bostrom 2012) for self-preservation and resource acquisition, and (3) learns the ability to fake its own death so that it can grow its powers in secret and later engage in a "treacherous turn" (see Bostrom forthcoming) against humans.

 

3. The role of the premises

Bostrom's argument for doom has one premise that says AI could care about almost anything, then another that says regardless of what an AI cares about, it will do basically the same terrible things anyway. (p115) Do these sound a bit strange together to you? Why do we need the first, if final values don't tend to change instrumental goals anyway?

It seems the immediate reason is that an AI with values we like would not have the convergent goal of taking all our stuff and killing us. That is, the values we want an AI to have are some of those rare values that don't lead to destructive instrumental goals. Why is this? Because we (and thus the AI) care about the activities the resources would be grabbed from. If the resources were currently being used for anything we didn't care about, then our values would also suggest grabbing resources, and look similar to all of the other values. The difference that makes our values special here is just that most resources are already being used for them somewhat.

4. Signaling

It is hard to tell apart a safe and an unsafe AI, because both would like to look safe. This is a very common problem in human interactions. For instance, it can be nontrivial to tell a genuine lover from a gold digger, a businessman from a conman, and an expert from a crank. All of them want to look like the desirable sort. Particularly similar to the AI case is that of hiring a new employee for a trial period. You will sometimes find that the employee's values appear much better aligned during the trial period, and then they undergo a 'treacherous turn' once they have been hired more permanently.

'Costly signaling' is a general purpose solution to this problem, which works some of the time. The basic idea is this: everyone has instrumental reasons to look like the good kind of person, but their reasons differ in strength, or are harder for one group to act on than for the other, so we can construct a set of options that will lead the different types of people to choose differently, even though they know this will set them apart. For instance, while an honest businessman and a conman would both like to say 'I'm an honest businessman', and to behave honestly if you watch them for five minutes, a conman would benefit less from an expensive shopfront than an honest businessman would, because his reputation is less valuable, so an expensive brand is a signal of honesty.

Similarly, we could perhaps design situations where most harmful AIs would have a reasonable incentive to reveal themselves. This is difficult because so much is at stake, compared to the cost of the signal. But I haven't thought enough about it to rule it out, and I haven't seen anyone else do so either.
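To make the separating logic concrete, here is a minimal sketch with made-up payoff numbers; nothing about it is specific to AI, and the function and figures are purely illustrative.

```python
# Minimal sketch of a separating costly signal, with illustrative numbers.
# A signal 'separates' the types if the good type profits from sending it
# while the bad type does not.

def separates(benefit_if_trusted, cost_good, cost_bad):
    """True if only the good type finds the signal worth its cost."""
    good_sends = benefit_if_trusted - cost_good > 0
    bad_sends = benefit_if_trusted - cost_bad > 0
    return good_sends and not bad_sends

# The shopfront example: the conman expects to be exposed sooner, so the same
# shopfront pays off over fewer sales and is effectively more expensive for him.
print(separates(benefit_if_trusted=10, cost_good=3, cost_bad=15))  # True: types separate
print(separates(benefit_if_trusted=10, cost_good=3, cost_bad=5))   # False: both send the signal
```

The worry in the AI case is that the benefit of being trusted is astronomically large for a deceptive AI, so the cost gap needed to separate the types may be impractically big.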

5. When would the 'conception of deception' take place?

Presumably below the level of the best humans, since we have already thought of all this.

6. Surveillance of the mind

Bostrom is not optimistic that AI researchers could protect much against risks by watching AI mental processes directly for signs of deception, rapid progress, or undesirable scheming. He says if the AI is smart it will realize this might happen, and hide the offending thoughts.

This seems an open question to me, for several reasons:

  1. Making your mental contents look innocuous while maintaining their semantic content sounds potentially very hard
  2. Especially for a creature which has only just become smart enough to realize it should treacherously turn
  3. From the AI's perspective, even if it is smart, surveillance could seem fairly unlikely, especially if we deceive it about its surroundings
As a consequence of 2, it seems better if the 'conception of deception' comes earlier.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. How transparent are AI minds likely to be? Should we expect to be able to detect deception? What are the answers to these questions for different specific architectures and methods? This might be relevant.
  2. Are there other good ways to filter AIs with certain desirable goals from others? e.g. by offering them choices that would filter them.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'malignant failure modes' (as opposed presumably to worse failure modes). To prepare, read “Malignant failure modes” from Chapter 8. The discussion will go live at 6pm Pacific time next Monday December 1. Sign up to be notified here.

Superintelligence 10: Instrumentally convergent goals

6 KatjaGrace 18 November 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the tenth section in the reading guide: Instrumentally convergent goals. This corresponds to the second part of Chapter 7.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. And if you are behind on the book, don't let it put you off discussing. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Instrumental convergence” from Chapter 7 (p109-114)


Summary

  1. The instrumental convergence thesis: we can identify 'convergent instrumental values' (henceforth CIVs). That is, subgoals that are useful for a wide range of more fundamental goals, and in a wide range of situations. (p109)
  2. Even if we know nothing about an agent's goals, CIVs let us predict some of the agent's behavior (p109)
  3. Some CIVs:
    1. Self-preservation: because you are an excellent person to ensure your own goals are pursued in future.
    2. Goal-content integrity (i.e. not changing your own goals): because if you don't have your goals any more, you can't pursue them.
    3. Cognitive enhancement: because making better decisions helps with any goals.
    4. Technological perfection: because technology lets you have more useful resources.
    5. Resource acquisition: because a broad range of resources can support a broad range of goals.
  4. For each CIV, there are plausible combinations of final goals and scenarios under which an agent would not pursue that CIV. (p109-114)

Notes

1. Why do we care about CIVs?
CIVs to acquire resources and to preserve oneself and one's values play important roles in the argument for AI risk. The desired conclusions are that we can already predict that an AI would compete strongly with humans for resources, and also that an AI, once turned on, will go to great lengths to stay on and intact.

2. Related work
Steve Omohundro wrote the seminal paper on this topic. The LessWrong wiki links to all of the related papers I know of. Omohundro's list of CIVs (or as he calls them, 'basic AI drives') is a bit different from Bostrom's:

  1. Self-improvement
  2. Rationality
  3. Preservation of utility functions
  4. Avoiding counterfeit utility
  5. Self-protection
  6. Acquisition and efficient use of resources

3. Convergence for values and situations
It seems potentially helpful to distinguish convergence over situations and convergence over values. That is, to think of instrumental goals on two axes - one of how universally agents with different values would want the thing, and one of how large a range of situations it is useful in. A warehouse full of corn is useful for almost any goals, but only in the narrow range of situations where you are a corn-eating organism who fears an apocalypse (or you can trade it). A world of resources converted into computing hardware is extremely valuable in a wide range of scenarios, but much more so if you don't especially value preserving the natural environment. Many things that are CIVs for humans don't make it onto Bostrom's list, I presume because he expects the scenario for AI to be different enough. For instance, procuring social status is useful for all kinds of human goals. For an AI in the situation of a human, it would appear to also be useful. For an AI more powerful than the rest of the world combined, social status is less helpful.

4. What sort of things are CIVs?
Arguably all CIVs mentioned above could be clustered under 'cause your goals to control more resources'. This implies causing more agents to have your values (e.g. protecting your values in yourself), causing those agents to have resources (e.g. getting resources and transforming them into better resources) and getting the agents to control the resources effectively as well as nominally (e.g. cognitive enhancement, rationality). It also suggests convergent values we haven't mentioned. To cause more agents to have one's values, one might create or protect other agents with your values, or spread your values to existing other agents. To improve the resources held by those with one's values, a very convergent goal in human society is to trade. This leads to a convergent goal of creating or acquiring resources which are highly valued by others, even if not by you. Money and social influence are particularly widely redeemable 'resources'. Trade also causes others to act like they have your values when they don't, which is a way of spreading one's values. 

As I mentioned above, my guess is that these are left out of Superintelligence because they involve social interactions. I think Bostrom expects a powerful singleton, to whom other agents will be irrelevant. If you are not confident of the singleton scenario, these CIVs might be more interesting.

5. Another discussion
John Danaher discusses this section of Superintelligence, but not disagreeably enough to read as 'another view'. 

Another view

I don't know of any strong criticism of the instrumental convergence thesis, so I will play devil's advocate.

The concept of a sub-goal that is useful for many final goals is unobjectionable. However, the instrumental convergence thesis claims more than this, and this stronger claim is important for the desired argument for AI doom. The further claims are also on less solid ground, as we shall see.

According to the instrumental convergence thesis, convergent instrumental goals not only exist, but can at least sometimes be identified by us. This is needed for arguing that we can foresee that AI will prioritize grabbing resources, and that it will be very hard to control. That we can identify convergent instrumental goals may seem clear - after all, we just did: self-preservation, intelligence enhancement and the like. However, to say anything interesting, our claim must not only be that these values are better to have than not, but that they will be prioritized by the kinds of AI that will exist, in a substantial range of circumstances that will arise. This is far from clear, for several reasons.

Firstly, to know what the AI would prioritize we need to know something about its alternatives, and we can be much less confident that we have thought of all of the alternative instrumental values an AI might have. For instance, in the abstract, intelligence enhancement may seem convergently valuable, but in practice adult humans devote little effort to it. This is because investments in intelligence are rarely competitive with other endeavors.

Secondly, we haven't said anything quantitative about how general or strong our proposed convergent instrumental values are likely to be, or how we are weighting the space of possible AI values. Without even any guesses, it is hard to know what to make of resulting predictions. The qualitativeness of the discussion also raises the concern that thinking on the problem has not been very concrete, and so may not be engaged with what is likely in practice.

Thirdly, we have arrived at these convergent instrumental goals by theoretical arguments about what we think of as default rational agents and 'normal' circumstances. These may be very different distributions of agents and scenarios from those produced by our engineering efforts. For instance, perhaps almost all conceivable sets of values - in whatever sense - would favor accruing resources ruthlessly. It would still not be that surprising if an agent somehow created noisily from human values cared about only acquiring resources by certain means or had blanket ill-feelings about greed.

In sum, it is unclear that we can identify important convergent instrumental values, and consequently unclear that such considerations can strongly help predict the behavior of real future AI agents.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

 

  1. Do approximately all final goals make an optimizer want to expand beyond the cosmological horizon?
  2. Can we say anything more quantitative about the strength or prevalence of these convergent instrumental values?
  3. Can we say more about values that are likely to be convergently instrumental just across AIs that are likely to be developed, and situations they are likely to find themselves in?

 

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the treacherous turn. To prepare, read “Existential catastrophe…” and “The treacherous turn” from Chapter 8. The discussion will go live at 6pm Pacific time next Monday 24th November. Sign up to be notified here.

AI caught by a module that counterfactually doesn't exist

9 Stuart_Armstrong 17 November 2014 05:49PM

EDIT: Whenever I use colloquial phrases like "the AI believes a (false) X" I mean that we are using utility indifference to accomplish that goal, without actually giving the AI false beliefs.

A big problem with most AI control ideas is that the AI will route around them. Suppose you try to ensure that the AI stays friendly by - say - requiring it to ensure that the world's stock markets continue to function. Then a badly motivated AI will cause the destruction it was going to cause anyway - except it will also create some system that technically counts as "the world's stock markets" and ensure that it "continues to function".

I think we can use counterfactuals and corrigibility (AI-acceptable motivation changes) to partially get around the problem, in some circumstances. Suppose we had a candidate AI, with utility U, that we thought was friendly - but that we weren't sure. We have a subgoal, say SG="ensuring that the world's stock markets continue to function". We're pretty sure that if the AI is friendly, then SG will happen. We're pretty sure that if the AI were unfriendly, and the AI did not have SG added as an objective, then SG wouldn't happen. I deliberately chose a rather rough subgoal to illustrate this.

So we then add a module M to the AI. The task of M, which it has chosen to accept, is to look into the future, using the AI's predictive abilities, and see if SG is accomplished. Crucially, it will do so under the assumption that the module M does not exist (or wasn't activated, or will soon be overwritten, or will fail to do anything, or so on). If M finds that SG is not accomplished, it will turn the AI off (in a way compatible with the AI's motivations - see corrigibility).

How is this different from the usual setup? Well, a generic FAI would accomplish SG (by assumption - we may need to work a bit on this part). A generic UFAI without M wouldn't accomplish SG. A generic UFAI with M would accomplish SG, as this prevents it from being turned off.

But a generic UFAI with M which thought that M would have no impact, would behave exactly as a generic UFAI without M - and wouldn't aim to accomplish SG, and thus would be turned off by M.
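As a rough sketch of the intended logic (this is my own illustration, not Armstrong's formalism; the predictor interface and names are hypothetical):

```python
# Hypothetical sketch of the module-M check. 'predict_future' stands in for the
# AI's own predictive machinery, assumed to accept a counterfactual assumption
# about whether M is active; none of this is a real API.

def module_m_should_shut_down(predict_future, sg_is_accomplished):
    # Predict the future *as if* module M were inert, so a deceptive AI gains
    # nothing by pursuing SG merely to pass this check.
    future = predict_future(assume_module_m_active=False)
    return not sg_is_accomplished(future)
```

A friendly AI accomplishes SG whether or not M exists, so it passes; a generic unfriendly AI, reasoning as if M were inert, neglects SG and gets shut down.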

Note that M can only be used as a last test for a putatively friendly AI. We cannot use it if we are continually refining AIs (either through evolution or simply redesign), as then we are optimising for SG, and SG is a poor goal to be aiming for (many, many UFAIs have SG as a goal - it's just that a generic one won't). Similarly, we can't use an unconstrained search to find such an AI.

I wonder if this idea can be extended. Suggestions?

Musk on AGI Timeframes

19 Artaxerxes 17 November 2014 01:36AM

Elon Musk submitted a comment to edge.org a day or so ago, on this article. It was later removed.

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast - it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...


Now Elon has been making noises about AI safety lately in general, including for example mentioning Bostrom's Superintelligence on Twitter. But this is the first time that I know of that he's come up with his own predictions of the timeframes involved, and I think his are quite soon compared to most. 

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

We can compare this to MIRI's post in May this year, When Will AI Be Created, which illustrates that it seems reasonable to think of AI as being further away, but also that there is a lot of uncertainty on the issue.

Of course, "something seriously dangerous" might not refer to full blown superintelligent uFAI - there's plenty of space for disasters of magnitude in between the range of the 2010 flash crash and clippy turning the universe into paperclips to occur.

In any case, it's true that Musk has more "direct exposure" to those on the frontier of AGI research than your average person, and it's also true that he has an audience, so I think there is some interest to be found in his comments here.

 

My new paper: Concept learning for safe autonomous AI

18 Kaj_Sotala 15 November 2014 07:17AM

Abstract: Sophisticated autonomous AI may need to base its behavior on fuzzy concepts that cannot be rigorously defined, such as well-being or rights. Obtaining desired AI behavior requires a way to accurately specify these concepts. We review some evidence suggesting that the human brain generates its concepts using a relatively limited set of rules and mechanisms. This suggests that it might be feasible to build AI systems that use similar criteria and mechanisms for generating their own concepts, and could thus learn similar concepts as humans do. We discuss this possibility, and also consider possible complications arising from the embodied nature of human thought, possible evolutionary vestiges in cognition, the social nature of concepts, and the need to compare conceptual representations between humans and AI systems.

I just got word that this paper was accepted for the AAAI-15 Workshop on AI and Ethics: I've uploaded a preprint here. I'm hoping that this could help seed a possibly valuable new subfield of FAI research. Thanks to Steve Rayhawk for invaluable assistance while I was writing this paper: it probably wouldn't have gotten done without his feedback motivating me to work on this.

Comments welcome. 

The germ of an idea

6 Stuart_Armstrong 13 November 2014 06:58PM

Apologies for posting another unformed idea, but I think it's important to get it out there.

The problem with dangerous AI is that it's intelligent, and thus adapts to our countermeasures. If we did something like plant a tree and order the AI not to eat the apple on it, as a test of its obedience, it would easily figure out what we were doing, and avoid the apple (until it had power over us), even if it were a treacherous apple-devouring AI of DOOM.

When I wrote the AI indifference paper, it seemed that it showed a partial way around this problem: the AI would become indifferent to a particular countermeasure (in that example, explosives), so wouldn't adapt its behaviour around it. It seems that the same idea can make an Oracle not attempt to manipulate us through its answers, by making it indifferent as to whether the message was read.

The idea I'm vaguely groping towards is whether this is a general phenomenon - whether we can use indifference to prevent the AI from adapting to any of our efforts. The second question is whether we can profitably use it on the AI's motivation itself. Something like the reduced impact AI reasoning about what impact it could have on the world. This has a penalty function for excessive impact - but maybe that's gameable, maybe there is a pernicious outcome that doesn't have a high penalty, if the AI aims for it exactly. But suppose the AI could calculate its impact under the assumption that it didn't have a penalty function (utility indifference is often equivalent to having incorrect beliefs, but less fragile than that).

So if it was a dangerous AI, it would calculate its impact as if it didn't have a penalty function (and hence no need to route around it), and thus would calculate a large impact, and get penalised by it.

My next post will be more structured, but I feel there's the germ of a potentially very useful idea there. Comments and suggestions welcome.

What's special about a fantastic outcome? Suggestions wanted.

0 Stuart_Armstrong 11 November 2014 11:04AM

I've been returning to my "reduced impact AI" approach, and am currently working on some ideas.

What I need is some ideas on features that might distinguish between an excellent FAI outcome, and a disaster. The more abstract and general the ideas, the better. Anyone got some suggestions? Don't worry about quality at this point, originality is more prized!

I'm looking for something generic that is easy to measure. At a crude level, if the only options were "paperclipper" vs FAI, then we could distinguish those worlds by counting steel content.

So basically some more or less objective measure that has a higher proportion of good outcomes than the baseline.

Superintelligence 9: The orthogonality of intelligence and goals

8 KatjaGrace 11 November 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the ninth section in the reading guide: The orthogonality of intelligence and goals. This corresponds to the first section in Chapter 7, 'The relation between intelligence and motivation'.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: 'The relation between intelligence and motivation' (p105-8)


Summary

  1. The orthogonality thesis: intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal (p107)
  2. Some qualifications to the orthogonality thesis: (p107)
    1. Simple agents may not be able to entertain some goals
    2. Agents with desires relating to their intelligence might alter their intelligence
  3. The motivations of highly intelligent agents may nonetheless be predicted (p108):
    1. Via knowing the goals the agent was designed to fulfil
    2. Via knowing the kinds of motivations held by the agent's 'ancestors'
    3. Via finding instrumental goals that an agent with almost any ultimate goals would desire (e.g. to stay alive, to control money)

Another view

John Danaher at Philosophical Disquisitions starts a series of posts on Superintelligence with a somewhat critical evaluation of the orthogonality thesis, in the process contributing a nice summary of nearby philosophical debates. Here is an excerpt, entitled 'is the orthogonality thesis plausible?':

At first glance, the orthogonality thesis seems pretty plausible. For example, the idea of a superintelligent machine whose final goal is to maximise the number of paperclips in the world (the so-called paperclip maximiser) seems to be logically consistent. We can imagine — can’t we? — a machine with that goal and with an exceptional ability to utilise the world’s resources in pursuit of that goal. Nevertheless, there is at least one major philosophical objection to it.

We can call it the motivating belief objection. It works something like this:

Motivating Belief Objection: There are certain kinds of true belief about the world that are necessarily motivating, i.e. as soon as an agent believes a particular fact about the world they will be motivated to act in a certain way (and not motivated to act in other ways). If we assume that the number of true beliefs goes up with intelligence, it would then follow that there are certain goals that a superintelligent being must have and certain others that it cannot have.

A particularly powerful version of the motivating belief objection would combine it with a form of moral realism. Moral realism is the view that there are moral facts “out there” in the world waiting to be discovered. A sufficiently intelligent being would presumably acquire more true beliefs about those moral facts. If those facts are among the kind that are motivationally salient — as several moral theorists are inclined to believe — then it would follow that a sufficiently intelligent being would act in a moral way. This could, in turn, undercut claims about a superintelligence posing an existential threat to human beings (though that depends, of course, on what the moral truth really is).

The motivating belief objection is itself vulnerable to many objections. For one thing, it goes against a classic philosophical theory of human motivation: the Humean theory. This comes from the philosopher David Hume, who argued that beliefs are motivationally inert. If the Humean theory is true, the motivating belief objection fails. Of course, the Humean theory may be false and so Bostrom wisely avoids it in his defence of the orthogonality thesis. Instead, he makes three points. First, he claims that orthogonality would still hold if final goals are overwhelming, i.e. if they trump the motivational effect of motivating beliefs. Second, he argues that intelligence (as he defines it) may not entail the acquisition of such motivational beliefs. This is an interesting point. Earlier, I assumed that the better an agent is at means-end reasoning, the more likely it is that its beliefs are going to be true. But maybe this isn’t necessarily the case. After all, what matters for Bostrom’s definition of intelligence is whether the agent is getting what it wants, and it’s possible that an agent doesn’t need true beliefs about the world in order to get what it wants. A useful analogy here might be with Plantinga’s evolutionary argument against naturalism. Evolution by natural selection is a means-end process par excellence: the “end” is survival of the genes, anything that facilitates this is the “means”. Plantinga argues that there is nothing about this process that entails the evolution of cognitive mechanisms that track true beliefs about the world. It could be that certain false beliefs increase the probability of survival. Something similar could be true in the case of a superintelligent machine. The third point Bostrom makes is that a superintelligent machine could be created with no functional analogues of what we call “beliefs” and “desires”. This would also undercut the motivating belief objection.

What do we make of these three responses? They are certainly intriguing. My feeling is that the staunch moral realist will reject the first one. He or she will argue that moral beliefs are most likely to be motivationally overwhelming, so any agent that acquired true moral beliefs would be motivated to act in accordance with them (regardless of their alleged “final goals”). The second response is more interesting. Plantinga’s evolutionary objection to naturalism is, of course, hotly contested. Many argue that there are good reasons to think that evolution would create truth-tracking cognitive architectures. Could something similar be argued in the case of superintelligent AIs? Perhaps. The case seems particularly strong given that humans would be guiding the initial development of AIs and would, presumably, ensure that they were inclined to acquire true beliefs about the world. But remember Bostrom’s point isn’t that superintelligent AIs would never acquire true beliefs. His point is merely that high levels of intelligence may not entail the acquisition of true beliefs in the domains we might like. This is a harder claim to defeat. As for the third response, I have nothing to say. I have a hard time imagining an AI with no functional analogues of a belief or desire (especially since what counts as a functional analogue of those things is pretty fuzzy), but I guess it is possible.

One other point I would make is that — although I may be inclined to believe a certain version of the moral motivating belief objection — I am also perfectly willing to accept that the truth value of that objection is uncertain. There are many decent philosophical objections to motivational internalism and moral realism. Given this uncertainty, and given the potential risks involved with the creation of superintelligent AIs, we should probably proceed for the time being “as if” the orthogonality thesis is true.

Notes

1. Why care about the orthogonality thesis?
We are interested in an argument which says that AI might be dangerous, because it might be powerful and motivated by goals very far from our own. An occasional response to this is that if a creature is sufficiently intelligent, it will surely know things like which deeds are virtuous and what one ought do. Thus a sufficiently powerful AI cannot help but be kind to us. This is closely related to the position of the moral realist: that there are facts about what one ought do, which can be observed (usually mentally). 

So the role of the orthogonality thesis in the larger argument is to rule out the possibility that strong artificial intelligence will automatically be beneficial to humans, by virtue of being so clever. For this purpose, it seems a fairly weak version of the orthogonality thesis is needed. For instance, the qualifications discussed do not seem to matter. Even if one's mind needs to be quite complex to have many goals, there is little reason to expect the goals of more complex agents to be disproportionately human-friendly. Also the existence of goals which would undermine intelligence doesn't seem to affect the point.

2. Is the orthogonality thesis necessary?
If we talked about specific capabilities instead of 'intelligence' I suspect the arguments for AI risk could be made similarly well, without anyone being tempted to disagree with the analogous orthogonality theses for those skills. For instance, does anyone believe that a sufficiently good automated programming algorithm will come to appreciate true ethics? 

3. Some writings on the orthogonality thesis which I haven't necessarily read
The Superintelligent Will by Bostrom; Arguing the orthogonality thesis by Stuart Armstrong; Moral Realism, as discussed by lots of people; John Danaher blogs twice.

4. 'It might be impossible for a very unintelligent system to have very complex motivations'
If this is so, it seems something more general is true. For any given degree of mental complexity substantially less than that of the universe, almost all values cannot be had by any agent with that degree of complexity or less. You can see this by comparing the number of different states the universe could be in (and thus which one might in principle have as one's goal) to the number of different minds with less than the target level of complexity. Intelligence and complexity are not the same, and perhaps you can be very complex while stupid by dedicating most of your mind to knowing about your complicated goals, but if you think about things this way, then the original statement is also less plausible.
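A rough version of the counting step, under the crude assumption that both minds and goals can be described as bit strings (the bound and symbols below are my own illustration, not Bostrom's):

```latex
\underbrace{\sum_{k=0}^{n} 2^{k}}_{\text{minds describable in at most } n \text{ bits}} \;<\; 2^{\,n+1}
\qquad \text{vs.} \qquad
\underbrace{2^{S}}_{\text{goals that each single out one of } 2^{S} \text{ universe states}}
```

So if the universe has far more than n bits of state (S much greater than n), all but at most a 2^(n+1-S) fraction of such goals cannot be held by any mind of complexity at most n.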

5. How do you tell if two entities with different goals have the same intelligence? Suppose that I want to write award-winning non-fiction books and you want to be a successful lawyer. If we both just work on the thing we care about, how can anyone tell who is better in general? One nice way to judge is to artificially give us both the same instrumental goal, on which our intelligence can be measured. e.g. pay both of us thousands of dollars per correct question on an IQ test, which we could put toward our goals.

Note that this means we treat each person as having a fixed degree of intelligence across tasks. If I do well on the IQ test yet don't write many books, we would presumably say that writing books is just hard. This might work poorly as a model, if for instance people who did worse on the IQ test often wrote more books than me.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

 

  1. Are there interesting axes other than morality on which orthogonality may be false? That is, are there other ways the values of more or less intelligent agents might be constrained?
  2. Is moral realism true? (An old and probably not neglected one, but perhaps you have a promising angle)
  3. Investigate whether the orthogonality thesis holds for simple models of AI.
  4. To what extent can agents with values A be converted into agents with values B with appropriate institutions or arrangements?
  5. Sure, “any level of intelligence could in principle be combined with more or less any final goal,” but what kinds of general intelligences are plausible? Should we expect some correlation between level of intelligence and final goals in de novo AI? How true is this in humans, and in WBEs?

 

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about instrumentally convergent goals. To prepare, read 'Instrumental convergence' from Chapter 7. The discussion will go live at 6pm Pacific time next Monday November 17. Sign up to be notified here.

Superintelligence 8: Cognitive superpowers

7 KatjaGrace 04 November 2014 02:01AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the eighth section in the reading guide: Cognitive Superpowers. This corresponds to Chapter 6.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 6


Summary

  1. AI agents might have very different skill profiles.
  2. AI with some narrow skills could produce a variety of other skills. e.g. strong AI research skills might allow an AI to build its own social skills.
  3. 'Superpowers' that might be particularly important for an AI that wants to take control of the world include:
    1. Intelligence amplification: for bootstrapping its own intelligence
    2. Strategizing: for achieving distant goals and overcoming opposition
    3. Social manipulation: for escaping human control, getting support, and encouraging desired courses of action
    4. Hacking: for stealing hardware, money and infrastructure; for escaping human control
    5. Technology research: for creating military force, surveillance, or space transport
    6. Economic productivity: for making money to spend on taking over the world
  4. These 'superpowers' are relative to other nearby agents; Bostrom means them to be super only if they substantially exceed the combined capabilities of the rest of the global civilization.
  5. A takeover scenario might go like this:
    1. Pre-criticality: researchers make a seed-AI, which becomes increasingly helpful at improving itself
    2. Recursive self-improvement: seed-AI becomes main force for improving itself and brings about an intelligence explosion. It perhaps develops all of the superpowers it didn't already have.
    3. Covert preparation: the AI makes up a robust long term plan, pretends to be nice, and escapes from human control if need be.
    4. Overt implementation: the AI goes ahead with its plan, perhaps killing the humans at the outset to remove opposition.
  6. Wise Singleton Sustainability Threshold (WSST): a capability set exceeds this iff a wise singleton with that capability set would be able to take over much of the accessible universe. 'Wise' here means being patient and savvy about existential risks, 'singleton' means being internally coordinated and having no opponents.
  7. The WSST appears to be low. e.g. our own intelligence is sufficient, as would be some skill sets that were strong in only a few narrow areas.
  8. The cosmic endowment (what we could do with the matter and energy that might ultimately be available if we colonized space) is at least about 10^85 computational operations. This is equivalent to 10^58 emulated human lives.
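For reference, the two figures in point 8 jointly imply about 10^27 operations per emulated life, which is just their ratio:

```latex
\frac{10^{85}\ \text{operations}}{10^{58}\ \text{emulated lives}} \;=\; 10^{27}\ \text{operations per life}.
```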

Another view

Bostrom starts the chapter claiming that humans' dominant position comes from their slightly expanded set of cognitive functions relative to other animals. Computer scientist Ernest Davis criticizes this claim in a recent review of Superintelligence:

The assumption that a large gain in intelligence would necessarily entail a correspondingly large increase in power. Bostrom points out that what he calls a comparatively small increase in brain size and complexity resulted in mankind’s spectacular gain in physical power. But he ignores the fact that the much larger increase in brain size and complexity that preceded the appearance of man had no such effect. He says that the relation of a supercomputer to man will be like the relation of a man to a mouse, rather than like the relation of Einstein to the rest of us; but what if it is like the relation of an elephant to a mouse?

Notes

1. How does this model of AIs with unique bundles of 'superpowers' fit with the story we have heard so far?

Earlier it seemed we were just talking about a single level of intelligence that was growing, whereas now it seems there are potentially many distinct intelligent skills that might need to be developed. Does our argument so far still work out, if an agent has a variety of different sorts of intelligence to be improving?

If you recall, the main argument so far was that AI might be easy (have 'low recalcitrance') mostly because there is a lot of hardware and content sitting around and algorithms might randomly happen to be easy. Then more effort ('optimization power') will be spent on AI as it becomes evidently important. Then much more effort again will be spent when the AI becomes a large source of labor itself. This was all taken to suggest that AI might progress very fast from human-level to superhuman level, which suggests that one AI agent might get far ahead before anyone else catches up, suggesting that one AI might seize power. 

It seems to me that this argument works a bit less well with a cluster of skills than one central important skill, though it is a matter of degree and the argument was only qualitative to begin with.

It is less likely that AI algorithms will happen to be especially easy if a lot of different algorithms are needed. Also, if different cognitive skills are developed at somewhat different times, then it's harder to imagine a sudden jump when a fully capable AI suddenly reads the whole internet or becomes a hugely more valuable use for hardware than anything being run already. Then if there are many different projects needed for making an AI smarter in different ways, the extra effort (brought first by human optimism and then by self-improving AI) must be divided between those projects. If a giant AI could dedicate its efforts to improving some central feature that would improve all of its future efforts (like 'intelligence'), then it would do much better than if it has to devote one thousandth of its efforts to each of a thousand different sub-skills, each of which is only relevant in a few niche circumstances. Overall it seems AI must progress slower if its success is driven by more distinct dedicated skills.
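As a toy illustration of this qualitative point (the growth law and all numbers here are made up for the example), compare effort concentrated on one self-reinforcing skill with the same effort split evenly across a thousand sub-skills:

```python
# Toy comparison of concentrated vs. divided self-improvement effort.
# The growth law (a skill grows in proportion to its level times the effort
# devoted to it) and all numbers are illustrative only.

def grow(levels, effort_shares, rate=0.1, steps=50):
    for _ in range(steps):
        levels = [lvl * (1 + rate * share) for lvl, share in zip(levels, effort_shares)]
    return levels

concentrated = grow([1.0], [1.0])                # all effort on one skill
divided = grow([1.0] * 1000, [1 / 1000] * 1000)  # effort split 1000 ways

print(concentrated[0])      # ~117: compounding on the single skill
print(sum(divided) / 1000)  # ~1.005: each sub-skill barely moves
```

Both cases start with the same total effort, but only the concentrated case compounds, which is the shape of the argument above.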

2. The 'intelligence amplification' superpower seems much more important than the others. It directly leads to an intelligence explosion - a key reason we have seen so far to expect anything exciting to happen with AI - while several others just allow one-off grabbing of resources (e.g. social manipulation and hacking). Note that this suggests an intelligence explosion could happen with only this superpower, well before an AI appeared to be human-level.

3. Box 6 outlines a specific AI takeover scenario. A bunch of LessWrongers thought about other possibilities in this post.

4. Bostrom mentions that social manipulation could allow a 'boxed' AI to persuade its gatekeepers to let it out. Some humans have tried to demonstrate that this is a serious hazard by simulating the interaction using only an intelligent human in the place of the AI, in the 'AI box experiment'. Apparently in both 'official' efforts the AI escaped, though there have been other trials where the human won.

5. How to measure intelligence

Bostrom pointed to some efforts to design more general intelligence metrics:

Legg: intelligence is measured in terms of reward in all reward-summable environments, weighted by complexity of the environment.
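If I recall their papers correctly, Legg and Hutter's universal intelligence measure has roughly this form (schematic; see their work for the exact definitions):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}
```

where pi is the agent, E the class of computable reward-summable environments, K(mu) the Kolmogorov complexity of environment mu (so simpler environments get more weight), and V_mu^pi the agent's expected total reward in mu.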

Hibbard: intelligence is measured in terms of the hardest environment you can pass, in a hierarchy of increasingly hard environments

Dowe and Hernández-Orallo have several papers on the topic, and summarize some other efforts. I haven't looked at them enough to summarize.

The Turing Test is the most famous test of machine intelligence. However it only tests whether a machine is at a specific level so isn't great for fine-grained measurement of other levels of intelligence. It is also often misunderstood to measure just whether a machine can conduct a normal chat like a human, rather than whether it can respond as capably as a human to anything you can ask it.

For some specific cognitive skills, there are other measures already. e.g. 'economic productivity' can be measured crudely in terms of profits made. Others seem like they could be developed without too much difficulty. e.g. Social manipulation could be measured in terms of probabilities of succeeding at manipulation tasks - this test doesn't exist as far as I know, but it doesn't seem prohibitively difficult to make.

6. Will we be able to colonize the stars?

Nick Beckstead looked into it recently. Summary: probably.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, almost entirely taken from Luke Muehlhauser's list, without my looking into them further.

  1. Try to develop metrics for specific important cognitive abilities, including general intelligence. Build on the ideas of Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc.
  2. What is the construct validity of non-anthropomorphic intelligence measures? In other words, are there convergently instrumental prediction and planning algorithms? E.g. can one tend to get agents that are good at predicting economies but not astronomical events? Or do self-modifying agents in a competitive environment tend to converge toward a specific stable attractor in general intelligence space? 
  3. Scenario analysis: What are some concrete AI paths to influence over world affairs? See project guide here.
  4. How much of humanity’s cosmic endowment can we plausibly make productive use of given AGI? One way to explore this question is via various follow-ups to Armstrong & Sandberg (2013). Sandberg lists several potential follow-up studies in this interview, for example (1) get more precise measurements of the distribution of large particles in interstellar and intergalactic space, and (2) analyze how well different long-term storable energy sources scale. See Beckstead (2014).
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the orthogonality of intelligence and goals, section 9. To prepare, read 'The relation between intelligence and motivation' from Chapter 7. The discussion will go live at 6pm Pacific time next Monday November 10. Sign up to be notified here.

[Link]"Neural Turing Machines"

16 Prankster 31 October 2014 08:54AM

The paper.

Discusses the technical aspects of one of Google's AI projects. According to a PCWorld article, the system "apes human memory and programming skills" (the article seems pretty solid, and also contains a link to the paper). 

The abstract:

We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
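To give a sense of what "interact with by attentional processes" means, here is a minimal numpy sketch of a content-based read over external memory, as I understand it from the paper; it is not the authors' code, and it omits location-based addressing and the write operations.

```python
import numpy as np

def content_read(memory, key, beta=1.0):
    """Read from memory by attending to rows similar to the key.

    memory: (N, M) array of N memory rows; key: (M,) query; beta: sharpness.
    Returns a single (M,) read vector, a differentiable weighted sum of rows."""
    # Cosine similarity between the key and each memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()      # softmax attention over memory rows
    return weights @ memory

# Example: a key close to the second row mostly retrieves the second row.
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(content_read(M, np.array([0.1, 0.9]), beta=5.0))
```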

 

(First post here, feedback on the appropriateness of the post appreciated)

Superintelligence 7: Decisive strategic advantage

7 KatjaGrace 28 October 2014 01:01AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the seventh section in the reading guide: Decisive strategic advantage. This corresponds to Chapter 5.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 5 (p78-91)


Summary

  1. Question: will a single artificial intelligence project get to 'dictate the future'? (p78)
  2. We can ask, will a project attain a 'decisive strategic advantage' and will they use this to make a 'singleton'?
    1. 'Decisive strategic advantage' = a level of technological and other advantages sufficient for complete world domination (p78)
    2. 'Singleton' = a single global decision-making agency strong enough to solve all major global coordination problems (p78, 83)
  3. A project will get a decisive strategic advantage if there is a big enough gap between its capability and that of other projects. 
  4. A faster takeoff would make this gap bigger. Other factors would too, e.g. diffusion of ideas, regulation or expropriation of winnings, the ease of staying ahead once you are far enough ahead, and AI solutions to loyalty issues (p78-9)
  5. For some historical examples, leading projects have a gap of a few months to a few years with those following them. (p79)
  6. Even if a second project starts taking off before the first is done, the first may emerge decisively advantageous. If we imagine takeoff accelerating, a project that starts out just behind the leading project might still be far inferior when the leading project reaches superintelligence. (p82)
  7. How large would a successful project be? (p83) If the route to superintelligence is not AI, the project probably needs to be big. If it is AI, size is less clear. If lots of insights are accumulated in open resources, and can be put together or finished by a small team, a successful AI project might be quite small (p83).
  8. We should distinguish the size of the group working on the project, and the size of the group that controls the project (p83-4)
  9. If large powers anticipate an intelligence explosion, they may want to monitor those involved and/or take control. (p84)
  10. It might be easy to monitor very large projects, but hard to trace small projects designed to be secret from the outset. (p85)
  11. Authorities may just not notice what's going on, for instance if politically motivated firms and academics fight against their research being seen as dangerous. (p85)
  12. Various considerations suggest a superintelligence with a decisive strategic advantage would be more likely than a human group to use the advantage to form a singleton (p87-89)

Another view

This week, Paul Christiano contributes a guest sub-post on an alternative perspective:

Typically new technologies do not allow small groups to obtain a “decisive strategic advantage”—they usually diffuse throughout the whole world, or perhaps are limited to a single country or coalition during war. This is consistent with intuition: a small group with a technological advantage will still do further research slower than the rest of the world, unless their technological advantage overwhelms their smaller size.

The result is that small groups will be overtaken by big groups. Usually the small group will sell or lease their technology to society at large first, since a technology’s usefulness is proportional to the scale at which it can be deployed. In extreme cases such as war these gains might be offset by the cost of empowering the enemy. But even in this case we expect the dynamics of coalition-formation to increase the scale of technology-sharing until there are at most a handful of competing factions.

So any discussion of why AI will lead to a decisive strategic advantage must necessarily be a discussion of why AI is an unusual technology.

In the case of AI, the main difference Bostrom highlights is the possibility of an abrupt increase in productivity. In order for a small group to obtain such an advantage, their technological lead must correspond to a large productivity improvement. A team with a billion dollar budget would need to secure something like a 10,000-fold increase in productivity in order to outcompete the rest of the world. Such a jump is conceivable, but I consider it unlikely. There are other conceivable mechanisms distinctive to AI; I don’t think any of them have yet been explored in enough depth to be persuasive to a skeptical audience.


Notes

1. Extreme AI capability does not imply strategic advantage. An AI program could be very capable - such that the sum of all instances of that AI worldwide were far superior (in capability, e.g. economic value) to the rest of humanity's joint efforts - and yet the AI could fail to have a decisive strategic advantage, because it may not be a strategic unit. Instances of the AI may be controlled by different parties across society. In fact this is the usual outcome for technological developments.

2. On gaps between the best AI project and the second best AI project (p79). A large gap might develop either because of an abrupt jump in capability or extremely fast progress (which is much like an abrupt jump), or from one project having consistently faster growth than other projects for a time. Consistently faster progress is a bit like a jump, in that there is presumably some particular highly valuable thing that changed at the start of the fast progress. Robin Hanson frames his Foom debate with Eliezer as about whether there are 'architectural' innovations to be made, by which he means innovations which have a large effect (or so I understood from conversation). This seems like much the same question. On this, Robin says:

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

3. What should activists do? Bostrom points out that activists seeking maximum expected impact might wish to focus their planning on high leverage scenarios, where larger players are not paying attention (p86). This is true, but it's worth noting that changing the probability of large players paying attention is also an option for activists, if they think the 'high leverage scenarios' are likely to be much better or worse.

4. Trade. One key question seems to be whether successful projects are likely to sell their products, or hoard them in the hope of soon taking over the world. I doubt this will be a strategic decision they will make - rather it seems that one of these options will be obviously better given the situation, and we are uncertain about which. A lone inventor of writing should probably not have hoarded it for a solitary power grab, even though it could reasonably have seemed like a good candidate for radically speeding up the process of self-improvement.

5. Disagreement. Note that though few people believe that a single AI project will get to dictate the future, this is often because they disagree with things in the previous chapter - e.g. that a single AI project will plausibly become more capable than the world in the space of less than a month.

6. How big is the AI project? Bostrom distinguishes between the size of the effort to make AI and the size of the group ultimately controlling its decisions. Note that the people making decisions for the AI project may also not be the people making decisions for the AI - i.e. the agents that emerge. For instance, the AI-making company might sell versions of its AI to a range of organizations, modified for their particular goals. While in some sense their AI has taken over the world, the actual agents are acting on behalf of much of society.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

 

  1. When has anyone gained a 'decisive strategic advantage' at a smaller scale than the world? Can we learn anything interesting about what characteristics a project would need to have such an advantage with respect to the world?
  2. How scalable is innovative project secrecy? Examine past cases: Manhattan Project, Bletchley Park, Bitcoin, Anonymous, Stuxnet, Skunk Works, Phantom Works, Google X.
  3. How large are the gaps in development time between modern software projects? What dictates this? (e.g. is there diffusion of ideas from engineers talking to each other? From people changing organizations? Do people get far enough ahead that it is hard to follow them?)

 

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about Cognitive superpowers (section 8). To prepare, read Chapter 6. The discussion will go live at 6pm Pacific time next Monday 3 November. Sign up to be notified here.

Superintelligence 6: Intelligence explosion kinetics

9 KatjaGrace 21 October 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the sixth section in the reading guide: Intelligence explosion kinetics. This corresponds to Chapter 4 in the book, of a similar name. This section is about how fast a human-level artificial intelligence might become superintelligent.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 4 (p62-77)


Summary

  1. Question: If and when a human-level general machine intelligence is developed, how long will it be from then until a machine becomes radically superintelligent? (p62)
  2. The following figure from p63 illustrates some important features in Bostrom's model of the growth of machine intelligence. He envisages machine intelligence passing human level, then at some point reaching the level where most inputs to further intelligence growth come from the AI itself ('crossover'), then passing the level where a single AI system is as capable as all of human civilization, then reaching 'strong superintelligence'. The shape of the curve is probably intended as an example rather than a prediction.
  3. A transition from human-level machine intelligence to superintelligence might be categorized into one of three scenarios: 'slow takeoff' takes decades or centuries, 'moderate takeoff' takes months or years and 'fast takeoff' takes minutes to days. Which scenario occurs has implications for the kinds of responses that might be feasible.
  4. We can model improvement in a system's intelligence with this equation:

    Rate of change in intelligence = Optimization power/Recalcitrance

    where 'optimization power' is effort being applied to the problem, and 'recalcitrance' is how hard it is to make the system smarter by applying effort. (A small numerical sketch of this equation appears just after this summary.)
  5. Bostrom's comments on the recalcitrance of different methods of increasing intelligence:
    1. Cognitive enhancement via public health and diet: steeply diminishing returns (i.e. increasing recalcitrance)
    2. Pharmacological enhancers: diminishing returns, but perhaps there are still some easy wins because it hasn't had a lot of attention.
    3. Genetic cognitive enhancement: U-shaped recalcitrance - improvement will become easier as methods improve, but then returns will decline. Overall rates of growth are limited by maturation taking time.
    4. Networks and organizations: for organizations as a whole recalcitrance is high. A vast amount of effort is spent on this, and the world only becomes around a couple of percent more productive per year. The internet may have merely moderate recalcitrance, but this will likely increase as low-hanging fruits are depleted.
    5. Whole brain emulation: recalcitrance is hard to evaluate, but emulation of an insect will make the path much clearer. After human-level emulations arrive, recalcitrance will probably fall, e.g. because software manipulation techniques will replace physical-capital intensive scanning and image interpretation efforts as the primary ways to improve the intelligence of the system. Also there will be new opportunities for organizing the new creatures. Eventually diminishing returns will set in for these things. Restrictive regulations might increase recalcitrance.
    6. AI algorithms: recalcitrance is hard to judge. It could be very low if a single last key insight is discovered when much else is ready. Overall recalcitrance may drop abruptly if a low-recalcitrance system moves out ahead of higher recalcitrance systems as the most effective method for solving certain problems. We might overestimate the recalcitrance of sub-human systems in general if we see them all as just 'stupid'.
    7. AI 'content': recalcitrance might be very low because of the content already produced by human civilization, e.g. a smart AI might read the whole internet fast, and so become much better.
    8. Hardware (for AI or uploads): potentially low recalcitrance. A project might be scaled up by orders of magnitude by just purchasing more hardware. In the longer run, hardware tends to improve according to Moore's law, and the installed capacity might grow quickly if prices rise due to a demand spike from AI.
  6. Optimization power will probably increase after AI reaches human-level, because its newfound capabilities will attract interest and investment.
  7. Optimization power would increase more rapidly if AI reaches the 'crossover' point, when much of the optimization power is coming from the AI itself. Because smarter machines can improve their intelligence more than less smart machines, after the crossover a 'recursive self improvement' feedback loop would kick in.
  8. Thus optimization power is likely to increase during the takeoff, and this alone could produce a fast or moderate takeoff. Further, recalcitrance is likely to decline. Bostrom concludes that a fast or moderate takeoff looks likely, though a slow takeoff cannot be excluded.
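
To make the model in point 4 concrete, here is a minimal numerical sketch in Python (my own illustration, not from the book). It holds the optimization power applied from outside fixed and compares constant recalcitrance with recalcitrance that falls as the system improves, roughly in the spirit of the overhang stories above; all numbers are arbitrary.

    # Minimal sketch of the equation in point 4: rate of change in intelligence =
    # optimization power / recalcitrance. Illustrative parameters only.

    def time_to_superintelligence(falling_recalcitrance, dt=0.001):
        intelligence = 1.0      # human level, normalized to 1
        effort = 1.0            # optimization power applied from outside, held constant
        t = 0.0
        while intelligence < 100.0:     # '100x human level' as an arbitrary finish line
            recalcitrance = 1.0 / intelligence if falling_recalcitrance else 1.0
            intelligence += dt * effort / recalcitrance
            t += dt
        return t

    print(f"constant recalcitrance: {time_to_superintelligence(False):.1f} time units")
    print(f"falling recalcitrance:  {time_to_superintelligence(True):.1f} time units")

With constant recalcitrance the system crawls upward linearly; if recalcitrance falls in proportion to the system's capability, growth is exponential and the same distance is covered in a small fraction of the time.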

Notes

1. The argument for a relatively fast takeoff is one of the most controversial arguments in the book, so it deserves some thought. Here is my somewhat formalized summary of the argument as it is presented in this chapter. I personally don't think it holds, so tell me if that's because I'm failing to do it justice. The steps marked in pink in the original post (those without page references or derivations: 4, 5, and 8) are not explicitly in the chapter, but are assumptions the argument seems to use.

  1. Growth in intelligence = optimization power / recalcitrance [true by definition]
  2. Recalcitrance of AI research will probably drop or be steady when AI reaches human level (p68-73)
  3. Optimization power spent on AI research will increase after AI reaches human level (p73-77)
  4. Optimization/Recalcitrance will stay similarly high for a while prior to crossover
  5. A 'high' O/R ratio prior to crossover will produce explosive growth OR crossover is close
  6. Within minutes to years, human-level intelligence will reach crossover [from 1-5]
  7. Optimization power will climb ever faster after crossover, in line with the AI's own growing capacity (p74)
  8. Recalcitrance will not grow much between crossover and superintelligence
  9. Within minutes to years, crossover-level intelligence will reach superintelligence [from 7 and 8]
  10. Within minutes to years, human-level AI will likely transition to superintelligence [from 6 and 9]

Do you find this compelling? Should I have filled out the assumptions differently?

***

2. Other takes on the fast takeoff 

It seems to me that 5 above is the most controversial point. The famous Foom Debate was a long argument between Eliezer Yudkowsky and Robin Hanson over the plausibility of fast takeoff, among other things. Their arguments were mostly about both arms of 5, as well as the likelihood of an AI taking over the world (to be discussed in a future week). The Foom Debate included a live verbal component at Jane Street Capital: blog summary, video, transcript. Hanson more recently reviewed Superintelligence, again criticizing the plausibility of a single project quickly matching the capacity of the world.

Kevin Kelly criticizes point 5 from a different angle: he thinks that speeding up human thought can't speed up progress all that much, because progress will quickly bottleneck on slower processes.

Others have compiled lists of criticisms and debates here and here.

3. A closer look at 'crossover'

Crossover is 'a point beyond which the system's further improvement is mainly driven by the system's own actions rather than by work performed upon it by others'. Another way to put this, avoiding certain ambiguities, is 'a point at which the inputs to a project are mostly its own outputs', such that improvements to its outputs feed back into its inputs. 

The nature and location of such a point seems an interesting and important question. If you think crossover is likely to be very nearby for AI, then you need only worry about the recursive self-improvement part of the story, which kicks in after crossover. If you think it will be very hard for an AI project to produce most of its own inputs, you may want to pay more attention to the arguments about fast progress before that point.

To have a concrete picture of crossover, consider Google. Suppose Google improves their search product such that one can find a thing on the internet a radical 10% faster. This makes Google's own work more effective, because people at Google look for things on the internet sometimes. How much more effective does this make Google overall? Maybe they spend a couple of minutes a day doing Google searches, i.e. 0.5% of their work hours, for an overall saving of 0.05% of work time. This suggests the next improvements made at Google will be made 1.0005 times faster than the last. It will take a while for this positive feedback to take off. If Google coordinated your eating and organized your thoughts and drove your car for you and so on, and then Google improved efficiency using all of those services by 10% in one go, then this might make their employees close to 10% more productive, which might produce more noticeable feedback. Then Google would have reached the crossover. This is perhaps easier to imagine for Google than for other projects, yet I think still fairly hard to imagine.
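
To put rough numbers on this feedback (my own back-of-the-envelope arithmetic, using the illustrative figures above rather than anything from Google), the compounding looks like this in Python:

    import math

    # A 10% search speedup applied to ~0.5% of work time, compounded over improvement cycles.
    # Figures are illustrative, matching the rough numbers in the paragraph above
    # (about 2.4 minutes of searching in an 8-hour day is 0.5% of work time).
    search_minutes_per_day = 2.4
    work_minutes_per_day = 480.0
    search_share = search_minutes_per_day / work_minutes_per_day      # 0.5% of work time
    gain_per_cycle = 1 + 0.10 * search_share                          # ~1.0005

    cycles_to_double = math.log(2) / math.log(gain_per_cycle)
    print(f"productivity gain per improvement cycle: {gain_per_cycle:.4f}")
    print(f"improvement cycles for this feedback alone to double the pace: {cycles_to_double:.0f}")

At roughly 1,400 improvement cycles just to double the pace of improvement, the feedback is real but glacial, which is the point of the example.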

Hanson talks more about this issue when he asks why the explosion argument doesn't apply to other recursive tools. He points to Douglas Engelbart's ambitious proposal to use computer technologies to produce a rapidly self-improving tool set.

Below is a simple model of a project which contributes all of its own inputs, and one which begins mostly being improved by the world. They are both normalized to begin one tenth as large as the world and to grow at the same pace as each other (this is why the one with help grows slower, perhaps counterintuitively). As you can see, the project which is responsible for its own improvement takes far less time to reach its 'singularity', and is more abrupt. It starts out at crossover. The project which is helped by the world doesn't reach crossover until it passes 1. 
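
The original figure is not reproduced here, but the model can be sketched in a few lines of Python. This is my own reconstruction under the stated assumptions (each project's growth is proportional to the inputs it receives, both start at one tenth of the world's size, and the growth constants are chosen so that both start growing at the same absolute rate); the model behind the original figure may differ in detail.

    # One possible reconstruction of the model described above (illustrative only).
    # Project A supplies all of its own inputs (it starts at crossover); project B is
    # helped by a world of fixed size 1 until its own contribution overtakes the world's.

    def time_to_reach(target, self_improving, dt=0.001):
        world = 1.0
        project = 0.1
        # Pick growth constants so both projects have the same absolute growth rate at the start.
        k = 1.0 if self_improving else 0.1 / (world + 0.1)
        t = 0.0
        crossover_time = 0.0 if self_improving else None
        while project < target:
            inputs = project if self_improving else world + project
            project += dt * k * inputs
            t += dt
            if crossover_time is None and project >= world:
                crossover_time = t      # the project's own contribution now exceeds the world's
        return t, crossover_time

    for label, self_improving in (("self-improving project", True), ("world-assisted project", False)):
        t, crossover = time_to_reach(100.0, self_improving)
        print(f"{label}: crossover at t = {crossover:.1f}, reaches 100x the world's size at t = {t:.1f}")

Under these assumptions the self-improving project reaches the 100x mark roughly seven times sooner, and the world-assisted project only crosses over after it has grown past the world itself, matching the qualitative claims above.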


4. How much difference does attention and funding make to research?

Interest and investments in AI at around human-level are (naturally) hypothesized to accelerate AI development in this chapter. It would be good to have more empirical evidence on the quantitative size of such an effect. I'll start with one example, because examples are a bit costly to investigate. I selected renewable energy before I knew the results, because they come up early in the Performance Curves Database, and I thought their funding likely to have been unstable. Indeed, OECD funding since the 70s looks like this apparently:

[Figure: OECD renewable energy R&D funding since the 1970s] (from here)

The steep increase in funding in the early 80s was due to President Carter's energy policies, which were related to the 1979 oil crisis.

This is what various indicators of progress in renewable energies look like (click on them to see their sources):

[Figures: several indicators of progress in renewable energy technologies]

There are quite a few more at the Performance Curves Database. I see surprisingly little relationship between the funding curves and these metrics of progress. Some of them are shockingly straight. What is going on? (I haven't looked into these more than you see here).

5. Other writings on recursive self-improvement

Eliezer Yudkowsky wrote about the idea originally, e.g. here. David Chalmers investigated the topic in some detail, and Marcus Hutter did some more. More pointers here.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Model the intelligence explosion more precisely. Take inspiration from successful economic models, and evidence from a wide range of empirical areas such as evolutionary biology, technological history, algorithmic progress, and observed technological trends. Eliezer Yudkowsky has written at length about this project.
  2. Estimate empirically a specific interaction in the intelligence explosion model. For instance, how much and how quickly does investment increase in technologies that look promising? How much difference does that make to the rate of progress in the technology? How much does scaling up researchers change output in computer science? (Relevant to how much adding extra artificial AI researchers speeds up progress) How much do contemporary organizations contribute to their own inputs? (i.e. how hard would it be for a project to contribute more to its own inputs than the rest of the world put together, such that a substantial positive feedback might ensue?) Yudkowsky 2013 again has a few pointers (e.g. starting at p15).
  3. If human thought was sped up substantially, what would be the main limits to arbitrarily fast technological progress?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'decisive strategic advantage': the possibility of a single AI project getting huge amounts of power in an AI transition. To prepare, read Chapter 5, Decisive Strategic Advantage (p78-90). The discussion will go live at 6pm Pacific time next Monday Oct 27. Sign up to be notified here.

Can AIXI be trained to do anything a human can?

3 Stuart_Armstrong 20 October 2014 01:12PM

There is some discussion as to whether an AIXI-like entity would be able to defend itself (or refrain from destroying itself). The problem is that such an entity would be unable to model itself as being part of the universe: AIXI itself is an uncomputable entity modelling a computable universe, and more limited variants like AIXI(tl) lack the power to simulate themselves. Therefore, they cannot identify "that computer running the code" with "me", and would cheerfully destroy themselves in the pursuit of their goals/reward.

I've pointed out that an agent of the AIXI type could nevertheless learn to defend itself in certain circumstances: namely, circumstances where it could translate bad things happening to itself into bad things happening to the universe. For instance, if someone pressed an OFF switch to turn it off for an hour, it could model that as "the universe jumps forwards an hour when that button is pushed", and if that's a negative (which it likely is, since the AIXI loses an hour of influencing the universe), it would seek to prevent that OFF switch being pressed.

That was an example of the setup of the universe "training" the AIXI to do something that it didn't seem it could do. Can this be generalised? Let's go back to the initial AIXI design (the one with the reward channel) and put a human in charge of that reward channel with the mission of teaching the AIXI important facts. Could this work?

For instance, if anything dangerous approached the AIXI's location, the human could lower the AIXI's reward until it became very effective at deflecting danger. The greater the variety of things that could potentially threaten the AIXI, the more likely it is to construct plans of action that contain behaviours that look a lot like "defend myself." We could even imagine that there is a robot programmed to repair the AIXI if it gets (mildly) damaged. The human could then reward the AIXI if it leaves that robot intact, builds duplicates, or improves it in some way. It's therefore possible the AIXI could come to value "repairing myself", still without an explicit model of itself in the universe.
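
As a toy illustration of this training dynamic: below is a minimal sketch in Python using an ordinary tabular Q-learner rather than AIXI itself (AIXI is uncomputable, so nothing here is AIXI). The environment, reward values and parameters are all my own illustrative choices. The agent never models itself; it simply learns that states where the danger is close predict low reward, and that an available 'deflect' action avoids them, so it ends up behaving as if it were defending itself.

    import random

    # Toy sketch: a rewarder lowers reward sharply whenever the approaching danger reaches
    # the agent. A plain tabular Q-learner, with no model of itself, learns to deflect.
    random.seed(0)
    ACTIONS = (0, 1)                    # 0 = ignore, 1 = deflect the approaching danger
    Q = {(d, a): 0.0 for d in (1, 2, 3) for a in ACTIONS}   # d = distance of the danger
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    def step(distance, action):
        """Environment plus 'human rewarder'; all numbers are illustrative."""
        if action == 1:
            return 3, 0.9               # deflecting resets the danger, at a small cost to reward
        if distance == 1:
            return 3, -10.0             # danger arrives: reward drops sharply, scenario resets
        return distance - 1, 1.0        # otherwise the agent collects its ordinary reward

    distance = 3
    for _ in range(20000):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(distance, a)])
        new_distance, reward = step(distance, action)
        best_next = max(Q[(new_distance, a)] for a in ACTIONS)
        Q[(distance, action)] += alpha * (reward + gamma * best_next - Q[(distance, action)])
        distance = new_distance

    # After training, the agent should reliably choose 'deflect' when the danger is about
    # to hit, even though nothing in its state or learning rule refers to 'itself'.
    for d in (1, 2, 3):
        print(f"danger at distance {d}: prefers {'deflect' if Q[(d, 1)] > Q[(d, 0)] else 'ignore'}")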

It seems this approach could be extended to many of the problems with AIXI. Sure, an AIXI couldn't restrict its own computation in order to win the HeatingUp game. But the AIXI could be trained to always use subagents to deal with these kinds of games, subagents that could achieve maximal score. In fact, if the human has good knowledge of the AIXI's construction, it could, for instance, pinpoint a button that causes the AIXI to cut short its own calculation. The AIXI could then learn that pushing that button in certain circumstances would get a higher reward. A similar reward mechanism, if kept up long enough, could get it around existential despair problems.

I'm not claiming this would necessarily work - it may require a human rewarder of unfeasibly large intelligence. But it seems there's a chance that it could work. So it seems that categorical statements of the type "AIXI wouldn't..." or "AIXI would..." are wrong, at least as far as AIXI's behaviour is concerned. An AIXI couldn't develop self-preservation - but it could behave as if it had. It can't learn about itself - but it can behave as if it did. The human rewarder may not be necessary - maybe certain spontaneously occurring situations in the universe ("AIXI training wheels arenas") could allow the AIXI to develop these skills without outside training. Or maybe somewhat stochastic AIXIs with evolution and natural selection could do so. There is an angle connected with embodied embedded cognition that might be worth exploring there (especially the embedded part).

It seems that agents of the AIXI type may not necessarily have the limitations we assume they must.

A few thoughts on a Friendly AGI (safe vs friendly, other minds problem, ETs and more)

3 the-citizen 19 October 2014 07:59AM

Friendly AI is an idea that I find to be an admirable goal. While I'm not yet sure an intelligence explosion is likely, or whether FAI is possible, I've found myself often thinking about it, and I'd like my first post to share a few of those thoughts on FAI with you.

Safe AGI vs Friendly AGI
-Let's assume an Intelligence Explosion is possible for now, and that an AGI with the ability to improve itself somehow is enough to achieve it.
-Let's define a safe AGI as an above-human general AI that does not threaten humanity or terran life (e.g. FAI, Tool AGI, possibly Oracle AGI)
-Let's define a Friendly AGI as one that *ensures* the continuation of humanity and terran life.
-Let's say an unsafe AGI is all other AGIs.
-Safe AGIs must supress unsafe AGIs in order to be considered Friendly. Here's why:

-If we can build a safe AGI, we probably have the technology to build an unsafe AGI too.
-An unsafe AGI is likely to be built at that point because:
-It's very difficult to conceive of a way that humans alone will be able to permanently stop all humans from developing an unsafe AGI once the steps are known**
-Some people will find the safe AGI's goals unacceptable
-Some people will rationalise or simply mistake that their AGI design is safe when it is not
-Some people will not care if their AGI design is safe, because they do not care about other people, or because they hold some extreme beliefs
-Most imaginable unsafe AGIs would outcompete safe AGIs, because they would not necessarily be "hamstrung" by complex goals such as protecting us meatbags from destruction. Tool or Oracle AGIs would obviously not stand a chance due to their restrictions.
-Therefore, If a safe AGI does not prevent unsafe AGIs from coming into existence, humanity will very likely be destroyed.

-The AGI most likely to prevent unsafe AGIs from being created is one that actively predicts their development and terminates it before or on completion.
-So to summarise

-An AGI is very likely only a Friendly AI if it actively supresses unsafe AGI.
-Oracle and Tool AGIs are not Friendly AIs, they are just safe AIs, because they don't suppress anything.
-Oracle and Tool AGIs are a bad plan for AI if we want to prevent the destruction of humanity, because hostile AGIs will surely follow.

(**On reflection I cannot be certain of this specific point, but I assume it would take a fairly restrictive regime for this to be wrong. Further comments on this very welcome.)

Other minds problem - Why we should be philosophically careful when attempting to theorise about FAI

I read quite a few comments in AI discussions that I'd probably characterise as "the best utility function for a FAI is one that values all consciousness". I'm quite concerned that this persists as a deeply held and largely unchallenged assumption amongst some FAI supporters. I think in general I find consciousness to be an extremely contentious, vague and inconsistently defined concept, but here I want to talk about some specific philosophical failures.

My first concern is that while many AI theorists like to say that consciousness is a physical phenomenon, which seems to imply Monist/Physicalist views, they at the same time don't seem to understand that consciousness is a Dualist concept that is coherent only in a Dualist framework. A Dualist believes there is a thing called a "subject" (very crudely this equates with the mind) and then things called objects (the outside "empirical" world interpreted by that mind). Most of this reasoning begins with Descartes' cogito ergo sum or similar starting points ( https://en.wikipedia.org/wiki/Cartesian_dualism ). Subjective experience, qualia and consciousness make sense if you accept that framework. But if you're a Monist, this arbitrary distinction between a subject and object is generally something you don't accept. In the case of a Physicalist, there's just matter doing stuff. A proper Physicalist doesn't believe in "consciousness" or "subjective experience", there's just brains and the physical human behaviours that occur as a result. Your life exists from a certain point of view, I hear you say? The Physicalist replies, "well a bunch of matter arranged to process information would say and think that, wouldn't it?".

I don't really want to get into whether Dualism or Monism is correct/true, but I want to point out that even if you try to avoid this by deciding Dualism is right and consciousness is a thing, there's yet another, more dangerous problem. The core of the problem is that logically or empirically establishing the existence of minds other than your own is extremely difficult (impossible according to many). They could just be physical things walking around acting similar to you, but by virtue of something purely mechanical - without actual minds. In philosophy this is called the "other minds problem" ( https://en.wikipedia.org/wiki/Problem_of_other_minds or http://plato.stanford.edu/entries/other-minds/). I recommend a proper read of it if the idea seems crazy to you. It's a problem that's been around for centuries, and yet to date we don't really have any convincing solution (there are some attempts but they are highly contentious and IMHO also highly problematic). I won't get into it more than that for now; suffice to say that not many people accept that there is a logical/empirical solution to this problem.

Now extrapolate that to an AGI, and the design of its "safe" utility functions. If your AGI is designed as a Dualist (which is necessary if you wish to incorporate "consciousness", "experience" or the like into your design), then you build in a huge risk that the AGI will decide that other minds are unprovable or do not exist. In this case your friendly utility function designed to protect "conscious beings" fails and the AGI wipes out humanity, because humanity poses a non-zero threat to the only consciousness it can confirm - its own. For this reason I feel "consciousness", "awareness", "experience" should be left out of FAI utility functions and designs, regardless of the truth of Monism/Dualism, in favour of more straightforward definitions of organisms, intelligence, observable emotions and intentions. (I personally favour conceptualising any AGI as a sort of extension of biological humanity, but that's a discussion for another day) My greatest concern is there is such strong cultural attachment to the concept of consciousness that researchers will be unwilling to properly question the concept at all.

What if we're not alone?

It seems a little unusual to throw alien life into the mix at this point, but I think it's justified because an intelligence explosion really puts an interstellar existence well within our civilisation's grasp. Because it seems that an intelligence explosion implies a very high rate of change, it makes sense to start considering even the long-term implications early, particularly if the consequences are very serious, as I believe they may be in this realm of things.

Let's say we successfully achieved an FAI. In order to fulfill its mission of protecting humanity and the biosphere, it begins expanding, colonising and terraforming other planets for potential habitation by Earth-originating life. I would expect this expansion wouldn't really have a limit, because the more numerous the colonies, the less likely it is we could be wiped out by some interstellar disaster.

Of course, we can't really rule out the possibility that we're not alone in the universe, or even the galaxy. If we make it as far as AGI, then it's possible another alien civilisation might reach a very high level of technological advancement too. Or there might be many. If our FAI is friendly to us but basically treats them as paperclip fodder, then potentially that's a big problem. Why? Well:

-Firstly, while a species' first loyalty is to itself, we should consider that it might be morally undesirable to wipe out alien civilisations, particularly as they might be in some distant way "related" (see panspermia) to our own biosphere.
-Secondly, there are conceivable scenarios where alien civilisations might respond to this by destroying our FAI/Earth/the biosphere/humanity. The reason is fairly obvious when you think about it. An expansionist AGI could be reasonably viewed as an attack or possibly an act of war.

Let's go into a tiny bit more detail. Given that we've not been destroyed by any alien AGI just yet, I can think of a number of possible interstellar scenarios:

(1) There is no other advanced life
(2) There is advanced life, but it is inherently non-expansive (expand inwards, or refuse to develop dangerous AGI)
(3) There is advanced life, but they have not discovered AGI yet. There could potentially be a race-to-the-finish (FAI) scenario on.
(4) There is already expanding AGIs, but due to physical limits on the expansion rate, we are not aware of them yet. (this could use further analysis)
One civilisation, or an allied group of civilisations, has developed FAIs and is dominant in the galaxy. They could be:

(5) Whack-a-mole civilisations that destroy all potential competitors as soon as they are identified
(6) Dominators that tolerate civilisations so long as they remain primitive and non-threatening by comparison.
(7) Some sort of interstellar community that allows safe civilisations to join (this community still needs to stomp on dangerous potential rival AGIs)

In the case of (6) or (7), developing a FAI that isn't equipped to deal with alien life will probably result in us being liquidated, or at least partially sanitised in some way. In (1), (2) or (5), it probably doesn't matter what we do in this regard, though in (2) we should consider being nice. In (3) and probably (4) we're going to need a FAI capable of expanding very quickly and disarming potential AGIs (or at least ensuring they are FAIs from our perspective).

The upshot of all this is that we probably want to design safety features into our FAI so that it doesn't destroy alien civilisations/life unless it's a significant threat to us. I think the understandable reaction to this is something along the lines of "create an FAI that values all types of life" or "intelligent life". I don't exactly disagree, but I think we must be cautious in how we formulate this too.

Say there are many different civilisations in the galaxy. What sort of criteria would ensure that, given some sort of zero-sum scenario, Earth life wouldn't be destroyed? Let's say there was some sort of tiny but non-zero probability that humanity could evade the FAI's efforts to prevent further AGI development. Or perhaps there was some loophole in the types of AGI's that humans were allowed to develop. Wouldn't it be sensible, in this scenario, for a universalist FAI to wipe out humanity to protect the countless other civilisations? Perhaps that is acceptable? Or perhaps not? Or less drastically, how does the FAI police warfare or other competition between civilisations? A slight change in the way life is quantified and valued could change drastically the outcome for humanity. I'd probably suggest we want to weight the FAI's values to start with human and Earth biosphere primacy, but then still give some non-zero weighting to other civilisations. There is probably more thought to be done in this area too.

Simulation

I want to also briefly note that one conceivable way to test Friendly AI designs safely is to simulate worlds/universes of less complexity than our own, make it likely that their inhabitants invent an AGI or FAI, and then closely study the results of these simulations. Then we could study failed FAI attempts with much greater safety. It also occurred to me that if we consider the possibility of our universe being a simulated one, then this is a conceivable scenario under which our simulation might be created. After all, if you're going to simulate something, why not something vital like modelling existential risks? I'm not yet sure of the implications exactly. Maybe we need to consider how it relates to our universe's continued existence, or perhaps it's just another case of Pascal's Mugging. Anyway I thought I'd mention it and see what people say.

A playground for FAI theories

I want to lastly mention this link (https://www.reddit.com/r/LessWrongLounge/comments/2f3y53/the_ai_game/). Basically it's a challenge for people to briefly describe an FAI goal-set, and for others to respond by telling them how that will all go horribly wrong. I want to suggest this is a very worthwhile discussion, not because its content will include rigorous theories that are directly translatable into utility functions - very clearly it won't - but because a well-developed thread of this kind would be a mixing pot of ideas and a good introduction to commonly known mistakes in thinking about FAI. We should encourage a slightly more serious version of this.

Thanks

FAI and AGI are very interesting topics. I don't consider myself able to really discern whether such things will occur, but it's an interesting and potentially vital topic. I'm looking forward to a bit of feedback on my first LW post. Thanks for reading!

Superintelligence 5: Forms of Superintelligence

12 KatjaGrace 14 October 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 3 (p52-61)


Summary

  1. A speed superintelligence could do what a human does, but faster. This would make the outside world seem very slow to it. It might cope with this partially by being very tiny, or virtual. (p53)
  2. A collective superintelligence is composed of smaller intellects, interacting in some way. It is especially good at tasks that can be broken into parts and completed in parallel. It can be improved by adding more smaller intellects, or by organizing them better. (p54)
  3. A quality superintelligence can carry out intellectual tasks that humans just can't in practice, without necessarily being better or faster at the things humans can do. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities. (p56-7)
  4. These different kinds of superintelligence are especially good at different kinds of tasks. We might say they have different 'direct reach'. Ultimately they could all lead to one another, so can indirectly carry out the same tasks. We might say their 'indirect reach' is the same. (p58-9)
  5. We don't know how smart it is possible for a biological or a synthetic intelligence to be. Nonetheless we can be confident that synthetic entities can be much more intelligent than biological entities:
    1. Digital intelligences would have better hardware: they would be made of components ten million times faster than neurons; the components could communicate about two million times faster than neurons can; they could use many more components while our brains are constrained to our skulls; it looks like better memory should be feasible; and they could be built to be more reliable, long-lasting, flexible, and well suited to their environment.
    2. Digital intelligences would have better software: they could be cheaply and non-destructively 'edited'; they could be duplicated arbitrarily; they could have well aligned goals as a result of this duplication; they could share memories (at least for some forms of AI); and they could have powerful dedicated software (like our vision system) for domains where we have to rely on slow general reasoning.

Notes

  1. This chapter is about different kinds of superintelligent entities that could exist. I like to think about the closely related question, 'what kinds of better can intelligence be?' You can be a better baker if you can bake a cake faster, or bake more cakes, or bake better cakes. Similarly, a system can become more intelligent if it can do the same intelligent things faster, or if it does things that are qualitatively more intelligent. (Collective intelligence seems somewhat different, in that it appears to be a means to be faster or able to do better things, though it may have benefits in dimensions I'm not thinking of.) I think the chapter is getting at different ways intelligence can be better rather than 'forms' in general, which might vary on many other dimensions (e.g. emulation vs AI, goal directed vs. reflexive, nice vs. nasty).
  2. Some of the hardware and software advantages mentioned would be pretty transformative on their own. If you haven't before, consider taking a moment to think about what the world would be like if people could be cheaply and perfectly replicated, with their skills intact. Or if people could live arbitrarily long by replacing worn components. 
  3. The main differences between increasing intelligence of a system via speed and via collectiveness seem to be: (1) the 'collective' route requires that you can break up the task into parallelizable subtasks, (2) it generally has larger costs from communication between those subparts, and (3) it can't produce a single unit as fast as a comparable 'speed-based' system. This suggests that anything a collective intelligence can do, a comparable speed intelligence can do at least as well. One counterexample to this I can think of is that often groups include people with a diversity of knowledge and approaches, and so the group can do a lot more productive thinking than a single person could. It seems wrong to count this as a virtue of collective intelligence in general however, since you could also have a single fast system with varied approaches at different times.
  4. For each task, we can think of curves for how performance increases as we increase intelligence in these different ways. For instance, take the task of finding a fact on the internet quickly. It seems to me that a person who ran at 10x speed would get the figure 10x faster. Ten times as many people working in parallel would do it only a bit faster than one, depending on the variance of their individual performance, and whether they found some clever way to complement each other. It's not obvious how to multiply qualitative intelligence by a particular factor, especially as there are different ways to improve the quality of a system. It also seems non-obvious to me how search speed would scale with a particular measure such as IQ. (A small simulation of the parallel case appears after these notes.)
  5. How much more intelligent do human systems get as we add more humans? I can't find much of an answer, but people have investigated the effect of things like team size, city size, and scientific collaboration on various measures of productivity.
  6. The things we might think of as collective intelligences - e.g. companies, governments, academic fields - seem notable to me for being slow-moving, relative to their components. If someone were to steal some chewing gum from Target, Target can respond in the sense that an employee can try to stop them. And this is no slower than an individual human acting to stop their chewing gum from being taken. However it also doesn't involve any extra problem-solving from the organization - to the extent that the organization's intelligence goes into the issue, it has to have already done the thinking ahead of time. Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
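
On the claim in point 4 that parallelism helps less than raw speed, and that the gap depends on variance, here is a small Monte Carlo sketch (my own toy model, not from the book). Search times are drawn from a lognormal distribution; a single searcher at 10x speed is always exactly ten times faster, while the fastest of ten parallel searchers gains much less when individual performance varies little.

    import random
    import statistics

    # Toy Monte Carlo: how much faster is the fastest of ten parallel searchers than one
    # searcher, for low and high variance in individual search times? (Illustrative only.)
    random.seed(0)

    def parallel_speedup(sigma, n_workers=10, trials=50_000):
        solo = [random.lognormvariate(0.0, sigma) for _ in range(trials)]
        team = [min(random.lognormvariate(0.0, sigma) for _ in range(n_workers))
                for _ in range(trials)]
        return statistics.mean(solo) / statistics.mean(team)

    for sigma in (0.2, 1.0):
        print(f"variance parameter {sigma}: ten parallel searchers are about "
              f"{parallel_speedup(sigma):.1f}x faster (a 10x-speed searcher is exactly 10x)")

With little individual variance the team is only slightly faster than one searcher; with a lot of variance the fastest member buys a larger, but still well under 10x, speedup.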

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Produce improved measures of (substrate-independent) general intelligence. Build on the ideas of Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc. Differentiate intelligence quality from speed.
  2. List some feasible but non-realized cognitive talents for humans, and explore what could be achieved if they were given to some humans.
  3. List and examine some types of problems better solved by a speed superintelligence than by a collective superintelligence, and vice versa. Also, what are the returns on “more brains applied to the problem” (collective intelligence) for various problems? If there were merely a huge number of human-level agents added to the economy, how much would it speed up economic growth, technological progress, or other relevant metrics? If there were a large number of researchers added to the field of AI, how would it change progress?
  4. How does intelligence quality improve performance on economically relevant tasks?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.

SRG 4: Biological Cognition, BCIs, Organizations

7 KatjaGrace 07 October 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we finish chapter 2 with three more routes to superintelligence: enhancement of biological cognition, brain-computer interfaces, and well-organized networks of intelligent agents. This corresponds to the fourth section in the reading guide: Biological Cognition, BCIs, Organizations.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Biological Cognition” and the rest of Chapter 2 (p36-51)


Summary

Biological intelligence

  1. Modest gains to intelligence are available with current interventions such as nutrition.
  2. Genetic technologies might produce a population whose average is smarter than anyone who has ever lived.
  3. Some particularly interesting possibilities are 'iterated embryo selection' where many rounds of selection take place in a single generation, and 'spell-checking' where the genetic mutations which are ubiquitous in current human genomes are removed.

Brain-computer interfaces

  1. It is sometimes suggested that machines interfacing closely with the human brain will greatly enhance human cognition. For instance implants that allow perfect recall and fast arithmetic. (p44-45) 
  2. Brain-computer interfaces seem unlikely to produce superintelligence (p51) This is because they have substantial health risks, because our existing systems for getting information in and out of our brains are hard to compete with, and because our brains are probably bottlenecked in other ways anyway. (p45-6) 
  3. 'Downloading' directly from one brain to another seems infeasible because each brain represents concepts idiosyncratically, without a standard format. (p46-7)

Networks and organizations

  1. A large connected system of people (or something else) might become superintelligent. (p48) 
  2. Systems of connected people become more capable through technological and institutional innovations, such as enhanced communications channels, well-aligned incentives, elimination of bureaucratic failures, and mechanisms for aggregating information. The internet as a whole is a contender for a network of humans that might become superintelligent (p49) 

Summary

  1. Since there are many possible paths to superintelligence, we can be more confident that we will get there eventually (p50) 
  2. Whole brain emulation and biological enhancement are both likely to succeed after enough incremental progress in existing technologies. Networks and organizations are already improving gradually. 
  3. The path to AI is less clear, and may be discontinuous. Which route we take might matter a lot, even if we end up with similar capabilities anyway. (p50)

The book so far

Here's a recap of what we have seen so far, now at the end of Chapter 2:

  1. Economic history suggests big changes are plausible.
  2. AI progress is ongoing.
  3. AI progress is hard to predict, but AI experts tend to expect human-level AI in mid-century.
  4. Several plausible paths lead to superintelligence: brain emulations, AI, human cognitive enhancement, brain-computer interfaces, and organizations.
  5. Most of these probably lead to machine superintelligence ultimately.
  6. That there are several paths suggests we are likely to get there.

Do you disagree with any of these points? Tell us about it in the comments.

Notes

  1. Nootropics
    Snake Oil Supplements? is a nice illustration of scientific evidence for different supplements, here filtered for those with purported mental effects, many of which relate to intelligence. I don't know how accurate it is, or where to find a summary of apparent effect sizes rather than evidence, which I think would be more interesting.

    Ryan Carey and I talked to Gwern Branwen - an independent researcher with an interest in nootropics - about prospects for substantial intelligence amplification. I was most surprised that Gwern would not be surprised if creatine gave normal people an extra 3 IQ points.
  2. Environmental influences on intelligence
    And some more health-specific ones.
  3. The Flynn Effect
    People have apparently been getting smarter by about 3 points per decade for much of the twentieth century, though this trend may be ending. Several explanations have been proposed. Namesake James Flynn has a TED talk on the phenomenon. It is strangely hard to find a good summary picture of these changes, but here's a table from Flynn's classic 1978 paper of measured increases at that point:


    Here are changes in IQ test scores over time in a set of Polish teenagers, and a set of Norwegian military conscripts respectively:


  4. Prospects for genetic intelligence enhancement
    This study uses 'Genome-wide Complex Trait Analysis' (GCTA) to estimate that about half of variation in fluid intelligence in adults is explained by common genetic variation (childhood intelligence may be less heritable). These studies use genetic data to predict 1% of variation in intelligence. This genome-wide association study (GWAS) allowed prediction of 2% of education and IQ. This study finds several common genetic variants associated with cognitive performance. Stephen Hsu very roughly estimates that you would need a million samples in order to characterize the relationship between intelligence and genetics. According to Robertson et al, even among students in the top 1% of quantitative ability, cognitive performance predicts differences in occupational outcomes later in life. The Social Science Genetics Association Consortium (SSGAC) lead research efforts on genetics of education and intelligence, and are also investigating the genetics of other 'social science traits' such as self-employment, happiness and fertility. Carl Shulman and Nick Bostrom provide some estimates for the feasibility and impact of genetic selection for intelligence, along with a discussion of reproductive technologies that might facilitate more extreme selection. Robert Sparrow writes about 'in vitro eugenics'. Stephen Hsu also had an interesting interview with Luke Muehlhauser about several of these topics, and summarizes research on genetics and intelligence in a Google Tech Talk.
  5. Some brain computer interfaces in action
    For Parkinson's disease relief, allowing locked-in patients to communicate, handwriting, and controlling robot arms.
  6. What changes have made human organizations 'smarter' in the past?
    Big ones I can think of include innovations in using text (writing, printing, digital text editing), communicating better in other ways (faster, further, more reliably), increasing population size (population growth, or connection between disjoint populations), systems for trade (e.g. currency, finance, different kinds of marketplace), innovations in business organization, improvements in governance, and forces leading to reduced conflict.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. How well does IQ predict relevant kinds of success? This is informative about what enhanced humans might achieve, in general and in terms of producing more enhancement. How much better is a person with IQ 150 at programming or doing genetics research than a person with IQ 120? How does IQ relate to philosophical ability, reflectiveness, or the ability to avoid catastrophic errors? (related project guide here).
  2. How promising are nootropics? Bostrom argues 'probably not very', but it seems worth checking more thoroughly. One related curiosity is that on casual inspection, there seem to be quite a few nootropics that appeared promising at some point and then haven't been studied much. This could be explained well by any of publication bias, whatever forces are usually blamed for relatively natural drugs receiving little attention, or the casualness of my casual inspection.
  3. How can we measure intelligence in non-human systems? e.g. What are good ways to track increasing 'intelligence' of social networks, quantitatively? We have the general sense that groups of humans are the level at which everything is a lot better than it was in 1000BC, but it would be nice to have an idea of how this is progressing over time. Is GDP a reasonable metric?  
  4. What are the trends in those things that make groups of humans smarter? e.g. How will world capacity for information communication change over the coming decades? (Hilbert and Lopez's work is probably relevant)
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'forms of superintelligence', in the sense of different dimensions in which general intelligence might be scaled up. To prepare, read Chapter 3, Forms of Superintelligence (p52-61). The discussion will go live at 6pm Pacific time next Monday 13 October. Sign up to be notified here.
