This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the eighteenth section in the reading guide: Life in an algorithmic economy. This corresponds to the middle of Chapter 11.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Life in an algorithmic economy” from Chapter 11
Summary
- In a multipolar scenario, biological humans might lead poor and meager lives. (p166-7)
- The AIs might be worthy of moral consideration, and if so their wellbeing might be more important than that of the relatively few humans. (p167)
- AI minds might be much like slaves, even if they are not literally slaves. They may be selected for liking this. (p167)
- Because brain emulations would be very cheap to copy, it will often be convenient to make a copy and then later turn it off (in a sense killing a person). (p168)
- There are various other reasons that very short lives might be optimal for some applications. (p168-9)
- It isn't obvious whether brain emulations would be happy working all of the time. Some relevant considerations are current human emotions in general and regarding work, probable selection for pro-work individuals, the evolutionary adaptiveness of happiness in the past and future (e.g. does happiness help you work harder?), and the absence of present sources of unhappiness such as injury. (p169-171)
- In the long run, artificial minds may not even be conscious, or have valuable experiences, if these are not the most effective ways for them to earn wages. If such minds replace humans, Earth might have an advanced civilization with nobody there to benefit. (p172-3)
- In the long run, artificial minds may outsource many parts of their thinking, thus becoming decreasingly differentiated as individuals. (p172)
- Evolution does not imply positive progress. Even those good things that evolved in the past may not withstand evolutionary selection in a new circumstance. (p174-6)
Another view
Robin Hanson on others' hasty distaste for a future of emulations:
Parents sometimes disown their children, on the grounds that those children have betrayed key parental values. And if parents have the sort of values that kids could deeply betray, then it does make sense for parents to watch out for such betrayal, ready to go to extremes like disowning in response.
But surely parents who feel inclined to disown their kids should be encouraged to study their kids carefully before making such a choice. For example, parents considering whether to disown their child for refusing to fight a war for their nation, or for working for a cigarette manufacturer, should wonder to what extent national patriotism or anti-smoking really are core values, as opposed to being mere revisable opinions they collected at one point in support of other more-core values. Such parents would be wise to study the lives and opinions of their children in some detail before choosing to disown them.
I’d like people to think similarly about my attempts to analyze likely futures. The lives of our descendants in the next great era after this our industry era may be as different from ours as ours are from farmers’, or farmers’ are from foragers’. When they have lived as neighbors, foragers have often strongly criticized farmer culture, as farmers have often strongly criticized industry culture. Surely many have been tempted to disown any descendants who adopted such despised new ways. And while such disowning might hold them true to core values, if asked we would advise them to consider the lives and views of such descendants carefully, in some detail, before choosing to disown.
Similarly, many who live industry era lives and share industry era values, may be disturbed to see forecasts of descendants with life styles that appear to reject many values they hold dear. Such people may be tempted to reject such outcomes, and to fight to prevent them, perhaps preferring a continuation of our industry era to the arrival of such a very different era, even if that era would contain far more creatures who consider their lives worth living, and be far better able to prevent the extinction of Earth civilization. And such people may be correct that such a rejection and battle holds them true to their core values.
But I advise such people to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. I hope that my future analysis can assist such soul-searching examination. If after studying such detail, you still feel compelled to disown your likely descendants, I cannot confidently say you are wrong. My job, first and foremost, is to help you see them clearly.
More on whose lives are worth living here and here.
Notes
1. Robin Hanson is probably the foremost researcher on what the finer details of an economy of emulated human minds would be like. For instance, how fast various company employees would run, how big cities would be, and whether people would hang out with their copies. See a TEDx talk, and writings here, here, here and here (some overlap - sorry). He is also writing a book on the subject, which you can read early if you ask him.
2. Bostrom says,
Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man...the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable....(p166)
It's true this might happen, but it doesn't seem like an especially likely scenario to me. As Bostrom has pointed out in various places earlier, biological humans would do quite well if they have some investments in capital, do not have too much of their property stolen or artfully maneuvered away from them, and do not themselves undergo too much population growth. These risks don't seem so large to me.
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
- Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in Sandberg & Bostrom (2008), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described by (e.g.) Ken Hayworth, or is it likely to be (3) something else?
- Extend and update our understanding of when brain emulations might appear (see Sandberg & Bostrom (2008)).
- Investigate the likelihood of a multipolar outcome.
- Follow Robin Hanson (see above) in working out the social implications of an emulation scenario.
- What kinds of responses to the default low-regulation multipolar outcome outlined in this section are likely to be made? e.g. is any strong regulation likely to emerge that avoids the features detailed in the current section?
- What measures are useful for ensuring good multipolar outcomes?
- What qualitatively different kinds of multipolar outcomes might we expect? e.g. brain emulation outcomes are one class.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about the possibility of a multipolar outcome turning into a singleton later. To prepare, read “Post-transition formation of a singleton?” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 19 January. Sign up to be notified here.
Evolution tends to do a basically random walk exploration of the easily reached possibility space available to any specific life form. Given that it has to start from something very simple, initial exploration is towards greater complexity. Once a reasonable level of complexity is reached, the random walk is only slightly more likely to move towards greater complexity, and is almost equally likely to go back towards lesser complexity, for any specific population. Viewing the entire ecosystem of populations, however, there will be a general trajectory of expansion into new territory of possibility. The key thing to get is that any specific population or individual (when considering the population of behavioural memes within that individual) is almost as likely to go back into territory already explored as it is to explore new territory.
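The random-walk picture above can be sketched as a toy simulation (the floor value, step count, and number of lineages are invented purely for illustration): each lineage's "complexity" takes an unbiased walk reflected at a minimal-complexity floor, so a typical lineage is about as likely to revisit old territory as to reach new territory, while the frontier of the whole ensemble still expands.

```python
import random

random.seed(0)

LOWER_BOUND = 1   # life can't get simpler than some minimal form
STEPS = 10_000
LINEAGES = 200

def walk(steps):
    """Unbiased random walk in 'complexity', reflected at the lower bound."""
    c = LOWER_BOUND  # every lineage starts very simple
    for _ in range(steps):
        c += random.choice([-1, 1])
        c = max(c, LOWER_BOUND)  # reflecting boundary at minimal complexity
    return c

endpoints = sorted(walk(STEPS) for _ in range(LINEAGES))

# Per-lineage view: away from the boundary, up and down moves are equally
# likely, so the median lineage ends at modest complexity.
typical_end = endpoints[LINEAGES // 2]

# Ecosystem view: the maximum over all lineages keeps expanding into new
# territory -- the "general trajectory of expansion" in the text.
frontier_end = endpoints[-1]

print(typical_end, frontier_end)
```

The gap between the median and the maximum is the whole point: no individual lineage is biased upward once it leaves the floor, yet the ensemble's frontier grows anyway.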
There is a view of evolution that is not commonly taught, that acknowledges the power of competition as a selection filter between variants, and also acknowledges that all major advances in complexity of systems are characterised by new levels of cooperation. And all cooperative strategies require attendant strategies to prevent invasion by "cheats". Each new level of complexity is a new level of cooperation.
There are many levels of attendant strategies that can and do speed evolution of subsets of any set of characters.
Evolution is an exceptionally complex set of systems within systems. At both the genetic and memetic levels, evolution is a massively recursive process, with many levels of attendant strategies. Darwin is a good introduction; follow it with Axelrod, Maynard Smith, and Wolfram, and there are many others worth reading - perhaps the best introduction is Richard Dawkins's classic "The Selfish Gene".
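The point about cooperation needing attendant strategies against "cheats" can be illustrated with a toy iterated prisoner's dilemma in the style of Axelrod's tournaments. The payoff numbers and strategy set below are the standard textbook ones, chosen for illustration rather than taken from any of the authors cited above.

```python
# (my_move, their_move) -> my score; C = cooperate, D = defect ("cheat")
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_cooperate(my_history, their_history):
    return "C"

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# An unconditional cooperator is fully exploited by a cheat...
coop_vs_cheat = play(always_cooperate, always_defect)   # (0, 500)
# ...while tit-for-tat loses only the first round, then stops cooperating.
tft_vs_cheat = play(tit_for_tat, always_defect)         # (99, 104)
print(coop_vs_cheat, tft_vs_cheat)
```

Tit-for-tat's retaliation is the "attendant strategy to prevent invasion by cheats" in miniature: cooperation only persists when paired with a mechanism that makes cheating unprofitable.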
Unless it is deliberately or accidentally altered, an emulation will possess all of the evolved traits of human brains. These include powerful mechanisms to prevent an altruistic absurdity such as donating one's labor to an employer. (Pure altruism -- an act that benefits another at the expense of one's genetic interests -- is strongly selected against.) There are some varieties of altruism that survive: kin selection (e.g., rescuing a drowning nephew), status display (making a large donation to a hospital), and reciprocal aid (helping a neighbor in hope...