This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the twelfth section in the reading guide: Malignant failure modes.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: 'Malignant failure modes' from Chapter 8
Summary
Another view
In this chapter Bostrom discusses the difficulty he perceives in designing goals that don't lead to indefinite resource acquisition. Steven Pinker recently offered a different perspective on the inevitability of resource acquisition:
Notes
1. Perverse instantiation is a very old idea. It is what genies are most famous for. King Midas had similar problems. Apparently it was applied to AI by 1947, in With Folded Hands.
2. Adam Elga writes more on simulating people for blackmail and indexical uncertainty.
3. More directions for making AI that doesn't lead to infrastructure profusion (one such direction, satisficing, is sketched below):
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group, though, is the discussion, which takes place in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about capability control methods, section 13. To prepare, read “Two agency problems” and “Capability control methods” from Chapter 9. The discussion will go live at 6pm Pacific time next Monday, December 8. Sign up to be notified here.