Thanks Katja. I'm diving in a bit late here, but I would like to query the group on the potential threats posed by AI. I've been intrigued by AI for thirty years and have followed the field peripherally. Something is very appealing about the idea of creating truly intelligent machines and, even more exciting, seeing those machines be able to improve themselves. However, I have, along with some others (including, most recently, Elon Musk), become increasingly concerned about the threat that our technology, and particularly AI, may pose to us. This chapter on potentia...
Bostrom summarized (p91):
We are a successful species. The reason for our success is slightly expanded mental faculties compared with other species, allowing better cultural transmission. This suggests that substantially greater intelligence would bring extreme power.
Our general intelligence isn't obviously the source of this improved cultural transmission. Why suppose general intelligence is the key thing, instead of improvements specific to storing and communicating information? Doesn't the observation that our cultural transmission abilities made u...
I think that there is tremendous risk from an AI that can beat the world in narrow fields, like finance or war. We might hope to outwit the narrow capability set of a war-planner or equities trader, but if such a super-optimizer works within accepted frameworks, like a national military or a hedge fund, it may be impossible to stop it before it's too late; world civilization could then be disrupted enough that the AI or its master can gain control beyond these narrow spheres.
So in this chapter Bostrom discusses an AGI with a neutral but "passionate" goal, such as "I will devote all of my energies to being the best possible chess player, come what may."
I am going to turn this around a little bit.
By human moral standards, that is not an innocuous goal at all. Having that as one's ONLY goal runs counter to just about every ethical system ever taught in any school.
It's obviously not ethical for a person to murder all the competition in order to become the best chess player in the world, nor is it ethical for a c...
Can you think of strategically important narrow cognitive skills beyond those that Bostrom mentions? (p94)
The 'intelligence amplification' superpower seems much more important than the others.
This does seem like the most important, but it's not necessarily the only superpower that would suffice for takeoff. Superpower-level social manipulation could be used to get human AI researchers to help. Alternatively, ample funds plus human-comparable social manipulation could likely achieve the same; economic productivity or hacking could be used to attain those funds. With some basic ability to navigate the economy, technology research would imply economic producti...
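One way to make this chain of attainments concrete is to treat it as a reachability question over a small directed graph: nodes are capabilities, and an edge means "this capability can be used to attain that one." Here is a minimal Python sketch; the node names and edges are my own illustrative assumptions drawn from the paragraph above, not anything Bostrom specifies.

```python
# Toy sketch: model "superpower X can be used to attain Y" claims as a
# directed graph and check which starting capabilities reach "takeoff".
# All node names and edges are illustrative assumptions, not Bostrom's.

EDGES = {
    "intelligence_amplification": ["takeoff"],
    "social_manipulation":        ["human_researcher_help"],
    "human_researcher_help":      ["intelligence_amplification"],
    "hacking":                    ["funds"],
    "economic_productivity":      ["funds"],
    "technology_research":        ["economic_productivity"],
    "funds":                      ["human_researcher_help"],
}

def reaches_takeoff(start, edges=EDGES):
    """Depth-first search: can `start` eventually attain 'takeoff'?"""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == "takeoff":
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(edges.get(node, []))
    return False

for power in ["intelligence_amplification", "hacking", "technology_research"]:
    print(power, "->", reaches_takeoff(power))
```

Under these assumed edges, several different starting capabilities reach takeoff, which is the point of the comment: intelligence amplification is the shortest route, but not the only one.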
If you had a super-duper ability to design further cognitive abilities, which would you build first? (suppose that it's only super enough to let you build other super-duper abilities in around a year, so you can't just build a lot of them now) (p94)
Occasionally in this crew, people discuss the idea of computer simulations of the introduction of an AGI into our world. Such simulations could make use of advanced technology, but significant progress could be made even if the simulations did not themselves involve an AGI.
I would like to hear how people might flesh out that research direction. I am not completely against trying to prove theorems about formal systems; it's just that the simulation direction is perfectly good virgin research territory. If we made progress along that path, it would also be much easier to explain.
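To give a sense of how modest the starting point could be, here is a minimal sketch of the simplest version of such a simulation: a toy world of agents whose capabilities compound, with one agent (the "AGI") compounding faster. Every parameter and name here is an arbitrary assumption chosen for illustration, not a claim about realistic dynamics.

```python
import random

# Minimal sketch of an "AGI introduction" simulation: a toy population of
# agents whose capabilities grow by reinvestment each step. One designated
# agent (the "AGI") reinvests more efficiently. All parameters are
# arbitrary assumptions for illustration.

random.seed(0)

N_AGENTS = 100
STEPS = 50
HUMAN_GROWTH = 1.02   # assumed ~2% capability growth per step
AGI_GROWTH = 1.15     # assumed faster, compounding growth for the AGI

capabilities = [1.0] * N_AGENTS
agi_index = 0  # designate agent 0 as the AGI

for step in range(STEPS):
    for i in range(N_AGENTS):
        growth = AGI_GROWTH if i == agi_index else HUMAN_GROWTH
        # small random shocks so trajectories aren't perfectly smooth
        capabilities[i] *= growth * random.uniform(0.98, 1.02)

agi_share = capabilities[agi_index] / sum(capabilities)
print(f"AGI share of total capability after {STEPS} steps: {agi_share:.1%}")
```

Even a model this crude surfaces the questions a serious version would have to answer: what growth rates are plausible, how agents interact or resist one another, and at what share of total capability "control" is effectively attained.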
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the eighth section in the reading guide: Cognitive Superpowers. This corresponds to Chapter 6.
This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Chapter 6
Summary
Another view
Bostrom starts the chapter by claiming that humans' dominant position comes from their slightly expanded set of cognitive functions relative to other animals. Computer scientist Ernest Davis criticizes this claim in a recent review of Superintelligence:
Notes
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, almost entirely taken from Luke Muehlhauser's list, without my looking into them further.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about the orthogonality of intelligence and goals, section 9. To prepare, read "The relation between intelligence and motivation" from Chapter 7. The discussion will go live at 6pm Pacific time next Monday, November 10. Sign up to be notified here.