Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
We must not underestimate slow superintelligences. Our judiciary is also slow, so some of the actions we could take are very slow.
Humanity could also be overtaken by a slow (and alien) superintelligence.
It does not matter if you would quickly see that things are going the wrong way. You could still lose your rights and your power to act, slowly, step by step... (like slowly losing pieces in a chess game)
If strong entities in our world will be (or already are?) driven by poorly designed goals - for example "maximize profit" - then they could really be very dangerous to humanity.
I really don't want to spoil our discussion with politics; rather, I would like to see rational discussion of all the existential threats that could arise from superintelligent beings/entities.
We must not underestimate any form, or any method, of our possible doom.
With big data coming, our society is more and more ruled by algorithms. And the algorithms are getting smarter and smarter.
Algorithms are not independent of the entities that have enough money or enough political power to use them.
BTW, Bostrom wrote (sorry, not in a chapter we have discussed yet) about possible perverse instantiation, which could result from a goal poorly designed by a programmer. I am afraid that in our society it will be a manager or a politician who will design (or is designing) the goal. (We have to find a way to get a philosopher and a mathematician involved as well.)
In my opinion, the first superintelligence (if not a singleton) will most probably be (or already is) a 'mixed form': some group of well-organized people (don't forget the lawyers) with a big database and a supercomputer.
The stages after an intelligence explosion could take any other form.
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Chapter 3 (p52-61)
Summary
Notes
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.