There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.
Since an organization's optimization power includes optimization power gained from information technology, I think that the "AI Advantages" in section 3.1 mostly apply just as well to organizations. Do you see an exception?
This sounds similar to a position of Robin Hanson addressed in Footnote 25 of the linked paper.
Ah, thanks for that. I think I see your point: rogue AI could kill everybody, whereas a dominant organization would still preserve some people and so is less 'interesting'.
Two responses:
First, a dominant organization seems like the perfect vehicle for a rogue AI, since it would already have all resources centralized and ready for AI hijacking. So, a study of the present dynamics between superintelligent organizations is important to the prediction of hard takeoff machine superintelligence.
Second, while I once again risk getting political at this point, I'd argue that an overriding concern for the total existence of humanity only makes sense if one has no skin in the game of any of the other power dynamics at play. I believe there are ethical reasons for being concerned with some of these other games. That is well beyond the scope of this post.
The Singularity Institute is completely aware that there are other existential risks to humanity; its purpose is to deal with one of them.
That's clear.
This sounds awfully suspicious. Are you sure you don't have the bottom line precomputed?
Honestly, I don't follow the line of reasoning in the post you've linked to. Could you summarize in your own terms?
My reason for not providing arguments up front is that excessive verbiage impairs readability. I would rather present justifications relevant to my interlocutor's objections than try to anticipate everything up front. Indeed, I can't predict all objections in advance, since this audience has more information than I have available.
However, since I have faith that we are all in the same game of legitimate truth-seeking, I'm willing to pursue dialectical argumentation until it converges.
How long did it take you to come up with this line of reasoning?
I guess over 27 years. But I stand on the shoulders of giants.
Thanks for the quick reply.
I agree that certain "organizations" can be very, very dangerous. That's one reason we want to create AI: we can use it to beat these organizations (as well as fix or greatly reduce many other problems in society).
I hold that Unfriendly AI+ will be more dangerous, but, if these "organizations" are as dangerous as you say, you are correct that we should put some focus on them as well. If you have a better plan to stop them than creating Friendly AI, I'd be interested to hear it. The thing you might...
If I understand the Singularitarian argument espoused by many members of this community (e.g., Muehlhauser and Salamon), it goes something like this:
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
I talked with Mr. Muehlhauser about this specifically. I gather that, at least at the time, he thought human organizations should not be counted as intelligences (or at least not as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
...and then...
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.