I'd be very surprised if, ultimately, neuromorphic AIs turned out to be impossible to run significantly faster than meatware.
So would I. However, given our current level of technological development, I'd be very surprised if we had any kind of a neuromorphic AI at all in the near future (say, in the next 50 years). Still, I do agree with you in principle.
I said 10 times, but it would probably still be incredibly dangerous at 2 or 3, or even lower.
There are tons of biological people alive today who are able to come up with solutions to problems 2x to 3x faster than you and me. They do not rule the world. To be fair, I doubt that there are many people -- if any -- who think 10x faster.
Because once the self-duplicating team has independently taken economic control of most of the world...
I doubt that you will be able to achieve that; that was my whole point. In fact, I have trouble envisioning what "economic control of most of the world" even means. What does it mean to you?
In addition to the above, your botnet would face several significant threats, both external and internal:
These are just some problems off the top of my head; the list is far from exhaustive.
If I understand the Singularitarian argument espoused by many members of this community (e.g. Muehlhauser and Salamon), it goes something like this:
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
I talked with Mr. Muehlhauser about this specifically. I gather that, at least at the time, he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
...and then...
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.