loup-vaillant comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong
I mean exactly that. I'd be very surprised if it ultimately turned out to be impossible to run neuromorphic AIs significantly faster than meat-ware, because our brain is massively parallel, and current microprocessors have massively faster serial speed than neurons. Now, our brains aren't fully parallel, so I assumed an arbitrary speed-up limit. I said 10 times, but it would probably still be incredibly dangerous at 2 or 3, or even lower.
Now do not forget the key word here: botnet. The team is supposed to duplicate itself many times over before trying to take over the world.
I don't think so, because uploads have significant advantages over meat-ware.
Low cost of living, in a world where every middle-class home can afford sufficient computing power to run an upload (which is required to turn me into a botnet in the first place). Now try to beat my prices.
Being many copies of the same few original brains. It means TDT works better, and defection is less likely. This should solve most internal coordination problems.
Because once the self-duplicating team has independently taken economic control of most of the world, it is easy for the team to accept the domination of one instance (I would certainly pre-commit to that). For the rest of humanity to accept such dominance, the uploads only have to use the resources they acquired for the individual perceived benefit of the meat bags.
Yep, that would be a full-blown global conspiracy. While it's probably forever out of the reach of meat bags, I think a small team of self-replicating uploads could pull it off quite easily.
Hansonian tactics, which can further the productivity of the team, and therefore its market power. (One has to be very motivated, or possibly crazy.)
Data-centres. The upload team can collaborate with or buy processor manufacturers, and build data-centres for more and more uploads to work on whatever is needed. This could further reduce the cost of living.
Now, I did make an unreasonable assumption: that only the original team would have those advantages. Most probably, there will be several such teams, possibly with different goals. The most likely result (without FOOM) is then a Hansonian outcome. That's not world domination, but I think it is just as dangerous (I would hate such a world).
Finally, there is also the possibility of a de-novo AGI which would be just as competent as the best humans at most endeavours, though no faster. We already have an existence proof, so I think this is believable. I think such an AI would be even more dangerous than the uploaded team above.
So would I. However, given our current level of technological development, I'd be very surprised if we had any kind of a neuromorphic AI at all in the near future (say, in the next 50 years). Still, I do agree with you in principle.
There are tons of biological people alive today who are able to come up with solutions to problems 2x to 3x faster than you and me. They do not rule the world. To be fair, I doubt that there are many people -- if any -- who think 10x faster.
I doubt that you will be able to achieve that; that was my whole point. In fact, I have trouble envisioning what "economic control of most of the world" even means. What does it mean to you ?
In addition to the above, your botnet would face several significant threats, both external and internal:
These are just some problems off the top of my head; the list is far from exhaustive.