Alex_Altair comments on Friendly AI Society - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm a little confused about this post. I believe that it is infinitesimally likely that any two AIs will have anywhere near the same optimization power. The first AI will have a very fast positive feedback loop, and will quickly become powerful enough to stop any later AI, should one come into existence before the first AI controls human activity. Do you reject part of this argument?
Giles wrote:
It is the "access to resources" part that is key here. You're looking at two categories of AI: seed AIs, which are deliberately designed by humanity not to be self-improving (or even self-modifying) past a certain point, but which have high access to resources; and 'free citizen' AIs, which are fully self-modifying but may initially have restricted access to resources.
When you (Alex) talk about "the first AI", what you're talking about is the first 'free citizen' AI. But there will already be seed AIs out there which (initially) will have greater optimisation power, and the ability to choke off the new 'free citizen' AI's access to resources if it doesn't play nicely.