DanielLC comments on Ends Don't Justify Means (Among Humans) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Just to extend on this: it seems most likely that multiple AIs would be subject to dynamics similar to evolution, and a totally 'Friendly' AI would probably tend to lose out against more self-serving (but not necessarily evil) AIs. Or, just like the 'young revolutionary' of the first post, a truly enlightened Friendly AI would be forced to seize power in order to deny it to any less moral AIs.
Philosophical questions aside, the likely reality of future AI development is surely that it will go to those AIs able to seize the resources to propagate and improve themselves.
Why would a Friendly AI lose out? It can do anything any other AI can do. It's not like a human, who has to worry about becoming corrupt if it starts committing atrocities for the good of humanity.