Stuart_Armstrong comments on The Great Filter is early, or AI is hard - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Alternatively, the only stable AGI has a morality that doesn't lead it to simply colonise the whole universe.
Not colonising the universe - many moralities could go with that.
Allowing potential rivals to colonise the universe instead... that's much rarer.