passive_fist comments on The Fermi paradox as evidence against the likelyhood of unfriendly AI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's definitely unlikely; I just brought it up as an example because chaosmage said, "I fail to imagine any intelligent lifeform that wouldn't want to expand." There are plenty of lifeforms already that don't want to expand, and I can imagine some (unlikely but not impossible) situations in which a SAI wouldn't want to expand either.