Utilitarian comments on [Link] Values Spreading is Often More Important than Extinction Risk - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (21)
Some really creative ideas, ChristianKl. :)
Even with what you describe, humans wouldn't become extinct, barring other outcomes like really bad nuclear war or whatever.
However, since the AI wouldn't be destroyed, it could bide its time. Maybe it could ally with some people and give them tech/power in exchange for carrying out its bidding. They could help build the robots, etc. that would be needed to actually wipe out humanity.
Obviously there's a lot of conjunction here. I'm not claiming this scenario specifically is likely. But it helps to stimulate the imagination to work out an existence proof for the extinction risk from AGI.
Some AIs already do this today. They outsource work they can't do to Amazon's Mechanical Turk, where humans get paid to do tasks for the AI.
Other humans take on jobs on Rentacoder where they never see the human that's hiring them.
Humans wouldn't go extinct in a short time frame, but if the AGI has decades of time, then it can increase its own power and decrease its dependence on humans. Sooner or later the humans wouldn't be useful to the AGI anymore, and then they would go extinct.