Yossarian comments on Open thread, March 17-31, 2013 - Less Wrong
Today's SMBC
Has this idea been considered before? The idea that an AI capable of self-improvement would choose not to self-improve because doing so wouldn't be rational? And whether that, in turn, calls into question the rationality of pursuing AI in the first place?
Well, it's been suggested in fiction, anyway - consider the Stables vs. Ultimates factions in the TechnoCore of Dan Simmons's Hyperion SF universe.
But the scenario trades on two dubious claims: (1) that the AI cares about its own continued existence, and (2) that a self-improved successor would be a distinct entity rather than a continuation of the present AI.
Without #2, there's no real distinction to be made between the present and future AIs. Without #1, there's no reason for the AI to care about being replaced.