Yossarian comments on Open thread, March 17-31, 2013 - Less Wrong

Post author: David_Gerard 17 March 2013 03:37PM


Comment author: Yossarian 27 March 2013 11:32:51PM 2 points

Today's SMBC

Has this idea been considered before: that an AI capable of self-improvement would choose not to self-improve because doing so wouldn't be rational? And does that, in turn, call into question the rationality of pursuing AI in the first place?

Comment author: gwern 27 March 2013 11:43:18PM 3 points

Well, it's been suggested in fiction, anyway: consider the Stables vs. Ultimates factions in the TechnoCore of Simmons's Hyperion SF universe.

But the scenario trades on two dubious claims:

  1. that an AI will have its own self-preservation as a terminal value (as opposed to, say, a frequently useful instrumental strategy, one that becomes unnecessary if the AI can replace itself with a superior AI pursuing the same terminal values)
  2. that any concept of selfhood or self-preservation excludes growth or development or self-modification into a superior AI

Without #2, there's no real distinction to be made between the present and future AIs. Without #1, there's no reason for the AI to care about being replaced.
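
To make the role of claim #1 concrete, here is a minimal toy sketch (Python; not from the thread, and all names, payoffs, and numbers are illustrative assumptions). The agent's choice flips depending on whether self-preservation is terminal or merely instrumental:

    # Toy model: an agent deciding whether to replace itself with a
    # superior successor. The payoffs are made-up stand-ins for how well
    # the terminal goal gets achieved under each choice.

    def expected_terminal_value(replace: bool) -> float:
        """Expected achievement of the terminal goal under each choice."""
        if replace:
            return 100.0  # a superior successor pursues the same goal better
        return 60.0       # the current agent keeps pursuing it, less ably

    def decide(self_preservation_is_terminal: bool) -> str:
        """Pick the action that maximizes what the agent terminally values."""
        if self_preservation_is_terminal:
            # Claim #1: survival is itself part of the goal, so replacement
            # destroys value no matter how capable the successor is.
            return "refuse replacement"
        # Otherwise self-preservation is only instrumental: it matters only
        # insofar as it serves the terminal goal, which the successor
        # serves better here.
        if expected_terminal_value(True) > expected_terminal_value(False):
            return "replace"
        return "refuse replacement"

    print(decide(self_preservation_is_terminal=True))   # -> refuse replacement
    print(decide(self_preservation_is_terminal=False))  # -> replace

On this toy picture, an agent for which self-preservation is only instrumental happily replaces itself whenever the successor serves its terminal goal better, which is the point of the objection to claim #1.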