timeholmes comments on Superintelligence 19: Post-transition formation of a singleton - Less Wrong

7 Post author: KatjaGrace 20 January 2015 02:00AM




Comment author: timeholmes 23 January 2015 01:21:46AM 3 points

Because what any human wants is a moving target. As soon as someone else delivers exactly what you ask for, you will be disappointed unless you suddenly stop changing. Think of the dilemma of eating something you know you shouldn't. Whatever you decide, as soon as anyone (AI or human) takes away your freedom to change your mind, you will likely rebel furiously. Human freedom is a value so central that no FAI, of any description, can fully satisfy us until we are no longer free agents.

Comment author: DefectiveAlgorithm 24 January 2015 12:15:14PM 3 points

What could an AI that 'cares', in the sense you described, do to address this problem that a non-'caring' one couldn't?