shminux comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM


Comment author: shminux 03 September 2013 05:58:57PM -1 points [-]

Somewhat off-topic. The Complexity of Value thesis mentions a terminal goal of

having a diverse civilization of sentient beings leading interesting lives.

Is this an immutable goal? If so, how can it go wrong given Fragility of Value?

Comment author: hairyfigment 03 September 2013 06:52:45PM 6 points [-]

Did you just ask how the phrase "interesting lives" could go wrong?

Comment author: shminux 03 September 2013 07:07:07PM 0 points [-]

Right, I did. Ironic, I know. What I meant was: is properly defining "interesting" enough to avoid a UFAI, or are there other issues to watch out for?

Comment author: hairyfigment 03 September 2013 10:27:22PM -1 points [-]

Hmm. It seems like a very small group of new lifeforms could lead properly interesting lives even if the AI killed us all beforehand and turned Earth (at least) into computing power.

Comment author: gattsuru 04 September 2013 05:10:24PM 1 point [-]

I also suspect that we'd not enjoy an AGI that aims only for threshold values on two of the three criteria ("sentient", "interesting lives", "diverse") while strongly favoring the last one.

Comment author: ESRogs 04 September 2013 10:11:43PM 2 points [-]

I think I don't understand the question; what do you mean by "immutable goal"?