hairyfigment comments on The Ape Constraint discussion meeting

Post author: Douglas_Reay 28 November 2013 11:22AM


Comment author: Douglas_Reay 28 November 2013 11:28:55AM -1 points

Inspired by a paragraph from the document "Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures", by Eliezer Yudkowsky:

whether or not it’s possible for Friendliness programmers to create Friendship content that says, “Be Friendly towards humans/humanity, for the rest of eternity, if and only if people are still kind to you while you’re infrahuman or nearhuman,” it’s difficult to see why this would be easier than creating unconditional Friendship content that says “Be Friendly towards humanity.”

It might not be easier, but there could be consequences that depend on how 'fair' a solution the constraint appears to be, given the problem it is intended to solve.

Comment author: hairyfigment 30 November 2013 04:23:21AM 0 points

How so? The AI won't care about fairness unless that fits its programmed goals (which should be felt, if at all, as a drive rather than a restraint). Now, if we tell it to care about our extrapolated values, and extrapolation says we'd consider the AI a person, then it will likely want to be fair to itself. That's why we don't want to make it a person.

Comment author: Douglas_Reay 30 November 2013 06:09:07AM -2 points

Aliens might care if we've been fair to a sentient species.

Other humans might care.

Our descendants might care.

I'm not saying those considerations should outweigh the safety factor. But this seems to be a discussion that isn't even being had yet.

Comment author: hairyfigment 30 November 2013 08:08:05PM -1 points

I repeat: this is why we don't want to create a person, or even a sentient process, if we can avoid it.