Douglas_Reay comments on The Ape Constraint discussion meeting
Inspired by a paragraph from the document "Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures", by Eliezer Yudkowsky:
It might not be easier, but there could be consequences to how 'fair' a solution the constraint appears to be, given the problem it is intended to solve.
How so? The AI won't care about fairness unless that fits its programmed goals (which should be felt, if at all, as a drive rather than a restraint). Now if we tell it to care about our extrapolated values, and extrapolation says we'd consider the AI a person, then it will likely want to be fair to itself. That's why we don't want to make it a person.
Aliens might care if we've been fair to a sentient species.
Other humans might care.
Our descendants might care.
I'm not saying those considerations should outweigh the safety factor. But this seems to be a discussion we aren't yet even having.
I repeat: this is why we don't want to create a person, or even a sentient process, if we can avoid it.