hairyfigment comments on The Ape Constraint discussion meeting. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
How so? The AI won't care about fairness unless doing so fits its programmed goals (which should be felt, if at all, as a drive rather than a restraint). Now, if we tell it to care about our extrapolated values, and extrapolation says we'd consider the AI a person, then it will likely want to be fair to itself. That's why we don't want to make it a person.
Aliens might care if we've been fair to a sentient species.
Other humans might care.
Our descendants might care.
I'm not saying those considerations should outweigh the safety factor. But this is a discussion that doesn't yet seem to be happening at all.
I repeat: this is why we don't want to create a person, or even a sentient process, if we can avoid it.