PrawnOfFate
PrawnOfFate has not written any posts yet.

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.
I don't see why. The question of what makes a value a moral value is metaethical, not part of object-level ethics.
Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it?
It isn't valid as a moral judgement, because "it's blue" isn't a moral claim, so a moral conclusion cannot validly follow from it.
Beyond that, I don't see where you are going. The standard accusation of invalidity against judgements of moral progress is based on circularity or question-begging. The Tribe Who Like Blue Things are going to judge having all hammers painted blue as moral progress; the Tribe Who Like Red Things are going to see it as retrogressive. But both are begging the question -- blue is good, because blue is good.
UFAI is about singletons. If you have an AI society whose members compare notes and share information -- which is instrumentally useful for them anyway -- you reduce the probability of a singleton fooming.
The argument against moral progress is that judging one moral reference frame by another is circular and invalid--you need an outside view that doesn't presuppose the truth of any moral reference frame.
The argument for is that such outside views are available, because things like (in)coherence aren't moral values.
So much for avoiding the cliche.
And is that valid or not? If you can validly decide some systems are better than others, you are some of the way to deciding which is best.
Sorry... did you mean FAI is about societies, or FAI is about singletons?
But if ethics does emerge as an organisational principle in societies, that's all you need for FAI. You don't even need to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.
If "better" is defined within a reference frame, there is not sensible was of defining moral progress. That is quite a hefty bullet to bite: one can no longer say that South Africa is better society after the fall of Apartheid, and so on.
But note, that "better" doesn't have to question-beggingly mean "morally better". it could mean "more coherent/objective/inclusive" etc.
I'm increasingly baffled as to why AI is always brought into discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren't sophisticated enough to live in their own societies. A wireheading AI isn't even going to be able to survive "in the wild". If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren't good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."?
Isn't the idea of moral progress based on one reference frame being better than another?
I don't get it: any agent that fooms becomes superintelligent. Its values don't necessarily change at all, nor does its connection to its society.