Comment author: MugaSofer 25 April 2013 04:13:23PM -2 points [-]

Any agent that fooms becomes a singleton. Thus, it doesn't matter if they acted nice while in a society; all that matters is whether they act nice as a singleton.

Comment author: PrawnOfFate 25 April 2013 04:15:20PM 0 points [-]

I don't get it: any agent that fooms becomes superintelligent. Its values don't necessarily change at all, nor does its connection to its society.

Comment author: TheOtherDave 25 April 2013 02:23:58PM 0 points [-]

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.

I still don't understand what you mean when you ask whether it's valid to do so, though. Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it? How could I tell?

Comment author: PrawnOfFate 25 April 2013 02:31:37PM -2 points [-]

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.

I don't see why. The question of what makes a value a moral value is metaethical, not part of object-level ethics.

Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it?

It isn't valid as a moral judgement because "blue" isn't a moral judgement, so a moral conclusion cannot validly follow from it.

Beyond that, I don't see where you are going. The standard accusation of invalidity to judgements of moral progress is based on circularity or question-begging. The Tribe who Like Blue Things are going to judge having all hammers painted blue as moral progress, the Tribe who Like Red Things are going to see it as retrogressive. But both are begging the question -- blue is good, because blue is good.

Comment author: MugaSofer 25 April 2013 01:45:18PM *  -2 points [-]

FAI is about singletons, because the first one to foom wins, is the idea.

ETA: also, rational agents may be ethical in societies, but there's no advantage to being an ethical singleton.

Comment author: PrawnOfFate 25 April 2013 02:01:28PM -2 points [-]

UFAI is about singletons. If you have an AI society whose members compare notes and share information -- which is instrumentally useful for them anyway -- you reduce the probability of a singleton fooming.

Comment author: TheOtherDave 25 April 2013 01:43:49PM 0 points [-]

Can you say more about what "valid" means here?

Just to make things crisper, let's move to a more concrete case for a moment... if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it? How could I tell?

Comment author: PrawnOfFate 25 April 2013 01:50:42PM *  -3 points [-]

The argument against moral progress is that judging one moral reference frame by another is circular and invalid--you need an outside view that doesn't presuppose the truth of any moral reference frame.

The argument for is that such outside views are available, because things like (in)coherence aren't moral values.

Comment author: ArisKatsaris 25 April 2013 01:28:36PM *  3 points [-]

That is quite a hefty bullet to bite: one can no longer say that South Africa is a better society after the fall of Apartheid, and so on.

That's hardly the best example you could have picked, since there are obvious metrics by which South Africa can be quantifiably called a worse society now -- e.g. crime statistics. South Africa has been called the "crime capital of the world" and the "rape capital of the world" only after the fall of Apartheid.

That makes the lack of moral progress in South Africa a very easy bullet to bite - I'd use something like Nazi Germany vs modern Germany as an example instead.

Comment author: PrawnOfFate 25 April 2013 01:38:21PM -3 points [-]

So much for avoiding the cliche.

Comment author: TheOtherDave 25 April 2013 01:04:47PM 0 points [-]

Yes, as typically understood the idea of moral progress is based on treating some reference frames as better than others.

Comment author: PrawnOfFate 25 April 2013 01:09:26PM -2 points [-]

And is that valid or not? If you can validly decide some systems are better than others, you are some of the way to deciding which is best.

Comment author: MugaSofer 25 April 2013 12:41:40PM -3 points [-]

Key word here being "societies". That is, not singletons. A lot of the discussion on metaethics here is implicitly aimed at FAI.

Comment author: PrawnOfFate 25 April 2013 12:56:33PM -3 points [-]

Sorry... did you mean FAI is about societies, or FAI is about singletons?

But if ethics does emerge as an organisational principle in societies, that's all you need for FAI. You don't even need to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.

Comment author: MugaSofer 25 April 2013 12:27:03PM -1 points [-]

No, because "better" is defined within a reference frame.

Comment author: PrawnOfFate 25 April 2013 12:44:46PM -3 points [-]

If "better" is defined within a reference frame, there is no sensible way of defining moral progress. That is quite a hefty bullet to bite: one can no longer say that South Africa is a better society after the fall of Apartheid, and so on.

But note that "better" doesn't have to question-beggingly mean "morally better". It could mean "more coherent/objective/inclusive" etc.

Comment author: Nornagest 24 April 2013 02:19:21AM 4 points [-]

Not programmed to, or programmed not to? If you can code up a solution to value drift, let's see it. Otherwise, note that Life programmes can update to implement glider generators without being "programmed to".

...with extremely low probability. It's far more likely that the Life field will stabilize around some relatively boring state, empty or with a few simple stable patterns. Similarly, a system subject to value drift seems likely to converge on boring attractors in value space (like wireheading, which indeed has turned out to be a problem with even weak self-modifying AI) rather than stable complex value systems. Paperclippism is not a boring attractor in this context, and a working fully reflective Clippy would need a solution to value drift, but humanlike values are not obviously so, either.
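The Life analogy above is easy to make concrete. Here is a minimal sketch of Conway's Game of Life (illustrative only, not anything from the thread itself): a "blinker" is one of the simple, stable, period-2 patterns -- the kind of boring attractor that random starting fields overwhelmingly settle into, as opposed to the vanishingly rare soups that assemble a glider gun.

```python
from collections import Counter

def step(cells, width, height):
    """Advance one Life generation on a toroidal width x height grid.
    `cells` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each grid position has.
    counts = Counter(
        ((x + dx) % width, (y + dy) % height)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        pos for pos, n in counts.items()
        if n == 3 or (n == 2 and pos in cells)
    }

# A vertical "blinker": a simple oscillator with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
after_one = step(blinker, 5, 5)    # becomes a horizontal bar
after_two = step(after_one, 5, 5)  # returns to the original
```

The blinker cycles between two states forever -- a stable but trivial attractor, which is the analogue of the "boring state" most Life fields converge on.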

Comment author: PrawnOfFate 25 April 2013 12:34:28PM *  1 point [-]

I'm increasingly baffled as to why AI is always brought in to discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren't sophisticated enough to live in their own societies. A wireheading AI isn't even going to be able to survive "in the wild". If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren't good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."?

Comment author: TheOtherDave 24 April 2013 11:53:53PM 3 points [-]

I generally understand the phrase "objective morality" to refer to a privileged moral reference frame.

It's not an incoherent idea... it might turn out, for example, that all value systems other than M turn out to be incoherent under sufficiently insightful reflection, or destructive to minds that operate under them, or for various other reasons not in-practice implementable by any sufficiently powerful optimizer. In such a world, I would agree that M was a privileged moral reference frame, and would not oppose calling it "objective morality", though I would understand that to be something of a term of art.

That said, I'd be very surprised to discover I live in such a world.

Comment author: PrawnOfFate 25 April 2013 11:28:54AM -2 points [-]

Isn't the idea of moral progress based on one reference frame being better than another?
