TheOtherDave comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence

Post author: lukeprog, 01 February 2011 02:15PM


Comment author: TheOtherDave 02 February 2011 01:01:43AM 1 point

So if a system of ethics entails that "right" doesn't designate anything actual, you reject that system. Can you say more about why?

Comment author: Wei_Dai 02 February 2011 01:42:28AM 0 points

Does my answer to Eliezer answer your question as well?

Comment author: TheOtherDave 02 February 2011 03:04:48AM 2 points

I'm not sure.

Projecting that answer onto my question I get something like "Because ethical systems in which 'right' has an actual referent are better, for unspecified reasons, than ones in which it doesn't, and Wei Dai's current unextrapolated preferences involve an actual though unspecified referent for 'right,' so we can at the very least reject all systems where 'right' doesn't designate anything actual in favor of the system Wei Dai's current unextrapolated preferences implement, even if nothing better ever comes along."

Is that close enough to your answer?

Comment author: Wei_Dai 02 February 2011 03:50:06AM 0 points

Yes, close enough.

Comment author: TheOtherDave 02 February 2011 03:57:54AM 2 points

In that case, not really... what I was actually curious about is why "right" having a referent is important.

Comment author: Wei_Dai 02 February 2011 08:25:10PM 6 points

I can make the practical case: If "right" refers to nothing, and we design an FAI to do what is right, then it will do nothing. We want the FAI to do something instead of nothing, so "right" having a referent is important.

Or the philosophical case: If "right" refers to nothing, then "it's right for me to save that child" would be equivalent to the null sentence. From introspection I think I must mean something non-empty when I say something like that.

Do either of these answer your question?

Comment author: XiXiDu 02 February 2011 08:47:02PM 7 points

> I can make the practical case: If "right" refers to nothing, and we design an FAI to do what is right, then it will do nothing.

Congratulations, you just solved the Fermi paradox.

Comment author: TheOtherDave 02 February 2011 10:09:14PM 3 points

(sigh) Sure, agreed... if our intention is to build an FAI to do what is right, it's important that "what is right" mean something. And I could ask why we should build an FAI that way, and you could tell me that that's what it means to be Friendly, and on and on.

I'm not trying to be pedantic here, but this does seem sort of pointlessly circular... a discussion about words rather than things.

When a Jewish theist says "God has commanded me to save that child," they may be entirely sincere, but that doesn't in and of itself constitute evidence that "God" has a referent, let alone that the referent of "God" (supposing it exists) actually so commanded them.

When you say "It's right for me to save that child," the situation may be different, but the mere fact that you can utter that sentence with sincerity doesn't constitute evidence of difference.

If we really want to save children, I would say we should talk about how most effectively to save children, and design our systems to save children, and that talking about whether God commanded us to save children or whether it's right to save children adds nothing of value to the process.

More generally, if we actually knew everything we wanted, as individuals and groups, then we could talk about how most effectively to achieve that and design our FAIs to achieve that, and discussions about whether it's right would seem as extraneous as discussions about whether it's God-willed.

The problem is that we don't know what we want. So we attach labels to that-thing-we-don't-understand, and over time those labels adopt all kinds of connotations that make discussion difficult. The analogy to theism applies here as well.

At some point, it becomes useful to discard those labels.

A CEV-implementing FAI, supposing such a thing is possible, will do what we collectively want done, whatever that turns out to be. An FAI implementing some other strategy will do something else. Whether those things are right is just as useless to talk about as whether they are God's will; those terms add nothing to the conversation.

Comment author: Wei_Dai 02 February 2011 10:56:32PM 4 points

TheOtherDave, I don't really want to argue about whether talking about "right" adds value. I suspect it might (i.e., I'm not so confident as you that it doesn't), but mainly I was trying to argue with Eliezer on his own terms. I do want to correct this:

> A CEV-implementing FAI, supposing such a thing is possible, will do what we collectively want done, whatever that turns out to be.

CEV will not do "what we collectively want done"; it will do what's "right" according to Eliezer's meta-ethics, which is whatever is coherent amongst the volitions it extrapolates from humanity, which, as others and I have argued, might turn out to be "nothing". If you're proposing that we build an AI that does do "what we collectively want done", you'd have to define what that means first.

Comment author: TheOtherDave 02 February 2011 11:09:35PM 1 point

> I don't really want to argue about whether talking about "right" adds value.

OK. The question I started out with, way at the top of the chain, was precisely about why having a referent for "right" was important, so I will drop that question and everything that descends from it.

As for your correction, I actually don't understand the distinction you're drawing, but in any case I agree with you that it might turn out that human volition lacks a coherent core of any significance.

Comment author: Wei_Dai 02 February 2011 11:29:20PM 2 points

To me, "what we collectively want done" means somehow aggregating (for example, through voting or bargaining) our current preferences. It lacks the elements of extrapolation and coherence that are central to CEV.
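
A minimal toy sketch of the distinction being drawn here, purely illustrative and not anyone's actual proposal: a plurality vote over current preferences always returns something, whereas a CEV-style rule that acts only on what is coherent amongst extrapolated volitions can return nothing. The function names, the unanimity stand-in for "coherence", and the trivial "extrapolation" below are all hypothetical.

    from collections import Counter

    def aggregate_current_preferences(preferences):
        # "What we collectively want done" in the simple sense: a plurality vote
        # over everyone's current, unextrapolated preferences. Given at least one
        # preference, this always returns something.
        winner, _count = Counter(preferences).most_common(1)[0]
        return winner

    def cev_style_choice(preferences, extrapolate, coherence_threshold=1.0):
        # CEV-style rule as described above: extrapolate each volition, then act
        # only on what is coherent amongst the extrapolated volitions (unanimity
        # is a crude stand-in for "coherence" here). Returns None -- "do nothing" --
        # when no coherent core exists.
        extrapolated = [extrapolate(p) for p in preferences]
        winner, count = Counter(extrapolated).most_common(1)[0]
        if count / len(extrapolated) >= coherence_threshold:
            return winner
        return None

    prefs = ["save the child", "save the child", "ignore the child"]
    identity = lambda p: p  # trivial "extrapolation" for the toy example

    print(aggregate_current_preferences(prefs))   # -> save the child
    print(cev_style_choice(prefs, identity))      # -> None (the AI does nothing)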