Cyan comments on Nonparametric Ethics - Less Wrong

Post author: Eliezer_Yudkowsky 20 June 2009 11:31AM


Comments (56)


Comment author: Cyan 21 June 2009 06:28:01AM -1 points

Parametric methods aren't any better at extrapolation. They are arguably worse, in that they make strong unjustified assumptions in regions with no data. The rule is "don't extrapolate if you can possibly avoid it". (And you avoid it by collecting relevant data.)
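The statistical point here can be made concrete with a small, hypothetical illustration (not from the original thread): fit a straight line (parametric) and a Gaussian-kernel smoother (nonparametric) to data drawn from y = x² on [0, 5], then ask both to predict at x = 10, well outside the data. All names below are my own for the sketch.

```python
import math

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x * x for x in xs]  # true curve: y = x^2

# Parametric model: ordinary least-squares line y = a + b*x.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

def linear_predict(x):
    return a + b * x

# Nonparametric model: Nadaraya-Watson kernel regression, bandwidth 1.
def kernel_predict(x, h=1.0):
    ws = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

# True value at x = 10 is 100. Neither method comes close:
print(linear_predict(10.0))  # ~46.7, confidently projects the fitted line
print(kernel_predict(10.0))  # ~25.0, flatlines near the last observed point
```

Both fail in the empty region, just differently: the parametric fit projects its assumed functional form where there is no data to justify it, while the kernel smoother merely echoes the nearest observations. Hence the rule above: the fix is more relevant data, not a cleverer extrapolator.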

Comment author: Wei_Dai 21 June 2009 10:31:06AM 5 points

Parametric extrapolation actually works quite well in some cases. Two examples that I'm familiar with: Moore's law, an empirical regularity that has held for decades, and the prediction of black holes as a logical consequence of general relativity.

I don't see any examples of nonparametric extrapolation that have similar success.

A major problem in Friendly AI is how to extrapolate human morality into transhuman realms. I don't know of any parametric approach to this problem that is free of serious difficulties, but "nonparametric" doesn't really seem to help either. What does your advice "don't extrapolate if you can possibly avoid it" imply in this case? Pursue a non-AI path instead?

Comment author: Eliezer_Yudkowsky 21 June 2009 04:29:03PM 2 points

I'm in essential agreement with Wei here. Nonparametric extrapolation sounds like a contradiction to me (though I'm open to counterexamples).

The "nonparametric" part of the FAI process is where you capture a detailed picture of human psychology as a starting point for extrapolation, instead of trying to give the AI Four Great Moral Principles. Applying extrapolative processes like "reflect to obtain self-judgments" or "update for the AI's superior knowledge" to this picture is not particularly nonparametric - in a sense it's not an estimator at all, it's a constructor. But yes, the "extrapolation" part is definitely not a nonparametric extrapolation, I'm not really sure what that would mean.

Comment author: Wei_Dai 21 June 2009 09:12:50PM 0 points

But every extrapolation process starts with gathering detailed data points, so it confused me that you focused on "nonparametric" as a response to Robin's argument. If Robin is right, an FAI should discard most of the detailed picture of human psychology it captures during its extrapolation process as errors and end up with a few simple moral principles on its own.

Can you clarify which of the following positions you agree with?

  1. An FAI will end up with a few simple moral principles on its own.
  2. We might as well do the extrapolation ourselves and program the results into the FAI.
  3. Robin's argument is wrong or doesn't apply to the kind of moral extrapolation an FAI would do. It will end up with a transhuman morality that's no less complex than human morality.

(Presumably you don't agree with 2. I put it in just for completeness.)

Comment author: Eliezer_Yudkowsky 21 June 2009 11:08:58PM 2 points

2, certainly disagree. 1 vs. 3, don't know in advance. But an FAI should not discard its detailed psychology as "error"; an AI is not subject to most of the "error" that we are talking about here. It could, however, discard various conclusions as specifically erroneous after having actually judged the errors, which is not at all the sort of correction represented by using simple models or smoothed estimators.

Comment author: Vladimir_Nesov 21 June 2009 11:56:21AM 2 points

I think connecting this to FAI is far-fetched. To talk technically about FAI you need to introduce more tools first.

Comment author: Cyan 22 June 2009 02:02:10AM 1 point

> What does your advice "don't extrapolate if you can possibly avoid it" imply in this case?

I distinguish "extrapolation" in the sense of extending an empirical regularity (as in Moore's law) from inferring a logical consequence of a well-supported theory (as in the black hole prediction). This is really a difference of degree, not kind, but for human science, the distinction is a good abstraction. For FAI, I'd say the implication is that an FAI's morality-predicting component should be a working model of human brains in action.

Comment author: tut 21 June 2009 11:49:04AM 1 point

> A major problem in Friendly AI is how to extrapolate human morality into transhuman realms. I don't know of any parametric approach to this problem that is free of serious difficulties, but "nonparametric" doesn't really seem to help either. What does your advice "don't extrapolate if you can possibly avoid it" imply in this case? Pursue a non-AI path instead?

I think it implies that a Friendly sysop should not dream up a transhuman society and then try to reshape humanity into it, but rather let us evolve at our own pace, attending only to the issues that are relevant at each point in time.