TimS comments on Intellectual insularity and productivity - Less Wrong

53 [deleted] 11 June 2012 03:10PM




Comment author: TimS 13 June 2012 03:10:08AM 2 points [-]

I agree with you that finding unbiased history is a difficult problem - probably harder than gwern suggested. At best, this problem is Friendliness-complete, in that if Omega gave us a solution to Friendliness, it would include a solution to this problem. And I'm not optimistic that the best case is true.

I think solving the problem is a prerequisite to solving Friendliness. It's probably a prerequisite for a rigorous understanding of how CEV or its equivalent will work. The fact that the community (and SIAI to a lesser extent) think this type of analysis is irrelevant is terribly disturbing to me.

Comment author: beoShaffer 13 June 2012 03:27:20AM 0 points [-]

At best, this problem is Friendliness-complete, in that if Omega gave us a solution to Friendliness, it would include a solution to this problem.

I think solving the problem is a prerequisite to solving Friendliness. It's probably a prerequisite for a rigorous understanding of how CEV or its equivalent will work.

Why do you believe this?

Comment author: TimS 13 June 2012 02:29:19PM *  0 points [-]

The FAI project is about finding the moral theory that is correct,(1) then building potential AGIs so that they make decisions according to that theory. I'm not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.

Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions. Again, history is the only data on how human societies react.

(1) I acknowledge the need to taboo "correct" in this context in order to make progress on this front.

Comment author: beoShaffer 13 June 2012 10:28:46PM 0 points [-]

I'm not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.

It's possible that you're using "correct" to mean something completely different than I would, but I don't see how history is supposed to be evidence that a moral theory is correct. Are you saying that historically widespread moral theories are likely to be correct?

Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions.

This is something that the AI is supposed to figure out for itself, not something that would be hardcoded in (at least not in currently favored designs).