SilentCal comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong Discussion

Post author: KatjaGrace 11 November 2014 02:00AM

Comment author: SilentCal 11 November 2014 04:42:19PM 1 point

I think the correct answer is going to involve separating out different notions of 'goal' (I think Aristotle may have made this distinction; someone more erudite than I am is welcome to pull that in).

One possible notion is the 'design' goal: in the case of a man-made machine, the designer's intent; in the case of a standard machine learner, the training objective; in the case of a biological entity, reproductive fitness. There's also a sense in which the behavior itself can be thought of as the goal; that is, an entity's goal is to produce the outputs that it in fact produces.
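To make the design/behavior distinction concrete, here is a tiny sketch (my own illustration, not something from the original post; the quadratic data and linear model are invented for the example) of how a machine learner's design goal and its behavioral goal can come apart:

```python
# Sketch: a learner whose *design* goal is minimizing squared error on its
# training data, and whose *behavioral* goal is simply the mapping it ends
# up computing. The data and model here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Design goal: minimize squared error on this training data.
x_train = rng.uniform(-1, 1, size=50)
y_train = x_train ** 2          # the true relationship is quadratic

# The learner is linear, so the best it can do under its design goal is a
# line; its behavior is that particular line and nothing more.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def behavior(x):
    """What the trained system actually does, i.e. its behavioral 'goal'."""
    return slope * x + intercept

# Off the training distribution, the behavior no longer tracks the design
# goal (low squared error against x**2) at all.
x_test = np.array([3.0, 5.0])
print("behavior:     ", behavior(x_test))
print("design target:", x_test ** 2)
```

The point of the toy is just that "what the system was optimized for" and "what it actually outputs" are two different descriptions of the same system.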

There can also be internal structures that we might call 'deliberate goals'; this is what human self-help materials tell you to set. I'm not sure if there's a good general definition of this that's not parochial to human intelligence.

I'm not sure whether there's a fourth kind, but I have an inkling that there might be: an approximate goal. If we say "Intelligence A maximizes function X", we can quantify both how much simpler this summary is than the true description of A and how much error it introduces into our predictions of A's behavior. If the gain in simplicity is large and the error is low, it might make sense to call X an approximate goal of A.
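Here is a rough sketch of the quantification I have in mind. Everything in it (the toy agent, the candidate goal, the crude rule-counting simplicity measure) is hypothetical and just for illustration:

```python
# Sketch: score a candidate "approximate goal" X for an agent A by (1) how
# well "A picks the action maximizing X" predicts A's actual behavior, and
# (2) how much shorter X is than a full table of that behavior.
# All names (states, actions, agent_policy, candidate_goal) are made up.
import random

states = range(100)
actions = ["left", "right", "stay"]

def agent_policy(state):
    """The agent's actual behavior: mostly moves toward state 50, with noise."""
    if random.random() < 0.05:            # occasional deviation from any tidy goal
        return random.choice(actions)
    if state < 50:
        return "right"
    if state > 50:
        return "left"
    return "stay"

def candidate_goal(state, action):
    """Candidate goal X: 'be close to state 50' (higher is better)."""
    moved = {"left": state - 1, "right": state + 1, "stay": state}[action]
    return -abs(moved - 50)

def predicted_action(state):
    """Prediction under 'A maximizes X': the action with the highest X."""
    return max(actions, key=lambda a: candidate_goal(state, a))

# Error introduced by the goal-based summary.
mismatches = sum(1 for s in states if predicted_action(s) != agent_policy(s))
error_rate = mismatches / len(states)

# Crude simplicity comparison: the goal is one short rule, while the "true
# description" of the policy is (at worst) a full state-to-action table.
print(f"prediction error: {error_rate:.0%}")
print(f"description size: 1 rule vs. {len(states)} table entries")
```

Counting rules versus table entries is obviously a stand-in for a real description-length measure, but it captures the tradeoff: if the summary is much shorter than the full behavioral description and the error rate stays low, calling X an approximate goal of A seems reasonable.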