DanielLC comments on Belief in Intelligence - Less Wrong

Post author: Eliezer_Yudkowsky 25 October 2008 03:00PM


Comment author: DanielLC 01 September 2013 06:18:24AM 1 point

> Previously I'd been thinking in terms of a more general agent, which needn't use a concept of utility and whose performance relative to an objective is found in retrospect.

It doesn't need to use utility explicitly. It's just whatever objective it tends to gravitate towards.

> I'm not entirely sure what you're saying in the rest of the comment.

The reason I'm talking about "expected value" is that an optimizer must be able to work in a variety of environments. Evaluating its performance across that variety is equivalent to taking an expectation over a probability distribution of environments.
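A minimal sketch of this equivalence: judging an agent over a variety of environments is the same as weighting each environment's payoff by its probability and summing. The environment names and payoffs below are illustrative assumptions, not anything from the discussion.

```python
# A probability distribution over environments (weights sum to 1).
# These names and numbers are hypothetical, chosen only to illustrate.
environments = {
    "env_a": 0.5,
    "env_b": 0.3,
    "env_c": 0.2,
}

# Payoff a fixed agent policy receives in each environment.
payoff = {"env_a": 1.0, "env_b": 0.0, "env_c": 2.0}

# Expected value: weight each environment's payoff by its probability.
expected_value = sum(p * payoff[env] for env, p in environments.items())
print(expected_value)  # 0.5*1.0 + 0.3*0.0 + 0.2*2.0 = 0.9
```

Ranking agents by this single number is what it means to optimize "in expectation" rather than in any one environment.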