Nornagest comments on Belief in Intelligence - Less Wrong

Post author: Eliezer_Yudkowsky 25 October 2008 03:00PM




Comment author: Nornagest 01 September 2013 03:36:53AM, 0 points

Wait a minute. Not everything in our universe is real-valued, much less continuous. Unless you're saying that an optimization goal must produce a well-ordering of possible environment states (which isn't true for any definition of optimization I've ever heard of in an AI context), it should be fairly easy to come up with an objective that generates a cost function returning zero for many possible hypotheses.

For example, "optimize the number of electoral votes I get in the next US presidential election".

Comment author: DanielLC 01 September 2013 03:57:56AM, 0 points

Unless you're saying that an optimization goal must produce a well-ordering of possible environment states (which isn't true for any definition of optimization I've ever heard of in an AI context)

You mean an ordering? The reals aren't well-ordered.

If there's no ordering, there are circular preferences.

In any case, that's not what I was talking about.

For example, "optimize the number of electoral votes I gain in the upcoming US presidential election".

Compare the expected number of electoral votes with and without the optimizer. The difference gives you how powerful the optimizer is, and it will almost never be zero.
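DanielLC's measure can be sketched numerically: average the outcome over a probability distribution of environments, once with the optimizer acting and once without, and take the difference. A toy example (the environments, probabilities, and vote counts here are all made up purely for illustration):

```python
# Each environment is (probability, votes without optimizer, votes with optimizer).
# These numbers are hypothetical; any distribution over environments works the same way.
environments = [
    (0.5, 180, 270),  # favorable environment
    (0.3, 120, 200),  # mixed environment
    (0.2,  60,  90),  # hostile environment
]

# Expected electoral votes without and with the optimizer.
expected_without = sum(p * without for p, without, _ in environments)
expected_with = sum(p * with_votes for p, _, with_votes in environments)

# The difference in expectations is the measure of the optimizer's power.
optimizer_power = expected_with - expected_without
print(optimizer_power)
```

On this toy distribution the difference comes out positive, illustrating the claim that it will almost never be zero: a zero difference would require the optimizer's gains and losses to cancel exactly across every weighted environment.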

Comment author: Nornagest 01 September 2013 04:03:12AM, 0 points

You mean an ordering? The reals aren't well-ordered.

Shoot, you're right. I believe I meant a strict ordering; it's been a while since I last studied set theory.

I'm confused as to what you mean by an optimizer now, though. It sounds like you mean something along the lines of a utility-based agent. But expected utility in this context is an attribute of a hypothesis relative to a model, not of the hypothesis relative to the world, and we're just as free to define models as we are to define optimization objectives. Previously I'd been thinking in terms of a more general agent, which needn't use a concept of utility and whose performance relative to an objective is found in retrospect.

Comment author: DanielLC 01 September 2013 06:18:24AM, 1 point

Previously I'd been thinking in terms of a more general agent, which needn't use a concept of utility and whose performance relative to an objective is found in retrospect.

It doesn't need to use utility explicitly. Its utility is just whatever objective its behavior tends to gravitate toward.

I'm not entirely sure what you're saying in the rest of the comment.

The reason I'm talking about "expected value" is that an optimizer must be able to work in a variety of environments. This is equivalent to talking about a probability distribution over environments.