Psy-Kosh comments on Aiming at the Target - Less Wrong

8 Post author: Eliezer_Yudkowsky 26 October 2008 04:47PM

Comment author: Psy-Kosh 27 October 2008 07:49:35PM 0 points

Tim: no, I'd think of it in reverse: a utility function is a very special type of encoding for a set of preferences.
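To make the "encoding" direction concrete, here's a minimal sketch: a utility function induces a preference ordering (a is preferred to b exactly when u(a) > u(b)), and many different utility functions encode the same preferences. The bundles and numbers below are purely illustrative, not from the comment.

```python
def prefers(u, a, b):
    """True if an agent with utility function u strictly prefers a to b."""
    return u(a) > u(b)

# Illustrative utility function over (money, leisure_hours) bundles.
u = lambda bundle: bundle[0] + 10 * bundle[1]

assert prefers(u, (100, 5), (120, 2))   # 150 > 140

# Any strictly increasing transform of u encodes the *same* preferences,
# which is why the preferences, not the particular function, are primary.
v = lambda bundle: 2 * u(bundle) + 7
assert prefers(v, (100, 5), (120, 2))   # 307 > 287
```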

Again, I'm not denying that I have an intuitive sense of what I mean by the term. It's just that when I try to reduce it from something mental to something non-mental, the best I can come up with is stuff like "that which an optimization process selects for".

At which point I have to declare everything an optimization process in some sense. (I'm actually semi-sorta tempted to do this: to talk about optimization power as a property of processes in general, rather than distinguishing certain types of processes as optimization processes. That way I think I'd have a reasonably serviceable reduction of the notion of a preference. Except then there's trouble with intelligent agents that aren't logically omniscient and, say, can't yet fully compute their morality (or primality, or whatever, as appropriate) and thus in a sense don't actually fully know their own preferences.)
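In the spirit of the original post's target-hitting picture, one way to treat optimization power as a graded property of any process (rather than a binary label) is to measure how small a preference-ranked target region the process reliably hits: take -log2 of the fraction of possible states ranked at least as high as the achieved outcome. This toy sketch assumes a finite, enumerable state space with made-up utilities, purely for illustration.

```python
import math

def optimization_power_bits(outcome_utility, all_utilities):
    """Bits of optimization: -log2 of the fraction of possible states
    scoring at least as well as the achieved outcome. A process that
    lands in the top 1/1024 of states exhibits 10 bits."""
    at_least_as_good = sum(1 for u in all_utilities if u >= outcome_utility)
    return -math.log2(at_least_as_good / len(all_utilities))

# Toy state space: 1024 states with utilities 0..1023.
states = list(range(1024))
print(optimization_power_bits(1023, states))  # hits the single best state: 10.0 bits
print(optimization_power_bits(512, states))   # hits the top half: 1.0 bit
```

On this measure, every process exhibits *some* amount of optimization power relative to a given preference ordering, which is exactly the move the parenthetical above contemplates.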

Well, hopefully there's enough here to illustrate my confusion sufficiently that you, or someone who's actually worked out the correct answer, can help me out. I'm annoyed that I don't know this. :)