Luke_A_Somers comments on Practical tools and agents - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Okay... but... if you're using a utility-function-maximizing system architecture, that is a great simplification to the system, one that gives a clear meaning to 'wanting' things, in a way that it doesn't have for neural nets or whatnot.
The mere fact that the utility function to be specified has to be far, far more complex for a general intelligence than for a driving robot doesn't change that. The vagueness is a marker for difficult work still to be done, not something they're implying they've already done.
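A minimal sketch of the point about 'wanting' (all names and utility values here are hypothetical, purely for illustration): in a utility-maximizing architecture, what the agent "wants" is well-defined by construction, because it is whatever scores highest under its explicit utility function.

```python
# Toy utility-maximizing agent. The utility function here is a trivial
# stand-in; for a general intelligence it would need to be far more
# complex -- that's the difficult work the vagueness marks.

def utility(action: str) -> float:
    # Hypothetical scores for a toy driving scenario.
    scores = {"stay_in_lane": 1.0, "brake": 0.5, "swerve": -5.0}
    return scores[action]

def choose_action(actions):
    # The agent "wants" the utility-maximizing action, by definition.
    return max(actions, key=utility)

print(choose_action(["stay_in_lane", "brake", "swerve"]))
```

By contrast, for a trained neural net there is no such explicit function to read the "wants" off of, which is the asymmetry the comment points at.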