timtyler comments on Work harder on tabooing "Friendly AI" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
AFAICS, I never made the corresponding claim - that the utility function was part of what made the model a model.
How else can I understand your words "utility-based models"? This is no more a utility-based model than a hamburger on a car is a hamburger-based car.
Well, I would say "utilitarian", but that word seems to be taken. I mean that the model calculates utilities associated with its possible actions - and then picks the action with the highest utility.
But that is exactly what this wrapping in a post-hoc utility function doesn't do. The model first picks an action in whatever way it does, then labels that choice with utility 1 (and, implicitly, every other action with 0). The utility function plays no part in selecting the action.
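The distinction being argued over can be sketched in code. This is a hypothetical minimal illustration, not anything from the thread: a genuine utility-based agent computes utilities over its candidate actions and picks the argmax, whereas the post-hoc wrapper lets an arbitrary agent choose first and only then stamps the chosen action with utility 1.

```python
import random

def utility_based_agent(actions, utility):
    """Genuine utility-based choice: compute a utility for every
    candidate action, then pick the action with the highest utility."""
    return max(actions, key=utility)

def post_hoc_wrapper(agent, actions):
    """Post-hoc 'utility function': the underlying agent picks an
    action however it likes; only afterwards is that choice labelled
    with utility 1 (and every other action, implicitly, with 0).
    The returned utility function never influenced the choice."""
    chosen = agent(actions)
    utility = lambda a: 1 if a == chosen else 0
    return chosen, utility

# A genuine utility-based choice (toy utility: prefer longer names):
actions = ["left", "right", "wait"]
print(utility_based_agent(actions, utility=lambda a: len(a)))  # right

# Wrapping an arbitrary (here: random) agent after the fact:
chosen, u = post_hoc_wrapper(lambda acts: random.choice(acts), actions)
print(u(chosen))  # 1 -- but this utility played no role in the choice
```

In the first case the utility function does the selecting; in the second it is pure relabelling, which is the sense in which the wrapped model is "no more a utility-based model than a hamburger on a car is a hamburger-based car".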