
rwallace comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

Post author: inklesspen | 01 March 2010 02:32AM | 11 points




Comment author: FAWS 03 March 2010 01:18:16AM 0 points

Any set of preferences can be represented as a sufficiently complex utility function.
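This claim can be made concrete with a small sketch (hypothetical code, not from the thread): given a complete and transitive preference relation over finitely many outcomes, we can always construct a numeric utility function that agrees with it, simply by ranking the outcomes. The function name and the example preference relation below are illustrative assumptions.

```python
import functools

def utility_from_preferences(outcomes, prefers):
    """Build a utility function from a preference relation.

    prefers(a, b) -> True if a is weakly preferred to b.
    Assumes the relation is complete and transitive over `outcomes`.
    Returns a dict u such that u[a] >= u[b] exactly when prefers(a, b).
    """
    # Sort outcomes from least to most preferred using the relation itself.
    ranked = sorted(outcomes, key=functools.cmp_to_key(
        lambda a, b: int(prefers(a, b)) - int(prefers(b, a))))
    utility = {}
    rank = 0
    for i, o in enumerate(ranked):
        # Bump the rank only when o is strictly preferred to its predecessor,
        # so indifferent outcomes share the same utility value.
        if i > 0 and not prefers(ranked[i - 1], o):
            rank += 1
        utility[o] = rank
    return utility

# Illustrative preference relation: coffee strictly preferred,
# tea and water indifferent.
score = {"coffee": 2, "tea": 1, "water": 1}
prefers = lambda a, b: score[a] >= score[b]
u = utility_from_preferences(["tea", "coffee", "water"], prefers)
# u ranks coffee above tea and water, which tie.
```

The construction works for any finite set of complete, transitive preferences, however arbitrary; the caveat (which bears on rwallace's reply below) is that the resulting function is just a lookup table, so calling it a "utility function" adds no compression or insight. It also fails for incomplete or cyclic preferences, which admit no consistent numeric representation.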

Comment author: rwallace 03 March 2010 01:29:19AM 3 points

Sure, but the whole point of having the concept of a utility function is that utility functions are supposed to be simple. When you have a set of preferences that isn't simple, there's no point in thinking of it as a utility function. You're better off just thinking of it as a set of preferences - or, in the context of AGI, as a toolkit, or a library, or a command language, or a partial order on heuristics, or whatever else is the most useful way to think about the things this entity does.

Comment author: timtyler 03 March 2010 09:11:12AM 0 points

Re: "When you have a set of preferences that isn't simple, there's no point in thinking of it as a utility function."

Sure there is - say you want to compare the utility functions of two agents, or to compare the parts of the agents that are independent of the utility function. A general model that covers all goal-directed agents is very useful for such things.

Comment author: wedrifid 03 March 2010 01:41:20AM 0 points

(Upvoted but) I would say utility functions are supposed to be coherent, albeit complex. Is that compatible with what you are saying?

Comment author: rwallace 03 March 2010 02:16:12AM 0 points

Er, maybe? I would say a utility function is supposed to be simple, but perhaps what I mean by simple is compatible with what you mean by coherent, if we agree that something like 'morality in general' or 'what we want in general' is not simple/coherent.