FAWS comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

Post author: inklesspen 01 March 2010 02:32AM

Comment author: rwallace 03 March 2010 01:06:49AM 0 points

Leaving aside the other reasons why this scenario is unrealistic, one of its big flaws is the assumption that a mind decomposes into an engine plus a utility function. In reality, this decomposition is a mathematical abstraction we use in certain limited domains because it makes analysis more tractable. It fails completely when you try to apply it to life as a whole, which is why no humans even try to be pure utilitarians. Of course, if you postulate building a superintelligent AGI like that, it doesn't look good. How would it? You've postulated starting off with a sociopath that considers itself licensed to commit any crime whatsoever if doing so will serve its utility function, and then trying to cram the whole of morality into that mathematical function. It shouldn't be any surprise that this leads to absurd results and impossible research agendas. That's the consequence of trying to apply a mathematical abstraction outside the domain in which it is applicable.
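
For concreteness, here is a minimal sketch (in Python, with invented names like decision_engine and a toy paperclips/poetry world) of the engine-plus-utility-function decomposition under discussion: a generic decision procedure that knows nothing about values, with the utility function supplied as a separate, swappable component.

```python
# Illustrative (hypothetical) "engine plus utility function" decomposition:
# a generic decision engine that knows nothing about values, plus a utility
# function supplied as a separate, swappable component.

def decision_engine(possible_actions, predict_outcome, utility):
    """Pick the action whose predicted outcome has the highest utility.

    possible_actions: iterable of candidate actions
    predict_outcome:  action -> predicted world state (the engine's model)
    utility:          world state -> number (the agent's values)
    """
    return max(possible_actions, key=lambda a: utility(predict_outcome(a)))

def predict(action):
    # Toy world model, invented for the example.
    if action == "make_paperclips":
        return {"paperclips": 100, "poems": 0}
    return {"paperclips": 0, "poems": 3}

actions = ["make_paperclips", "write_poetry"]

# Same engine, two different plug-in utility functions.
print(decision_engine(actions, predict, lambda world: world["paperclips"]))  # make_paperclips
print(decision_engine(actions, predict, lambda world: world["poems"]))       # write_poetry
```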

Comment author: FAWS 03 March 2010 01:18:16AM 0 points

Any set of preferences can be represented as a sufficiently complex utility function.
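
A minimal sketch of why this holds for a finite outcome set, assuming the preferences are complete and transitive: assign each outcome its rank in the ordering, and the resulting numbers reproduce the original preferences. The outcomes below are invented for illustration; for preferences over lotteries the analogous result is the von Neumann-Morgenstern theorem, which requires further axioms.

```python
# Hypothetical illustration: any complete, transitive preference ordering over
# a finite set of outcomes can be encoded as a utility function by assigning
# each outcome its rank. The encoding may be arbitrarily complicated to write
# down, but it always exists.

preference_ranking = [  # worst to best, invented for the example
    "be_tortured",
    "lose_wallet",
    "ordinary_day",
    "win_lottery",
]

utility = {outcome: rank for rank, outcome in enumerate(preference_ranking)}

def prefers(a, b):
    """Recover the original preference relation from the constructed utility."""
    return utility[a] > utility[b]

print(prefers("win_lottery", "lose_wallet"))   # True
print(prefers("be_tortured", "ordinary_day"))  # False
```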

Comment author: rwallace 03 March 2010 01:29:19AM 3 points

Sure, but the whole point of having the concept of a utility function is that utility functions are supposed to be simple. When you have a set of preferences that isn't simple, there's no point in thinking of it as a utility function. You're better off just thinking of it as a set of preferences - or, in the context of AGI, as a toolkit, or a library, or a command language, or a partial order on heuristics, or whatever else is the most useful way to think about the things this entity does.

Comment author: timtyler 03 March 2010 09:11:12AM 0 points

Re: "When you have a set of preferences that isn't simple, there's no point in thinking of it as a utility function."

Sure there is - say you want to compare the utility functions of two agents, or compare the parts of the agents that are independent of the utility function. A general model that covers all goal-directed agents is very useful for such things.
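
As a hedged illustration, assuming two agents are modeled as utility functions over a shared finite set of outcomes (all names and numbers below are invented), one such comparison is the fraction of outcome pairs on which the two functions agree about which outcome is better:

```python
from itertools import combinations

# Hypothetical comparison of two agents modeled as utility functions over a
# shared set of outcomes: the fraction of outcome pairs on which the agents
# agree about which outcome is better.

outcomes = ["A", "B", "C", "D"]
agent1 = {"A": 0.1, "B": 0.5, "C": 0.7, "D": 0.9}
agent2 = {"A": 0.2, "B": 0.9, "C": 0.6, "D": 0.8}

def ordering_agreement(u1, u2, outcomes):
    """Fraction of outcome pairs ranked the same way by both utility functions."""
    pairs = list(combinations(outcomes, 2))
    agree = sum(1 for x, y in pairs if (u1[x] - u1[y]) * (u2[x] - u2[y]) > 0)
    return agree / len(pairs)

print(ordering_agreement(agent1, agent2, outcomes))  # 0.666..., they disagree about B
```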

Comment author: wedrifid 03 March 2010 01:41:20AM 0 points

(Upvoted but) I would say utility functions are supposed to be coherent, albeit complex. Is that compatible with what you are saying?

Comment author: rwallace 03 March 2010 02:16:12AM 0 points

Er, maybe? I would say a utility function is supposed to be simple, but perhaps what I mean by simple is compatible with what you mean by coherent, if we agree that something like 'morality in general' or 'what we want in general' is not simple/coherent.