
timtyler comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

11 Post author: inklesspen 01 March 2010 02:32AM


Comments (244)

You are viewing a single comment's thread.

Comment author: rwallace 03 March 2010 01:06:49AM 0 points

Leaving aside the other reasons why this scenario is unrealistic, one of the big flaws in it is the assumption that a mind decomposes into an engine plus a utility function. In reality, this decomposition is a mathematical abstraction we use in certain limited domains because it makes analysis more tractable. It fails completely when you try to apply it to life as a whole, which is why no humans even try to be pure utilitarians. Of course if you postulate building a superintelligent AGI like that, it doesn't look good. How would it? You've postulated starting off with a sociopath that considers itself licensed to commit any crime whatsoever if doing so will serve its utility function, and then trying to cram the whole of morality into that mathematical function. It shouldn't be any surprise that this leads to absurd results and impossible research agendas. That's the consequence of trying to apply a mathematical abstraction outside the domain in which it is applicable.

Comment author: timtyler 03 March 2010 09:08:12AM -1 points

Humans regularly use utility-based agents to do things like play the stock market. They seem to work OK to me. Nor do I agree with you about utility-based models of humans. Basically, most of your objections seem irrelevant to me.

Comment author: rwallace 03 March 2010 10:30:10AM 2 points

When studying the stock market, we use the convenient approximation that people are utility maximizers (where the utility function is expected profit). But this is only an approximation, useful in this limited domain. Would you commit murder for money? No? Then your utility function isn't really expected profit. Nor, as it turns out, is it anything else that can be written down - other than "the sum total of all my preferences", at which point we have to acknowledge that we are not utility maximizers in any useful sense of the term.

Comment author: timtyler 03 March 2010 11:28:34AM * 0 points

"We" don't have to acknowledge that.

I've gone over my views on this issue before - e.g. here:

If you reject utility-based frameworks in this context, then fine - but I am not planning to rephrase my point for you.

Comment author: rwallace 03 March 2010 11:36:11AM 0 points

Right, I hadn't read your comments in the other thread, but they are perfectly clear, and I'm not asking you to rephrase. But the key term in my last comment is "in any useful sense". I do reject utility-based frameworks in this context, because their usefulness has been left far behind.

Comment author: timtyler 03 March 2010 11:57:23AM 0 points

Personally, I think a utilitarian approach is very useful for understanding behaviour. One can model most organisms pretty well as expected fitness maximisers with limited resources. That idea is the foundation of much evolutionary psychology.

Comment author: Morendil 03 March 2010 12:13:18PM 0 points

The question isn't whether the model is predictively useful with respect to most organisms, it's whether it is predictively useful with respect to a hypothetical algorithm which replicates salient human powers such as epistemic hunger, model building, hierarchical goal seeking, and so on.

Say we're looking to explain the process of inferring regularities (such as physical laws) by observing one's environment - what does modeling this as "maximizing a utility function" buy us?

Comment author: timtyler 03 March 2010 01:04:13PM * 0 points

In comparison with what?

The main virtues of utility-based models are that they are general - and so allow comparisons across agents - and that they abstract goal-seeking behaviour away from the implementation details of finite memories, processing speed, etc - which helps if you are interested in focusing on either of those areas.
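The "engine plus utility function" decomposition debated in this thread can be made concrete with a short sketch. Everything here is illustrative, not anyone's actual proposal: a generic decision engine picks the action whose lottery over outcomes maximizes expected utility, and swapping in a different utility function changes the agent's behaviour without touching the engine — which is the generality timtyler points to, and exactly the abstraction rwallace argues breaks down for whole human lives.

```python
def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * utility(outcome) for p, outcome in lottery)

def choose(actions, utility):
    """The 'engine': pick the action whose lottery maximizes expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name], utility))

# Toy actions, each a lottery over monetary outcomes (purely hypothetical).
actions = {
    "safe":  [(1.0, 10)],
    "risky": [(0.5, 30), (0.5, 0)],
}

# Different utility functions, same engine.
risk_neutral = lambda x: x          # maximize expected profit
risk_averse  = lambda x: x ** 0.5   # diminishing marginal utility

print(choose(actions, risk_neutral))  # risky: 0.5 * 30 = 15 > 10
print(choose(actions, risk_averse))   # safe: 10**0.5 ≈ 3.16 > 0.5 * 30**0.5 ≈ 2.74
```

In this toy setting the decomposition works cleanly because the outcome space and preferences are tiny and explicit; the dispute above is over whether anything like this scales to "the sum total of all my preferences".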