Comment author: Tim_Tyler 13 February 2009 12:06:45PM 0 points [-]

[Tim, you post this comment every time I talk about evolutionary psychology, and it's the same comment every time, and it doesn't add anything new on each new occasion. If these were standard theories I could forgive it, but not considering that they're your own personal versions. I've already asked you to stop. --EY]

Comment author: Tim_Tyler 11 February 2009 05:18:52PM 2 points [-]

Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion) - we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.

One big problem is that they tend to systematically ignore memes.

Human brains are parasitised by replicators that hijack them for their own ends. The behaviour of a Catholic priest has relatively little to do with the inclusive genetic fitness of the priest - and a lot to do with the inclusive genetic fitness of the Catholicism meme. Pinker and many of the other evo-psych guys still show little sign of "getting" this.

Comment author: Tim_Tyler 08 February 2009 02:50:20PM 0 points [-]

Wasn't there some material in CFAI about solving the wirehead problem?

In response to Value is Fragile
Comment author: Tim_Tyler 03 February 2009 06:52:00PM 1 point [-]

The analogy between the theory that humans behave like expected utility maximisers - and the theory that atoms behave like billiard balls could be criticised - but it generally seems quite appropriate to me.

In response to Value is Fragile
Comment author: Tim_Tyler 01 February 2009 12:27:28PM 1 point [-]

In dealing with your example, I didn't "change the space of states or choices". All I did was specify a utility function. The input states and output states were exactly as you specified them to be. The agent could see what choices were available, and then it picked one of them - according to the maximum value of the utility function I specified.

The corresponding real world example is an agent that prefers Boston to Atlanta, Chicago to Boston, and Atlanta to Chicago. I simply showed how a utility maximiser could represent such preferences. Such an agent would drive in circles - but that is not necessarily irrational behaviour.
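A minimal sketch of that construction (all names here are illustrative, not from any standard library): a utility function that conditions on the offered choice set can reproduce the cyclic city preferences above, while the agent remains a straightforward maximiser.

```python
# Sketch: a utility maximiser whose utility depends on the offered
# choice set, reproducing the cyclic preferences described above.
# The names and values are illustrative, not from any standard library.

# Pairwise preferences: for each offered pair, the preferred city.
PREFERRED = {
    frozenset({"Boston", "Atlanta"}): "Boston",
    frozenset({"Chicago", "Boston"}): "Chicago",
    frozenset({"Atlanta", "Chicago"}): "Atlanta",
}

def utility(choice_set, option):
    """Utility of picking `option` when `choice_set` is on offer."""
    return 1.0 if PREFERRED[frozenset(choice_set)] == option else 0.0

def choose(choice_set):
    """Pick the option with maximum utility for this choice set."""
    return max(choice_set, key=lambda o: utility(choice_set, o))

print(choose({"Boston", "Atlanta"}))   # Boston
print(choose({"Chicago", "Boston"}))   # Chicago
print(choose({"Atlanta", "Chicago"}))  # Atlanta
```

Run pairwise, the agent "drives in circles" exactly as described, yet each individual choice maximises the stated utility function.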

Of course much of the value of expected utility theory arises when you use short and simple utility functions - however, if you are prepared to use more complex utility functions, there really are very few limits on what behaviours can be represented.

The possibility of using complex utility functions does not in any way negate the value of the theory for providing a model of rational economic behaviour. In economics, the utility function is pretty fixed: maximise profit, with specified risk aversion and future discounting. That specifies an ideal which real economic agents approximate. Plugging in an arbitrary utility function is simply an illegal operation in that context.

In response to Value is Fragile
Comment author: Tim_Tyler 01 February 2009 12:28:37AM 0 points [-]

The core problem is simple. If the targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere is the hard part.

The utility function of Deep Blue has 8,000 parts and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered - but the eventual functional outcome would be much the same - a powerful chess computer.

The "targeting information" is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.

It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. Similarly with human values: those are a bunch of implementation details - not the real target.

Comment author: Tim_Tyler 31 January 2009 06:01:29PM 1 point [-]

I note that filial cannibalism is quite common on this planet.

Gamete selection has quite a few problems. It only operates on half the genome at a time - and selection is performed before many of the genes can be expressed. Of course gamete selection is cheap.

What spiders do - i.e. produce lots of offspring, and have many die as infants - has a huge number of evolutionary benefits. The lost babies do not cost very much, and the value of the selection that acts on them is great.

Human beings can't easily get there - since they currently rely on gestation inside a human female body for nine months - but, make no mistake, if we could produce lots of young and kill most of them at a young age, that would be a vastly superior system in terms of the quantity and quality of the resulting selection.

Human females do abort quite a few foetuses after a month or so - ones that fail internal and maternal integrity tests - but the whole system is obviously appallingly inefficient.

In response to Value is Fragile
Comment author: Tim_Tyler 31 January 2009 10:34:59AM -2 points [-]

Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

That is silly - the associated utility function is the one you have just explicitly given. To rephrase:

if (senses contain (A,B)) selecting A has high utility;
else if (senses contain (B,C)) selecting B has high utility;
else if (senses contain (C,A)) selecting C has high utility;

Here's another example: When given (A,B) a program outputs "indifferent". When given (equal chance of A or B, A, B) it outputs "equal chance of A or B". This is also not allowed by EU maximization.

Again, you have just given the utility function by describing it. As for "indifference" being a problem for a maximisation algorithm - it really isn't in the context of decision theory. An agent either takes some positive action, or it doesn't. Indifference is usually modelled as laziness - i.e. a preference for taking the path of least action.
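A toy sketch of that laziness convention (illustrative names and values): give the "do nothing" option an infinitesimally higher utility, so a maximiser that is otherwise indifferent between options breaks the tie in favour of inaction.

```python
# Sketch: modelling "indifference" inside a maximisation framework as
# a preference for the path of least action. Names and the particular
# tie-breaking bonus are illustrative only.

def utility(option):
    # A and B are valued equally; the tiny bonus on "do_nothing"
    # encodes laziness, so ties are resolved toward inaction.
    values = {"A": 1.0, "B": 1.0, "do_nothing": 1.0 + 1e-9}
    return values[option]

def choose(options):
    """Return the option with maximum utility."""
    return max(options, key=utility)

print(choose(["A", "B", "do_nothing"]))  # do_nothing
```

With the lazy option on the table the agent "does nothing"; remove it, and either of the equally-valued actions is an acceptable maximising choice.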

In response to Value is Fragile
Comment author: Tim_Tyler 30 January 2009 10:35:06PM -1 points [-]

But there is no principled way to derive an utility function from something that is not an expected utility maximizer!

You can model any agent as an expected utility maximizer - with a few caveats about things such as uncomputability and infinitely complex functions.

You really can reverse-engineer their utility functions too - by considering them as Input-Transform-Output black boxes - and asking what expected utility maximizer would produce the observed transformation.

A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
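A toy illustration of that reverse-engineering step (hypothetical helper names; a finite, deterministic black box is assumed): record the agent's observed input-to-output pairs, then define a utility function that assigns high utility to exactly the observed choices.

```python
# Sketch: treat an agent as an Input-Transform-Output black box and
# construct a utility function under which a maximiser reproduces its
# observed behaviour. Names are illustrative; assumes finitely many
# observed, deterministic episodes.

def reverse_engineer_utility(observations):
    """observations: list of (choice_set, chosen_option) pairs.
    Returns u(choice_set, option) such that maximising u makes
    exactly the observed choices."""
    table = {frozenset(cs): chosen for cs, chosen in observations}
    def u(choice_set, option):
        return 1.0 if table.get(frozenset(choice_set)) == option else 0.0
    return u

# Observed behaviour, including the "cyclic" agent discussed above:
obs = [({"A", "B"}, "A"), ({"B", "C"}, "B"), ({"C", "A"}, "C")]
u = reverse_engineer_utility(obs)
print(max({"A", "B"}, key=lambda o: u({"A", "B"}, o)))  # A
```

The construction is trivial precisely because the utility function is as expressive as the behaviour it summarises - which is the point being made about Turing-completeness.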

In response to Value is Fragile
Comment author: Tim_Tyler 30 January 2009 10:22:13PM -1 points [-]

Another way of saying this is that human beings are not expected utility maximizers, not as individuals and certainly not as societies.

They are not perfect expected utility maximizers. However, no expected utility maximizer is perfect. Humans approach the ideal at least as well as other organisms. Fitness maximization is the central explanatory principle in biology - and the underlying idea is the same. The economic framework associated with utilitarianism is general, of broad applicability, and deserves considerable respect.
