RichardKennaway comments on Morality is not about willpower - Less Wrong

9 Post author: PhilGoetz 08 October 2011 01:33AM

Comment author: RichardKennaway 09 October 2011 09:57:14AM 5 points

A person's behavior can always be understood as optimizing a utility function; it's just that if they are irrational (as in the Allais paradox) the utility functions start to look ridiculously complex. If all else fails, a utility function can be used that has a strong dependency on time in whatever way is required to match the observed behavior of the subject: "The subject had a strong preference for sneezing at 3:15:03pm, October 8, 2011."
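The construction in the quoted passage can be sketched in a few lines. This is a hypothetical illustration (the function and names are invented for this sketch): a "utility function" that rationalizes any behavior stream by memorizing it and assigning 1 to whatever actually happened.

```python
# Sketch of a Texas-Sharpshooter-style utility function: it "explains"
# any observed behavior after the fact by memorizing it. All names here
# are illustrative, not from the thread.

def make_tsuf(observed_actions):
    """observed_actions: dict mapping timestamp -> the action actually taken."""
    def utility(time, action):
        # Utility 1 for whatever the subject actually did, 0 for anything else.
        return 1.0 if observed_actions.get(time) == action else 0.0
    return utility

history = {"3:15:03pm Oct 8 2011": "sneeze"}
u = make_tsuf(history)
assert u("3:15:03pm Oct 8 2011", "sneeze") == 1.0
assert u("3:15:03pm Oct 8 2011", "cough") == 0.0
```

Under the resulting function, the observed behavior is trivially "optimal" at every timestep, which is exactly why it predicts nothing.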

This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system. It makes no predictions. It does not constrain expectation in any way. It is woo.

Woo need not look like talk of chakras and crystals and angels. It can just as easily be dressed in the clothes of science and mathematics.

Comment author: TimFreeman 09 October 2011 11:59:47PM 1 point

This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system.

You say "again", but in the cited link it's called the "Texas Sharpshooter Utility Function". The word "fallacy" does not appear. If you're going to claim there's a fallacy here, you should support that statement. Where's the fallacy?

It makes no predictions. It does not constrain expectation in any way. It is woo.

The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You're talking like you disagree with me, but you aren't actually doing so.

If the only goal is to predict human behavior, you can probably do it better without using a utility function. If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something, and unless there's something relevant in that Wikipedia article about the Allais paradox that I don't understand yet, that requires modeling them as though they have a utility function.
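For reference, here is the usual algebra behind the Allais paradox (standard textbook figures, not taken from this thread). The modal pattern of choices, 1A over 1B together with 2B over 2A, is incompatible with expected-utility maximization under any single function $u$:

```latex
% 1A: \$1M for sure.          1B: 10\% \$5M, 89\% \$1M, 1\% \$0.
% 2A: 11\% \$1M, 89\% \$0.    2B: 10\% \$5M, 90\% \$0.
\begin{align*}
\text{1A} \succ \text{1B}:\quad
  & u(1M) > 0.10\,u(5M) + 0.89\,u(1M) + 0.01\,u(0) \\
  & \Longrightarrow\; 0.11\,u(1M) > 0.10\,u(5M) + 0.01\,u(0) \\
\text{2B} \succ \text{2A}:\quad
  & 0.10\,u(5M) + 0.90\,u(0) > 0.11\,u(1M) + 0.89\,u(0) \\
  & \Longrightarrow\; 0.10\,u(5M) + 0.01\,u(0) > 0.11\,u(1M)
\end{align*}
% The two implications contradict each other: no fixed u fits both choices.
```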

You'll surely want a prior distribution over utility functions. Since they are computable functions, the usual Universal Prior works fine here, so far as I can tell. With this prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior, but mentioning them makes it obvious that the set is not empty.
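The weighting described above can be sketched numerically. This is only an illustration under invented assumptions (the true universal prior is uncomputable; here description lengths in bits stand in for program lengths, and the candidate names are made up): weight each behavior-consistent utility function by $2^{-\text{length}}$ and normalize.

```python
# Illustrative sketch of a complexity-weighted prior over candidate
# utility functions, restricted to those consistent with observed behavior.
# Bit counts and hypothesis names are invented for the example.

candidates = {
    "prefers_money":          {"bits": 20,   "consistent": True},
    "prefers_leisure":        {"bits": 25,   "consistent": True},
    "tsuf_memorized_history": {"bits": 5000, "consistent": True},
}

def posterior_weights(candidates):
    # Weight = 2^(-description length), kept only for consistent hypotheses.
    weights = {name: 2.0 ** -c["bits"]
               for name, c in candidates.items() if c["consistent"]}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

post = posterior_weights(candidates)
# The TSUF is in the consistent set, but its enormous description length
# makes its weight negligible (with floats it underflows to zero here).
assert post["tsuf_memorized_history"] < 1e-100
assert abs(sum(post.values()) - 1.0) < 1e-9
```

The point of the sketch is the one made in the comment: TSUF-like functions remain in the consistent set, so the set is non-empty, but simplicity weighting keeps them from dominating it.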

Comment author: RichardKennaway 10 October 2011 09:05:19AM 0 points

You'll surely want a prior distribution over utility functions. Since they are computable functions, the usual Universal Prior works fine here, so far as I can tell. With this prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior

How do you know this? If that's true, it can only be true by being a mathematical theorem, which will require defining mathematically what makes a UF a TSUF. I expect this is possible, but I'll have to think about it.

Comment author: TimFreeman 10 October 2011 06:30:16PM 0 points

With [the universal] prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior

How do you know this? If that's true, it can only be true by being a mathematical theorem...

No, it's true in the same sense that the statement "I have hands" is true. That is, it's an informal empirical statement about the world. People can be vaguely understood as having purposeful behavior. When you put them in strange situations, this breaks down a bit, and if you wish to understand them as having purposeful behavior you have to contrive the utility function a bit, but for the most part people do things for a comprehensible purpose. If TSUFs were the simplest utility functions that described humans, then human behavior would be random, which it isn't. Thus the simplest utility functions that describe humans aren't going to be TSUF-like.

Comment author: RichardKennaway 10 October 2011 09:00:41AM 0 points

You say "again", but in the cited link it's called the "Texas Sharpshooter Utility Function". The word "fallacy" does not appear. If you're going to claim there's a fallacy here, you should support that statement. Where's the fallacy?

I was referring to the same fallacy in both cases. Perhaps I should have written out TSUF in full this time. The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.

The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You're talking like you disagree with me, but you aren't actually doing so.

I am disagreeing, by saying that the triviality of the counterexample is so great as to vitiate it entirely. The TSUF is not a utility function. One might as well say that a rock has a utility of 1 for just lying there and 0 for leaping into the air.

If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something

You have to model them as if they want many things, some of them being from time to time in conflict with each other. The reason for this is that they do want many things, some of them being from time to time in conflict with each other. Members of LessWrong regularly make personal posts on such matters, generally under the heading of "akrasia", so it's not as if I was proposing here some strange new idea of human nature. The problem of dealing with such conflicts is a regular topic here. And yet there is still a (not universal but pervasive) assumption that acting according to a utility function is the pinnacle of rational behaviour. Responding to that conundrum with TSUFs is pretty much isomorphic to the parable of the Heartstone.

I know the von Neumann-Morgenstern theorem on utility functions, but since it begins by assuming a total preference ordering on states of the world, it would be begging the question to cite it in support of human utility functions.

Comment author: TimFreeman 10 October 2011 06:56:45PM -2 points

The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.

A fallacy is a false statement. (Not all false statements are fallacies; a fallacy must also be plausible enough that someone is at risk of being deceived by it, but that doesn't matter here.) "Attaching a utility function post hoc to what the system does and does not do" is an activity. It is not a statement, so it cannot be false, and it cannot be a fallacy. You'll have to try again if you want to make sense here.

The TSUF is not a utility function.

It is a function that maps world-states to utilities, so it is a utility function. You'll have to try again if you want to make sense here too.

We're nearly at the point where it's not worth my while to listen to you because you don't speak carefully enough. Can you do something to improve, please? Perhaps get a friend to review your posts, or write things one day and reread them the next before posting, or simply make an effort not to say things that are obviously false.

Comment author: lessdazed 10 October 2011 07:26:39PM 7 points

A fallacy is a false statement

Not a pattern of an invalid argument?

Comment author: RichardKennaway 11 October 2011 08:02:08AM 0 points

Tim, lessdazed has just spoken for me.

Comment author: RichardKennaway 12 October 2011 10:58:12AM 1 point

A fallacy is a false statement.

It is a function that maps world-states to utilities, so it is a utility function.

As lessdazed has said, that is simply not what the word "fallacy" means. Neither is a utility function, in the sense of VNM, merely a function from world states to numbers; it is a function from lotteries over outcomes to numbers that satisfies their axioms. The TSUF does not satisfy those axioms. No function whose range includes 0, 1, and nothing in between can satisfy the VNM axioms. The range of a VNM utility function must be an interval of real numbers.
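The step from the axioms to that last claim can be written out. From the VNM representation, utility is linear in probability mixtures, so any two attained values force every value between them to be attained as well:

```latex
% Mixture linearity from the VNM representation, for lotteries A, C:
u\bigl(pA + (1-p)C\bigr) \;=\; p\,u(A) + (1-p)\,u(C), \qquad p \in [0,1].
% If u(A) = 1 and u(C) = 0, then u(pA + (1-p)C) = p for every p in [0,1],
% so the range of u contains the whole interval [0,1]. A function whose
% range is just {0, 1} therefore cannot be a VNM utility function.
```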

We're nearly at the point where it's not worth my while to listen to you because you

Ignored.

Comment author: RichardKennaway 11 October 2011 08:09:01AM 1 point

We're nearly at the point where it's not worth my while to listen to you because you don't speak carefully enough.

Perhaps you are not reading carefully enough.