Comment author: TimFreeman 23 June 2013 06:02:11PM *  1 point [-]

Consider an arbitrary probability distribution P, and the smallest integer (or the lexicographically least object) x such that P(x) < 1/3^^^3 (in Knuth's up-arrow notation). Since x has a short description, a universal distribution shouldn't assign it such a low probability, but P does, so P can't be a universal distribution.

The description of x has to include the description of P, and that has to be computable if a universal distribution is going to assign positive probability to x.

If P has a short computable description, then yes, you can conclude that P is not a universal distribution. Universal distributions are not computable.

If the shortest computable description of P is long, then you can't conclude from this argument that P is not a universal distribution, but I suspect that it still can't be a universal distribution, since P is computable.

If there is no computable description of P, then we don't know that there is a computable description of x, so you have no contradiction to start with.
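For the first case, where P is computable with a short program, the construction of x can be sketched in a few lines. The distribution and the threshold below are illustrative stand-ins, since 1/3^^^3 is far too small to write down:

```python
from fractions import Fraction

def first_improbable(P, threshold):
    """Return the least x (enumerating 0, 1, 2, ...) with P(x) < threshold.
    P is assumed to be a computable distribution, given here as an
    ordinary Python function.  If P has a short program, this loop plus
    that program is a short description of x, which is the heart of
    the argument."""
    x = 0
    while P(x) >= threshold:
        x += 1
    return x

# Illustrative stand-in: a geometric distribution P(x) = 2^-(x+1)
# and a modest threshold in place of 1/3^^^3.
P = lambda x: Fraction(1, 2 ** (x + 1))
first_improbable(P, Fraction(1, 1024))  # returns 10, since P(10) = 1/2048
```

The loop halts only because this toy P eventually dips below the threshold; the argument in the text needs the same property of the distribution under discussion.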

In response to comment by TimFreeman on Crisis of Faith
Comment author: Kenny 23 June 2013 04:01:00PM -1 points [-]

That's not what I said.

And that's why I wrote "You seem to think that ..."; I was describing why I thought you would privilege the hypothesis that lying would be better.

You're absolutely right that learning to lie really well and actually lying to one's family, the "genuinely wonderful people" they know, everyone in one's "social structure" and business, as well as one's husband and daughter MIGHT be the "compassionate thing to do". But why would you pick out exactly that option among all the possibilities?

This is a rhetorical question ...

Actually it wasn't a rhetorical question. I was genuinely curious how you'd describe the boundary.

The reason I think honesty toward others is a justified presumption is, in fact, a slippery slope argument. Human beings' minds run on corrupted hardware, and deception is dangerous (for one reason) because it's not always easy to cleanly separate one's lies from one's true beliefs. But your implication (that lying is sometimes right) is correct; there are some obvious or well-known Schelling fences on that slippery slope, such as lying to the Nazis when they come to your house while you're hiding Jews.

Your initial statement seemed rather cavalier and didn't seem to be the product of sympathetic consideration of the original commenter's situation.

Have you considered Crocker's rules? If you care about the truth or you have something to protect then the Litany of Gendlin is a reminder of why you might adopt Crocker's rules, despite the truth possibly not being the "compassionate thing to do".

In response to comment by Kenny on Crisis of Faith
Comment author: TimFreeman 23 June 2013 04:49:14PM *  0 points [-]

You're absolutely right that learning to lie really well and actually lying to one's family, the "genuinely wonderful people" they know, everyone in one's "social structure" and business, as well as one's husband and daughter MIGHT be the "compassionate thing to do". But why would you pick out exactly that option among all the possibilities?

Because it's a possibility that the post we're talking about apparently did not consider. The Litany of Gendlin was mentioned in the original post, and I think that when interpreted as a way to interact with others, the Litany of Gendlin is obviously the wrong thing to do in some circumstances.

Perhaps having these beautifully phrased things with a person's name attached is a liability. If I add a caveat that it's only about one's internal process, or it's only about communication with people that either aspire to be rational or that you have no meaningful relationship with, then it's not beautifully phrased anymore, and it's not the Litany of Gendlin anymore, and it seems hopeless for the resulting Litany of Tim to get enough mindshare to matter.

But where exactly is the boundary dividing those things that, however uncomfortable or even devastating, must be said or written and those things about which one can deceive or dupe those one loves and respects?

Actually it wasn't a rhetorical question. I was genuinely curious how you'd describe the boundary.

I'm not curious about that, and in the absence of financial incentives I'm not willing to try to answer that question. There is no simple description of how to deal with the world that's something a reasonable person will actually want to do.

In response to comment by TimFreeman on Crisis of Faith
Comment author: Kenny 10 June 2013 04:36:04PM 0 points [-]

I don't believe I should lie to you (or anyone) because there might be one way you might not benefit from my honest and forthright communication. So, unfortunately, I've decided to reply to you and tell you that your advice is terrible, however well-intentioned. You seem to think that if you can imagine even one possible short-term benefit from lying or not-disclosing something, then that's sufficient justification to do so. But where exactly is the boundary dividing those things that, however uncomfortable or even devastating, must be said or written and those things about which one can deceive or dupe those one loves and respects?

'Radical honesty' isn't obviously required, but I would think that honesty about fundamental beliefs would be more important than what is normally considered acceptable dishonesty or non-disclosure for social purposes.

In response to comment by Kenny on Crisis of Faith
Comment author: TimFreeman 22 June 2013 07:55:59PM *  1 point [-]

You seem to think that if you can imagine even one possible short-term benefit from lying or not-disclosing something, then that's sufficient justification to do so.

That's not what I said. I said several things, and it's not clear which one you're responding to; you should use quote-rebuttal format so people know what you're talking about. Best guess is that you're responding to this:

[learning to lie really well] might be the compassionate thing to do, if you believe that the people you interact with would not benefit from hearing that you no longer believe.

You sharpened my "might be" to "is" just so you could disagree.

But where exactly is the boundary dividing those things that, however uncomfortable or even devastating, must be said or written and those things about which one can deceive or dupe those one loves and respects?

This is a rhetorical question, and it only makes sense in context if your point is that in the absence of such a boundary with an exact location that makes it clear when to lie, we should be honest. But if you can clearly identify which side of the boundary the alternative you're considering is on because it is nowhere close to the boundary, then the fact that you don't know exactly where the boundary is doesn't affect what you should do with that alternative.

You're doing the slippery slope fallacy.

Heretics have been burned at the stake before, so compassion isn't the only consideration when you're deciding whether to lie to your peers about your religious beliefs. My main point is that the Litany of Gendlin is sometimes a bad idea. We should be clear that you haven't cast any doubt on that, even though you're debating whether lying to one's peers is compassionate.

Given that religious relatives tend to fubar cryonics arrangements, the analogy with being burned at the stake is apt. Religious books tend to say nothing about cryonics, but the actual social process of religious groups tends to be strongly against it in practice.

(Edit: This all assumes that the Litany of Gendlin is about how to interact with others. If it's about internal dialogue, then of course it's not saying that one should or should not lie to others. IMO it is too ambiguous.)

Comment author: TimFreeman 24 December 2012 11:34:52PM 2 points [-]

Just drink two tablespoons of extra-light olive oil early in the morning... don't eat anything else for at least an hour afterward... and in a few days it will no longer take willpower to eat less; you'll feel so full all the time, you'll have to remind yourself to eat.

...and then increase the dose to 4 tablespoons if that doesn't work, and then try some other stuff such as crazy-spicing your food if that doesn't work, according to page 62 and Chapter 6 of Roberts' "Shangri-La Diet" book. I hope you at least tried the higher dose before giving up.

Comment author: [deleted] 05 August 2012 10:30:03PM -1 points [-]

How do you add two utilities together?

They are numbers. Add them.

So are the atmospheric pressure in my room and the price of silver. But you cannot add them together (unless you have a conversion factor from millibars to dollars per ounce).

In response to comment by [deleted] on Secrets of the eliminati
Comment author: TimFreeman 31 October 2012 04:33:13AM 1 point [-]

How do you add two utilities together?

They are numbers. Add them.

So are the atmospheric pressure in my room and the price of silver. But you cannot add them together (unless you have a conversion factor from millibars to dollars per ounce).

Your analogy is invalid, and in general analogy is a poor substitute for a rational argument. In the thread you're replying to, I proposed a scheme for getting Alice's utility to be commensurate with Bob's so they can be added. It makes sense to argue that the scheme doesn't work, but it doesn't make sense to pretend it does not exist.

Comment author: Viliam_Bur 13 March 2012 10:32:03AM *  1 point [-]

Does anyone know of an example where arguing objective morality with someone who is doing evil things made them stop?

I would expect that peer pressure can make people stop doing evil things (either by force, or by changing their cost-benefit calculation of evil acts). Objective morality, or rather a definition of morality consistent within the group, can help organize efficient peer pressure. If everyone obeys the same morality, they should be more ready to defend it, because they know they will be in the majority.

Without a shared morality, and its twin, hypocrisy, organizing peer pressure on wrongdoers is difficult.

Comment author: TimFreeman 29 May 2012 03:57:36AM *  0 points [-]

I would expect that peer pressure can make people stop doing evil things (either by force, or by changing their cost-benefit calculation of evil acts). Objective morality, or rather a definition of morality consistent within the group can help organize efficient peer pressure.

So in a conversation between a person A who believes in objective morality and a person B who does not, a possible motive for A is to convince onlookers by any means possible that objective morality exists. Convincing B is not particularly important, since effective peer pressure merely requires having enough people on board, not any particular individual. In those conversations, I always had the role of B, and I assumed, perhaps mistakenly, that A's primary goal was to persuade me since A was talking to me. Thank you for the insight.

Comment author: RichardKennaway 10 October 2011 09:00:41AM *  0 points [-]

You say "again", but in the cited link it's called the "Texas Sharpshooter Utility Function". The word "fallacy" does not appear. If you're going to claim there's a fallacy here, you should support that statement. Where's the fallacy?

I was referring to the same fallacy in both cases. Perhaps I should have written out TSUF in full this time. The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.

The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You're talking like you disagree with me, but you aren't actually doing so.

I am disagreeing, by saying that the triviality of the counterexample is so great as to vitiate it entirely. The TSUF is not a utility function. One might as well say that a rock has a utility of 1 for just lying there and 0 for leaping into the air.

If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something

You have to model them as if they want many things, some of them being from time to time in conflict with each other. The reason for this is that they do want many things, some of them being from time to time in conflict with each other. Members of LessWrong regularly make personal posts on such matters, generally under the heading of "akrasia", so it's not as if I was proposing here some strange new idea of human nature. The problem of dealing with such conflicts is a regular topic here. And yet there is still a (not universal but pervasive) assumption that acting according to a utility function is the pinnacle of rational behaviour. Responding to that conundrum with TSUFs is pretty much isomorphic to the parable of the Heartstone.

I know the von Neumann-Morgenstern theorem on utility functions, but since it begins by assuming a total preference ordering on states of the world, it would be begging the question to cite it in support of human utility functions.

Comment author: TimFreeman 10 October 2011 06:56:45PM -2 points [-]

The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.

A fallacy is a false statement. (Not all false statements are fallacies; a fallacy must also be plausible enough that someone is at risk of being deceived by it, but that doesn't matter here.) "Attaching a utility function post hoc to what the system does and does not do" is an activity. It is not a statement, so it cannot be false, and it cannot be a fallacy. You'll have to try again if you want to make sense here.

The TSUF is not a utility function.

It is a function that maps world-states to utilities, so it is a utility function. You'll have to try again if you want to make sense here too.

We're nearly at the point where it's not worth my while to listen to you because you don't speak carefully enough. Can you do something to improve, please? Perhaps get a friend to review your posts, or write things one day and reread them the next before posting, or simply make an effort not to say things that are obviously false.

Comment author: RichardKennaway 10 October 2011 09:05:19AM 0 points [-]

You'll surely want a prior distribution over utility functions. Since they are computable functions, the usual Universal Prior works fine here, so far as I can tell. With this prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior

How do you know this? If that's true, it can only be true by being a mathematical theorem, which will require defining mathematically what makes a UF a TSUF. I expect this is possible, but I'll have to think about it.

Comment author: TimFreeman 10 October 2011 06:30:16PM 0 points [-]

With [the universal] prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior

How do you know this? If that's true, it can only be true by being a mathematical theorem...

No, it's true in the same sense that the statement "I have hands" is true. That is, it's an informal empirical statement about the world. People can be vaguely understood as having purposeful behavior. When you put them in strange situations, this breaks down a bit, and if you wish to understand them as having purposeful behavior you have to contrive the utility function a bit, but for the most part people do things for a comprehensible purpose. If TSUFs were the simplest utility functions that described humans, then human behavior would be random, which it isn't. Thus the simplest utility functions that describe humans aren't going to be TSUF-like.

Comment author: kjmiller 09 October 2011 06:47:34PM *  0 points [-]

Seems to me we've got a gen-u-ine semantic misunderstanding on our hands here, Tim :)

My understanding of these ideas is mostly taken from reinforcement learning theory in AI (a la Sutton & Barto 1998). In general, an agent is characterized by a policy pi that gives the probability that the agent will take a particular action in a particular state, P = pi(s,a). In the most general case, pi can also depend on time, and is typically quite complicated, though usually not complex ;).
Any computable agent operating over any possible state and action space can be represented by some function pi, though typically folks in this field deal in Markov Decision Processes since they're computationally tractable. More on that in the book, or in a longer post if folks are interested. It seems to me that when you say "utility function", you're thinking of something a lot like pi. If I'm wrong about that, please let me know.

When folks in the RL field talk about "utility functions", generally they've got something a little different in mind. Some agents, but not all of them, determine their actions entirely using a time-invariant scalar function U(s) over the state space. U takes in future states of the world and outputs the reward that the agent can expect to receive upon reaching that state (loosely "how much the agent likes s"). Since each action in general leads to a range of different future states with different probabilities, you can use U(s) to get an expected utility U'(a,s):

U'(a,s) = sum over s' of p(s,a,s') * U(s'),

where s is the state you're in, a is the action you take, s' are the possible future states, and p is the probability that action a taken in state s will lead to state s'. Once your agent has a U', some simple decision rule over that is enough to determine the agent's policy. There are a bunch of cool things about agents that do this, one of which (not the most important) is that their behavior is much easier to predict. This is because behavior is determined entirely by U, a function over just the state space, whereas pi is over the conjunction of state and action spaces. From a limited sample of behavior, you can get a good estimate of U(s), and use this to predict future behavior, including in regions of state and action space that you've never actually observed. If your agent doesn't use this cool U(s) scheme, the only general way to learn pi is to actually watch the thing behave in every possible region of action and state space. This I think is why von Neumann was so interested in specifying exactly when an agent could and could not be treated as a utility-maximizer.
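As a concrete sketch of the expected-utility formula and the decision rule over it (the states, actions, and transition probabilities below are invented for illustration):

```python
def expected_utility(U, p, s, a, states):
    # U'(a, s) = sum over s' of p(s, a, s') * U(s')
    return sum(p(s, a, s2) * U[s2] for s2 in states)

def greedy_policy(U, p, s, actions, states):
    # A simple decision rule over U': take the action that
    # maximizes expected utility in the current state.
    return max(actions, key=lambda a: expected_utility(U, p, s, a, states))

# Toy example (all numbers invented): two states, two actions.
states = ["low", "high"]
actions = ["stay", "jump"]
U = {"low": 0.0, "high": 1.0}

def p(s, a, s2):
    # 'jump' from 'low' reaches 'high' with probability 0.8;
    # every other (state, action) pair keeps the current state.
    if a == "jump" and s == "low":
        return 0.8 if s2 == "high" else 0.2
    return 1.0 if s2 == s else 0.0

greedy_policy(U, p, "low", actions, states)  # returns "jump"
```

The point about predictability shows up here: U has two entries, while a raw policy pi would need an entry for every (state, action) pair.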

Hopefully that makes some sense, and doesn't just look like an incomprehensible jargon-filled snow job. If folks are interested in this stuff I can write a longer article about it that'll (hopefully) be a lot more clear.

Comment author: TimFreeman 10 October 2011 12:21:23AM *  1 point [-]

Some agents, but not all of them, determine their actions entirely using a time-invariant scalar function U(s) over the state space.

If we're talking about ascribing utility functions to humans, then the state space is the universe, right? (That is, the same universe the astronomers talk about.) In that case, the state space contains clocks, so there's no problem with having a time-dependent utility function, since the time is already present in the domain of the utility function.

Thus, I don't see the semantic misunderstanding -- human behavior is consistent with at least one utility function even in the formalism you have in mind.

(Maybe the state space is the part of the universe outside of the decision-making apparatus of the subject. No matter, that state space contains clocks too.)

The interesting question here for me is whether any of those alternatives to having a utility function mentioned in the Allais paradox Wikipedia article are actually useful if you're trying to help the subject get what they want. Can someone give me a clue how to raise the level of discourse enough so it's possible to talk about that, instead of wading through trivialities? PM'ing me would be fine if you have a suggestion here but don't want it to generate responses that will be more trivialities to wade through.

Comment author: RichardKennaway 09 October 2011 09:57:14AM 5 points [-]

A person's behavior can always be understood as optimizing a utility function; it's just that if they are irrational (as in the Allais paradox) the utility functions start to look ridiculously complex. If all else fails, a utility function can be used that has a strong dependency on time in whatever way is required to match the observed behavior of the subject. "The subject had a strong preference for sneezing at 3:15:03pm October 8, 2011."

This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system. It makes no predictions. It does not constrain expectation in any way. It is woo.

Woo need not look like talk of chakras and crystals and angels. It can just as easily be dressed in the clothes of science and mathematics.
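The construction being criticized, labelling what the system actually did with 1 and everything else with 0, can be written down in a few lines; the log entries below are invented for illustration:

```python
def texas_sharpshooter_uf(observed):
    """Build a 'utility function' post hoc from a log of
    (time, action) pairs: whatever the subject actually did at each
    time gets utility 1, and every alternative gets 0.  Any behavior
    whatsoever comes out 'optimal' under this function, which is why
    it constrains no expectations about future behavior."""
    log = dict(observed)
    def U(time, action):
        return 1 if log.get(time) == action else 0
    return U

U = texas_sharpshooter_uf([("3:15:03pm", "sneeze")])
U("3:15:03pm", "sneeze")  # 1: the observed sneeze maximizes U
U("3:15:03pm", "wave")    # 0: every unchosen alternative scores 0
```

Note that U can only be written down after watching the behavior, which is the "painting the target around the bullet hole" complaint.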

Comment author: TimFreeman 09 October 2011 11:59:47PM 1 point [-]

This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system.

You say "again", but in the cited link it's called the "Texas Sharpshooter Utility Function". The word "fallacy" does not appear. If you're going to claim there's a fallacy here, you should support that statement. Where's the fallacy?

It makes no predictions. It does not constrain expectation in any way. It is woo.

The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You're talking like you disagree with me, but you aren't actually doing so.

If the only goal is to predict human behavior, you can probably do it better without using a utility function. If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something, and unless there's something relevant in that Wikipedia article about the Allais paradox that I don't understand yet, that requires modeling them as though they have a utility function.

You'll surely want a prior distribution over utility functions. Since they are computable functions, the usual Universal Prior works fine here, so far as I can tell. With this prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior, but mentioning them makes it obvious that the set is not empty.
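To make the prior argument concrete: under the universal prior, a hypothesis coded by an n-bit program gets weight proportional to 2^-n, so a TSUF, which must hard-code every observed act, is penalized enormously relative to a short motive-based utility function. The bit counts below are invented for illustration:

```python
from fractions import Fraction

def prior_weight(n_bits):
    # Schematic universal prior: weight 2^-n for an n-bit program.
    return Fraction(1, 2 ** n_bits)

# Invented description lengths: a simple motive-based utility
# function vs. a TSUF hard-coding thousands of observed actions.
simple_uf_bits = 200
tsuf_bits = 10_000

ratio = prior_weight(simple_uf_bits) / prior_weight(tsuf_bits)
# ratio == 2 ** 9800: the simple hypothesis dominates the posterior.
```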
