
The Domain of Your Utility Function

Post author: Peter_de_Blanc 23 June 2009 04:58AM  24 points

Unofficial Followup to: Fake Selfishness, Post Your Utility Function

A perception-determined utility function is one which is determined only by the perceptual signals your mind receives from the world; for instance, pleasure minus pain. A non-instance would be the number of living humans. There's an argument in favor of perception-determined utility functions which goes like this: clearly, the state of your mind screens off the state of the outside world from your decisions. Therefore, the argument to your utility function is not a world-state, but a mind-state, and so, when choosing between outcomes, you can only judge between anticipated experiences, and not external consequences. If one says, "I would willingly die to save the lives of others," the other replies, "that is only because you anticipate great satisfaction in the moments before death - enough satisfaction to outweigh the rest of your life put together."

Let's call this dogma perception-determined utility (PDU). PDU can be criticized on both descriptive and prescriptive grounds. On descriptive grounds, we may observe that it is psychologically unrealistic for a human to experience a lifetime's worth of satisfaction in a few moments. (I don't have a good reference for this, but) I suspect that our brains count pain and joy in something like unary, rather than using a place-value system, so it is not possible to count very high.

The argument I've outlined for PDU is prescriptive, however, so I'd like to refute it on such grounds. To see what's wrong with the argument, let's look at some diagrams. Here's a picture of you doing an expected utility calculation - using a perception-determined utility function such as pleasure minus pain.

Here's what's happening: you extrapolate several (preferably all) possible futures that can result from a given plan. In each possible future, you extrapolate what would happen to you personally, and calculate the pleasure minus pain you would experience. You call this the utility of that future. Then you take a weighted average of the utilities of each future — the weights are probabilities. In this way you calculate the expected utility of your plan.
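
In code, a toy version of this calculation might look like the sketch below (the futures, probabilities, and pleasure/pain numbers are all invented for illustration):

    # A toy perception-determined expected utility calculation.
    # Each extrapolated future records only what you would personally experience.
    extrapolated_futures = [
        {"probability": 0.7, "pleasure": 10.0, "pain": 2.0},
        {"probability": 0.3, "pleasure": 4.0, "pain": 1.0},
    ]

    def pdu_utility(future):
        # Perception-determined utility: pleasure minus pain in that future.
        return future["pleasure"] - future["pain"]

    def expected_utility(futures, utility):
        # Probability-weighted average of the utilities of the futures.
        return sum(f["probability"] * utility(f) for f in futures)

    print(expected_utility(extrapolated_futures, pdu_utility))  # 6.5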

But this isn't the most general possible way to calculate utilities.

Instead, we could calculate utilities based on any properties of the extrapolated futures — anything at all, such as how many people there are, how many of those people have ice cream cones, etc. Our preferences over lotteries will be consistent with the Von Neumann-Morgenstern axioms. The basic error of PDU is to confuse the big box (labeled "your mind") with the tiny boxes labeled "Extrapolated Mind A," and so on. The inputs to your utility calculation exist inside your mind, but that does not mean they have to come from your extrapolated future mind.
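
To make the contrast concrete, here is the same expected-utility machinery, but with a utility function that reads off properties of the extrapolated world rather than of your extrapolated experiences (again a toy sketch with invented numbers):

    # The weighted average is unchanged; only the utility function's inputs differ.
    extrapolated_worlds = [
        {"probability": 0.7, "living_humans": 7_000_000_000, "my_pleasure": -5.0},
        {"probability": 0.3, "living_humans": 6_999_999_000, "my_pleasure": 20.0},
    ]

    def world_utility(world):
        # Cares about how many people there are, not about what I will experience.
        return world["living_humans"]

    expected = sum(w["probability"] * world_utility(w) for w in extrapolated_worlds)
    # The future in which I personally feel worse can still dominate the calculation.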

So that's it! You're free to care about family, friends, humanity, fluffy animals, and all the wonderful things in the universe, and decision theory won't try to stop you — in fact, it will help.

Edit: Changed "PD" to "PDU."

Comments (94)

Comment author: dclayh 24 June 2009 03:47:21AM *  10 points [-]

A mild defense of PDU:

If one says, "I would willingly die to save the lives of others," the other replies, "that is only because you anticipate great satisfaction in the moments before death - enough satisfaction to outweigh the rest of your life put together."

The other could also reply: "You say now that you would die because it gives you pleasure now to think of yourself as the sort of person who would die to save others. Moreover, if you do someday actually sacrifice yourself for others, it would be because the disutility of shattering your self-perception would seem to outweigh (in that moment) the disutility of dying."

(And now we have come back yet again to Newcomb, it seems.)

Comment author: [deleted] 16 November 2011 12:25:30PM *  4 points [-]

"Would you kill someone for $100, if after killing them I could drug/hypnotize you so that you won't remember, and you'll never be able to find out?" You'd likely answer "yes" if your utility function is PD and "no" otherwise.

Comment author: wedrifid 16 November 2011 03:21:41PM *  2 points [-]

It is a rare person indeed who would answer 'yes' to that question (without being frivolous). It implies valuing signalling honesty more than signalling not-planning-to-kill-folks. MoR!Quirrell might, depending on who he was talking to.

Comment author: TheOtherDave 16 November 2011 04:10:50PM 2 points [-]

I know a lot of people who I expect would answer 'yes' for a hundred thousand dollars when talking to me -- maybe with a "depends on the person" caveat. A few for $1000. But $100? Yeah, not very many.

I suspect that threshold has more to do with the average level of wealth of my cohort than with our willingness to signal honesty.

Comment author: wedrifid 16 November 2011 04:15:33PM 4 points [-]

A hundred thousand is a lot of money! I deserve lots of trite costless signalling points for saying I wouldn't accept that offer. I'm holding out for a mil. Or at least a half! ;)

Comment author: [deleted] 16 November 2011 09:05:17PM 0 points [-]

I suspect that threshold has more to do with the average level of wealth of my cohort than with our willingness to signal honesty.

I would simply not trust the person making the offer for $100. How do they make the consequences go away? Surely that costs at least a few thousand, assuming we're in a stable country. So why pay me so little? Besides the risk though, I don't see why murder should be expensive. It's not exactly complicated, assuming an unsuspecting civilian target. $100 seems like a reasonable sum for the amount of work.

Comment author: Benito 20 May 2014 10:13:16PM 1 point [-]

I don't know that MoR!Quirrell would care about the memory wipe at all. Money is money.

Comment author: [deleted] 16 November 2011 05:30:29PM *  0 points [-]

I hadn't considered the possibility of lying. Make that “You likely would do that if ..., and you likely wouldn't otherwise.” Also, the amount of money and/or the number of people killed can be raised as needed for rich people/people who could kill one person for money anyway.

Comment author: wedrifid 16 November 2011 05:36:01PM 2 points [-]

(I would also usually specify "and there are no other consequences to you" as well given that most of the reason not to kill people is practical.)

Comment author: MineCanary 03 July 2009 02:51:25AM -1 points [-]

Or perhaps the pain of being a survivor when others didn't survive, and when you could have saved them (which can have an ongoing effect for the rest of your life), would outweigh the pleasure you could experience as a person living with survivor's guilt.

Although, if you were rational, you could probably overcome the survivor's guilt, but still.

I think in actual humans, if you were using this model as a metaphor for how they think, you'd have to say they sometimes irrationally perceive another's brain as their own, so they're counting the net pleasure of the people they save in the utility calculation for their future mind. After all, throughout the past they've been able to derive pleasure from other people's pleasure or from imagining it, and it takes rational thought to eliminate that component from the calculation upon realizing that their brain will no longer be able to feel.

Comment author: Yvain 23 June 2009 03:03:45PM *  6 points [-]

I don't think this post adequately distinguishes between two concepts: how does the human utility function actually work, and how should it work.

The answer to the first question is (I thought people here agreed) that humans weren't actually utility maximizers; this makes things like your descriptive argument against perceptive determinism unnecessary and a lot of your wording misleading.

The second question is: if we're making some artificial utility function for an AI or just to prove a philosophical point, how should that work - and I think your answer is spot on. I would hope that people don't really disagree with you here and are just getting bogged down by confusion about real brains and some map-territory distinctions and importing epistemology where it's not really necessary.

Comment author: thomblake 23 June 2009 03:32:46PM 4 points [-]

Agreed. This post seems to add little to the discourse. However, it's useful to write clear, concise posts to sum these things up from time to time. With pictures!

Comment author: Wei_Dai 24 June 2009 02:51:29AM *  3 points [-]

The second question is: if we're making some artificial utility function for an AI or just to prove a philosophical point, how should that work - and I think your answer is spot on. I would hope that people don't really disagree with you here and are just getting bogged down by confusion about real brains and some map-territory distinctions and importing epistemology where it's not really necessary.

Where I've seen people use PDUs in AI or philosophy, they weren't confused, but rather chose to make the assumption of perception-determined utility functions (or even more restrictive assumptions) in order to prove some theorems. See these examples:

Here's a non-example, where the author managed to prove theorems without the PDU assumption:

Comment author: Wei_Dai 09 June 2011 01:56:15AM 2 points [-]

I wrote earlier:

Where I've seen people use PDUs in AI or philosophy, they weren't confused, but rather chose to make the assumption of perception-determined utility functions (or even more restrictive assumptions) in order to prove some theorems.

Well, here's a recent SIAI paper that uses perception-determined utility functions, but apparently not in order to prove theorems (since the paper contains no theorems). The author was advised by Peter de Blanc, who two years ago wrote the OP arguing against PDUs. Which makes me confused: does the author (Daniel Dewey) really think that PDUs are a good idea, and does Peter now agree?

Comment author: Peter_de_Blanc 11 June 2011 01:34:05PM 0 points [-]

I don't think that human values are well described by a PDU. I remember Daniel talking about a hidden reward tape at one point, but I guess that didn't make it into this paper.

Comment author: timtyler 11 June 2011 12:36:06PM *  0 points [-]

An adult agent has access to its internal state and its perceptions. If we model its access to its internal state as via internal sensors, then sense data are all it has access to - its only way of knowing about the world outside of its genetic heritage.

In that case, utility functions can only accept sense data as inputs - since that is the only thing that any agent ever has access to.

If you have a world-determined utility function, then - at some stage - the state of the world would first need to be reconstructed from perceptions before the function could be applied. That makes the world-determined utility functions an agent can calculate into a subset of perception-determined ones.
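
Roughly, the composition being described might be sketched as follows (reconstruct_world and world_utility here are hypothetical placeholders, not anything that actually exists):

    def utility_from_percepts(sense_data, reconstruct_world, world_utility):
        # A "world-determined" utility the agent can actually compute:
        # first build a world-model from sense data, then score that model.
        # Composed this way, the result is a function of sense data alone.
        estimated_world = reconstruct_world(sense_data)
        return world_utility(estimated_world)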

Comment author: pjeby 23 June 2009 11:53:39PM 0 points [-]

The second question is: if we're making some artificial utility function for an AI or just to prove a philosophical point, how should that work - and I think your answer is spot on.

Spot on for what, precisely? If one's goal is to make an AI that mirrors human values, it would not be very useful for it to use an utterly alien model of thought like utility maximization. ISTM that superhuman AI is the one place where you can't afford to use wishful thinking models in place of understanding what humans really do, and how they'll really act.

Comment author: Vladimir_Nesov 24 June 2009 12:15:36AM 6 points [-]

To model how humans really work, the AI needs to study real humans, not be a real human. The best bridge engineers are not themselves bridges.

(Maybe I completely misunderstood what you wrote, in which case please correct me, but it looks like you're suggesting that AIs that mirror human values must be implemented in the way humans really work.)

Comment author: pjeby 24 June 2009 01:07:30AM 1 point [-]

It looks like you're suggesting that AIs that mirror human values must be implemented in the way humans really work

I'm saying that a system that's based on utility maximizing is likely too alien of a creature to be able to be safely understood and utilized by humans.

That's more or less the premise of FAI, is it not? Any strictly-maximizing agent is bloody dangerous to anything that isn't maximizing the same thing. What's more, humans are ill-equipped to even grok this danger, let alone handle it safely.

Comment author: Vladimir_Nesov 24 June 2009 02:55:19AM 2 points [-]

The best bridges are not humans either.

Comment author: timtyler 24 June 2009 08:47:16PM -1 points [-]

Utility maximization can model any goal-oriented creature, within reason. Familiar, or alien, it makes not the slightest bit of difference to the theory.

Comment author: pjeby 24 June 2009 10:21:11PM 1 point [-]

Utility maximization can model any goal-oriented creature, within reason. Familiar, or alien, it makes not the slightest bit of difference to the theory.

Of course it can, just like you can model any computation with a Turing machine, or on top of the game of Life. And modeling humans (or most any living entity) as a utility maximizer is on a par with writing a spreadsheet program to run on a Turing machine. An interesting, perhaps fun or educational exercise, but mostly futile.

I mean, sure, you could say that utility equals "minimum global error of all control systems", but it's rather ludicrous to expect this calculation to predict their actual behavior, since most of their "interests" operate independently. Why go to all the trouble to write a complex utility function when an error function is so much simpler and closer to the territory?

Comment author: timtyler 25 June 2009 04:47:46PM 0 points [-]

I think you are getting my position. Just as a universal computer can model any other type of machine, so a utilitarian agent can model any other type of agent. These two concepts are closely analogous.

Comment author: pjeby 25 June 2009 06:11:30PM 0 points [-]

But your choice of platforms is not without efficiency and complexity costs, since maximizers inherently "blow up" more than satisficers.

Comment author: timtyler 23 June 2009 05:23:25PM 0 points [-]

I think humans can be accurately modelled as expected utility maximizers - provided the utility function is allowed to access partial recursive functions.

The agents you can't so model have things like uncomputable utility functions - and we don't need to bother much about those.

People who claim humans are not expected utility maximizers usually seem to be making a much weaker claim: humans are irrational, humans don't optimise economic or fitness-based utility functions - or something like that - not that there exists no utility function that could possibly express their actions in terms of their sense history and state.

Comment author: pjeby 23 June 2009 06:14:40PM 2 points [-]

People who claim humans are not expected utility maximizers usually seem to be making a much weaker claim: humans are irrational, humans don't optimise economic or fitness-based utility functions - or something like that - not that there exists no utility function that could possibly express their actions in terms of their sense history and state.

PCT and Ainslie actually propose that humans are more like disutility minimizers and appetite satisficers. While you can abuse the notion of "utility" to cover these things, it leads to wrong ideas about how humans work, because the map has to be folded oddly to cover the territory.

Comment author: Cyan 23 June 2009 07:58:02PM 6 points [-]

Utility as a technical term in decision theory isn't equivalent to happiness and disutility isn't equivalent to unhappiness. Rather, the idea is to find some behaviorally descriptive function which takes things like negative affectivity and appetite satisfaction levels as arguments and returns a summary, which for lack of a better term we call utility. The existence of such a function is required by certain axioms of consistency -- the thought is that if one's behavior cannot be described by a utility function, then they will have intransitive preferences.

Comment author: orthonormal 23 June 2009 09:28:27PM 2 points [-]

As a descriptive statement, human beings probably do have circular preferences; the prescriptive question is whether there is a legitimate utility function we can extrapolate from that mess without discarding too much.

Comment author: Vladimir_Nesov 23 June 2009 09:35:16PM *  1 point [-]

You inevitably draw specific actions, so there is no escaping forming a preference over actions (a decision procedure, not necessarily preference over things that won't play), and "discarding too much" can't be an argument against the inevitable. (Not that I particularly espouse the form of preference being utility+prior.)

Comment author: orthonormal 23 June 2009 09:44:23PM *  1 point [-]

Sorry, I meant something like "whether there is a relatively simple decision algorithm with consistent preferences that we can extrapolate from that mess without discarding too much". If not, then a superintelligence might be able to extrapolate us, but until then we'll be stymied in our attempts to think rationally about large unfamiliar decisions.

Comment author: Vladimir_Nesov 23 June 2009 10:01:47PM *  0 points [-]

Fair enough. Note that the superintelligence itself must be a simple decision algorithm for it to be knowably good, if that's at all possible (at the outset, before starting to process the particular data from observations), which kinda defeats the purpose of your statement. :-)

Comment author: orthonormal 23 June 2009 11:40:08PM 0 points [-]

Well, the code for the seed should be pretty simple, at least. But I don't see how that defeats the purpose of my statement; it may be that short of enlisting a superintelligence to help, all current attempts to approximate and extrapolate human preferences in a consistent fashion (e.g. explicit ethical or political theories) might be too crude to have any chance of success (by the standard of actual human preferences) in novel scenarios. I don't believe this will be the case, but it's a possibility worth keeping an eye on.

Comment author: Cyan 23 June 2009 10:17:22PM *  0 points [-]

Oh, indeed. I just want to distinguish between things that humans really experience and the technical meaning of the term "utility". In particular, I wanted to avoid a conversation in which disutility, which sounds like a euphemism for discomfort, is juxtaposed with decision theoretic utility.

Comment author: conchis 24 June 2009 09:12:54PM 0 points [-]

Nitpick: if one's behavior cannot be described by a utility function, then one will have preferences that are intransitive, incomplete, or violate continuity.

Comment author: Cyan 24 June 2009 09:22:29PM 0 points [-]

I'm with you on "incomplete" (thanks for the catch!) but I'm not so sure about "violate continuity". Can you give an example of preferences that are transitive and complete but violate continuity and are therefore not encodable in a utility function?

Comment author: conchis 24 June 2009 11:45:26PM 0 points [-]

Lexicographic preferences are the standard example: they are complete and transitive but violate continuity, and are therefore not encodable in a standard utility function (i.e. if the utility function is required to be real-valued; I confess I don't know enough about surreals/hyperreals etc. to know whether they will allow a representation).

Comment author: Cyan 25 June 2009 12:42:16AM *  0 points [-]

I'd heard that mentioned before around these parts, but I didn't recall it because I don't really understand it. I think I must be making a false assumption, because I'm thinking of lexicographic ordering as the ordering of words in a dictionary, and the function that maps words to their ordinal position in the list ought to qualify. Maybe the assumption I'm missing is a countably infinite alphabet? English lacks that.

Comment author: conchis 25 June 2009 01:06:50AM 0 points [-]

The wikipedia entry on lexicographic preferences isn't great, but gives the basic flavour:

Lexicographic preferences (lexicographical order based on the order of amount of each good) describe comparative preferences where an economic agent infinitely prefers one good (X) to another (Y). Thus if offered several bundles of goods, the agent will choose the bundle that offers the most X, no matter how much Y there is. Only when there is a tie of Xs between bundles will the agent start comparing Ys.
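
As a toy sketch (bundles are just (X, Y) pairs; the quantities are invented), the comparison rule ranks bundles without ever producing a single real-valued score:

    def lex_prefers(bundle_a, bundle_b):
        # Lexicographic preference: compare amounts of X first,
        # and only break ties by comparing amounts of Y.
        xa, ya = bundle_a
        xb, yb = bundle_b
        if xa != xb:
            return xa > xb
        return ya > yb

    # (3, 0) is preferred to (2, 1000): no amount of Y makes up for less X.
    assert lex_prefers((3, 0), (2, 1000))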

Comment author: Cyan 25 June 2009 03:31:56AM *  0 points [-]

That entry says,

...the classical example of rational preferences that are not representable by a utility function, if amounts can be any non-negative real value.

So my intuition above was not correct -- an uncountably infinite alphabet is what's required.

Comment author: timtyler 24 June 2009 08:38:18PM -1 points [-]

Intransitive preferences don't mean that you can't describe an agent's actions with a utility function. So what if an agent prefers A to B, B to C and C to A? It might mean they will drive in circles and waste their energy - but it doesn't mean you can't describe their preferences with a utility function. All it means is that their utility function will not be as simple as it could be.

Comment author: Cyan 24 June 2009 09:15:42PM 2 points [-]

In the standard definition, the domain of the utility function is the set of states of the world and the range is the set of real numbers; the preferences among states of the world are encoded as inequalities in the utility of those states. I read your comment as asserting that there exist real numbers a, b, c, such that a > b, b > c, and c > a. I conclude that you must have something other than the standard definition in mind.
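
As a toy illustration of that definition: any assignment of reals automatically yields transitive preferences, so a strict cycle is unrepresentable.

    # Preferences read off a real-valued utility function inherit
    # the transitivity of > on the real numbers.
    u = {"A": 3.0, "B": 2.0, "C": 1.0}

    def prefers(x, y):
        return u[x] > u[y]

    assert prefers("A", "B") and prefers("B", "C") and prefers("A", "C")
    # No assignment of reals can satisfy u["A"] > u["B"] > u["C"] > u["A"].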

Comment author: timtyler 24 June 2009 09:36:20PM 1 point [-]

If A is Alaska, B is Boston, and C is California, the preferences involve preferring being in Alaska if you are in Boston, preferring being in Boston if you are in California, and preferring being in California if you are in Alaska. The act of expressing those preferences using a utility function does not imply any false statements about the set of real numbers.

Comment author: conchis 25 June 2009 12:22:49AM *  2 points [-]

Preferring A to B means that, given the choice between A and B, you will pick A, regardless of where you currently are (you might be in California but have to leave). This is not the same thing as choosing A over B, contingent on being in B.

You can indeed express the latter set of preferences you describe using a standard utility function, but that's because you've redefined them so that they're no longer intransitive.

Comment author: MichaelBishop 24 June 2009 09:02:55PM 0 points [-]

It's not clear you're contradicting Cyan. You describe the converse of what he describes.

Even if a utility function can be written down which allows intransitive preferences, it's worth noting that transitive preferences are a standard assumption.

Comment author: timtyler 24 June 2009 09:27:54PM 0 points [-]

ISTM that if an agent's preferences cannot be described by a utility function, then it is because the agent is either spatially or temporally infinite - or because it is uncomputable.

Comment author: conchis 24 June 2009 09:01:27PM *  0 points [-]

I'm struggling to see how such a utility function could work. Could you give an example of a utility function that describes the preferences you just set out, and has the implication that u(x)>u(y) <=> xPy?

Comment author: timtyler 24 June 2009 09:24:35PM 0 points [-]

It’s not difficult to code (if A:B,if B:C,if C:A) into a utilitarian system. If A is Alaska, B is Boston, and C is California, that would cause driving in circles.

Comment author: conchis 24 June 2009 11:48:51PM *  0 points [-]

With respect, that doesn't seem to meet my request. Like Cyan, I'm tempted to conclude that you are using a non-standard definition of "utility function".

ETA: Oh, wait... perhaps I've misunderstood you. Are you trying to say that you can represent these preferences with a function that assigns: u(A:B)>u(x:B) for x in {B,C}; u(B:C)>u(x:C) for x in {A,C} etc? If so, then you're right that you can encode these preferences into a utility function; but you've done so by redefining things such that the preferences no longer violate transitivity; so Cyan's original point stands.

Comment author: timtyler 25 June 2009 04:53:19PM -1 points [-]

Cyan claimed some agent's behaviour corresponded to intransitive preferences. My example is the one that is most frequently given as an example of circular preferences. If this doesn't qualify, then what behaviour are we talking about?

What is this behaviour pattern that supposedly can't be represented by a utility function due to intransitive preferences?

Comment author: conchis 25 June 2009 05:20:04PM 1 point [-]

Suppose I am in Alaska. If told I can either stay or go to Boston, I choose to stay. If told I can either stay or go to California, I choose California. If told I must leave for either Boston or California, I choose Boston. These preferences are intransitive, and AFAICT, cannot be represented by a utility function. To do so would require u(A:A)>u(B:A)>u(C:A)>u(A:A).

More generally, it is true that one can often redefine states of the world such that apparently intransitive preferences can be rendered transitive, and thus amenable to a utility representation. Whether it's wise or useful to do so will depend on the context.
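
For the earlier driving-in-circles example, such a redefinition might look like this sketch (outcomes are (destination, current location) pairs; the numbers are arbitrary):

    # Scoring (destination, current location) pairs makes the "circular"
    # travel preferences representable by an ordinary utility function.
    u = {
        ("Alaska", "Boston"): 1.0,          # from Boston, prefer moving to Alaska
        ("Boston", "Boston"): 0.0,
        ("Boston", "California"): 1.0,      # from California, prefer moving to Boston
        ("California", "California"): 0.0,
        ("California", "Alaska"): 1.0,      # from Alaska, prefer moving to California
        ("Alaska", "Alaska"): 0.0,
    }

    def choose(options, current_location):
        # Pick the destination with the highest utility given where you are now.
        return max(options, key=lambda dest: u[(dest, current_location)])

    assert choose(["Alaska", "Boston"], "Boston") == "Alaska"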

Comment author: timtyler 24 June 2009 08:52:48PM *  2 points [-]

Utility maximisation is not really a theory about how humans work. AFAIK, nobody thinks that humans have an internal representation of utility which they strive to maximise. Those that entertain this idea are usually busy constructing a straw-man critique.

It is like how you can model catching a ball with PDEs. You can build a pretty good model like that - even though it bears little relationship to the actual internal operation.

[2011 edit: hmm - the mind actually works a lot more like that than I previously thought!]

Comment author: pjeby 24 June 2009 10:30:57PM 0 points [-]

It is like how you can model catching a ball with PDEs. You can build a pretty good model like that - even though it bears little relationship to the actual internal operation.

It's kind of ironic that you mention PDEs, since PCT actually proposes that we do use something very like an evolutionary algorithm to satisfice our multi-goal controller setups. IOW, I don't think it's quite accurate to say that PDEs "bear little relationship to the actual internal operation."

Comment author: taw 23 June 2009 09:24:14PM 1 point [-]

I think humans can be accurately modelled as expected utility maximizers

I thought so too even as recently as a month ago, but see Post Your Utility Function and If it looks like utility maximizer and quacks like utility maximizer... for pretty strong arguments against this.

Comment author: timtyler 24 June 2009 08:41:18PM 2 points [-]

The arguments in the posts themselves seem unimpressive to me in this context. If there are strong arguments that human actions cannot, in principle, be modelled well by using a utility function, perhaps they should be made explicit.

Comment author: MichaelBishop 24 June 2009 09:16:06PM 0 points [-]

Agreed. Now, if it were possible to write a complete utility function for some person, it would be pretty clear that "utility" did not equal happiness, or anything simple like that.

Comment author: timtyler 24 June 2009 09:51:55PM *  0 points [-]

I tend to think that the best candidate in most organisms is "expected fitness". It's probably reasonable to expect fairly heavy correlations with reward systems in brains - if the organisms have brains.

Comment author: timtyler 24 June 2009 09:05:48PM *  0 points [-]

Agents which can't be modelled by a utility-based framework are:

  • Agents which are infinite;
  • Agents with uncomputable utility functions.

AFAIK, there's no good evidence that either kind of agent can actually exist. Counter-arguments are welcome, of course.

Comment author: MichaelBishop 24 June 2009 09:24:17PM 0 points [-]

Do you have models which explain economics that don't involve individual utility maximization and yet do as well or better? I'm not saying that models of utility maximization are always best; social scientists, including economists, are discovering this. But I do think expected utility maximization is currently the best approach to a large class of problems.

Comment author: timtyler 18 March 2011 09:44:37PM *  -1 points [-]

The second question is: if we're making some artificial utility function for an AI or just to prove a philosophical point, how should that work - and I think your answer is spot on. I would hope that people don't really disagree with you here and are just getting bogged down by confusion about real brains and some map-territory distinctions and importing epistemology where it's not really necessary.

I'm pretty sure that the first reasonably-intelligent machines will work much as illustrated in the first diagram - for engineering reasons: it is so much easier to build them that way. Most animals are wired up that way too - as we can see from their drug-taking behaviour.

Comment author: TsviBT 14 November 2012 08:28:53AM 5 points [-]

A counterexample to the claim "psychologically normal humans (implicitly) have a utility function that looks something like a PDU function":

Your best friend is deathly ill. I give you a choice between Pill A and Pill B.

If you choose Pill A and have your friend swallow it, he will heal - but he will release a pheromone that will leave you convinced for the rest of your life that he died (and you won't interact with him ever again).

If you choose Pill B and swallow it, your friend will die - but you will be convinced for the rest of your life that he has fully healed, and is just on a different planet or something. From time to time you will hallucinate pleasant conversations with him, and will never be the wiser.

No, you can't have both pills. Presumably you will choose Pill A. You do not (only) desire to be in a state of mind where you believe your friend is healthy. You desire that your friend be healthy. You seek the object of your desire, not the state of mind produced by the object of your desire.

My brain has this example tagged as “similar to but not the same as something I’ve read”, but tell me if this is stolen.

Comment deleted 14 November 2012 01:58:02PM [-]
Comment author: wedrifid 14 November 2012 02:02:02PM *  0 points [-]

Comment author: [deleted] 15 November 2012 11:48:36AM *  0 points [-]

How did you do that? There was no reply to that comment when I reloaded the page after retracting it in order to delete it. Are you a ninja or something? :-)

Comment author: wedrifid 15 November 2012 01:47:46PM *  3 points [-]

How did you do that? There was no reply to that comment when I reloaded the page after retracting it in order to delete it. Are you a ninja or something?

Worse, a multitasker. That kind of thing wreaks havoc on race conditions.

I've removed my reply and the associated quote.

Comment author: [deleted] 15 November 2012 04:08:58PM *  0 points [-]

Worse, a multitasker. That kind of thing wreaks havoc on race conditions.

I know... Minutes ago I lost a hand in an online poker game (with fake money, fortunately) as a result of talking to someone else at the same time, for the umpteenth time.

I've removed my reply and the associated quote.

And I've removed the parenthetical in my reply to you.

Comment author: ArisKatsaris 15 November 2012 11:57:09AM *  2 points [-]

How did you do that?

One probably just needs to keep open the browser tab from a time when your post had not yet been deleted...

Comment author: thomblake 23 June 2009 02:00:20PM *  4 points [-]

When I read "PD" here I automatically think "prisoner's dilemma", no matter how many times I go back and reread "perceptual determinism".

ETA: thanks

Comment author: Peter_de_Blanc 23 June 2009 10:46:30PM 1 point [-]

OK, I changed it to PDU.

Comment author: Vladimir_Nesov 23 June 2009 06:24:16PM 0 points [-]

Expected utility maximization seems to be irrelevant to the main point of this article.

Comment deleted 23 June 2009 03:21:06PM [-]
Comment author: RichardKennaway 23 June 2009 04:04:37PM *  0 points [-]

I didn't know you could put pictures in LW posts. How does that work?

Seconded. I did know, having seen them before, but I don't know how. Writing the img tag is easy, the problem is uploading an image to the LW server (as these images have been). For want of a place on LW to make enquiries such as this, could someone post the answer here?

ETA: On the top right, below the masthead, is a link called "WIKI". From experience with other wikis, it is possible that there might be an answer to the question there. But the link does not load for me.

Comment author: Vladimir_Nesov 23 June 2009 05:36:06PM 2 points [-]

In the article editor, you can upload images using "Insert/edit image" tool.

Comment author: Psychohistorian 23 June 2009 06:27:14PM *  -1 points [-]

Edit: Probably skip to the ***, I suspect my original writing was unclear.

This seems to use two different definitions of utility. If utility is defined as direct perceptual experience, the argument fails. If utility is defined more broadly, it does not. If my current utility is determined entirely perceptually, it does not follow that I should try to assess my future utility more holistically.

The real question seems to be whether the broader definition of utility actually accounts for how we feel, how we live life, or what we actually maximize for.

***Edit: I may have expressed my point poorly. Or I may be making a bad point. Let me try again.

I simply fail to see how this disproves or invalidates PDU. If PDU is correct, then the larger box in the second diagram is not valid input into a utility function. Obviously, if you allowed the larger box to count as input in the utility function, then, yes, PDU is incorrect under that condition. But PDU seems to be defined as the larger box not counting, so this is trivial.

This is not to say PDU is in any sense necessary or preferable, but I just don't see where PDU gets refuted. It seems to define a domain for a utility function. I don't see how domains can be universally refuted.

A utility function could operate off of the number of paperclips perceived to be in one's field of vision, and it would be a perfectly coherent utility function that doesn't really care about the outside universe.

Comment author: Bongo 23 June 2009 11:21:50AM *  0 points [-]

The two pictures are identical.

Edit: My bad, they're not.

Comment author: Vladimir_Golovin 23 June 2009 12:05:54PM *  3 points [-]

They are not. Look at the origin of the arrows on the left.

(But yes, the difference was hard to spot. Peter, how about making these arrows red, in both pictures?)