Cyan comments on Post Your Utility Function - Less Wrong

Post author: taw 04 June 2009 05:05AM


Comments (273)


Comment author: Cyan 04 June 2009 10:48:46PM * 1 point

...I'm not really sure why [...] we couldn't decide mathematically, and then figure out how to "own" the decision afterwards.

There's an enormous gap between "the math says do this, so I guess I'll do that" and "after considering the math, I have decided to do this." The felt experience of those two things is very different, and it's not merely an issue of using different words.

One can imagine a person who has committed emotionally to the maxim "shut up and multiply (when at all possible)" and made it an integral part of their identity. For such an individual, the commitment precedes the act of doing the math, and the enormous gap referred to above does not exist.

Comment author: pjeby 05 June 2009 01:10:25AM 1 point

For such an individual, the commitment precedes the act of doing the math, and the enormous gap referred to above does not exist.

If such an individual existed, they would still have the same problem of shifting decisions, unless they also included a commitment to not recalculate before a certain point.

Consider, e.g., Newcomb's problem. If you do the calculation beforehand, you should one-box. But doing the calculation at the actual time means you should two-box.

So, to stick to their commitments, human beings need to precommit to not revisiting the math, which is a big part of my point here.

Your hypothetical committed-to-the-math person is not committed to their "decisions", they are committed to doing what the math says to do. This algorithm will not produce the same results as actual commitment will, when run on human hardware.

To put it more specifically, this person will not get the perceptual benefits of a committed decision for decisions which are not processed through the machinery I described earlier. They will be perceptually tuned to the math, not the situation, for example, and will not have the same level of motivation, due to a lack of personal stake in their decision.

In theory there's no difference between theory and practice, but in practice there is. This is because System 2 is very bad at intuitively predicting System 1's behavior, as we don't have a built-in reflective model of our own decision-making and motivation machinery. Thus, we don't know (and can't tell) how bad our theories are without comparing decision-making strategies across different people.

Comment author: Vladimir_Nesov 05 June 2009 09:49:20AM 1 point

Consider, e.g., Newcomb's problem. If you do the calculation beforehand, you should one-box. But doing the calculation at the actual time means you should two-box.

This is incorrect. You are doing something very wrong if changing the time when you perform a calculation changes the result. That's an important issue in decision theory: being reflectively consistent.

Comment author: pjeby 05 June 2009 03:01:24PM 1 point

This is incorrect. You are doing something very wrong if changing the time when you perform a calculation changes the result. That's an important issue in decision theory: being reflectively consistent.

That's the major point I'm making: that humans are NOT reflectively consistent without precommitment... and that the precommitment in question must be concretely specified, with the degree of concreteness and specificity required being proportional to the degree of "temptation" involved.

Comment author: Vladimir_Nesov 05 June 2009 03:47:25PM 1 point

That may usually be the case, but this is not a law. Certain people could conceivably precommit to being reflectively consistent, to follow the results of calculations whenever the calculations are available.

Comment author: pjeby 05 June 2009 04:55:40PM * 1 point

Certain people could conceivably precommit to being reflectively consistent, to follow the results of calculations whenever the calculations are available.

Of course they could. And they would not get as good results from either an experiential or practical perspective as the person who explicitly committed to actual, concrete results, for the reasons previously explained.

The brain makes happen what you decide to have happen, at the level of abstraction you specify. If you decide in the abstract to be a good person, you will only be a good person in the abstract.

In the same way, if you "precommit to reflective consistency", then reflective consistency is all that you will get.

It is more useful to commit to obtaining specific, concrete, desired results, since you will then obtain specific, concrete assistance from your brain for achieving those results, rather than merely abstract, general assistance.

Edit to add: In particular, note that a precommitment to reflective consistency does not rule out the possibility of one's exercising selective attention and rationalization as to which calculations to perform or observe. This sort of "commit to being a certain kind of person" thing tends to produce hypocrisy in practice, when used in the abstract. So much so, in fact, that it seems to be an "intentionally" evolved mechanism for self-deception and hypocrisy. (Which is why I consider it a particularly heinous form of error to try to use it to escape the need for concrete commitments -- the only thing I know of that saves one from hypocrisy!)

Comment author: Vladimir_Nesov 05 June 2009 04:59:39PM * 0 points

I can't understand you.

Comment author: pjeby 05 June 2009 05:10:29PM * 2 points

A person who decides to be "a good person" will selectively perceive those acts that make them a "good person", and largely fail to perceive those that do not, regardless of the proportions of these events, or whether these events are actually good in their effects. They will also be more likely to perceive as good anything that they already want to do or that benefits them, and they will find ways to consider it a higher good to refrain from doing anything they'd rather not do in the first place.

Similarly, a person who decides to be "reflectively consistent" will not only selectively perceive their acts of reflective consistency, they will also fail to observe the lopsided way in which they apply the concept, nor will they notice how their "reflective consistency" is not, in itself, achieving any other results or benefits for themselves or others.

Brains operate on the level of abstraction you give them, so the more abstract the goal, the less connected to reality the results will be, and the more wiggle room there will be for motivated reasoning and selective perception.

So in theory you can precommit to reflective consistency, but in practice you will only get an illusion of reflective consistency.

(Edit to add: If you're still confused by this, it's probably because you're thinking about thinking, and I'm talking about actual behavior.)

Comment author: conchis 05 June 2009 05:32:02PM * 2 points

I can't speak for Vladimir, but from my perspective, this is much clearer now. Thanks!

(ETA: FWIW, while most of your comments on this post leave me with a sense that you have useful information to share, I've also found them somewhat frustrating, in that I really struggle to figure out exactly what it is. I don't know if this is your writing style, my slow-wittedness, or just the fact that there's a lot of inferential distance between us; but I just thought it might be useful for you to know.)

Comment author: pjeby 05 June 2009 06:13:07PM 1 point

FWIW, while most of your comments on this post leave me with a sense that you have useful information to share, I've also found them somewhat frustrating, in that I really struggle to figure out exactly what it is.

Since I'm trying to rapidly summarize a segment of what Robert Fritz took a couple of books to get across to me ("The Path of Least Resistance" and "Creating"), inferential distance is likely a factor.

It's mostly his model of decisionmaking and commitment that I'm describing, with a few added twists of mine regarding the ranking bit, and the "worst that could happen" part, as well as links from it to the System 1/2 model. (And of course I've been talking about Fritz's idea of the ideal-belief-reality-conflict in other threads, and that relates here as well.)

Comment author: Vladimir_Nesov 05 June 2009 07:09:44PM * -1 points

Basically, our conversation went like this:

You: People can't be reflectively consistent.
Me: Yes they can, sometimes.
You: Of course they can.
Me: I'm confused.
You: Of course people can be reflectively consistent. But only in the dreamland. If you are still confused, it's probably because you are still thinking about the dreamland, while I'm talking about reality.

Comment author: AdeleneDawner 05 June 2009 08:07:17PM 2 points

I think pjeby's point was that reflective consistency is a way of thinking - so if you commit to thinking in a reflectively consistent way, you will think that way when you do think, but you may still wind up not acting on those thoughts every time you would want to, because you're not entirely likely to notice that you need to think them in the first place.

Comment author: pjeby 05 June 2009 07:59:57PM * 0 points

Basically, our conversation went like this:
You: People can't be reflectively consistent.
Me: Yes they can, sometimes.
You: Of course they can.
Me: I'm confused.

No, it went like this:

Me: People can't be reflectively consistent
You: But they can precommit to be
Me: But that won't *actually make them so*
You: But they could precommit to acting as if they were
Me: Of course they can, but it still won't actually make them so.

See also Abraham Lincoln's, "If you call a tail a leg, how many legs does a dog have? Four, because calling a tail a leg doesn't make it so."

Comment author: Cyan 05 June 2009 02:05:03AM * 1 point

Newcomb's problem is a bad example to use here, because it depends on which math the person has committed to, e.g., Eliezer claims to have worked out a general analysis that justifies one-boxing...

They will be perceptually tuned to the math, not the situation, for example, and will not have the same level of motivation, due to a lack of personal stake in their decision.

The personal stake I envision is defending their concept of their own identity. "I will do this because that's the kind of person I am."

Comment author: pjeby 05 June 2009 02:52:24AM 1 point

The personal stake I envision is defending their concept of their own identity. "I will do this because that's the kind of person I am."

Then their perception will be attuned to what kind of person they are, instead of the result. You can't cheat your brain - it tunes in on whatever you've decided your "territory" is, whatever you "own". This is not a generalized abstraction, but a concrete one.

You know how, once you buy a car, you start seeing that model everywhere? That's an example of the principle at work. Notice that it's not that you start noticing cars in general, you notice cars that look like yours. When you "own" a decision, you notice things specifically connected with that particular decision or goal, not "things that match a mathematical model of decision-making". The hardware just isn't built for that.

You also still seem to be ignoring the part where, if your decisions are made solely on the basis of any external data, then your decision is conditional and can change when the circumstances do, which is a bad idea if your real goal or intent is unconditional.

I've already mentioned how a conditional decision based on one's weight leads to stop-and-start dieting, but another good example is when somebody decides to start an exercise program when they're feeling well and happy, without considering what will happen on the days they're running late or feeling depressed. The default response in such cases may be to give up the previous decision, since the conditions it was made under "no longer apply".

What I'm saying is that it doesn't matter what conditions you base a decision on: if it is based solely on conditions, and not on actually going through the emotional decision process to un-conditionalize it, then you don't actually have a commitment to the course of action. You just have a conditional decision to engage in that course until conditions change.

And the practical difference between a commitment and a conditional decision is huge, when it comes to one's personal and individual goals.

Comment author: Cyan 05 June 2009 04:32:36AM 2 points

Thank you for this interesting discussion. Although I posed the "emotionally committed to math" case as a specific hypothetical, many of the things you've written in response apply more generally, so I've got a lot more material to incorporate into my understanding of the pjeby model of cognition. (I know that's a misnomer, but since you're my main source for this material, that's how I think of it.) I'm going to have to go over this exchange more thoroughly after I get some sleep.

Comment author: conchis 05 June 2009 03:53:58PM * 0 points

Of course, there are presumably situations where one's decision should change with the conditions. (I do get that there's a trade-off between retaining the ability to change with the right conditions and opening yourself up to changing with the wrong conditions though.)

Comment author: pjeby 05 June 2009 04:58:20PM 0 points

Of course, there are presumably situations where one's decision should change with the conditions. (I do get that there's a trade-off between retaining the ability to change with the right conditions and opening yourself up to changing with the wrong conditions though.)

The trade-off optimum is usually in making decisions aimed at producing concrete results, while leaving one's self largely free to determine how to achieve those results. But again, the level of required specificity is determined by the degree of conflict you can expect to arise (temptations and frustrations).