conchis comments on Post Your Utility Function - Less Wrong

Post author: taw 04 June 2009 05:05AM

Comment author: pjeby 04 June 2009 10:16:24PM 1 point

(FWIW, I was only claiming 1.) I'm fairly sympathetic to 2(a), although I would have thought we could get better at it with the right training. I can see how 2(b) could be a problem, but I guess I'm not really sure (i) that akrasia is always an issue, and (ii) why (assuming we could overcome 2(a)) we couldn't decide mathematically, and then figure out how to "own" the decision afterwards.

To own it, you'd need to not mathematically decide; the math could only ever be a factor in your decision. There's an enormous gap between, "the math says do this, so I guess I'll do that", and "after considering the math, I have decided to do this." The felt-experience of those two things is very different, and it's not merely an issue of using different words.

Regarding getting better at making decisions based on mathematics, I think perhaps you miss my point. For humans, the process by which decision-making is done has consequences for how it's implemented, and for the person's experience and satisfaction regarding the decision itself. See more below...

(This seems to have worked for me, at least; and stopping to do the math has sometimes stopped me "owning" the wrong decision, which can be worse than half-heartedly following through on the right one.)

I'd like to see an actual, non-contrived example of that. Mostly, my experience is that people are generally better off with a 50% plan executed 100% than a 100% plan executed 50%. It's a bit of a cliche -- one that I also used to be skeptical/cynical about -- but it's a cliche because it's true. (Note also that in the absence of catastrophic failure, the worst outcome of a bad plan is that you learn something, and you still usually make some progress towards your goals.)

It's one of those places where in theory there's no difference between theory and practice, but in practice there is. We just think differently when we're considering something than when we're committed to it -- our brains highlight different perceptions and memories for our attention, so much so that it seems like all sorts of fortunate coincidences are coming our way.

Our conscious thought process in System 2 is unchanged, but something on the System 1 level operates differently with respect to a decision that's passed through the full process.

I used to be skeptical about this, before I grasped the System 1/System 2 distinction (which I used to call the "you" (S2) vs. "yourself" (S1) distinction). I assumed that I could make a better plan before deciding to do something or taking any action, and refused to believe otherwise. Now I try to plan just enough to get S1 buy-in, and start taking action so I can get feedback sooner.

Comment author: conchis 04 June 2009 11:47:04PM * 1 point

the math could only ever be a factor in your decision.

Sure. I don't think this is inconsistent with what I was suggesting, which was really just that the math could start the process off.

For humans, the process by which decision-making is done has consequences for how it's implemented, and for the person's experience and satisfaction regarding the decision itself.

All of which I agree with; but again, I don't see how this rules out learning to use math better.

Mostly, my experience is that people are generally better off with a 50% plan executed 100% than a 100% plan executed 50%.

Fair enough. The examples I'm thinking of typically involve "owned" decisions that are more accurately characterised as 0% plans (i.e. do nothing) or -X% plans (i.e. do things that are actively counterproductive).

Now I try to plan just enough to get S1 buy-in, and start taking action so I can get feedback sooner.

  1. How do you decide what to get S1 to buy in to?
  2. What do you do in situations where feedback comes too late (long-term investments with distant payoffs) or never (e.g. ethical decisions where the world will never let you know whether you're right or not)?

P.S. Yes, I'm avoiding the concrete example request. I actually have a few, but they'd take longer to write up than I have time available at the moment, and involve things I'm not sure I'm entirely comfortable sharing.

Comment author: pjeby 05 June 2009 12:49:29AM 0 points

How do you decide what to get S1 to buy in to?

I already explained: you select options by comparing their positive traits. The devil is in the details, of course, but as you might imagine I do entire training CDs on this stuff. I've also written a few blog articles about this in the past.

What do you do in situations where feedback comes too late (long-term investments with distant payoffs) or never (e.g. ethical decisions where the world will never let you know whether you're right or not)?

I don't understand the question. If you're asking how I'd know whether I made the best possible decision, I wouldn't. Maximizers do very badly at long-term happiness, so I've taught myself to be a satisficer. I assume that the decision to invest something for the long term is better than investing nothing, and that regarding an ethical decision I will know by the consequences and my regrets or lack thereof whether I've done the "right thing"... and I probably won't have to wait very long for that feedback.
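The maximizer/satisficer distinction pjeby draws can be made concrete as two decision rules: a maximizer evaluates every option and picks the best, while a satisficer stops at the first option that clears a "good enough" threshold. A minimal sketch, with option names, scores, and the threshold all invented for illustration (none of them come from the comment):

```python
# Two decision rules: maximizing vs. satisficing.
# All option names and scores below are hypothetical.

def maximize(options, score):
    """Evaluate every option and return the single highest-scoring one."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Return the first option whose score clears the threshold,
    without evaluating the rest; None if nothing clears the bar."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None

options = ["do nothing", "index fund", "day trading", "start a business"]
scores = {"do nothing": 0, "index fund": 7, "day trading": 3, "start a business": 6}

best = maximize(options, scores.get)        # examines all four options
ok = satisfice(options, scores.get, 5)      # stops as soon as one clears 5
```

The satisficer's advantage, per the comment, isn't computational -- it's that stopping at "good enough" avoids the open-ended regret of wondering whether some unexamined option was better.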