
Comment author: David_Allen 14 March 2011 09:52:33PM 10 points [-]

"Utilizing TDT gave me several key abilities that I previously lacked. The most important was realizing that what I chose now would be the same choice I would make at other times under the same circumstances."

This is similar to the mind hack I am working on to bypass my own hyperbolic discounting.

I assume that I will always make the same choice in similar circumstances. I find that this is a very good approximation of my actual behavior.

I determine the potential consequences of the alternatives in relation to my goals. Sometimes it helps me to specify the consequences in a way that captures an opportunity cost. For example, instead of the cost in dollars, I'll consider the cost in terms of new tires for my truck.

I decide what to do -- treating the consequences as though they will occur immediately. In practice I only focus on the top one or two consequences for each alternative -- based on my current value weighting.

For example, every morning at work I am tempted by the pile of donuts in my office's cafeteria.

If I ate a donut every day, in a year I could gain an extra 13 pounds (50 work weeks * 5 days per week * 180 calories per donut / 3500 calories per pound).

These donuts would cost me about $190 (50 work weeks * 5 days per week * 0.75 dollars per donut).

I could consider more consequences, but these are enough. I don't want to pay $190 and gain 13 lbs of weight today -- just for the enjoyment of the donuts. In fact I would probably pay $190 just to lose 13 lbs right now; forget the donuts.
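In code, that back-of-the-envelope arithmetic is just the following sketch (the per-donut calorie and price figures are the rough estimates above, not measured values):

```python
# Yearly consequences of a daily workday donut, using the estimates above.
WORK_DAYS = 50 * 5        # 50 work weeks * 5 days per week
CAL_PER_DONUT = 180       # rough estimate
PRICE_PER_DONUT = 0.75    # dollars, rough estimate
CAL_PER_POUND = 3500      # the usual back-of-the-envelope conversion

pounds_gained = WORK_DAYS * CAL_PER_DONUT / CAL_PER_POUND
dollars_spent = WORK_DAYS * PRICE_PER_DONUT

print(f"~{pounds_gained:.0f} lbs and ~${dollars_spent:.0f} per year")
# -> ~13 lbs and ~$188 per year
```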

When I started to implement this approach I discovered that I could engage in automatic behavior that ran counter to my choice. For example, I would choose not to buy the donut, only to get up and walk to the cafeteria anyway. I would reaffirm the choice and yet still select a donut and pay for it. This behavior had almost an alien-hand quality to it.

To break those automatic behaviors I found that I could simply stop and refuse to do anything that wasn't in keeping with my intellectual choice; I would even close my eyes. Every time I had an "urge" and would start to do or to think something, I would stop and check it against my current goal; if it didn't match, I would refuse to continue. With repetition this replaced the negative automatic behavior with positive behavior.

Comment author: LauraABJ 15 March 2011 03:52:57PM 4 points [-]

It seems to me that one needs to place a large amount of trust in one's future self to implement such a strategy. It also requires that you be able to predict your future self's utility function. If you have a difficult time predicting what you will want and how you will feel, it becomes difficult to calculate the utility of any given precommitment.

For example, I would be unconvinced that deciding to eat a donut now means that I will eat a donut every day, or that not eating a donut now means I will not eat a donut every day. Knowing that I want a donut now and will be satisfied with that seems like an immediate win, while I do not know that I will be fat later. To me this seems like trading a definite win for a definite loss plus a potential bigger win. Also, it is not clear that there wouldn't be other effects. Not eating the donut now might make me dissatisfied and want to eat twice as much later in the day to compensate.

If I knew exactly what the effects of action EAT DONUT vs. NOT EAT DONUT were (including mental duress, alternative pitfalls to avoid, etc.), then I would be better able to pick a strategy. The more predictable you are, the more you can plan a strategy that makes sense in the long term. In the absence of this information, most of us just 'wing it' and do what seems best at the given moment. It would seem that deciding to be a TDT agent is deciding to always be predictable in certain ways. But that also requires trusting that future you will want to stick to that decision.
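A toy model of that predictability point, assuming (purely for illustration, not from the comment) that each future day's choice matches today's choice only with some probability p:

```python
# How much leverage does today's donut choice have over the rest of the
# year if the future self only follows it with probability p?
def expected_donut_days(eat_today: bool, p: float, future_days: int = 249) -> float:
    today = 1.0 if eat_today else 0.0
    # Each future day matches today's choice with probability p, else flips.
    per_day = p * today + (1.0 - p) * (1.0 - today)
    return today + future_days * per_day

for p in (1.0, 0.75, 0.5):
    print(f"p={p}: eat -> {expected_donut_days(True, p):.1f} donut days, "
          f"abstain -> {expected_donut_days(False, p):.1f}")
# p=1.0: today's choice determines the whole year (250 vs 0 donut days).
# p=0.5: it barely matters (~125 days either way) -- the less predictable
# you are, the less a "timeless" choice today actually buys you.
```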

Comment author: sketerpot 18 February 2011 08:47:07PM *  8 points [-]

I've noticed a very specific feeling -- a conscious decision to stop fretting about how badly my current situation could go wrong, and to genuinely be calm and composed, focusing entirely on the situation itself. It's hugely useful, but I don't know how it works, or how to teach someone else to do it. I think it's this ability that you're talking about.

And you're right that it transfers to other domains. I once saw a guy with this ability step on a nail. It went right through his shoe and into the sole of his foot. After a few seconds of shouting, he calmed way down, sat down, and removed his shoe. It was pretty damn bloody, and several people around him started freaking out. He began talking in a slow, confident voice to try to calm them down, and then asked them to fetch some bandages and antiseptic, while he used his sock to stanch the immediate bleeding. The guy with the bleeding wound was the one with the most level head!

If anybody can figure out a repeatable way to instill this anti-freakout reflex in someone, that would be potentially life-saving.

In response to comment by sketerpot on Ability to react
Comment author: LauraABJ 21 February 2011 07:47:26AM 3 points [-]

I know that feeling, but I don't know how conscious it is. Basically, when the outcome matters in a real, immediate way and is heavily dependent on my actions, I get calm and go into 'I must do what needs to be done' mode. When my car lost traction in the rain and spun on the highway, I probably saved my life by reasoning out how best to get control of it, pumping the brake, and getting it into a clearing away from other vehicles/trees, all within a time frame that was under a minute. Immediately afterwards the thoughts running through my head were not 'Oh fuck, I could have died!' but 'How could I have handled that better?' and 'Oh fuck, I think the car is trashed.' It was only after I climbed out of the car that I realized I was physically shaking.

Likewise, when a man collapsed at synagogue after most people had left (there were only 6 of us) and hit his head on the table, leaving a not-unimpressive pool of blood on the floor, I immediately went over to him, checked his vitals, and declared that someone should call an ambulance. The other people just stood around looking dumbfounded, and it turned out the problem was that no one had a cell phone on Saturday, so I called and was already giving the address by the time the man's friend realized there was something wrong and began screaming.

Doing these things did not feel like a choice. They were the necessary next action and so I did them. Period. I don't know how to describe that. "Emergency Programming"?

Comment author: LauraABJ 08 February 2011 06:34:01AM 12 points [-]

Ok -- folding a fitted sheet is really fucking hard! I don't think that deserves to be on that list, since it really makes no difference whatsoever in life whether you properly fold a fitted sheet or just kinda bundle it up and stuff it away. Not being able to deposit a check, mail a letter, or read a bus schedule, on the other hand, can get you in trouble when you actually need to. Here's to not caring about linen care!

Comment author: ata 06 February 2011 01:38:05AM *  2 points [-]

"I think Newcomb's problem would be more interesting if the 1st box contained 1/2 million and the 2nd box contained 1 million, and Omega was only right, say, 75% of the time... See how fast answers start changing. What if Omega thought you were a dirty two-boxer and left box B empty? Then you would be screwed if you one-boxed! Try telling your wife that you made the correct 'timeless decision theoretical' answer when you come home with nothing."

You can't change the form of the problem like that and expect the same answer to apply! If, when you two-box, Omega has a 25% chance of misidentifying you as a one-boxer, and vice versa, then you can use that in a normal expected utility calculation.

If you one-box, you have a 75% chance of getting $1 million, 25% nothing; if you two-box, 75% $.5 million, 25% $1.5 million. With linear utility over money, one-boxing and two-boxing are equivalent (expected value: $750,000), and given even a slightly risk-averse dollars->utils mapping, two-boxing is the better deal. (I don't think TDT disagrees with that reasoning...)
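A sketch of that calculation, with a square-root curve standing in for "slightly risk-averse" (any concave dollars-to-utils mapping gives the same qualitative result):

```python
import math

# 75%-accurate predictor: box A always holds $0.5M, box B holds $1M
# iff Omega predicted one-boxing.
ev_one_box = 0.75 * 1_000_000 + 0.25 * 0
ev_two_box = 0.75 * 500_000 + 0.25 * 1_500_000
print(ev_one_box, ev_two_box)  # 750000.0 750000.0 -- identical in dollars

# With a concave (risk-averse) utility function, two-boxing pulls ahead,
# because it guarantees at least $0.5M:
u = math.sqrt
eu_one_box = 0.75 * u(1_000_000) + 0.25 * u(0)
eu_two_box = 0.75 * u(500_000) + 0.25 * u(1_500_000)
print(round(eu_one_box), round(eu_two_box))  # 750 vs 837
```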

Comment author: LauraABJ 06 February 2011 03:42:19AM 1 point [-]

That's kind of my point -- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend to involve not 'perfect' predictors but other flawed agents. The decision to cooperate or not cooperate is thus dependent on the calculated utility of doing so.

Comment author: ShardPhoenix 01 February 2011 11:31:34AM *  2 points [-]

It seems to me that if you find yourself having a choice, you should two-box. If the premise is true then you probably won't feel like you have a choice, and your choice will be to one-box.

I guess you were selected by Prometheus :).

edit: this is related to the idea about going back in time and killing your grandfather. Either this is possible, or it's not. Either way you can't erase yourself and end up with the universe in an inconsistent state.

edit2: In other words, either the premise is impossible, or most people will one-box regardless of any recommendations or stratagems devised here or elsewhere.

edit3: I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand.

Comment author: LauraABJ 06 February 2011 12:56:07AM *  0 points [-]

"I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand."

Agreed. It would be like opening the first box, finding the million dollars, and then having someone explain Newcomb's problem to you as you consider whether or not to open the second. My thought would be "Ha! Omega was WRONG!!!!", and I'd be laughing as I dove into the second box.

edit: Because no contract was made between TDT agents before the first box was opened, there seems to be no reason to honor a contract drawn up only afterwards.

Comment author: LauraABJ 06 February 2011 12:33:14AM 1 point [-]

Ok, so as I understand timeless decision theory, one wants to honor the precommitments one would have made if the outcome actually depended on the answer, regardless of whether or not the outcome actually does depend on it. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless decision theoretical agents (including your future selves), and therefore big wins can be had all around, especially when trying to predict your own future behavior.

So, if you buy the idea that there are multiple universes, and multiple instantiations of this problem, and you somehow care about the results in these other universes, and your actions indicate probabilistically how other instantiations of your predicted self will act, then by all means, one-box on problem #1.

However, if you do NOT care about other universes, and believe this is in fact a single instantiation, and you are not totally freaked out by the idea of disobeying the desires of your just-revealed creator (or actually get some pleasure out of that idea), then please two-box. You as you are in this universe will NOT unexist if you do so. You know that going into it. So, calculate the utility you gain from getting a million dollars this one time vs the utility you lose from being an imperfect timeless decision theoretical agent. Sure, there's some loss, but at a high enough payout it becomes a worthy trade.
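That trade can be written down directly; the cost figures below are invented for illustration, since the comment gives no numbers:

```python
# Toy version of "at a high enough payout it becomes a worthy trade":
# two-box iff the one-time gain exceeds the (hypothetical) utility cost
# of being an imperfect timeless decision theoretical agent.
def two_box_is_worth_it(payout_utils: float, predictability_loss_utils: float) -> bool:
    return payout_utils > predictability_loss_utils

print(two_box_is_worth_it(1_000_000, 5_000_000))  # False -- keep the precommitment
print(two_box_is_worth_it(1_000_000, 100_000))    # True -- take the money
```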

I think Newcomb's problem would be more interesting if the 1st box contained 1/2 million and the 2nd box contained 1 million, and Omega was only right, say, 75% of the time... See how fast answers start changing. What if Omega thought you were a dirty two-boxer and left box B empty? Then you would be screwed if you one-boxed! Try telling your wife that you made the correct 'timeless decision theoretical' answer when you come home with nothing.

Comment author: LauraABJ 05 February 2011 07:34:22PM 5 points [-]

This is a truly excellent post. You bring the problem we are dealing with to within a completely graspable inferential distance and set up a mental model that essentially asks us to think like an AI, and it succeeds. I haven't read anything that has made me feel the urgency of the problem as much as this has in a really long time...

Comment author: wedrifid 23 January 2011 05:23:41AM 3 points [-]

"How many dates until....?" And I stared at him for a moment and said, "What makes you think there would be a second if the first didn't go so well?"

By the ellipsis do you mean 'sex', and indicate that lack of it on the first date constitutes a failure? (Good for you if you know what you want!)

Comment author: LauraABJ 23 January 2011 05:26:44AM 3 points [-]

Yes.

Comment author: wedrifid 23 January 2011 05:05:48AM 1 point [-]

"But yeah, there were a couple of months there when I thought..."

A couple of months. Even that is a little unusual. :)

Comment author: LauraABJ 23 January 2011 05:13:53AM 3 points [-]

This is true. We were (and are) in the same social group, so I didn't need to go out of my way for repeated interaction. Had I met him once and he failed to pick up my signals, then NO, we would NOT be together now... This reminds me of a conversation I had with Silas, in which he asked me, "How many dates until....?" And I stared at him for a moment and said, "What makes you think there would be a second if the first didn't go so well?"

Comment author: LauraABJ 23 January 2011 05:06:58AM 10 points [-]

Self-help usually fails because people are terrible at identifying what their actual problems are. Even when they are told! (Ahh, sweet, sweet denial.) As a regular member of the (increasingly successful) OB-NYC meetup, I have witnessed a great deal of 'rationalist therapy,' and frequently we end up talking about something completely different from what the person originally asked for help with (myself included). The outside view of other people (preferably rationalists) is required to move forward on the vast majority of problems. We should also not underestimate the importance of social support and social accountability in general as positive motivating factors. Another reason that self-help might fail is that the people reading these particular techniques are trying to help themselves by themselves. I really hope others from this site take the initiative in forming supportive groups, like the one we have running in NYC.
