LauraABJ

It seems to me that one needs to place a large amount of trust in one's future self to implement such a strategy. It also requires that you be able to predict your future self's utility function. If you have a difficult time predicting what you will want and how you will feel, it becomes difficult to calculate the utility of any given precommitment. For example, I would be unconvinced that deciding to eat a donut now means that I will eat a donut every day, or that not eating a donut now means I will not eat one every day. Knowing that I want a donut now and will be satisfied with it seems like an immediate win, while I do not know that I will be fat later. To me this seems like trading a definite win for a definite loss plus a potential bigger win. It is also not clear that there wouldn't be other effects: not eating the donut now might make me dissatisfied and want to eat twice as much later in the day to compensate. If I knew exactly what the effects of EAT DONUT vs. NOT EAT DONUT were (including mental duress, alternative pitfalls to avoid, etc.), then I would be better able to pick a strategy. The more predictable you are, the more you can plan a strategy that makes sense in the long term. In the absence of this information, most of us just 'wing it' and do what seems best at the given moment. Deciding to be a TDT agent would seem to be deciding to always be predictable in certain ways, but that too requires trusting that your future self will want to stick to that decision.

I know that feeling, but I don't know how conscious it is. Basically, when the outcome matters in a real, immediate way and is heavily dependent on my actions, I get calm and go into 'I must do what needs to be done' mode. When my car lost traction in the rain and spun on the highway, I probably saved my life by reasoning out how best to get control of it, pumping the brake, and getting it into a clearing away from other vehicles and trees, all within a time frame of under a minute. Immediately afterwards the thoughts running through my head were not 'Oh fuck, I could have died!' but 'How could I have handled that better?' and 'Oh fuck, I think the car is trashed.' It was only after I climbed out of the car that I realized I was physically shaking.

Likewise, when a man collapsed at synagogue after most people had left (there were only six of us) and hit his head on the table, leaving a not unimpressive pool of blood on the floor, I immediately went over to him, checked his vitals, and declared that someone should call an ambulance. The other people just stood around looking dumbfounded, and it turned out the problem was that no one else was carrying a cell phone on Saturday, so I called and was already giving the address by the time the man's friend realized something was wrong and began screaming.

Doing these things did not feel like a choice. Each was the necessary next action, and so I did it. Period. I don't know how to describe that. "Emergency Programming"?

Ok-- folding a fitted sheet is really fucking hard! I don't think that deserves to be on that list, since it makes no difference whatsoever in life whether you properly fold a fitted sheet or just kinda bundle it up and stuff it away. Not being able to deposit a check, mail a letter, or read a bus schedule, on the other hand, can get you in trouble when you actually need to do those things. Here's to not caring about linen care!

That's kind of my point-- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend to involve not 'perfect' predictors but other flawed agents. The decision to cooperate or not is thus dependent on the calculated utility of doing so.

"I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand."

Agreed. It would be like opening the first box, finding the million dollars, and then having someone explain Newcomb's problem to you as you consider whether or not to open the second. My thought would be, "Ha! Omega was WRONG!!!!" as I dove, laughing, into the second box.

edit: Because no contract was made between TDT agents before the first box was opened, there seems to be no reason to honor a contract drawn up afterwards.

Ok, so as I understand timeless decision theory, one wants to honor the precommitments one would have made if the outcome actually depended on the answer, regardless of whether the outcome actually does depend on it. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless-decision-theory agents (including your future selves), and therefore big wins can be had all around, especially when trying to predict your own future behavior.

So, if you buy the idea that there are multiple universes and multiple instantiations of this problem, that you somehow care about the results in those other universes, and that your actions indicate probabilistically how other instantiations of your predicted self will act, then by all means, One Box on problem #1.

However, if you do NOT care about other universes, believe this is in fact a single instantiation, and are not totally freaked out by the idea of disobeying the desires of the creator just revealed to you (or actually get some pleasure out of the idea), then please Two Box. You as you are in this universe will NOT unexist if you do so. You know that going into it. So calculate the utility you gain from getting a million dollars this one time vs. the utility you lose from being an imperfect timeless-decision-theory agent. Sure, there's some loss, but at a high enough payout it becomes a worthy trade.

I think Newcomb's problem would be more interesting if the 1st box contained half a million, the 2nd box contained 1 million, and Omega was only right, say, 75% of the time... See how fast the answers start changing. What if Omega thought you were a dirty two-boxer and left the 2nd box empty? Then you would be screwed if you one-boxed! Try telling your wife that you gave the correct 'timeless decision theory' answer when you come home with nothing.
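For what it's worth, here is a minimal expected-value sketch of that variant, under one reading of the setup: the 1st box always holds the half million, the 2nd box holds the million only if Omega predicted one-boxing, and Omega's 75% accuracy applies symmetrically to both kinds of agent (that last part is my assumption, not spelled out above).

```python
# A rough expected-value sketch of the modified Newcomb's problem described above.
# Assumptions: the 1st box always holds $500k, the 2nd box holds $1M only if Omega
# predicted one-boxing, and Omega's prediction matches your actual choice 75% of the time.

ACCURACY = 0.75          # probability Omega predicts your choice correctly
BOX1 = 500_000           # guaranteed contents of the 1st box
BOX2 = 1_000_000         # contents of the 2nd box, if Omega predicted one-boxing

# One-boxing: you take only the 2nd box, which is full only when Omega was right.
ev_one_box = ACCURACY * BOX2

# Two-boxing: you always get the 1st box, plus the 2nd box whenever Omega was
# wrong (i.e. it mistakenly predicted you would one-box).
ev_two_box = BOX1 + (1 - ACCURACY) * BOX2

print(f"One-box expected value: ${ev_one_box:,.0f}")  # $750,000
print(f"Two-box expected value: ${ev_two_box:,.0f}")  # $750,000
```

With these particular numbers the two strategies come out exactly tied at $750,000, which is one way of seeing how fast the answers start changing as the accuracy and the payoffs move.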

This is a truly excellent post. You bring the problem we are dealing with within a completely graspable inferential distance, and the mental model you set up, which essentially asks us to think like an AI, succeeds. I haven't read anything that has made me feel the urgency of the problem this much in a really long time...

This is true. We were (and are) in the same social group, so I didn't need to go out of my way for repeated interaction. Had I met him once and he had failed to pick up my signals, then NO, we would NOT be together now... This reminds me of a conversation I had with Silas, in which he asked me, "How many dates until....?" And I stared at him for a moment and said, "What makes you think there would be a second if the first didn't go so well?"
