
IlyaShpitser comments on What do professional philosophers believe, and why? - Less Wrong Discussion

31 Post author: RobbBB 01 May 2013 02:40PM




Comment author: IlyaShpitser 02 May 2013 08:50:53AM *  0 points

You are not listening to me. Suppose this fellow comes by and offers to play a game with you. He asks you to punch him in the face, and he is not allowed to dodge or push your hand. If you hit him, he gives you 1000 dollars; if you miss, you give him 1000 dollars. He also informs you that he has a success rate of over 90% playing this game with randomly sampled strangers. He can show you videos of previous games, etc.

This game is not a philosophical contrivance. There are people who can do this here in physical reality where we both live.

Now, what is the right reaction here? My point is that if your reaction is simply not to play, you are giving up too soon. Refusing to play assumes a certain model of the situation and leaves it there. In fact, all models are wrong, and there is much to be learned, e.g. about how punching works, by digging deeper into how this fellow wins this game. To not play and leave it at that is incurious.

Certainly the success rate this fellow has with the punching game has nothing to do with any grand philosophical statement about the lack of physical volition by humans.

Learning about how punching works, rather than winning 1000 dollars, is the entire point of this game.


My answer to Newcomb's problem is to one-box if and only if Omega is not defeatable, and to two-box in a way that defeats Omega otherwise. Omega can be undefeatable only if certain things hold: for example, that it is possible to fully simulate in physical reality a given human's decision process at a particular point in time, and to have this simulation be "referentially transparent."

edit: fixed a typo.
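The conditional strategy above can be sketched in a few lines of code. This is my own illustration, not the commenter's; the payoff figures assume the standard Newcomb setup ($1,000,000 in the opaque box, $1,000 in the transparent one), and "defeating" Omega is simply stipulated to mean two-boxing while the opaque box is still full.

```python
# Sketch of the conditional strategy: two-box only when Omega can
# actually be beaten, otherwise one-box. (Illustrative assumption:
# omega_is_defeatable is known to the agent in advance.)

def choose(omega_is_defeatable):
    """Return the action under the stated rule."""
    return "two-box" if omega_is_defeatable else "one-box"

def best_case_payoff(omega_is_defeatable):
    # Two-boxing against a defeated Omega takes both full boxes;
    # one-boxing against an undefeatable Omega takes the predicted million.
    if omega_is_defeatable:
        return 1_000_000 + 1_000
    return 1_000_000

print(choose(True), best_case_payoff(True))    # two-box 1001000
print(choose(False), best_case_payoff(False))  # one-box 1000000
```

The interesting work, of course, is hidden in deciding whether Omega is defeatable at all, which is exactly where the next reply pushes back.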

Comment author: arundelo 02 May 2013 02:53:10PM 2 points

If you hit him, he gives you 1000 dollars, if you miss, he gives you 1000 dollars.

There is a typo here.

Comment author: pjeby 02 May 2013 11:51:19PM 1 point

My answer to Newcomb's problem is to one-box if and only if Omega is not defeatable and two-box in a way that defeats Omega otherwise

But now you've laid out your decision-making process, so all Omega needs to do now is to predict whether you think he's defeatable. ;-)

In general, I expect Omega could actually be implemented just by being able to tell whether somebody is likely to overthink the problem, and if so, predict they will two-box. That might be sufficient to get better-than-chance predictions.

To put it yet another way: if you're trying to outsmart Omega, that means you're trying to figure out a rationalization that will let you two-box... which means Omega should predict you'll two-box. ;-)
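The "cheap Omega" idea above can be made concrete with a toy simulation. This is a sketch of my own under the comment's assumption: the predictor ignores everything except whether the agent tries to outsmart it, and predicts two-boxing for anyone who does.

```python
# Toy Newcomb simulation with pjeby's heuristic predictor (an
# illustrative assumption, not a real implementation of Omega):
# anyone who tries to outsmart Omega is predicted to two-box.

def omega_predicts_two_box(agent):
    # Agents who try to outsmart Omega are rationalizing their way
    # toward two-boxing, so predict "two-box" for exactly those agents.
    return agent["tries_to_outsmart"]

def payoff(agent):
    """Standard payoffs: the opaque box holds $1,000,000 only if Omega
    predicted one-boxing; the transparent box always holds $1,000."""
    opaque = 0 if omega_predicts_two_box(agent) else 1_000_000
    return opaque + (1_000 if agent["two_boxes"] else 0)

# A straightforward one-boxer versus a would-be outsmarter who two-boxes.
one_boxer = {"tries_to_outsmart": False, "two_boxes": False}
outsmarter = {"tries_to_outsmart": True, "two_boxes": True}

print(payoff(one_boxer))   # 1000000
print(payoff(outsmarter))  # 1000
```

Under these (deliberately loaded) assumptions the heuristic predictor is never wrong about either agent, which is the comment's point: a crude behavioral cue may be enough to get better-than-chance predictions without any deep simulation.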