
Punoxysm comments on Open thread, 3-8 June 2014 - Less Wrong Discussion

Post author: David_Gerard 03 June 2014 08:57AM




Comment author: Punoxysm 08 June 2014 05:53:33PM, 0 points

Even if we live in a simulation, I've never heard of anybody being presented with a Newcomblike problem.

Make a coin flip < 7 seconds before deciding.

Comment author: ChristianKl 08 June 2014 08:27:43PM, 0 points

> Make a coin flip < 7 seconds before deciding.

Most people don't flip coins. You can set the rule that flipping a coin counts as picking both boxes.
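The proposed rule can be sketched concretely. This is my own illustration (the names `newcomb_payoff` and `resolve` are hypothetical, not from the thread): standard Newcomb payoffs, plus the stipulation that deciding by coin flip is scored as taking both boxes.

```python
def newcomb_payoff(choice: str, predictor_guess: str) -> int:
    """Standard Newcomb payoffs: box A always holds $1,000; box B holds
    $1,000,000 only if the predictor guessed the player would one-box."""
    box_b = 1_000_000 if predictor_guess == "one" else 0
    if choice == "one":
        return box_b
    return 1_000 + box_b  # both boxes


def resolve(choice: str, used_coin: bool, predictor_guess: str) -> int:
    # ChristianKl's rule: a coin flip counts as picking both boxes,
    # regardless of which way the coin came up.
    effective = "both" if used_coin else choice
    return newcomb_payoff(effective, predictor_guess)


# A perfect predictor that also applies the rule:
print(resolve("one", used_coin=False, predictor_guess="one"))   # 1000000
print(resolve("one", used_coin=True,  predictor_guess="both"))  # 1000
```

Under this rule the coin-flip loophole closes: a predictor that anticipates the flip simply treats you as a two-boxer.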

Comment author: Punoxysm 08 June 2014 09:35:48PM, 0 points

Fine, but most people can notice a brain scanner attached to their heads, and would then realize that the game starts at "convince the brain scanner that you will pick one box". Newcomblike problems reduce to this multi-stage game too.

Comment author: ChristianKl 08 June 2014 10:23:31PM, 0 points

Brain scanners are a technology that's very straightforward to think about. Humans reading other humans is a lot more complicated. People have a hard time accepting that Eliezer won the AI box challenge. "Mind reading" and predicting other people's choices is a task of similar difficulty to the AI box challenge.

Let's take contact improvisation as an illustrative example. It's a dance form without hard rules. If I'm dancing contact improvisation with a woman, she expects me to be in a state where I follow the situation and express my intuition. If I'm in that state and I happen to touch her breast with my arm, that's no real problem. If, on the other hand, I make a conscious decision that I want to touch her breast and act accordingly, I'm likely to creep her out.

There are plenty of people in the contact improvisation community whose awareness of other people is good enough to tell the difference.

Another case where decision frameworks matter is diplomacy. A diplomat gets told beforehand how he's supposed to negotiate, and there might be instances where that information leaks.

Comment author: Punoxysm 08 June 2014 11:23:42PM, 0 points

I don't think this contradicts any of my points. Causal decision theory would never tell the State Department to behave as if leaks are impossible. Yet because the leak probability is low, I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn't.

Playing a game against an opponent with an imperfect model of yourself, especially one whose model-building process you understand, does not require a new decision theory.
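The point about imperfect models can be made with ordinary expected-value arithmetic, no new decision theory required. A minimal sketch (my own illustration; `expected_payoff` and the accuracy parameter `p` are assumptions, not from the thread) against a predictor that correctly matches your actual choice with probability p:

```python
def expected_payoff(choice: str, p: float) -> float:
    """Expected dollars under standard Newcomb payoffs, if the
    predictor's guess matches your actual choice with probability p."""
    if choice == "one":
        # Box B ($1,000,000) is filled iff the predictor guessed "one",
        # which happens with probability p when you one-box.
        return p * 1_000_000
    # Both boxes: $1,000 plus box B, which is filled only if the
    # predictor wrongly guessed "one" (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000


# Even a 75%-accurate model makes one-boxing the better bet:
print(expected_payoff("one", 0.75))   # 750000.0
print(expected_payoff("both", 0.75))  # 251000.0
```

Solving 1,000 + (1 - p) * 1,000,000 < p * 1,000,000 shows one-boxing wins whenever p exceeds roughly 0.5005, which is why the predictor's accuracy, not the choice of decision theory, does most of the work here.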

Comment author: ChristianKl 09 June 2014 07:44:00AM, 0 points

> I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn't.

It's possible that the channel through which the diplomatic group internally communicates is completely compromised.