Punoxysm comments on Open thread, 3-8 June 2014 - Less Wrong Discussion

3 Post author: David_Gerard 03 June 2014 08:57AM

Comments (153)

Comment author: Punoxysm 08 June 2014 06:45:20AM 3 points [-]

I do not understand - and I mean this respectfully - why anyone would care about Newcomblike problems or UDT or TDT, beyond mathematical interest. An Omega is physically impossible - and if I were ever to find myself in an apparently Newcomblike problem in real life, I'd obviously choose to take both boxes.

Comment author: Kaj_Sotala 08 June 2014 07:23:39AM 4 points [-]

An Omega is physically impossible

I don't think it's physically impossible for someone to predict my behavior in some situation with a high degree of accuracy.

Comment author: Punoxysm 08 June 2014 05:51:55PM 0 points [-]

If I wanted to thwart or discredit pseudo-Omega, I could base my decision on a source of randomness. This brings me out of reach of any real-world attempt at setting up the Newcomblike problem. It's not the same as guaranteeing a win, but it undermines the premise.

Certainly, anybody trying to play pseudo-Omega against a random decider would start losing lots of money until they settled on always keeping box B empty.

And if it's a repeated game where Omega explicitly guarantees it will attempt to keep its accuracy high, choosing only box B emerges as the right choice even under non-TDT theories.
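The arithmetic behind the randomization point can be sketched quickly (a minimal illustration in Python, using the standard $1,000 / $1,000,000 payoffs; the function name is mine):

```python
# Average amount the predictor pays out per game against a decider
# who one-boxes with probability q. Box A always holds $1,000;
# box B holds $1,000,000 only if the predictor chose to fill it.

def expected_payout(q, fill_b):
    one_box = 1_000_000 if fill_b else 0  # decider takes only box B
    two_box = one_box + 1_000             # decider takes both boxes
    return q * one_box + (1 - q) * two_box

# A fair coin-flipper: q = 0.5.
pay_if_filled = expected_payout(0.5, fill_b=True)   # $1,000,500 on average
pay_if_empty  = expected_payout(0.5, fill_b=False)  # $500 on average
```

Against any randomizer, the predictor's cheapest response is indeed to leave box B empty, so the coin-flipper averages only $500 per game.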

Comment author: DanielLC 09 June 2014 08:53:41PM 1 point [-]

If I wanted to thwart or discredit pseudo-Omega, I could base my decision on a source of randomness. This brings me out of reach of any real-world attempt at setting up the Newcomblike problem.

It's not a zero-sum game. Using randomness means pseudo-Omega will sometimes guess wrong, so he loses; but it doesn't mean he'll guess that you'll one-box, so you don't win either. There is no mixed Nash equilibrium. The only Nash equilibrium is to always one-box.

Comment author: ChristianKl 08 June 2014 07:49:42AM 3 points [-]

An Omega is physically impossible

The idea that we live in a simulation is not a physical impossibility.

At the moment, choices can often be predicted seven seconds in advance by reading brain signals.

Comment author: DanielLC 09 June 2014 08:54:38PM 0 points [-]

Source?

How accurate is this prediction?

Comment author: ChristianKl 10 June 2014 07:25:45AM 0 points [-]
Comment author: Punoxysm 08 June 2014 05:53:33PM *  0 points [-]

Even if we live in a simulation, I've never heard of anybody being presented a newcomblike problem.

Make a coin flip < 7 seconds before deciding.

Comment author: ChristianKl 08 June 2014 08:27:43PM 0 points [-]

Make a coin flip < 7 seconds before deciding.

Most people don't flip coins. You can set the rule that flipping a coin is equivalent to picking both boxes.

Comment author: Punoxysm 08 June 2014 09:35:48PM 0 points [-]

Fine, but most people can notice a brain scanner attached to their heads, and would then realize that the game starts at "convince the brain scanner that you will pick one box". Newcomblike problems reduce to this multi-stage game too.

Comment author: ChristianKl 08 June 2014 10:23:31PM 0 points [-]

Brain scanners are a technology that's very straightforward to think about. Humans reading other humans is a lot more complicated. People have a hard time accepting that Eliezer won the AI box challenge. "Mind reading" and predicting other people's choices is a task of similar difficulty to the AI box challenge.

Let's take contact improvisation as an illustrative example. It's a dance form without hard rules. If I'm dancing contact improvisation with a woman, she expects me to be in a state where I follow the situation and express my intuition. If I'm in that state, and that means I touch her breast with my arm, that's no real problem. If, on the other hand, I make a conscious decision that I want to touch her breast and act accordingly, I'm likely to creep her out.

There are plenty of people in the contact improvisation field whose awareness of other people is good enough to tell the difference.

Another case where decision frameworks matter is diplomacy. A diplomat gets told beforehand how he's supposed to negotiate, and there might be instances where that information leaks.

Comment author: Punoxysm 08 June 2014 11:23:42PM *  0 points [-]

I don't think this contradicts any of my points. Causal decision theory would never tell the State Department to behave as if leaks are impossible. Yet because the probability of a leak is low, I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn't.

Playing a game against an opponent with an imperfect model of yourself, especially one whose model-building process you understand, does not require a new decision theory.

Comment author: ChristianKl 09 June 2014 07:44:00AM 0 points [-]

I think any diplomatic group openly published all its internal orders would find itself greatly hampered against others that didn't.

It's possible that the channel through which the diplomatic group internally communicates is completely compromised.

Comment author: David_Gerard 08 June 2014 09:25:28AM 1 point [-]

I believe the application was how a duplicable intelligence like an AI could reason effectively. (Hence TDT thinking in terms of all instances of you.)

Comment author: Punoxysm 08 June 2014 05:53:14PM 0 points [-]

Communication and pre-planning would be a superior coordination method.

Comment author: David_Gerard 08 June 2014 08:35:26PM -1 points [-]

This is assuming you know that you might be just one copy of many, at varying points in a timeline.

Comment author: shminux 08 June 2014 07:23:31AM *  1 point [-]

Do you think that someone can predict your behavior with maybe 80% accuracy? Like, for example, whether you would one-box or two-box, based on what you wrote? And then confidently leave the $1M box empty because they know you'd two-box? And use that fact to win a bet, for example? Seems very practical.
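The 80% figure makes the stakes concrete. A minimal sketch (in Python; function names are mine, payoffs are the standard $1,000 / $1,000,000 setup) of the expected value of each choice against a predictor that matches your actual choice with probability p:

```python
# Expected value of each choice when the predictor matches the
# decider's actual choice with probability p (here p = 0.8).

def ev_one_box(p):
    # Box B is filled exactly when the predictor foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # $1,000 is guaranteed; box B is filled only when the predictor erred.
    return 1_000 + (1 - p) * 1_000_000

p = 0.8
# ev_one_box(0.8) -> $800,000; ev_two_box(0.8) -> about $201,000
```

One-boxing comes out ahead whenever p > 0.5005, so even a fairly noisy predictor is enough to make the problem bite.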

Comment author: Punoxysm 08 June 2014 06:02:34PM 0 points [-]

If I bet $1001 that I'd one-box, I'd have a natural incentive to do so.

However, if the boxes were already stocked and I gain nothing for proving pseudo-Omega wrong, then two-boxing is clearly superior. Otherwise I open one empty box, have nothing, yell at pseudo-Omega for being wrong, get a shrug in response, and go to bed regretting that I'd ever heard of TDT.

Comment author: drethelin 08 June 2014 05:44:47PM 1 point [-]

So as several people said, Omega is probably more within the realm of possibility than you give it credit for, but MORE IMPORTANTLY, Omega is definitely possible for non-humans. As David_Gerard said, the point of this thought exercise is for AI, not for humans. For an AI written by humans, we can know all of its code and predict the answers it will give to certain questions. This means that the AI needs to deal with us as if we are an Omega that can predict the future. For the purposes of AI, you need decision theories that can deal with entities having arbitrarily strong models of each other, recursively. And TDT is one way of trying to do that.
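The "we can run its code" point can be made literal with a toy sketch (in Python; the setup and names are mine, as an illustration): a predictor handed the agent's deterministic decision procedure simply runs it before stocking the boxes.

```python
# A predictor with access to the agent's decision procedure can
# simulate it to decide whether to fill box B.

def predictor_fills_b(agent):
    # Run the agent's code to predict its choice.
    return agent() == "one-box"

def play(agent):
    filled = predictor_fills_b(agent)
    choice = agent()  # the real decision: same deterministic code
    b = 1_000_000 if filled else 0
    return b if choice == "one-box" else b + 1_000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"
# play(one_boxer) -> 1_000_000; play(two_boxer) -> 1_000
```

For a deterministic agent the "prediction" is perfect by construction, which is exactly the situation a human-written AI faces against its programmers.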

Comment author: Punoxysm 08 June 2014 09:45:03PM 0 points [-]

In general, predicting what code does can be as hard as executing the code. But I know that's been considered and I guess that gets into other areas.

Comment author: drethelin 09 June 2014 04:55:39PM 0 points [-]

Even if that's the case, when dealing with AI we more easily have the option of simulation. You can run a program over and over again, and see how it reacts to different inputs.

Comment author: Risto_Saarelma 08 June 2014 07:55:08AM 0 points [-]

I understood that people here mostly do care about them because of mathematical interest. It's a part of the "how can we design an AGI" math problem.