SilasBarta comments on My Fundamental Question About Omega - Less Wrong

Post author: MrHen 10 February 2010 05:26PM




Comment author: MrHen 10 February 2010 08:30:57PM 5 points

I don't know how to respond to this or Morendil's second comment. I feel like I am missing something obvious to everyone else but when I read explanations I feel like they are talking about a completely unrelated topic.

Things like this:

You seem to be confused about free will. Keep reading the Sequences and you won't be.

Confuse me because as far as I can tell, this has nothing to do with free will. I don't care about free will. I care about what happens when a perfect predictor enters the room.

Is such a thing just completely impossible? I wouldn't have expected the answer to this to be Yes.

If you do know what the prediction is, then the way in which you react to that prediction determines which prediction you'll hear. For example, if I walk up to someone and say, "I'm good at predicting people in simple problems, I'm truthful, and I predict you'll give me $5," they won't give me anything. Since I know this, I won't make that prediction. If people did decide to give me $5 in this sort of situation, I might well go around making such predictions.
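The quoted point can be made concrete: a truthful predictor can only announce predictions that remain true after the listener hears them. A minimal sketch (the listener's policy below is hypothetical, chosen to match the $5 example):

```python
# The listener's policy: what they actually do after hearing a prediction.
# Hypothetical rule matching the example: they refuse whenever told
# "you'll give me $5", and otherwise happen to conform.
def respond(prediction):
    if prediction == "gives $5":
        return "gives nothing"   # hearing the prediction changes the act
    return prediction

# A truthful predictor may only announce self-fulfilling predictions:
# ones the listener still fulfills after being told them.
candidates = ["gives $5", "gives nothing"]
announceable = [p for p in candidates if respond(p) == p]

print(announceable)  # ['gives nothing'] -- "gives $5" is filtered out
```

This is why "I predict you'll give me $5" never gets announced: it fails the self-fulfillment check, so a truthful predictor discards it before speaking.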

Okay, yeah, so restrict yourself only to the situations where people will give you the $5 even though you told them the prediction. This is a good example of my frustration. I feel like your response is completely irrelevant. Experience tells me this is highly unlikely. So what am I missing? Some key component to free will? A bad definition of "perfect predictor"? What?

To me the scenario seems to be as simple as: If Omega predicts X, X will happen. If X wouldn't have happened, Omega wouldn't predict X.
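That formulation can be read as treating Omega as a simulator: Omega runs the agent's decision procedure and announces a fixed point, so prediction and outcome cannot diverge. A minimal sketch, assuming the agent is a deterministic function Omega can inspect (the function names and the two-option menu are mine, purely for illustration):

```python
def agent(heard_prediction):
    # A deterministic decision procedure; this one picks "one-box"
    # no matter what it hears. Any deterministic rule would do.
    return "one-box"

def omega(agent_fn):
    # Omega predicts by searching for a fixed point: a prediction the
    # agent still fulfills even after being told it.
    for guess in ("one-box", "two-box"):
        if agent_fn(guess) == guess:
            return guess
    return None  # no stable prediction exists for this agent

prediction = omega(agent)
actual = agent(prediction)
assert prediction == actual  # "If Omega predicts X, X will happen."
```

For a contrarian agent that always does the opposite of what it is told, `omega` returns `None`: no announceable prediction exists, which is exactly the case the parent comment filters out.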

I don't see how including "knowledge of the prediction" into X makes any difference. I don't see how whatever definition of free will you are using makes any difference.

"Go read the Sequences" is fair enough, but I wouldn't mind a hint as to what I am supposed to be looking for. "Free will" doesn't satiate my curiosity. Can you at least tell me why Free Will matters here? Is it something as simple as, "You cannot predict past a free will choice?"

As it is right now, I haven't learned anything other than, "You're wrong."

Comment author: SilasBarta 10 February 2010 09:08:32PM 3 points

I sympathize with your frustration at those who point you to references without adequate functional summaries. Unfortunately, I struggle with some of the same problems you're asking about.

Still, I can point you to the causal map that Eliezer_Yudkowsky believes captures this problem accurately (ETA: That means Newcomb's problem, though this discussion started off on a different one).

The final diagram in this post shows how he views it. He justifies this causal model by the constraints of the problem, which he states here.

it is pretty clear that the Newcomb's Problem setup, if it is to be analyzed in causal terms at all, must have nodes corresponding to logical uncertainty, on pain of violating the axioms governing causal graphs. Furthermore, in being told that Omega's leaving box B full or empty correlates to our decision to take only one box or both boxes, and that Omega's act lies in the past, and that Omega's act is not directly influencing us, and that we have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output, then we're being told in unambiguous terms (I think) to make our own physical act and Omega's act a common descendant of the unknown logical output of our known computation.[italics left off]

Also, here's my expanded, modified network to account for a few other things.

ETA: Bolding was irritating, so I've decided to separately list what his criteria for a causal map are, given the problem statement. (The implication for the causal graph follows each one in parentheses.)

  • Must have nodes corresponding to logical uncertainty (Self-explanatory)

  • Omega's decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)

  • Omega's act lies in the past. (Actions after Omega's act are uncorrelated with actions before Omega's act, once you know Omega's act.)

  • Omega's act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)

  • We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and our computation is the only direct ancestor of our logical output. (These seem to be saying the same thing: an arrow from our computation directly to our logical output.)

  • Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)
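The criteria above pin down a small DAG. A minimal sketch of that structure (the node names are mine, not Eliezer's), with each constraint checked mechanically against the parent sets:

```python
# Each node maps to its set of parents (direct causes).
# The unknown logical output of our known computation is the common
# ancestor of both our physical act and Omega's act.
parents = {
    "computation":    set(),                # our known source code / psychology
    "logical_output": {"computation"},      # unknown output of that computation
    "our_act":        {"logical_output"},   # our choice of boxes
    "omega_act":      {"logical_output"},   # Omega's filling of box B
}

# Only arrow into our logical output comes from our computation.
assert parents["logical_output"] == {"computation"}

# No causal arrow directly from Omega's act to our choice.
assert "omega_act" not in parents["our_act"]

# Box decision and Omega's decision are d-connected: here, they share
# the logical output as a common parent.
assert parents["our_act"] & parents["omega_act"]
```

This is only the skeleton of the final diagram referenced above; the expanded network adds further nodes, but these four suffice to satisfy the listed constraints.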

Comment author: MrHen 10 February 2010 09:17:26PM 1 point

Ah, okay, thanks. I can start reading those, then.