John_Maxwell_IV comments on How does an infovore manage information overload? - Less Wrong

4 Post author: haig 25 August 2009 06:54PM


Comment author: John_Maxwell_IV 25 August 2009 11:13:14PM 0 points [-]
Comment author: SilasBarta 27 August 2009 08:32:46PM *  1 point [-]

I used to ignore Newcomb's problem for exactly that reason, until someone pointed out that there's a mapping to the issue of retaliation. (I called it revenge in the link, but that connotes vigilantism, so retaliation is a better term.) The problem doesn't require an all-knowing superintelligence, just some predictor with a "pretty darn good" chance of correctly guessing what you'll do.

In general, it's applicable to any problem where:

a) Someone else chooses actions based on how they predict you'll act, and they're pretty good at predicting.

b) If the predictor predicts you taking the seemingly dominant strategy, they treat you worse.

c) You have to make a choice after "the die is cast" (i.e. the predictor can't take back their treatment).

Note that in real life, it actually is common for people to a) predict your decisions well, and b) base their treatment of you on that prediction.

ETA: Well, in fairness I should add that life is, shall we say, an iterated game, which takes away a lot of the "die is cast" aspect of it...

Comment author: Douglas_Knight 27 August 2009 08:05:30PM 0 points [-]

Newcomb's problem is widely accepted as being related to the prisoner's dilemma. If you 2-box in Newcomb's problem, you'll never cooperate in (one-shot) PD, which is generally considered to have real-world applications.

Comment author: thomblake 27 August 2009 09:11:45PM *  -1 points [-]

This seems strange to me. It seems that someone sufficiently altruistic or utilitarian would cooperate on a one-shot PD, since it's not a zero-sum game (except in weird hypothetical land), and that choice would have no bearing on what one might choose in Newcomb's problem.

ETA: for some payoff matrices.
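The "for some payoff matrices" caveat can be checked directly: a total-utility maximizer cooperates exactly when mutual cooperation maximizes the summed payoff, i.e. when 2R > T + S in the usual T > R > P > S labeling. A minimal sketch with assumed standard values (T=5, R=3, P=1, S=0):

```python
# Assumed standard PD payoffs: (row player, column player) for each move pair,
# with T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def total_welfare(a, b):
    # Summed payoff across both players -- what a pure utilitarian maximizes.
    pa, pb = PAYOFFS[(a, b)]
    return pa + pb

best = max(PAYOFFS, key=lambda moves: total_welfare(*moves))
print(best, total_welfare(*best))
```

Here mutual cooperation gives a total of 6 versus 5 for the mixed outcomes, so the utilitarian cooperates; with a matrix where 2R < T + S, the same calculation would favor the asymmetric outcomes instead.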

Comment author: Sideways 27 August 2009 08:23:45PM 0 points [-]

Newcomb's problem is applicable to the general class of game-type problems where the other players try to guess your actions. As far as I can tell, the only reason to introduce Omega is to avoid having to deal with messy, complicated probability estimates from the other players.

Unfortunately, in a forum where the idea that Omega could actually exist is widely accepted, people get caught up in trying to predict Omega's actions instead of focusing on the problem of decision-making under prediction.