AllanCrossman comments on How does an infovore manage information overload? - Less Wrong

4 Post author: haig 25 August 2009 06:54PM

Comment author: AllanCrossman 25 August 2009 07:51:23PM 3 points [-]

There's been so much here lately on things like Newcomb and whatnot, we could do with some more normal threads...

Comment author: Vladimir_Nesov 25 August 2009 09:01:18PM 1 point [-]

I agree, but "normal" threads on LW are not supposed to be just normal threads.

Comment author: haig 25 August 2009 10:32:05PM 4 points [-]

The post was supposed to be in the spirit of the many self-improvement posts here regarding akrasia, rationality, etc. It seemed logical that managing your information is an important component alongside the rest of the mental hygiene practices discussed here. If I was mistaken, I apologize.

Comment author: gjm 25 August 2009 11:16:01PM 3 points [-]

There's nothing wrong with the topic. Whether it turns out to be a good LW post probably depends on whether anyone contributes any substantially non-obvious advice.

Comment deleted 26 August 2009 04:43:18AM *  [-]
Comment author: haig 26 August 2009 03:30:58PM 1 point [-]

I agree, and admit laziness on my part for hoping someone else would insightfully reflect on my problem instead of my offering at least the beginnings of a solution to start things off. Ironically, I can't seem to make time to analyze how I can make more time!

Comment author: PlaidX 25 August 2009 09:47:24PM 1 point [-]

I think there should be MORE Newcomb threads! It has very important real-world implications, which are left as an exercise for the reader.

Comment author: John_Maxwell_IV 25 August 2009 11:13:14PM 0 points [-]
Comment author: SilasBarta 27 August 2009 08:32:46PM *  1 point [-]

I used to ignore Newcomb's problem for exactly that reason, until someone pointed out that there's a mapping to the issue of retaliation. (I called it revenge in the link, but that connotes vigilantism, so retaliation is a better term.) The problem doesn't require an all-knowing superintelligence, just some predictor with a "pretty darn good" chance of correctly guessing what you'll do.

In general, it's applicable to any problem where:

a) Someone else chooses actions based on how they predict you'll act, and they're pretty good at predicting.

b) If the predictor predicts you taking the seemingly dominant strategy, they treat you worse.

c) You have to make a choice after "the die is cast" (i.e. the predictor can't take back their treatment).

Note that in real life, it actually is common for people to a) predict your decisions well, and b) base their treatment of you on that prediction.

ETA: Well, in fairness I should add that life is, shall we say, an iterated game, which takes away a lot of the "die is cast" aspect of it...
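The expected-value logic behind "pretty darn good" can be sketched numerically. This is a minimal illustration, not part of the original comment: it assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor who is right with probability p.

```python
# Expected-value comparison for Newcomb's problem with an imperfect
# predictor. Payoffs are the standard illustrative ones: $1,000,000
# in the opaque box (filled iff one-boxing is predicted) and $1,000
# in the transparent box.

BIG, SMALL = 1_000_000, 1_000

def one_box_ev(p):
    # The opaque box is full with probability p (predictor correct).
    return p * BIG

def two_box_ev(p):
    # The opaque box is full only if the predictor wrongly predicted
    # one-boxing (probability 1 - p); the $1,000 is guaranteed.
    return (1 - p) * BIG + SMALL

# One-boxing pulls ahead once p exceeds about 0.5005 -- the predictor
# only needs to be slightly better than chance, not all-knowing.
for p in (0.5, 0.6, 0.9, 0.99):
    print(p, one_box_ev(p), two_box_ev(p), one_box_ev(p) > two_box_ev(p))
```

With these payoffs the break-even accuracy is p = 0.5005, which is why the problem does not require a superintelligent Omega.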

Comment author: Douglas_Knight 27 August 2009 08:05:30PM 0 points [-]

Newcomb's problem is widely accepted as being related to the prisoner's dilemma. If you 2-box in Newcomb's problem, you'll never cooperate in (one-shot) PD, which is generally considered to have real-world applications.

Comment author: thomblake 27 August 2009 09:11:45PM *  -1 points [-]

This seems strange to me. It seems that someone sufficiently altruistic or utilitarian would cooperate on a one-shot PD, since it's not a zero-sum game (except in weird hypothetical land) and that would have no bearing on what choice one might make on Newcomb's.

ETA: for some payoff matrices.

Comment author: Sideways 27 August 2009 08:23:45PM 0 points [-]

Newcomb's problem is applicable to the general class of game-type problems where the other players try to guess your actions. As far as I can tell, the only reason to introduce Omega is to avoid having to deal with messy, complicated probability estimates from the other players.

Unfortunately, in a forum where the idea that Omega could actually exist is widely accepted, people get caught up in trying to predict Omega's actions instead of focusing on the problem of decision-making under prediction.

Comment author: Larks 25 August 2009 10:33:08PM -1 points [-]

After all, working them out yourself is equivalent to oneboxing.