magfrump comments on How can we compare decision theories? - Less Wrong

Post author: bentarm 18 August 2010 01:29PM

Comment author: magfrump 19 August 2010 06:08:57AM 0 points

But you don't have absolute knowledge of Omega; you have a probability estimate of whether ve is, say, omniscient or merely stalking you on the internet, and, for that matter, of whether ve even has a million dollars to put in the other box.

The sort of Newcomb-like (or Kavka-like) problem you might actually run into on the street hinges almost entirely on the probability that there is a million dollars in the box. So if you're trying to build a decision theory that yields optimal action under the probability distributions of real life, I don't see how optimizing for one particular, uncommon problem (when other, more common problems might call for a different decision!) helps with being rational on hostile hardware.
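To make that concrete, here is a minimal back-of-the-envelope sketch in Python, assuming the standard $1M/$1K payoffs; the parameter names p_correct (how reliable Omega's prediction is) and p_funded (the chance the million dollars exists at all) are illustrative inventions, not anything from the original problem statement:

    M, K = 1_000_000, 1_000

    def ev_one_box(p_correct: float, p_funded: float) -> float:
        # You walk away with $1M only if Omega is actually funded AND
        # correctly predicted that you would one-box.
        return p_funded * p_correct * M

    def ev_two_box(p_correct: float, p_funded: float) -> float:
        # You always keep the visible $1K; the $1M shows up only if
        # Omega is funded but mispredicted you as a one-boxer.
        return K + p_funded * (1 - p_correct) * M

    # A near-perfect predictor, but a 1-in-1000 chance the money is real:
    print(ev_one_box(0.99, 0.001))  # ~990
    print(ev_two_box(0.99, 0.001))  # ~1010: two-boxing wins anyway

Solving ev_one_box > ev_two_box gives p_funded > K / (M * (2 * p_correct - 1)), so even a perfect predictor can't make one-boxing pay unless the million is likely enough to actually exist; that is the sense in which the street version hinges on the funding probability.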

If we spend TOO MUCH time preparing for the least convenient possible world, we may miss out on the real world.

Comment author: magfrump 19 August 2010 06:18:43AM 0 points

I got wrapped up in writing this comment and forgot about the larger context; my point is that it may be necessary (in the least convenient possible world) to choose a decision theory that does poorly on Newcomb's problem but well elsewhere, given that Newcomb's problem is unlikely to occur and similar-seeming but more common problems reward a different strategy.

So, like the original post, I ask: why does Newcomb's problem seem to be (or to have been?) driving discussions of decision theory? Is it because this is the easiest place to make improvements, or because it's fun to think about?