dsoodak

0.51 × $1,000,000 + 0.49 × $1,001,000 = $1,000,490
I figured that if Omega is required to try its best to predict me, and I am permitted to include something physically random in my decision-making process, then it will probably be able to work out that I am going to choose just one box with slightly higher probability than choosing two. Therefore, it will gain the most status on average (it MUST be after status, since it obviously has no interest in money) by guessing that I will go with one box.
Didn't realize anyone watched the older threads so wasn't expecting such a fast response...
I've already heard about the version where "intelligent alien" is replaced with "psychic" or "predictor", but not the "human is required to be deterministic" or quantum version (which I'm pretty sure would require the ability to measure the complete wave function of something without affecting it). I didn't think of the "halting problem" objection, though I'm pretty sure it's already expected to do things even more difficult to get such a good success rate with something as complicated as a human CNS (does it just passively observe the player for a few days preceding the event or is...
As I understand it, most types of decision theory (including game theory) assume that all agents have about the same intelligence and that this intelligence is effectively infinite (or at least large enough that everyone has a complete understanding of the mathematical implications of the relevant utility functions).
In Newcomb's problem, one of the players is explicitly defined as vastly more intelligent than the other.
In any situation where someone might be really good at predicting your thought processes, it's best to add some randomness to your actions. Therefore, my strategy would be to use a quantum random number generator to choose just box B with 51% probability. I should be able to win an average of $1,000,490.
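A minimal sketch of that expected-value claim (my own illustration, not from the original comment), assuming Omega responds to a mixed strategy by predicting whichever choice is more likely, and that box B holds $1,000,000 only when it predicts one-boxing while box A always holds $1,000:

```python
def expected_payoff(p_one_box: float) -> float:
    # Assumed prediction rule: Omega guesses the more likely choice.
    box_b = 1_000_000 if p_one_box > 0.5 else 0
    one_box = box_b          # take only box B
    two_box = box_b + 1_000  # take both boxes
    return p_one_box * one_box + (1 - p_one_box) * two_box

print(expected_payoff(0.51))  # 0.51 * 1,000,000 + 0.49 * 1,001,000 = 1000490.0
```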
If there isn't a problem with this argument and if it hasn't been thought of before, I'll call it "variable intelligence decision theory" or maybe "practical decision theory".
Dustin Soodak
I believe that what you have proven is that it will probably not help your career to investigate fringe phenomena. Unfortunately, science needs the occasional martyr who is willing to be completely irrational in their life path (unless you assign a really large value to having "he was right after all" written on your tombstone) while maintaining very strict rationality in their subject of interest. For example, the theory that "falling stars" were caused by rocks falling out of the sky was considered laughable, since this had already been lumped together with ghosts, etc.