All of dsoodak's Comments + Replies

dsoodak00

I believe that what you have proven is that it will probably not help your career to investigate fringe phenomena. Unfortunately, science needs the occasional martyr who is willing to be completely irrational in their life path (unless you assign a really large value to having "he was right after all" written on your tombstone) while maintaining very strict rationality in their subject of interest. For example, the theory that "falling stars" were caused by rocks falling out of the sky was considered laughable, since the idea had already been lumped together with ghosts, etc.

dsoodak00

0.51 × 1,000,000 + 0.49 × 1,001,000 = 1,000,490
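The figure above can be reproduced with a quick sketch, assuming (as in the comment below) that Omega fills box B because one-boxing is the more likely action:

```python
# Expected payoff for a 51/49 mixed strategy, assuming Omega always
# predicts the more likely action (one-boxing) and so fills box B.
p_one_box = 0.51
one_box_payoff = 1_000_000   # box B only
two_box_payoff = 1_001_000   # box B plus the $1,000 in box A

expected = p_one_box * one_box_payoff + (1 - p_one_box) * two_box_payoff
print(expected)  # 1000490.0
```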

dsoodak00

I figured that if Omega is required to try its best to predict me, and I am permitted to use something physically random in my decision-making process, then it will probably work out that I am going to choose just one box with slightly more probability than choosing both. Therefore, it will gain the most status on average (it MUST be after status, since it obviously has no interest in money) by guessing that I will go with one box.

0dsoodak
0.51 × 1,000,000 + 0.49 × 1,001,000 = 1,000,490
dsoodak00

Didn't realize anyone watched the older threads, so I wasn't expecting such a fast response...

I've already heard about the version where "intelligent alien" is replaced with "psychic" or "predictor", but not the "human is required to be deterministic" or quantum version (which I'm pretty sure would require the ability to measure the complete waveform of something without affecting it). I didn't think of the "halting problem" objection, though I'm pretty sure it's already expected to do things even more diffic... (read more)

dsoodak00

As I understand it, most types of decision theory (including game theory) assume that all agents have about the same intelligence, and that this intelligence is effectively infinite (or at least large enough that everyone has a complete understanding of the mathematical implications of the relevant utility functions).

In Newcomb's problem, one of the players is explicitly defined as vastly more intelligent than the other.

In any situation where someone might be really good at predicting your thought processes, it's best to add some randomness to your actions. Th... (read more)

0LawrenceC
I think generally there's an addendum to the problem where if Omega sees you using a quantum randomness generator, Omega will put nothing in box B, specifically to prevent this kind of solution. :P Also, how did you reach your $1,000,490 figure? If Omega just simulates you once, your payoff is: 0.51 × (0.51 × $1,000,000 + 0.49 × $1,001,000) + 0.49 × (0.51 × $0 + 0.49 × $1,000) = $510,490 < $1,000,000, so you're better off one-boxing unless Omega simulates you multiple times.
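LawrenceC's figure can be checked with a short sketch, under his stated assumption that Omega simulates the player once, so box B ends up filled with the same 0.51 probability that the player one-boxes:

```python
p = 0.51  # probability the player one-boxes

# Payoff table: (B filled, one-box) = $1,000,000
#               (B filled, two-box) = $1,001,000
#               (B empty,  one-box) = $0
#               (B empty,  two-box) = $1,000
# With a single simulation, Omega fills box B with probability p,
# independently of the player's actual draw.
expected = p * (p * 1_000_000 + (1 - p) * 1_001_000) \
         + (1 - p) * (p * 0 + (1 - p) * 1_000)
print(expected)  # 510490.0
```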
0dsoodak
Didn't realize anyone watched the older threads, so I wasn't expecting such a fast response...

I've already heard about the version where "intelligent alien" is replaced with "psychic" or "predictor", but not the "human is required to be deterministic" or quantum version (which I'm pretty sure would require the ability to measure the complete waveform of something without affecting it). I didn't think of the "halting problem" objection, though I'm pretty sure it's already expected to do things even more difficult in order to get such a good success rate with something as complicated as a human CNS (does it just passively observe the player for a few days preceding the event, or is it allowed to do a complete brain scan?). I still think my solution will work in any realistic case (where the alien isn't magical, and doesn't require your thought processes to be both deterministic and computable while not placing any such limits on itself).

What I find particularly interesting, however, is that such a troublesome example explicitly states that the agents have vastly unequal intelligence, while most examples seem to assume "perfectly rational" agents (which seems to be interpreted as being intelligent and rational enough that further increases in intelligence and rationality will make no difference). Are there any other examples where causal decision theory fails which don't involve non-equal agents? If not, I wonder if you could construct a proof that it DEPENDS on this as an axiom.

Has anyone tried adding "relative ability of one agent to predict another agent" as a parameter in decision theory examples? It seems like this might be applicable in the prisoner's dilemma as well. For example, a simple tit-for-tat bot modified so it doesn't defect unless it has received two negative feedbacks in a row might do reasonably well against other bots, but would do badly against a human player as soon as they figured out how it worked.
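The modified tit-for-tat bot mentioned at the end (sometimes called "tit for two tats") might look like the sketch below; the function name and move encoding ("C"/"D") are my own illustration, not anything from the thread:

```python
def make_tit_for_two_tats():
    """Tit-for-tat variant that defects only after the opponent
    defects twice in a row; otherwise it cooperates."""
    history = []  # opponent's past moves, "C" or "D"

    def move(opponent_last=None):
        if opponent_last is not None:
            history.append(opponent_last)
        # Defect only if the opponent's last two moves were both defections.
        if len(history) >= 2 and history[-1] == "D" and history[-2] == "D":
            return "D"
        return "C"

    return move

bot = make_tit_for_two_tats()
print(bot())     # first move: "C"
print(bot("D"))  # one defection tolerated: "C"
print(bot("D"))  # two in a row: "D"
```

This also illustrates the exploit the comment alludes to: a human who figures out the rule can defect on alternating rounds and never trigger retaliation.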
0Shmi
You are fighting the hypothetical. It is a common pitfall when faced with a counterintuitive issue like that. Don't do it, unless you can prove a contradiction in the problem statement. Omega is defined as a perfect predictor of your actions no matter what you do. That includes any quantum tricks. Also see the recent introduction to Newcomblike problems for a detailed analysis.
4A1987dM
Some variants of the Newcomb problem specify that if Omega isn't sure what you will do he will assume you're going to two-box. (And if Omega is really that smart he will leave box A in a quantum superposition entangled with that of your RNG. :-))