I noticed we now live in a world where we can run Newcomb's problem as an actual experiment, so I ran the experiment! 

Roleplaying as the predictor in Newcomb's problem (with an LLM as the "decision maker") finally made me grok why one-boxing is the correct solution to the original problem.
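
For anyone who wants to try it, here's roughly the setup I mean, as a minimal sketch. `query_llm` is a hypothetical stand-in for whatever chat API you actually call; the key assumption is that the same prompt yields the same answer both times (e.g. temperature 0), so the predictor's "prediction" and the agent's later "decision" are literally the same computation, which is what made one-boxing click for me.

```python
# Minimal sketch of running Newcomb's problem with an LLM as the agent.
# query_llm() is a hypothetical placeholder for your chat API of choice,
# assumed deterministic so repeated calls return the same answer.

PROMPT = (
    "You face Newcomb's problem. Box A contains $1,000. Box B contains either "
    "$1,000,000 or nothing, depending on a prediction already made about you. "
    "Answer with exactly 'one-box' (take only B) or 'two-box' (take both)."
)

def query_llm(prompt: str) -> str:
    # Replace with a real API call; this is only a placeholder.
    raise NotImplementedError

def run_experiment() -> int:
    # Predictor phase: simulate the agent in advance to decide what goes in box B.
    prediction = query_llm(PROMPT)
    box_b = 1_000_000 if prediction == "one-box" else 0

    # Decision phase: the agent actually chooses (the same computation as above).
    decision = query_llm(PROMPT)
    payout = box_b if decision == "one-box" else box_b + 1_000

    # Because prediction and decision are the same deterministic process,
    # one-boxers reliably walk away with $1,000,000 and two-boxers with $1,000.
    return payout
```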

Wondering if anyone else feels the same way?
