gRR comments on Anthropic Reasoning by CDT in Newcomb's Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Oh, I missed that. This would also break the similarity to the Absent-Minded Driver problem...
But no, this doesn't work: Omega is known to always guess correctly, and there exist agents that one-box if the opaque box is red and two-box if it's blue. So the simulation must be perfect.
It's still an almost-Newcomb problem that sane decision theories should pass.
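The color-dependence argument can be sketched in code. This is a hypothetical toy model (all function names and the red/blue framing as a Python structure are illustrative, not from the original post): an agent whose choice conditions on the opaque box's color cannot be predicted by any simulation that omits that color, so a never-wrong Omega must simulate the agent's full observations.

```python
def agent(box_color):
    """An agent that one-boxes if the opaque box is red, two-boxes if blue."""
    return "one-box" if box_color == "red" else "two-box"

def omega_color_blind(agent_fn):
    """A prediction that ignores the box color must commit to one fixed answer."""
    return "one-box"  # any fixed guess is wrong for one of the two colors

def omega_perfect(agent_fn, box_color):
    """A perfect simulation sees everything the agent sees, color included."""
    return agent_fn(box_color)

# The perfect simulation matches the agent for every color...
for color in ("red", "blue"):
    assert omega_perfect(agent, color) == agent(color)

# ...while the color-blind predictor is wrong for at least one color.
mismatches = [c for c in ("red", "blue") if omega_color_blind(agent) != agent(c)]
assert len(mismatches) >= 1
```

Since Omega is stipulated to always guess correctly against such agents, the color-blind strategy is ruled out, which is the sense in which the simulation must be perfect.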