Unnamed's comment on "Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb?" - Less Wrong

Post author: CarlShulman 19 June 2013 01:55AM


Comment author: Unnamed 19 June 2013 05:47:12AM 3 points

I share the intuition that Newcomb's problem might be "unfair" (not a meaningful problem, or not one worth trying to win at), and I have generally found LW/MIRI discussions of decision theory more enlightening when they deal with other scenarios (like AIs exchanging source code) than when they deal with Newcomb.

One way to frame the "unfairness" issue: if you knew in advance that you would encounter something like Newcomb's problem, then it would clearly be beneficial to adopt a decision-making algorithm that (predictably) one-boxes. Even CDT supports this, if you apply CDT to the decision of which algorithm to adopt and you have the option of adopting an algorithm that binds your future decision. But why optimize your decision-making algorithm for the possibility of encountering something like Newcomb's problem? The answer to "What algorithm should I adopt?" depends on which decision problems I am likely to face; why is it a priority to prepare for Newcomb-like ones?
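To make the algorithm-level comparison concrete, here is a toy expected-value calculation (a minimal sketch in Python; the $1,000,000/$1,000 payoffs are the usual statement of the problem, and the predictor-accuracy parameter p is a free parameter I'm introducing for illustration):

    # Toy expected-value comparison for Newcomb's problem, using the
    # standard payoffs: $1,000,000 in the opaque box iff the predictor
    # predicted one-boxing, and $1,000 always in the transparent box.
    def expected_value(one_box: bool, p: float) -> float:
        """Expected payoff when the predictor is correct with probability p."""
        if one_box:
            return p * 1_000_000  # opaque box is full iff the prediction was right
        # Two-boxing keeps the $1,000; the $1M is present only if the
        # predictor erred (probability 1 - p).
        return 1_000 + (1 - p) * 1_000_000

    for p in (0.5, 0.9, 0.99):
        print(p, expected_value(True, p), expected_value(False, p))

For any p meaningfully above 0.5 (the crossover is at p ≈ 0.5005), the algorithm that predictably one-boxes comes out ahead, which is exactly why even algorithm-level CDT endorses binding yourself to one-box.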

Well-defined games (like modal combat) seem to give more traction on this question than a fanciful thought experiment like Newcomb, although perhaps I just haven't read the right pro-one-boxing rejoinder.
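To illustrate what I mean by "well-defined games", here is a toy version of the source-code-exchange setup (a sketch only; this "CliqueBot"-style string comparison is a crude stand-in for the provability-logic agents actually used in modal combat):

    import inspect

    # Each agent is a function from the opponent's source code to a move.
    # CliqueBot cooperates exactly when the opponent's source is identical
    # to its own, so two CliqueBots cooperate with each other while a
    # defector gets no cooperation out of either.
    def clique_bot(opponent_source: str) -> str:
        return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

    def defect_bot(opponent_source: str) -> str:
        return "D"

    def play(a, b):
        """Run one round, handing each agent the other's source code."""
        return a(inspect.getsource(b)), b(inspect.getsource(a))

    print(play(clique_bot, clique_bot))  # ('C', 'C')
    print(play(clique_bot, defect_bot))  # ('D', 'D')

Unlike the Newcomb setup, everything here is mechanically checkable, which is the kind of traction I have in mind.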

Comment author: Qiaochu_Yuan 19 June 2013 05:52:40AM 9 points

You may not expect to encounter Newcomb-like problems, but you can expect to encounter prisoner's dilemmas, and CDT recommends defecting in those.
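To spell that out with conventional payoff numbers (my own choice of numbers; any payoffs satisfying the prisoner's-dilemma inequalities give the same dominance result):

    # One-shot prisoner's dilemma; higher payoff is better.
    # Whatever the other player does, defecting pays more, which is the
    # causal-dominance argument CDT relies on; yet mutual defection (1, 1)
    # leaves both players worse off than mutual cooperation (3, 3).
    PAYOFF = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    for their_move in ("C", "D"):
        print(f"vs {their_move}: "
              f"cooperate={PAYOFF[('C', their_move)]}, "
              f"defect={PAYOFF[('D', their_move)]}")

Two CDT agents facing each other therefore land on (1, 1), even though a pair of agents whose algorithms predictably cooperate against copies of themselves would each get 3.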