SapientPearwood comments on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? - Less Wrong
Adding to your story, it's not just Eliezer Yudkowsky's introduction to Newcomb's problem. It's the entire Bayesian / Less Wrong mindset. Here, Eliezer wrote:
I felt something similar when I was reading through the sequences. Everything "clicked" for me - it just made sense. I couldn't imagine thinking another way.
Same with Newcomb's problem. I wasn't introduced to it by Eliezer, but I still thought one-boxing was obvious; it works.
Many Less Wrongers who have stuck around have probably had a similar experience; the Bayesian standpoint seems intuitive. Eliezer's support certainly helps to propagate one-boxing, but Less Wrongers seem to be a self-selecting group.
It also helps that most Bayesian decision algorithms actually take on the argmax_a Σ_o U(o)·P(o|a) reasoning of Evidential Decision Theory, which means that whenever you invoke your self-image as a capital-B Bayesian you are semi-consciously invoking Evidential Decision Theory. And EDT does actually get the right answer here, even if it messes up on other problems (e.g., the smoking lesion).
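To make that concrete, here's a minimal sketch of the EDT calculation on Newcomb's problem. The 0.99 predictor accuracy and the $1,000,000 / $1,000 payoffs are standard illustrative values, not something from this comment:

```python
# Sketch of Evidential Decision Theory on Newcomb's problem.
# EDT picks the action maximizing sum_o U(o) * P(o | a),
# i.e., expected utility conditional on having taken the action.

ACTIONS = ["one-box", "two-box"]
ACCURACY = 0.99  # assumed reliability of the predictor (illustrative)

def payoff(action, predicted_one_box):
    """Utility of each outcome: opaque box holds $1M iff one-boxing was predicted."""
    opaque = 1_000_000 if predicted_one_box else 0
    transparent = 1_000  # the transparent box always holds $1,000
    return opaque if action == "one-box" else opaque + transparent

def edt_expected_utility(action):
    # The evidential step: condition the prediction on the action itself.
    p_predicted_one_box = ACCURACY if action == "one-box" else 1 - ACCURACY
    return (p_predicted_one_box * payoff(action, True)
            + (1 - p_predicted_one_box) * payoff(action, False))

for a in ACTIONS:
    print(a, edt_expected_utility(a))
print("EDT recommends:", max(ACTIONS, key=edt_expected_utility))
```

With these numbers, conditioning on one-boxing gives an expected $990,000 versus $11,000 for two-boxing, which is why EDT one-boxes.

(Commenting because I got here while looking for citations for my WIP post about another way to handle Newcomb-like problems.)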