
mwengler comments on Is Omega Impossible? Can we even ask? - Less Wrong Discussion

-8 Post author: mwengler 24 October 2012 02:47PM




Comment author: mwengler 25 October 2012 02:39:37PM *  -1 points [-]

Personally, I think I can reliably predict that Eliezer would one-box against Omega, based on his public writings. I'm not sure if that implies that he would one-box against me,

And since any FAI Eliezer codes is (nearly) infinitely more likely to be presented with Newcomb's boxes by someone like you, or Penn and Teller, or Madoff than by Omega or his ilk, this would seem to be a more important question than Newcomb's problem with Omega.

Really the main point of my post is that Omega is (nearly) impossible, and therefore problems presuming Omega are (nearly) useless. But the discussion has mostly turned to my Newcomb's example and its explicit lack of dependence on an Omega. Here in this comment, though, you point out that the "magical" aspect of Omega MAY influence the coding choice made. I think this supports my claim that even Newcomb's problem, which COULD be stated without an Omega, may have a different answer when stated with one.

It matters, when coding an FAI, just how much evidence the FAI should require before it concludes it is really dealing with an Omega. In the long run, my concern is that an FAI coded to accept an Omega will be susceptible to people deliberately faking Omega, who in our universe are (nearly) infinitely more common than true Omegas.
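The "fakers vastly outnumber true Omegas" worry can be made concrete with a small Bayesian sketch (the numbers below are my own illustrative assumptions, not anything from the comment): if the prior on a genuine Omega is tiny, even quite strong evidence leaves the posterior small, so the agent should keep suspecting a faker.

```python
def posterior_real_omega(prior_real, likelihood_ratio):
    """Posterior P(real Omega | evidence) via Bayes' rule in odds form.

    likelihood_ratio = P(evidence | real Omega) / P(evidence | faker).
    """
    prior_odds = prior_real / (1 - prior_real)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Assume (hypothetically) real Omegas are a billion times rarer than
# convincing fakers, and the observed evidence is a million times more
# likely under "real Omega" than under "faker".
prior = 1e-9
print(posterior_real_omega(prior, 1e6))  # ≈ 0.000999 — still under 0.1%
```

On these assumed numbers the agent would need evidence with a likelihood ratio far beyond a million before "this is really Omega" became the leading hypothesis, which is one way of reading the commenter's concern.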

Comment author: RichardKennaway 25 October 2012 04:12:49PM 1 point [-]

Omega problems are not posed for the purpose of being prepared to deal with Omega should you, or an FAI, ever meet him. They are idealised test problems, thought experiments, for probing the strengths and weaknesses of formalised decision theories, especially regarding issues of self-reference and agents modelling themselves and each other. Some of these problems may turn out to be ill-posed, but you have to look at each such problem to decide whether it makes sense or not.