JGWeissman comments on Formalizing Newcomb's - Less Wrong

18 Post author: cousin_it 05 April 2009 03:39PM




Comment author: cousin_it 05 April 2009 04:48:56PM

Eliezer has repeatedly stated in discussions of Newcomb's Problem that Omega only cares about the outcome, not any particular "ritual of cognition". This is an essential part of the puzzle, because once you start punishing agents for their reasoning you might as well go all the way: reward only irrational agents and say nyah nyah puny rationalists. Your Omega bounds how rational I can be and outright forbids thinking certain thoughts. In other words, the original raison d'être of the problem was refining the notion of perfect rationality, whereas your formulation is about approximations to rationality. Well, who defines what counts as a good approximation and what doesn't? I'm gonna one-box without explanation and call this rationality. Is this bad? By what metric?
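The "outcome, not ritual of cognition" condition can be made concrete with a small sketch. This is a hypothetical illustration, not anything from the thread: the function names and the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) are assumptions, and Omega's perfect prediction is modeled by simply mirroring the agent's actual choice.

```python
def omega_payoff(choice: str) -> int:
    """Newcomb payoff under a perfect predictor.

    Box A (transparent) holds $1,000. Omega puts $1,000,000 in the
    opaque box B iff it predicts the agent one-boxes. A perfect
    predictor's prediction simply equals the actual choice, so the
    payoff is a function of the choice alone.
    """
    prediction = choice  # perfect prediction: depends only on the outcome
    box_b = 1_000_000 if prediction == "one-box" else 0
    return box_b if choice == "one-box" else 1_000 + box_b

# Two agents with very different "rituals of cognition" but the same
# choice receive the same payoff -- Omega never inspects the reasoning.
def careful_reasoner():
    return "one-box"   # one-boxes after long deliberation

def blind_faith_agent():
    return "one-box"   # one-boxes "without explanation"

assert omega_payoff(careful_reasoner()) == omega_payoff(blind_faith_agent())
print(omega_payoff("one-box"), omega_payoff("two-box"))  # 1000000 1000
```

The point of the sketch is that `omega_payoff` takes only the choice as input, so by construction no ritual of cognition can be rewarded or punished; a formulation in which Omega's behavior depends on *how* the choice was reached would need a different signature entirely.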

Believe it or not, I considered the most inconvenient possible worlds repeatedly while writing this, or I would have had just one or two cases instead of four.

Comment author: JGWeissman 05 April 2009 05:43:37PM

A strategy Omega uses to avoid paradox, which has the side effect of punishing certain rituals of cognition because they lead to paradox, is different from Omega deliberately handicapping your thought process. It is not a winning strategy to pursue a line of thought that produces a paradox instead of a winning decision. I would wait until Omega forbids strategies that would otherwise win before complaining that he "bounds how rational I can be".