private_messaging comments on Newcomb's Problem and Regret of Rationality - Less Wrong

Post author: Eliezer_Yudkowsky 31 January 2008 07:36PM


Comment author: private_messaging 03 June 2012 12:39:56PM

Well, each philosopher's understanding of CDT seems to differ from the others':

http://www.public.asu.edu/~armendtb/docs/A%20Foundation%20for%20Causal%20Decision%20Theory.pdf

The notion that actions should be chosen based on their consequences - as expressed in the formula there - is perfectly fine, albeit incredibly trivial. One can formalize it all the way into an agent; I have written such agents myself. One still needs a symbol to denote this type of agent.
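A minimal sketch of what "formalized all the way into an agent" might look like: pick the action whose modeled consequences have the highest expected utility. All names, actions, and probabilities below are illustrative assumptions, not anything from the comment.

```python
# Minimal consequence-based agent: choose the action that maximizes
# expected utility under a probabilistic world model.
# Everything here is a toy sketch with made-up names and numbers.

def choose_action(actions, outcome_probs, utility):
    """actions: iterable of possible actions.
    outcome_probs: function mapping an action to {outcome: probability}.
    utility: function mapping an outcome to a real number."""
    def expected_utility(action):
        return sum(p * utility(o) for o, p in outcome_probs(action).items())
    return max(actions, key=expected_utility)

# Toy world model: action "b" leads to "win" far more often.
probs = {"a": {"win": 0.2, "lose": 0.8}, "b": {"win": 0.9, "lose": 0.1}}
util = {"win": 1.0, "lose": 0.0}
best = choose_action(["a", "b"], probs.__getitem__, util.__getitem__)
print(best)  # "b"
```

The point is how little machinery this takes: the agent is just argmax over modeled consequences, with no notion of "self" anywhere.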

But philosophers go from this to "my actions should be chosen based on consequences", and then it is all about the true meaning of "self" and falls within the purview of the usual conundrums of philosophy.

Consider having one computer control two robot arms wired in parallel, versus having two computers running the exact same software, each controlling one robot arm. For software engineering there is no difference: which arrangement is used is a minor detail that has been entirely abstracted away from the software. There is a difference for philosophizing, though, because in the latter case you can't collapse logical consequence and physical causality into one thing.
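The "no difference for software engineering" point can be shown in a couple of lines: two separate instances of the same deterministic controller, given the same input, necessarily emit the same command, so the software cannot distinguish one-arm-per-computer from two arms wired in parallel. The control law here is a made-up placeholder.

```python
# Two independent runs of the exact same deterministic controller.
# Whether one computer drives both arms or two computers each drive
# one arm, the commands sent to the arms are identical.

def controller(sensor_reading):
    # Placeholder deterministic control law (illustrative only).
    return 2.0 * sensor_reading - 1.0

reading = 0.75
arm1_command = controller(reading)  # "computer 1"
arm2_command = controller(reading)  # "computer 2", same software, same input
print(arm1_command == arm2_command)  # True
```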

edit: Anyhow, to summarize my point: in terms of agents actually formalized in software, one-boxing is only a matter of implementing the predictor into the world model somehow, either as a second servo controlled by the same control variables, or as an uncertain world state outside the senses (in the unseen part there is either the real world, or a simulator that affects the real world via the hand of the predictor). No conceptual problems whatsoever.

edit: A good analogy is the "twin paradox" in special relativity: there is only a paradox if nobody has done the math right.
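The "second servo" idea can be sketched concretely: if the world model says the predictor's box-filling tracks the agent's own choice (with some accuracy), then plain expected-utility maximization over that model one-boxes. The payoffs and the 0.99 accuracy figure are assumptions for illustration, not values from the comment.

```python
# Newcomb sketch: the predictor is modeled as a second "servo" driven
# by the same control variable as the agent's choice. Under that world
# model, ordinary expected-utility maximization picks one-boxing.
# Payoffs and accuracy are illustrative assumptions.

def payoff(choice, accuracy=0.99):
    """Expected payoff given a world model in which the predictor's
    box filling tracks the agent's choice with the stated accuracy."""
    if choice == "one-box":
        # Box B holds $1M iff the predictor predicted one-boxing.
        return accuracy * 1_000_000
    else:  # "two-box": visible $1000 plus $1M only on a mispredict.
        return (1 - accuracy) * 1_000_000 + 1_000

best = max(["one-box", "two-box"], key=payoff)
print(best)  # "one-box"
```

Expected payoffs under this model: one-boxing yields 990,000 and two-boxing yields 11,000, so the same trivial argmax agent from above one-boxes with no special machinery.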