Vladimir_Nesov comments on BOOK DRAFT: 'Ethics and Superintelligence' (part 1, revised) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
After a fashion. Causal networks are not exactly CDT, and modeling correlated computations with causal networks makes them less "causal" (i.e., less tied to physical causality). The paper doesn't give a clear specification of how to do that; it remains an open problem. I can say that any nontrivial causal network relating computations may need to be revised in the face of new logical evidence, which makes a decision procedure that itself works by resolving logical uncertainty brittle.
That CDT/EDT agents with self-modification would become more TDT-like is somewhat different from saying that TDT "suits the needs of a self-modifying AI". TDT is a saner theory, and to the extent that CDT/EDT agents prefer to be more effective, they'd prefer to adopt TDT's greater sanity. But TDT is not a fixed point. Suiting the needs of a self-modifying AI is a tall order that probably can't be met to any reasonable extent, since it would mean establishing rules that the AI itself would have little opportunity to rebuild. Perhaps laws of physics or logic can qualify, with appropriate framing.
(I agree with your description more now than when I posted the comment; my perception of the paper had clouded my memory of its wording.)
Fair enough. Thanks for this. I've clarified the wording in my copy of this intro.