After some heat, we're starting to get light. This is good.
"An ideal CDT agent that anticipates facing only action-determined problems will always choose not to self-modify" is true; "an ideal CDT agent that anticipates facing only action-determined problems will always choose not to do anything" is false.
I'm not sure that's true. Imagine I'm an ideal CDT agent located in North America. If I wish to react to something that happens in China, there will be some lag. If I could deal with the situation better without that lag, I would benefit from cloning myself and sending the copy to China. Would that count as self-modification?
(This presupposes that I have access to materials sufficient to copy myself. That might not be true, depending on whether an ideal CDT is physically realizable.)
I should probably have specified that building another agent doesn't really count as self-modification if the other agent is identical to the original (or maybe it does count as self-modification, but in a very vacuous sense, the same way 'do nothing' is technically an algorithm). So if the other agent is a CDT, this is not a counter-example.
If the other agent is a more primitive approximation to a CDT then I would view constructing it not as self-modification, but simply as making a choice in an action-determined problem.
If the other agent is TDT or UDT or s...
I don't know if this is a little too far afield even for a Discussion post, but people seemed to enjoy my previous articles (Girl Scouts financial filings, video game console insurance, philosophy of identity/abortion, & prediction market fees), so...
I recently wrote up an idea that has been bouncing around my head ever since I watched Death Note years ago: can we quantify Light Yagami's mistakes? Which mistake was the greatest? How could one do better? We can shed some light on the matter by examining DN with... basic information theory.
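To give a flavor of the approach: identifying one person out of a pool of N candidates requires log2(N) bits of information, and any clue that shrinks the suspect pool leaks bits equal to the log-ratio of pool sizes. A minimal sketch (the population figures below are rough round numbers of my own, purely for illustration):

```python
import math

def bits_to_identify(pool_size):
    """Bits needed to single out one person from a pool of pool_size."""
    return math.log2(pool_size)

def bits_leaked(pool_before, pool_after):
    """Bits a clue leaks by narrowing the suspect pool."""
    return math.log2(pool_before / pool_after)

japan = 127_000_000   # rough population of Japan (illustrative figure)
kanto = 43_000_000    # rough population of the Kanto region (illustrative)

# ~27 bits suffice to pin down any one person in Japan
print(f"Identify one person in Japan: {bits_to_identify(japan):.1f} bits")

# A clue like "the killer lives in Kanto" leaks log2(127M / 43M) bits
print(f"'Lives in Kanto' leaks {bits_leaked(japan, kanto):.1f} bits")
```

The point is that a seemingly tiny slip (a timezone, a schedule, a region) can burn a meaningful fraction of the roughly 27 bits of anonymity one starts with.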
Presented for LessWrong's consideration: Death Note & Anonymity.