Kawoomba comments on Reflection in Probabilistic Logic - Less Wrong

63 Post author: Eliezer_Yudkowsky 24 March 2013 04:37PM


Comment author: Kawoomba 24 March 2013 08:25:57AM 2 points

Each time you create a successor or, equivalently, self-modify? Or rather, each time an action you take has a non-zero chance of modifying yourself? Wouldn't that mean you'd have to check constantly? What if the effects of an action are typically unpredictable, at least to some degree?

Also, since any system is implemented on a physical substrate, some change is inevitable (and, until the AI has powered up to the point of multiple redundancy, not yet so stochastically unlikely as to be ignorable). What happens if that change affects the physical component that "proves the system sound"? Is there a provable way to safeguard against that? (Obligatory "who watches the watcher?".)
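To put a rough number on the redundancy point: under the simplifying (and hypothetical) assumption that replicas of the proof-checking component fail independently, majority voting over k copies drives the chance of a corrupted result down polynomially in the per-copy fault rate. A toy sketch:

```python
from math import comb

def corrupted_majority_prob(p: float, k: int) -> float:
    """Probability that a majority of k independent replicas fail,
    assuming each replica fails independently with probability p.

    With k = 1 this is just p; with k = 3 a majority needs 2 of 3
    failures, so the probability falls to roughly 3 * p**2 for small p.
    """
    need = k // 2 + 1  # number of failed replicas that corrupts the vote
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(need, k + 1))

# Single checker: fault rate is just the component fault rate.
print(corrupted_majority_prob(1e-6, 1))
# Triple redundancy: corrupted-majority probability drops to ~3e-12.
print(corrupted_majority_prob(1e-6, 3))
```

The independence assumption is doing real work here; correlated faults (e.g. a systematic error in how all replicas were built) are exactly the "who watches the watcher" case the comment worries about, and redundancy alone does not address them.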

It's a hard problem any way you slice it. You there, go solve it.