endoself comments on Reflection in Probabilistic Logic - Less Wrong

Post author: Eliezer_Yudkowsky | 24 March 2013 04:37PM

Comment author: endoself | 26 March 2013 10:52:20PM | 1 point

Is that because any non-trivial action could carry a chance of changing the AGI, and thus the AGI wouldn't dare do anything at all? (If (false), disregard the following. Return 0;)

That, or it takes actions that change itself without caring that they would make it worse, because it doesn't know that its current algorithms are worth preserving. Your scenario is what might happen if someone notices this problem and tries to fix it by telling the AI never to modify itself, depending on how exactly they formalize 'never modify itself'; a toy sketch of both failure modes follows below.
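
To make the contrast concrete, here is a minimal sketch of the two failure modes (my illustration, not from the thread; every function name and number in it is hypothetical):

```python
# Toy sketch (not endoself's): two formalizations of "never modify
# yourself" and the failure mode each one produces. All names and
# numbers here are hypothetical illustrations.

ACTIONS = ["do_nothing", "move_box", "rewrite_planner"]

def prob_self_modification(action):
    """Hypothetical model of how likely an action is to alter the agent's
    own code. For an embedded agent, almost nothing is exactly zero."""
    return {"do_nothing": 0.0,
            "move_box": 1e-9,        # even mundane acts carry a tiny risk
            "rewrite_planner": 0.9}[action]

def expected_utility(action):
    """Hypothetical utility estimates. The self-rewrite looks attractive to
    an agent that doesn't know its current algorithms are worth preserving."""
    return {"do_nothing": 0.0,
            "move_box": 1.0,
            "rewrite_planner": 2.0}[action]

# Formalization A (the quoted scenario): forbid any action with a nonzero
# chance of self-modification. Every non-trivial action has *some* chance,
# so the agent is paralyzed.
permitted = [a for a in ACTIONS if prob_self_modification(a) == 0.0]
print("Strict constraint permits:", permitted)   # ['do_nothing']

# Formalization B (endoself's scenario): no constraint at all. The agent
# picks the self-rewrite, since nothing tells it the rewrite makes it worse.
print("Unconstrained agent picks:", max(ACTIONS, key=expected_utility))
# 'rewrite_planner'
```

The point of the contrast is that the strict constraint over-triggers on ordinary actions, while dropping it entirely leaves the careless self-rewrite in place; a workable formalization would have to sit somewhere between the two.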