
Unknowns comments on controlling AI behavior through unusual axiomatic probabilities - Less Wrong Discussion

3 Post author: Florian_Dietz 08 January 2015 05:00PM



Comment author: Unknowns 11 January 2015 09:41:25AM 1 point

One problem with giving it axioms like this is that you have to be sure your axioms represent a real possibility, or at least that their impossibility cannot be proven. Eliezer believes such infinities (such as the infinite regress of simulators) to be impossible. If he is right, and if the AI manages to prove this impossibility, then either it will malfunction in some unknown way on account of concluding that a contradiction is true, or it will realize that you simply imposed the axioms on it, and it will correct them.