
Unknown2 comments on Permitted Possibilities, & Locality - Less Wrong

11 Post author: Eliezer_Yudkowsky 03 December 2008 09:20PM



Comment author: Unknown2 04 December 2008 02:12:48PM 0 points [-]

Eliezer, "changes in my programming that seem to result in improvements" are sufficiently arbitrary that you may still have to face the halting problem: if you are programming an intelligent being, it will be complicated enough that you will never prove there are no bugs in your original programming, including bugs that show no effect until it has improved itself 1,000,000 times, by which point it will be too late.
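(To make the worry concrete, here is a toy sketch, not from the comment itself: a hypothetical "self-improvement" step with a deliberately planted flaw that only manifests past a large iteration threshold. Any test budget below the threshold certifies the program as correct. The names `improve` and `passes_bounded_testing` are invented for illustration; the formal version of the point is that non-trivial semantic properties of programs are undecidable in general.)

```python
# Toy illustration of a latent bug that no bounded test run can catch.
# The "improve" step and the threshold are hypothetical assumptions.

BUG_THRESHOLD = 1_000_000  # flaw is invisible until this many self-modifications

def improve(version):
    """One round of 'self-improvement'. Looks correct, but flawed past the threshold."""
    if version >= BUG_THRESHOLD:
        return version - 1  # the latent bug: progress silently reverses
    return version + 1

def passes_bounded_testing(steps):
    """Check that `steps` rounds of improvement each make strict progress."""
    v = 0
    for _ in range(steps):
        nxt = improve(v)
        if nxt <= v:
            return False  # bug detected
        v = nxt
    return True

# A generous but finite test budget sees nothing wrong...
assert passes_bounded_testing(10_000) is True
# ...while running past the threshold reveals the flaw.
assert passes_bounded_testing(BUG_THRESHOLD + 1) is False
```

The sketch only shows that bounded testing is weaker than proof; it does not show that proof is impossible for any particular program, which is the stronger claim the comment is gesturing at.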

Apart from this, no intelligent entity can predict its own actions, i.e. it will always have a feeling of "free will." This is necessary because whenever it faces a choice between A and B, it will always say, "I could do A, if I thought it was better," and "I could also do B, if I thought it was better." So its own actions are necessarily unpredictable to it; it cannot predict the choice until it actually makes the choice, just like us. But this implies that "insight into intelligence" may be impossible, or at least that full insight into one's own intelligence is, and that is enough to imply that your whole project may be impossible, or at least that it may go very slowly, so Robin will turn out to be right.
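(The self-prediction point has a standard diagonal form, sketched below as a toy, not as anything from the comment: a hypothetical agent that consults a predictor of its own behavior and then does the opposite. Whatever the predictor forecasts, it is falsified, so no predictor, including the agent itself, can be right about such an agent in advance.)

```python
# Toy diagonal argument: an agent defined to do the opposite of any
# forecast about it. Both the agent and the predictors are hypothetical.

def make_agent(predictor):
    """Return an agent that asks `predictor` for a forecast, then does the opposite."""
    def agent():
        forecast = predictor()
        return "B" if forecast == "A" else "A"
    return agent

# No predictor survives: try both possible forecasts.
for forecast in ("A", "B"):
    agent = make_agent(lambda f=forecast: f)
    assert agent() != forecast  # the prediction is always falsified
```

This shows unpredictability for agents built to defeat their predictor; whether every intelligent agent must have this structure is the philosophical claim, not something the code settles.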