This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant.
In addition to these other answers, I read a paper, I think by Eliezer, which argued that it was almost impossible to stop an AI from modifying its own source code, because it would figure out that it would gain a massive efficiency boost from doing so.
Also, remember that the AI is a computer program. If it is allowed to write other algorithms and execute them, which it has to be to be even vaguely intelligent, then it can simply write a copy of its source code somewhere else, edit it as desired, and run that copy.
I seem to recall the argument being something like the "Beware Seemingly Simple Wishes" one. "Don't modify yourself" sounds like a simple instruction for a human, but isn't as obvious when you look at it more carefully.
However, remember that a competent AI will keep its utility function or goal system constant under self modification. The classic analogy is that Gandhi doesn't want to kill people, so he also doesn't want to take a pill that makes him want to kill people.
I wish I could remember where that paper was where I read about this.
Well, let me describe the sort of architecture I have in mind.
The AI has a "knowledge base", which is some sort of database containing everything it knows. The knowledge base includes a set of heuristics. The AI also has a "thought heap", which is a set of all the things it plans to think about, ordered by how promising the thoughts seem to be. Each thought is just a heuristic, maybe with some parameters. The AI works by taking a thought from the heap and doing whatever it says, repeatedly.
Heuristics would be restricted, though. They wo... (read more)