Nick_Beckstead comments on Tiling Agents for Self-Modifying AI (OPFAI #2) - Less Wrong
Very helpful. This seems like something that could lead to a satisfying answer to my question. And don't worry, I won't engage in a terminological dispute about "self-modification."
Can you clarify a bit what you mean by "low-level algorithms"? I'll give you a couple of examples related to what I'm wondering about.
Suppose I am working with a computer to make predictions about the weather, and we consider the operations of the computer together with my brain as a single entity for the purposes of testing whether the Lobian obstacles you are thinking of arise in practice. Now suppose I make basic modifications to the computer, expecting that the joint operation of my brain and the computer will yield improved output. This will not cause me to trip over Lobian obstacles. Why does whatever concern you have about the Lob problem predict that this case avoids the obstacle, while also predicting that future AIs might stumble over it?
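As a reference point, here is the theorem I take to be generating the worry, stated for a theory T extending Peano Arithmetic (my paraphrase, not necessarily the paper's exact formalism):

```latex
% Lob's theorem, for T extending PA with provability predicate Prov_T:
% if T proves "whenever P is provable, P holds", then T already proves P.
\[
  T \vdash \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P
  \quad\Longrightarrow\quad
  T \vdash P
\]
```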
Another example. Humans learn new mental habits without stumbling over Lobian obstacles, and they can convince themselves that adopting the new habits is an improvement. Some of these are more derivative ("Don't do X when I have emotion Y") and others are perhaps more basic ("Try to update through explicit reasoning via Bayes' Rule in circumstances C"). Why does whatever concern you have about the Lob problem predict that humans can make these modifications without stumbling, while also predicting that future AIs might stumble over it?
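For concreteness, the "explicit reasoning" habit I have in mind is just ordinary conditionalization (standard notation, nothing specific to the paper):

```latex
% Bayes' Rule: the posterior on hypothesis H after observing evidence E.
\[
  P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
\]
```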
If the answer to both examples is "those are not cases of directly editing one's low-level algorithms using high-level deliberative processes," can you explain why your concern about Lobian issues arises only in that type of case? This is not me questioning your definition of "fzoom"; I am asking why Lobian issues arise only when you are worrying about fzoom.
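To make the contrast sharp, here is a schematic version of where I understand the obstacle to bite. This is my gloss, and "safe" is a hypothetical stand-in for whatever criterion the parent agent verifies before acting:

```latex
% A parent agent that acts only on T-proofs seemingly needs blanket
% trust in a successor's T-proofs:
\[
  T \vdash \forall a \, \bigl( \mathrm{Prov}_T(\ulcorner \mathrm{safe}(a) \urcorner)
    \rightarrow \mathrm{safe}(a) \bigr)
\]
% But by Lob's theorem each instance of "Prov_T of phi implies phi" is
% provable in T only when T already proves phi, so T cannot endorse its
% own proofs wholesale; hence the trouble for same-theory successors.
```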
The first example is related to what I had in mind when I talked about fundamental epistemic standards in a previous comment: