"AI is disabled" and "world more similar to the world as it would have been without the AI interfering" are both magical categories. Your qualitative ontology has big, block objects labeled "AI" and "world" and an arrow from "AI" to "world" that can be either present or absent. The real world is a borderless, continuous process of quantum fields in which shaking one electron affects another electron on the opposite side of the universe.
I understand the general point, but "AI is disabled" seems like a special case: an AI able to do any sort of reasoning about itself, allocate internal resources, etc. (I don't know how necessary this is for it to do anything useful) will have to have concepts in its qualitative ontology of, or sufficient to define, its own disablement – though perhaps not in a form easily available for framing a goal system (e.g. if it developed those concepts itself, assuming it could build up to them in their absence), and probably complicated in other ways that haven't occurred to me in two minutes.
I provide our monthly place to discuss Less Wrong topics that have not appeared in recent posts. Work your brain and gain prestige by doing so in E-prime (or not, as you please).