I've talked earlier about integral and differential ethics, in the context of population ethics. The idea is that the argument for the repugnant conclusion (and its associate, the very repugnant conclusion) depends on a series of trillions of steps, each of which is intuitively acceptable (adding happy people, making happiness more equal), but which together reach a conclusion that is intuitively bad - namely, that we can improve the world by creating trillions of people in torturous and unremitting agony, as long as we balance them out by creating enough happy people as well.
Differential reasoning accepts each step, and concludes that the repugnant conclusions are actually acceptable, because each step is sound. Integral reasoning accepts that the repugnant conclusion is repugnant, and concludes that some step along the way must therefore be rejected.
Notice that key word, "therefore". Some intermediate step is rejected, not for intrinsic reasons, but purely because of the consequence. There is nothing special about the step that is rejected; it's just a relatively arbitrary barrier to stop the process (compare with the paradox of the heap).
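To make the stepwise structure concrete, here's a toy numerical model (my own illustration - the worlds and numbers are invented). Under naive total utilitarianism, each step weakly increases total welfare, yet the endpoint is a (very) repugnant world, and nothing marks any particular step as the one to reject:

```python
# Toy model of the stepwise argument, under naive total utilitarianism.
# A world is a list of (population, welfare-per-person) groups.

def total_welfare(world):
    return sum(n * w for n, w in world)

worlds = [
    [(1_000, 100)],                            # small, very happy population
    [(1_000, 100), (9_000, 1)],                # add many barely-happy people
    [(10_000, 11)],                            # equalise welfare
    [(10_000, 11), (100, -50), (10_000, 1)],   # add agony, offset by happy lives
]

for before, after in zip(worlds, worlds[1:]):
    # Differential reasoning: every individual step checks out.
    assert total_welfare(after) >= total_welfare(before)

# Integral reasoning looks only at the endpoint: 100 lives of unremitting
# agony. Some step above must be vetoed, even though no step looks wrong
# on its own.
```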
Indeed, things can go awry when people attempt to fix the repugnant conclusion (a conclusion they rejected through integral reasoning) using differential methods. Approaches like the "person-affecting view" have their own absurdities and paradoxes (it's OK to bring a baby into the world even if it will have a miserable life; we don't need to care about future generations if we randomise conceptions; etc.), and I would posit that this is because they are trying to fix global/integral issues using local/differential tools.
The relevance of this? It seems that integral tools might be better suited to dealing with the problem of AI systems converging on bad outcomes. We could set up plausibly intuitive differential criteria (such as self-consistency), but institute integral criteria that can override these if they go too far. I think there may be some interesting ideas in that area. The cost is that integral ideas are generally seen as less elegant, or harder to justify.
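As a very rough sketch of what that might look like (entirely my own toy framing - the value representation, distance measure, and limits below are invented stand-ins, not a proposal for real value learning): each update is checked against a local, differential criterion, and the whole trajectory is also checked against a global, integral one that can veto an update purely because of where it has ended up.

```python
# Toy sketch: differential (per-step) criteria plus an integral override.
# Values are plain vectors; the metric and limits are stand-ins.

STEP_LIMIT = 1.0    # how far any single update may move (differential)
DRIFT_LIMIT = 5.0   # how far the whole trajectory may drift (integral)

def distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def update_values(initial, current, proposed):
    if distance(current, proposed) >= STEP_LIMIT:
        return current   # rejected for intrinsic, local reasons
    if distance(initial, proposed) >= DRIFT_LIMIT:
        return current   # rejected purely because of where it leads
    return proposed
```

The second check is exactly the inelegant part: it rejects a step that the differential criterion found perfectly acceptable, for no reason other than the accumulated consequence.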
I think we agree on the basics - the specificity of calculation allows you to identify exactly what you're considering, and to locate the mismatch (a missing step, an incorrect step, and/or a mis-stated summation). This is true for values as well as factual beliefs.
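A trivial example of the kind of mismatch I mean (my own illustration): two people evaluate the same world and disagree, and writing the calculation out shows that the difference is a mis-stated summation - total versus average welfare - rather than a different value-set.

```python
# The same world, summed two ways. The disagreement is a mis-stated
# summation, not a deeper difference in values.

world = [(1_000, 100), (9_000, 1)]   # (population, welfare) groups

total = sum(n * w for n, w in world)         # 109,000
average = total / sum(n for n, _ in world)   # 10.9

# By the total, adding the 9,000 barely-happy people improved things
# (109,000 > 100,000); by the average, it made them worse (10.9 < 100).
```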
It is only after this that you understand your proposed values well enough to know whether they are genuinely different value-sets, or whether there is just a calculation mistake in one or both. Once you know that, you can decide which, if either, applies to you.
I guess you should also separately decide whether it's good and important for you to think of yourself as a unitary individual, versus a series of semi-connected experiences. Do you (singular you) want to have a single consistent set of values, or are all the future you-components content to behave somewhat randomly over time and context? This is mostly assumed in this kind of discussion, but it's probably worth stating if you're questioning what (if anything) you learn from an inconsistency.