Yes: algorithms are entirely predictable from, and understandable in terms of, their physical realisations.
Now I'm confused: what you just said is a description of a 'supervenient' relation. Are you saying that anytime X is said to supervene on Y, we should reject the theory which features X's?
No. Supervenience is an ontologically neutral relationship. In Chalmers's theory, qualia supervene on brain states, so novel brain states will lead to novel qualia. In identity theory, qualia supervene on brain states, so ditto. So the Novel Qualia test does not distinguish the one from the other. The argument for qualia being non-physical properties, as opposed to algorithms, comes down to their reducibility, or lack thereof, not supervenience.
[Cross-posted.]
1. Defining the problem: The inverted spectrum
A. Attempted solutions to the inverted spectrum.
B. The “substitution bias” of solving the “easy problem of consciousness” instead of the “hard problem.”
2. The false intuition of direct awareness
A. Our sense that the existence of raw experience is self-evident doesn’t show that it is true.
B. Experience can’t reveal the error in the intuition that raw experience exists.
C. We can’t capture the ineffable core of raw experience with language because there’s really nothing there.
D. We believe raw experience exists without detecting it.
3. The conceptual economy of qualia nihilism pays off in philosophical progress
4. Relying on the brute force of an intuition is rationally specious.
Against these considerations, the only argument for retaining raw experience in our ontology is the sheer strength of everyone’s belief in its existence. How much weight should we attach to a strong belief whose validity we can't check? None. Beliefs ordinarily earn a presumption of truth from the absence of empirical challenge, but when empirical challenge is impossible in principle, the belief deserves no confidence.