The interpreter, if it existed, would have complexity. The useless, unconnected calculation in the waterfall/rock, which could be interpreted but usually isn't, also has complexity.
Your/Aaronson's claim is that only the fully connected, sensibly interacting calculation matters. I agree that this calculation is important - it's probably the only type we should consider from a moral standpoint, for example. And the complexity of that calculation certainly seems to be located in the interpreter, not in the rock/waterfall.
But in order to claim that only the externally connected calculation has conscious experience, we would need it to be the case that these connections are essential to the internal conscious experience even in the "normal" case - and that to me is a strange claim! I find it more natural to assume that there are many internal experiences, but only some interact with the world in a sensible way.
But this just depends on how broad this set is. If it contains two brains, one thinking about the roman empire and one eating a sandwich, we're stuck.
I suspect that if you actually follow Aaronson (as linked by Davidmanheim) to extract a unique efficient calculation that interacts with the external world in a sensible way, that calculation will end up corresponding to a consistent set of experiences, even if it could still correspond to simulations of different real-world phenomena.
But I also don't think that consistent set of experiences necessarily has to be a single experience! It could be multiple experiences unaware of each other, for example.
The argument presented by Aaronson is that, since it would take as much computation to convert the rock/waterfall computation into a usable computation as it would to just do the usable computation directly, the rock/waterfall isn't really doing the computation.
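Aaronson's reduction can be sketched with a toy example (my own illustration, not from his paper): to read a meaningful computation, say factorials, off a stream of physically meaningless "waterfall" states, the interpreter needs a mapping from states to outputs, and constructing that mapping requires doing the factorial computation yourself.

```python
import random

def target_computation(n):
    """The 'usable' computation we want to attribute to the waterfall: n!"""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Hypothetical "waterfall": a stream of physically meaningless states.
random.seed(42)
waterfall_states = [random.getrandbits(32) for _ in range(10)]

# The interpreter is a lookup table from states to factorial values.
# Building it already requires computing every factorial - the
# interpretive work equals the work of the computation itself.
interpretation = {state: target_computation(n)
                  for n, state in enumerate(waterfall_states)}

def interpret(state):
    return interpretation[state]

# The waterfall "computes factorials" only via a table that
# already contains all the answers.
assert interpret(waterfall_states[5]) == 120  # 5! = 120
```

On Aaronson's view, this shows the computation lives in the interpreter; the disagreement below is about whether that settles anything regarding internal experience.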
I find this argument unconvincing, as we are talking about a possible internal property here, and not about the external relation with the rest of the world (which we already agree is useless).
Considering all the layers of convention and interpretation between the physics of a processor and the process it represents, it seems unlikely to me that the alien would be able to describe the simulacra. The alien is therefore unable to specify the experience being created by the cluster.
I don't think this follows. Perhaps the same calculation could simulate different real world phenomena, but it doesn't follow that the subjective experiences are different in each case.
If computation is this arbitrary, we have the flexibility to interpret any physical system, be it a wall, a rock, or a bag of popcorn, as implementing any program. And any program means any experience. All objects are experiencing everything everywhere all at once.
AFAIK, this might be true. We have no way of finding out whether the rock does or does not have conscious experience. The relevant experiences to us are those that are connected to the ability to communicate or interact with the environment, such as the experiences associated with the global workspace in human brains (which seems to control memory/communication); experiences that may be associated with other neural impulses, or with fluid dynamics in the blood vessels or whatever, don't affect anything.
Could both of them be right? No - from your point of view, at least one of them must be wrong. There is one correct answer, the experience you are having.
This also does not follow. Both experiences could happen in the same brain. You - being experience A - may not be aware of experience B - but that does not mean that experience B does not exist.
(edited to merge in other comments which I then deleted)
It is a fact about the balls that one ball is physically continuous with the ball previously labeled as mine, while the other is not. It is a fact about our views on the balls that we therefore label the physically continuous ball as mine and the other not.
And then suppose that one of these two balls is randomly selected and placed in a bag, with another identical ball. Now, to the best of your knowledge, there is a 50% probability that your ball is in the bag. And if a random ball is selected from the bag, there is a 25% chance that it's yours.
So as a result of such manipulations there are three identical balls: one has a 50% chance of being yours, while the other two each have a 25% chance. Is it a paradox? Of course not. So why does it suddenly become a paradox when we are talking about copies of humans?
It is objectively the case here that 25% of the time this procedure would select the ball that is physically continuous with the ball originally labeled as "mine", and that we therefore label as "mine".
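The arithmetic above (50% for the ball left out, 25% for each ball touched by the bag procedure) can be checked with a small Monte Carlo sketch; the setup and indices here are my own illustration.

```python
import random

def trial():
    # Two balls: index 0 is physically continuous with "mine", index 1 is not.
    mine = 0
    # One of the two is randomly selected and placed in a bag
    # together with a third, identical ball (index 2, never mine).
    selected = random.choice([0, 1])
    left_out = 1 - selected
    bag = [selected, 2]
    # A random ball is then drawn from the bag.
    drawn = random.choice(bag)
    remaining = bag[0] if drawn == bag[1] else bag[1]
    return (left_out == mine, drawn == mine, remaining == mine)

random.seed(0)
N = 100_000
counts = [0, 0, 0]
for _ in range(N):
    for i, is_mine in enumerate(trial()):
        counts[i] += is_mine

# Fractions come out roughly [0.5, 0.25, 0.25], matching the argument.
print([round(c / N, 3) for c in counts])
```

The point is that nothing paradoxical is happening: "mine" is just a label tracking physical continuity, and the probabilities follow mechanically.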
Ownership as discussed above has a relevant correlate in reality - physical continuity in this case. But a statement like "I will experience being copy B (as opposed to copy A or C)" does not. That statement corresponds to the exact same reality as the corresponding statements about experiencing being copy A or C. Unlike in the balls case, here the only difference between those statements is where we put the label of what is "me".
In the identity thought experiment, it is still objectively the case that copies B and C are formed by splitting an intermediate copy, which was formed along with copy A by splitting the original.
You can choose to disvalue copies B and C based on that fact or not. This choice is a matter of values, and is inherently arbitrary.
By choosing not to disvalue copies B and C, I am not making an additional assumption - at least not one beyond what you are already assuming by valuing B and C the same as each other. I am simply not counting the technical details of the splitting order as relevant to my values.
The issue, to me, is not whether they are distinguishable.
The issues are:
and:
Imagine the statements: in world 1, "I" will wake up as copy A; in world 2, "I" will wake up as copy B. How are world 1 and world 2 actually different?
Answer: they aren't different. It's just that in world 1, I drew a box around the future copy A and said that this is what will count as "me", and in world 2, I drew a box around copy B and said that this is what will count as "me". This is a distinction that exists only in the map, not in the territory.
You can also disambiguate between
a) computation that actually interacts in a comprehensible way with the real world and
b) computation that has the same internal structure at least momentarily but doesn't interact meaningfully with the real world.
I expect that (a) can usually be uniquely pinned down to a specific computation (probably in both senses (1) and (2)), while (b) can't.
But I also think it's possible that the interactions, while important for establishing the disambiguated computation that we interact with, are not actually crucial to internal experience, so that the multiple possible computations of type (b) may also be associated with internal experiences - similar to Boltzmann brains.
(I think I got this idea from "Good and Real" by Gary L. Drescher. See sections "2.3 The Problematic Arbitrariness of Representation" and "7.2.3 Consciousness and Subjunctive Reciprocity")