You seem to be overlooking the difference between descriptions and the systems they describe.
I'm not saying Q1=S1. That's a category error; Q1 is a description of S1. The map is not the territory.
I am saying that Q1 and "a squirrel eating a nut" are two different descriptions of the same system, and that although "a squirrel eating a nut" depends on a human mind to generate it, the system it describes (which Q1 also describes) does not depend on a human mind to generate it.
Agreed that there are gains and losses in going from one form of representation to another. But the claim "'a squirrel eating a nut' is a description of that system over there" is just as accurate as the claim "Q1 is a description of that system over there." So I stand by the statement that I can as accurately make one claim as the other.
> The map is not the territory. ... I am saying that Q1 and "a squirrel eating a nut" are two different descriptions of the same system...
The map-and-territory perspective is effective for pointing out that the map is not the territory. A map of Texas is not Texas. However, it would be wrong to conclude that a road map of Texas describes the same territory as an elevation map of Texas. Although both maps cover the same geographic area, they are not based on the same source data. They do not describe the same territory.
Consider this case. W...
Certain kinds of philosophy and speculative fiction, including kinds that get discussed here all the time, tend to cause a ridiculous thing to happen: I start doubting the difference between existence and non-existence. This bothers me, because it's clearly a useless dead end. Can anyone help with this?
The two concepts that tend to do it for me are:
* Substrate independence/strong AI: The idea that a simulation of my mind is still me. That I could survive the process of uploading myself into a computer running Windows, a cellular automaton run by this guy, or even something that didn't look like a computer, mind, or universe at all to anyone in the outside world. That we could potentially create or discover a simulated universe that we could have ethical obligations towards. This is all pretty intuitive to me and largely accepted by the sort of people who think about these things.
* Multiverses: The idea that the world is bigger than the universe.
My typical line of thought goes something like this: suppose I run a Turing Machine that encodes a universe containing conscious beings. That universe now exists as a simulation within my own. It's just as real as mine, just more precarious, because events in my reality can mess with its substrate. If I died and nobody knew how it worked, it would still be real (so I should make provisions for that scenario).

Okay, but Turing Machines are simple. A Turing Machine simulating a coherent universe containing conscious beings can probably arise naturally, by chance. In that case, those beings are still real even if nobody on the outside, looking at the substrate, realizes what they're looking at.

Okay, but now consider Turing Machines like John Conway's Fractran, which are encoded as an ordered list of rational numbers and run by multiplication. I think it's fair to say that rational numbers and multiplication occur naturally, everywhere. Arithmetic lives everywhere. But furthermore, arithmetic lives *nowhere*. It's not just substrate-independent; it's independent of whether or not there is a substrate. 2+2=4 no matter whether two bottlecaps are being combined with two other bottlecaps to make four bottlecaps. So every Turing-computable reality already exists to the extent that math itself does.
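To make the "run by multiplication" point concrete, here's a minimal sketch of a Fractran interpreter in Python (the `fractran` helper, its `max_steps` cap, and the variable names are my own illustration, not anything from Conway):

```python
from fractions import Fraction

def fractran(program, n, max_steps=10_000):
    """Run a Fractran program: repeatedly multiply n by the first
    fraction in the program that yields an integer; halt when none does."""
    for _ in range(max_steps):
        for f in program:
            if (n * f).denominator == 1:  # n * f is a whole number
                n = (n * f).numerator
                break
        else:
            return n  # halted: no fraction applies
    return n

# Conway's addition program is the single fraction 3/2:
# starting from 2^a * 3^b it halts at 3^(a+b).
adder = [Fraction(3, 2)]
print(fractran(adder, 2**2 * 3**3))  # 2^2 * 3^3 = 108 -> 3^5 = 243
```

The whole "machine" really is just a list of rationals plus repeated multiplication, which is what gives the argument above its bite.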
I think this is stupid. Embarrassingly stupid. But I can't stop thinking it.