Pure information does not exist. It is an abstract perspective on concrete physical states. Information processing cannot fail to stay in lockstep with the causal activity of its implementation because it is not ontologically a separate thing.
The idea that consciousness supervenes on any functional equivalent of neural activity is problematic because the range of possible concrete implementations is so large (e.g., Blockheads).
If you take the view that the more problematic aspects of consciousness, such as qualia, supervene directly on physics, and not on the informational layer, you can avoid both Blockheads and p-zombies.
If you take the view that the more problematic aspects of consciousness, such as qualia, supervene directly on physics
Without any informational process going on there?
Consciousness exists
If you are trying to be all formal about it, it's good to start by defining your terminology. What do you mean by consciousness, and what do you mean by existence? One of the best ways to define a commonly used term is to delineate its boundaries. For example, what is not-consciousness? Not-quite-consciousness? Give an example or ten. The same goes for existence: what does it mean for something to not exist? Can you list a dozen non-existing things?
For example, do pink unicorns exist? If not, how come they affect reality (you see a sentence about them on your computer monitor)? How is consciousness different from pink unicorns? Please do not latch onto this one particular example; make up your own.
I am pretty sure you have no firm understanding of what you are talking about, even though it feels like you do in your gut, "but it is hard to explain". If you do not have a firm grasp of the basics, writing fancy lemmas and theorems may help you publish a philosophy paper, but it does not get you anywhere closer to understanding the issues.
If you are trying to be all formal about it, it's good to start by defining your terminology. What do you mean by consciousness, and what do you mean by existence?
I'm trying to be slightly formal, but without getting too bogged down. Instead I would prefer to take a few shortcuts to see if the road ahead looks promising at all. So far I feel that the best I've managed is to say "If a system seems to itself to experience consciousness in the same way that we seem to experience it, then we can call it conscious".
I am pretty sure you have no firm understanding of what you are talking about,
Not as sure as I am ;-) But I am trying to improve my understanding, and have no intention of writing philosophy papers.
I am not convinced by arguments for Sir Karl Popper's three-worlds model of existence. Similar to what TheAncientGreek said, I am not convinced mental objects exist. But I suggest that what you and EY are writing about, Popper wrote about in the 1970s.
David could benefit from reading Chalmers' The Conscious Mind, if he has not. Chalmers thinks consciousness supervenes on information processing, but his model is dualistic. David might want to avoid that.
Not up on my Chalmers, but is it really fair to call it dualism? I thought his view was that there is a subjective/physical duality in Nature just as there is a particle/wave duality. That's not very Cartesian, really.
Chalmers' view is usually referred to as property dualism, because it says that brains (and perhaps other physical systems) have certain properties (subjective experience, for instance) that are not reducible to fundamental physical properties. This is not really like particle/wave duality, because in that case both particle-like and wave-like aspects of the behavior of matter are unified by a deeper theory. Chalmers doesn't believe we will see any such unification of mental and physical properties.
Descartes, on the other hand, was a substance dualist. He didn't just believe that mental properties are irreducible to physical properties; he also believed that the bearer of mental properties is non-physical, i.e. not the brain but the non-physical mind.
So Chalmers is a dualist, according to contemporary philosophical parlance, in that he thinks that our fundamental ontology must include the mental as well as the physical, but he's not a substance dualist.
As for LC1, if we instead get S0 -> S(random other), then it would simply seem that a conscious decision had been made not to take the action.
Are you equating decision-making with physical randomness? That seems to be an error. When you "make a decision", there is a reason you made that particular decision. See also what Russell has to say.