That seems like a good minimal case. This has to be the closest there is to no consciousness at all; your 'selective' would seem to exclude many lower animals. It might be better to think of minimal as being unconscious - a dog has no choice but to react mentally to a whistle, say, but neither does the thermostat have a choice.
Actually, it does have a choice; dogs can be trained to ignore stimuli, and you can only be trained to do something that you can do anyway. Either that, or humans also have no choice but to "react mentally", and the distinction is meaningless.
Either way, "choice" is less meaningful than "selection" - we can argue how much choice there is in the selection later.
Indeed, the mere fact of selectivity means that there's always something not being "reacted to mentally" by the "observer" of the model. Whether this selectivity has anything to do with choice is another matter. I can direct where my attention goes, but I can also feel it "drawn" to things, so clearly, selectivity is a mixed bag with respect to choice.
It seems we disagree on what 'reacting mentally' is. I'd say a dog so trained may be an organism too high up on the power/consciousness scale (surely something lower than a dog - lower even than gerbils or rats - is where we ought to be looking), and that even if it makes no physical response, its mind is still reacting (it knows about the stimulus), while humans truly can 'tune out' stimuli.
But an example may help. What would you have to add to a thermostat to make it non-'minimal', do you think? Another gauge, like a humidity gauge, which has no electrical connection to the binary output circuit?
I'm starting Dennett's "Consciousness Explained". Dennett says, in the introduction, that he believes he has solved the problem of consciousness. Since several people have referred to his work here with approval, I'm going to give it a go. I'm going to post chapter summaries as I read, for my own selfish benefit, so that you can point out when you disagree with my understanding of it. "D" will stand for Dennett.
If you loathe the C-word, just stop now. That's what the convenient break just below is for. You are responsible for your own wasted time if you proceed.
Chpt. 1: Prelude: How are Hallucinations Possible?
D describes the brain in a vat, and asks how we can know we aren't brains in vats. This dismays me, as it is one of those questions that distracts people trying to talk about consciousness, that has nothing to do with the difficult problems of consciousness.
Dennett states, without presenting a single number, that the bandwidth needs for reproducing our sensory experience would be so great that it is impossible (his actual word); and that this proves that we are not brains in vats. Sigh.
He then asks how hallucinations are possible: "How on earth can a single brain do what teams of scientists and computer animators would find to be almost impossible?" Sigh again. This is surprising to Dennett because he believes he has just established that the bandwidth needs for consciousness are too great for any computer to provide; yet the brain sometimes (during hallucinations) provides nearly that much bandwidth. D has apparently forgotten that, by definition, the brain provides us with exactly the bandwidth of consciousness all the time.
D recounts Descartes' remarkably prescient discussion of the bellpull as an analogy for how the brain could send us phantom misinformation; but dismisses it, saying, "there is no way the brain as illusionist could store and manipulate enough false information to fool an inquiring mind." Sigh. Now not only consciousness, but also dreams, are impossible. However, D then comes back to dreams, and is aware that they exist and are hallucinations; so one of us is misunderstanding this section.
On p. 12 he suggests something interesting: Perception is driven both bottom-up (from the senses) and top-down (from our expectations). A hallucination could happen when the bottom-up channel is cut off. D doesn't get into data compression at all, but I think a better way to phrase this is that, given arbitrary bottom-up data, the mind can decompress sensory input into the most likely interpretation given the data and given its knowledge about the world. Internally, we should expect that high-bandwidth sensory data is summarized somewhere in a compressed form. Compressed data necessarily looks more random than prior to compression. This means that, somewhere inside the mind, we should expect it to be harder than naive introspection suggests to distinguish between true sensory data and random sensory noise. D suggests an important role for an adjustable sensitivity threshold for accepting/rejecting suggested interpretations of sense data.
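The claim that compressed data looks more random than the original can be checked directly. Here is a small sketch (my own illustration, not anything from Dennett): it measures the byte-level Shannon entropy of a highly redundant string before and after compressing it with Python's standard zlib, showing that the compressed bytes are both shorter and closer to uniformly random.

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 would be uniformly random bytes)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Highly redundant input, standing in for structured sensory data.
raw = b"the quick brown fox jumps over the lazy dog " * 200
compressed = zlib.compress(raw, level=9)

print(f"raw:        {len(raw):6d} bytes, {byte_entropy(raw):.2f} bits/byte")
print(f"compressed: {len(compressed):6d} bytes, {byte_entropy(compressed):.2f} bits/byte")
```

The compressed stream's entropy sits much nearer the 8-bits-per-byte ceiling, which is the point of the argument above: a summary stored in compressed form is statistically harder to tell apart from noise than the structured signal it came from.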
D dismisses Freud's ideas about dreams - that they are stories about our current concerns, hidden under symbolism in order to sneak past our internal censors - by observing that we should not posit homunculi inside our brains who are smarter than we are.
[In summary, this chapter contained some bone-headed howlers, and some interesting things; but on the whole, it makes me doubt that D is going to address the problem of consciousness. He seems, instead, on a trajectory to try to explain how a brain can produce intelligent action. It sounds like he plans to talk about the architecture of human intelligence, although he does promise to address qualia in part III.
Repeatedly on LW, I've seen one person (frequently Mitchell Porter) raise the problem of qualia; and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about. For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem. Not intelligence, not even assigning meaning to representations, but consciousness. It is a different problem. (A complete explanation of how intelligence and symbol-grounding take place in humans might concomitantly explain consciousness; it does not follow, as most people seem to think it does, that demonstrating a way to account for non-human intelligence and symbol-grounding therefore accounts for consciousness.)
Part of the problem is their theistic opponents, who hopelessly muddle intelligence, consciousness, and religion: "A computer can never write a symphony. Therefore consciousness is metaphysical; therefore I have a soul; therefore there is life after death." I think this line of reasoning has been presented to us all so often that a lot of us have cached it, to the extent that it injects itself into our own reasoning. People on LW who try to elucidate the problem of qualia inevitably get dismissed as quasi-theists, because, historically, all of the people saying things that sound similar were theists.
At this point, I suspect that Dennett has contributed to this confusion, by writing a book about intelligence and claiming not just that it's about consciousness, but that it has solved the problem. I shall see.]