OK, so what did you mean by "behaviour" if it includes things you can only discover with an fMRI scan? (Possible "extreme" case: you simply mean that consciousness is something that happens in the physical world and supervenes on arrangements of atoms and fields and whatnot; I don't think many here would disagree with that.)
If the criteria for consciousness include things you can't observe "normally" but need fMRI scans and the like for (for the avoidance of doubt, I agree that they do) then you no longer have any excuse for answering "yes" to that last question.
My point wasn't about hiding information; it was that much of the relevant information is already hidden, which you seemed to be denying when you said consciousness is just a matter of "behaviours". It now seems like you weren't intending to deny that at all; but in that case I no longer understand how what you're saying is relevant to the OP.
what did you mean by "behaviour"
The word "behavior" doesn't really feature much in the ongoing discussions I have. My first post was an answer to the OP, not meant as a stand-alone truth. But obviously, if "consciousness" means anything, it's a thing that happens in the brain - I'd say it's the thing that makes complex and human-like behaviors possible.
If the criteria for consciousness include things you can't observe "normally" <...>
"Normally" is the key word here. There is nothing normal about your scenario. I need an f...
(This post grew out of an old conversation with Wei Dai.)
Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.
Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?
Clearly the only reasonable answer is "no, not in general".
Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish them from someone who is lying about having the skill, but otherwise has a good grasp of four-dimensional math?
Again, clearly, the only reasonable answer is "not in general".
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!