(Notes on confusion)
Intuitively, I think they stop being conscious when they stop being able to talk to me about what they subjectively think about the world.
Not being able to control your body makes you a bit less conscious, but not nearly as much as removing long-term memory. I don't think that the degree of conflict is as important as the degree of representation.
Confusion on "subjectively think":
I think that this is a proxy for having an experience of the world - qualia and that sort of thing.
Confusion on being "able to talk":
I can have an inner dialogue without opening my mouth and vocalizing to other people, and I still report consciousness. If I had magical telepathy powers that let me access someone's inner dialogue without them talking, then they would probably be able to convince me of their consciousness.
Those are really just the ideas that I'm using to think about this. Since they seem really important to what I personally mean when I say consciousness, I don't think it's the ability to mediate internal conflicts.
(And I wonder how conscious your friend would actually be. Certainly, there's a lot of potential there, but judging from the psychological effects of float tanks and sensory isolation...)
I feel like losing consciousness is probably a result of losing the sensory input on which to base a model of the world. When your world model is gone, it's harder to talk about things.
You only have as many qualia as you need; sensory data is discarded as much as possible. (Look at meditation, and how much one experiences but does not notice. Look at dreams - they seem vivid and real, until one tries to make out specific detail, like reading written material.) And what one perceives is strongly shaped by what one expects (e.g. the McGurk ba/ga experiment, or the entire prediction-is-intelligence line of thought - On Intelligence comes to mind). Look at how the mind shuts down when there is little to do, as in highway hypnosis.
(Maybe you should read the PRISM papers.)
One of the most important points raised by the Sequences is that not all minds are like human minds. In quite a few places, people have discussed minds with slight changes from human minds that nonetheless seem altogether different. However, a lot of this discussion has been about AI, as opposed to minds created by evolution. I'm trying to think of ways that minds which evolved, and are effective enough to start a civilization, could differ from humans'.
Three Worlds Collide would seem like an excellent starting point, but isn't actually very useful. As far as I recall, the Babyeaters might have learned their baby-eating habits as a result of societal pressure. The main difference in their society seemed to be the assumption that people who disagreed with you were simply mistaken: this contrasts with humans' tendency to form rival groups and assume everyone in the rival group is evil. The Super-Happies had self-modified, and so don't provide an example of an evolved mind.
So here are my ideas so far.