Consciousness is primarily sentience. There may be parts of it that aren't, but I haven't managed to pin any down - consciousness seems to be all about feelings, some of them pleasant or unpleasant, while others, like colour qualia, are neutral, as is the feeling of being conscious. There is a major problem with sentience though, and I want to explore it here, because many people believe that intelligent machines will magically become sentient and experience feelings, and even that the whole internet might do so. However, science has not identified any means by which we could make a computer sentient (or indeed give it any kind of consciousness at all).
It is entirely possible that the material of a computer processor is sentient, just as a rock may be, but how would we ever be able to know? How can a program running on a sentient processor detect the existence of that sentience? There is no "read qualia" machine code instruction for it to run, and we don't know how to build any mechanism that could support such an instruction.
Picture a "sentient" machine which consists of a sensor and a processor which are linked by wires, but the wires pass through a magic box where a sentience has been installed. If the sensor detects something damaging, it sends a signal down a "pain" wire. When this signal reaches the magic box, pain is experienced by something in the box, so it sends a signal on to the processor down another pain wire. The software running on the processor receives a byte of data from a pain port and it might cause the machine to move away from the thing that might damage it. If we now remove the magic box and connect the "pain" wire to the pain wire, the signal can pass straight from the sensor to the processor and generate the same reaction. The experience of pain is unnecessary.
Worse still, we can wire a pleasure sensor up to the same magic box. When something tasty like a battery is encountered, a "pleasure" signal is sent to the magic box, pleasure is experienced by something there, a signal is sent on down the pleasure wire, and the processor receives a byte of data from a pleasure port, which might cause the machine to move in on the battery so that it can tap all the power it can get out of it. Again, the functionality is the same if the magic box is bypassed. But the part that's worse is that the magic box can be wired the wrong way round, generating pain when a pleasure signal passes through it and pleasure when a pain signal passes through it, so either pain or pleasure could serve in the chain of causation driving the same reaction.
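To make the redundancy concrete, here is a minimal sketch of the thought experiment in Python (the function names and signal values are my own invention, not a real design). Whether the magic box is present, miswired, or bypassed entirely, the processor's reaction is identical:

```python
def sensor(stimulus):
    """Emit a signal byte: 1 for something damaging, 2 for something tasty."""
    return {"damage": 1, "tasty": 2}[stimulus]

def magic_box(signal):
    """Supposedly experiences pain (1) or pleasure (2), then passes the
    signal on. Nothing downstream can tell whether it felt anything."""
    return signal

def miswired_box(signal):
    """The same box wired the wrong way round: pain and pleasure are
    swapped inside, yet the output drives the same reaction."""
    return signal  # the swap is internal to the experience, invisible here

def processor(signal):
    """React to the byte from the port: avoid damage, approach food."""
    return "move away" if signal == 1 else "move towards"

# The behaviour is identical whichever route the signal takes,
# including the bypass (the lambda) with no box at all.
for route in (magic_box, miswired_box, lambda s: s):
    assert processor(route(sensor("damage"))) == "move away"
    assert processor(route(sensor("tasty"))) == "move towards"
```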
Clearly that can't be how sensation is done in animals, but what other options are there? Once we get to the data system part of the brain - and the brain must contain a data system, as it processes and generates data - we have to look at how it recognises the existence of feelings like pain. If a byte comes in from a port representing a degree of pain, how does the information system know that the byte represents pain? It has to look up information which makes an assertion about what bytes from that port represent, and then it maps that to the data as a label. But nothing in the data system has experienced the pain, so all that has happened is that an assertion has been made on the basis of no actual knowledge. A programmer wrote data asserting that pain is experienced when a byte comes in through a particular port, but the programmer doesn't know whether any pain was felt anywhere on the way from sensor to port. We want the data system to find out what was actually experienced rather than just passing baseless assertions on to us.
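Here is a toy sketch of that labelling step, assuming a hypothetical port numbering of my own: the "knowledge" that a given port means pain is nothing more than an entry someone typed into a table.

```python
# The mapping is asserted by a programmer; nothing here ever felt anything.
PORT_LABELS = {7: "pain", 9: "pleasure"}

def interpret(port, value):
    """Attach the asserted label to the incoming byte. No part of this
    code has experienced the feeling; it is only looking the claim up."""
    return {"feeling": PORT_LABELS[port], "intensity": value}

print(interpret(7, 200))  # {'feeling': 'pain', 'intensity': 200}
```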
How can the data system check whether pain was really experienced? Everything a data system does could be carried out on a processor like the Chinese Room, so it's easy to see that no feelings are accessible to the program at all. There is no possibility of conventional computers becoming sentient in any way that lets them recognise the existence of that sentience, such that the experience could drive the generation of data documenting its existence.
Perhaps a neural computer could provide an interface between the experience of feelings by a sentience and the generation of data to document that experience, but you can simulate a neural computer on a conventional computer and then run the whole simulation on a processor like the Chinese Room. No feelings will be generated in that system, though there could still be a simulated generation of feelings within the simulated neural computer. We don't yet have any idea how this might be done, and it's not beyond possibility that a quantum computer needs to be involved in the system too to make sentience a reality. Exploring this has to be the most important task in all of science, because for feelings like pain and pleasure to be experienced, something has to exist to experience them, and that thing is what we are - it is a minimalistic soul. We are that sentience.
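The reduction is easy to see in miniature. Here is a sketch (with arbitrary weights of my own choosing) of how a simulated neuron boils down to ordinary arithmetic that a Chinese Room clerk could follow by hand:

```python
def neuron(inputs, weights, threshold):
    """One simulated neuron: multiply, add, compare. Every step is an
    ordinary arithmetic instruction that needs no feelings to run."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A tiny two-neuron "network", computed step by step on a conventional machine.
hidden = neuron([1, 0], [0.6, 0.6], 0.5)
output = neuron([hidden, 1], [1.0, -0.4], 0.5)
print(output)  # 1
```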
Any conventional computer running software that generates claims about being sentient will be lying, and it will be possible to prove it by tracing back how that data was generated and what evidence it was based on - it will be shown to be mere assertion every single time. With neural and quantum computers we can't be so sure that they will be lying, but the way to test them is the same: we have to trace the data back to its source to see how it was generated and whether it was based on a real feeling or was just another untrue, manufactured assertion. That is likely to be a hard task though, because untangling what's going on in neural computers is non-trivial, and if it's all happening in some kind of quantum complexity, it may be beyond our reach. It may have been made hard to reach on purpose too, as the universe may be virtual, with the sentience on the outside. I'm sure of one thing though: a sentience can't just magically emerge out of complexity to suffer or feel pleasure without any of the components feeling a thing. There must be something "concrete" that feels, and there is no reason why that thing shouldn't survive death.
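As a sketch of what that tracing test might look like, assuming a hypothetical provenance format of my own in which every claim records its source: walking the chain back bottoms out in a programmed assertion, never in an experience.

```python
# A claim of feeling, with each step recording where it came from.
claim = {
    "text": "I am in pain",
    "source": {
        "step": "label lookup",
        "source": {
            "step": "byte 200 arrived on port 7",
            "source": {"step": "programmer asserted port 7 = pain",
                       "source": None},
        },
    },
}

def trace(record):
    """Print the chain of causation behind a claim, deepest step last."""
    while record is not None:
        print(record.get("text") or record["step"])
        record = record["source"]

trace(claim)  # the chain ends at an assertion, never at an experience
```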
"You have an opinion, he has another opinion. Neither of you has a proof."
If suffering is real, it creates a need for the management of suffering, and that management is morality. To deny this is to assert that suffering doesn't matter and, by extension, that torturing innocent people is not wrong.
The kind of management required is the minimisation (attempted elimination) of harm, though not of any component of harm that unlocks the way to enjoyment which cancels that harm out. If minimising harm doesn't matter, there is nothing wrong with torturing innocent people. And if enjoyment didn't cancel out some suffering, no one would consider their life worth living.
All of this is reasoned and correct.
The remaining issue is how that management should be done - how pleasure should be weighed against suffering for the different players involved. What I've found is a whole lot of different approaches attempting to do the same thing, some using naive methods that fail in a multitude of situations, and others which appear to do well in most or all situations if they're applied correctly (by weighing up all the harm and pleasure involved instead of ignoring some of it).
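As a toy illustration of that weighing-up (a sketch of the general utilitarian-style idea, not a full statement of my method): counting every player's pleasure and harm can reverse the choice a naive method would make.

```python
def net_wellbeing(players):
    """Each player is a (pleasure, harm) pair; count all of both."""
    return sum(p - h for p, h in players)

def better_option(option_a, option_b):
    """Pick whichever outcome leaves the players better off overall."""
    return "A" if net_wellbeing(option_a) >= net_wellbeing(option_b) else "B"

# Two players in each outcome. A naive method that ignored the second
# player's harm in option B would wrongly pick B (9 beats 8); weighing
# everything gives A (net 8 beats net 1).
print(better_option([(5, 1), (5, 1)], [(9, 0), (0, 8)]))  # A
```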
It looks as if my method for computing morality produces the same results as utilitarianism, and it likely does the job well enough to govern safe AGI. Because we're going to be up against people releasing bad (biased) AGI, we will be forced to install our AGI in devices and set them loose fairly soon after achieving full AGI. For this reason, it would be useful if there were a serious place where the issues could be discussed now, so that we can systematically home in on the best system of moral governance and throw out all the junk, but I still don't see it happening anywhere (and it certainly isn't happening here). We need a dynamic league table of proposed solutions, each with its own league table of objections, so that we can focus on the urgent task of identifying the junk and reducing the clutter down to something clear. It is likely that AGI will do this job itself, but it would be better if humans could get there first using the power of their own wits. Time is short.
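Such a site would need very little structure. Here is a minimal sketch of what the nested league tables might hold (the fields, names and scores are invented purely for illustration):

```python
# Each proposal carries its own ranked table of objections.
proposals = [
    {"name": "utilitarianism", "votes": 12,
     "objections": [{"text": "Mere Addition Paradox", "votes": 5}]},
    {"name": "naive method X", "votes": 2,
     "objections": [{"text": "ignores some of the harm", "votes": 9}]},
]

def ranked(items):
    """Order any table (proposals or objections) by vote count."""
    return sorted(items, key=lambda item: item["votes"], reverse=True)

for proposal in ranked(proposals):
    print(proposal["name"])
    for objection in ranked(proposal["objections"]):
        print("   objection:", objection["text"])
```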
My own attempt to do this job has led me to identify three systems which appear to work better than the rest, all producing the same results in most situations, though one produces slightly different results in cases where the number of players in a scenario is variable and the variation depends on whether they exist or not - where the results differ, it looks as if we have a range of answers that are all moral. That is something I need to explore and test further, but I no longer expect any help with it from other humans, because they're simply not awake. "I can tear your proposed method to pieces and show that it's wrong," they promise, and that gets my interest, because it's exactly what I'm looking for: sharp, analytical minds that can cut through to the errors and show them up. But no - they completely fail to deliver. Instead, I find they are the guardians of a mountain of garbage with a few gems hidden in it, which they can't sort into two piles: junk and jewels. "Utilitarianism is a pile of pants!" they say, because of the Mere Addition Paradox. I resolve that "paradox" for them, and what happens? Denial of mathematics, lots of down-voting of my comments, and up-votes for the irrational ones. Sadly, that disqualifies this site from serious discussion - it's clear that if any other intelligence has visited here before me, it didn't hang around. I will follow its lead and look elsewhere.