My thesis is that the true ontology - the correct set of concepts by means of which to understand the nature of reality - is several layers deeper than anything you can find in natural science or computer science. The attempt to describe reality entirely in terms of the existing concepts of those disciplines is necessarily incomplete, partly because it's all about X causing Y but not about what X and Y are. Consciousness gives us a glimpse of the "true nature" of at least one thing - itself, i.e. our own minds - and therefore a glimpse of the true ontological depths. But rationalists and materialists who define their rationalism and materialism as "explaining everything in terms of the existing concepts" create intellectual barriers within themselves to the sort of progress which could come from this reflective, phenomenological approach.
I'm not just talking about arcane metaphysical "aspects" of consciousness. I'm talking about something as basic as color. Color does not exist in standard physical ontology - "colors" are supposed to be wavelengths, but a length is not a color; this is an example of the redefining of concepts that I mentioned in the previous long comment. This is actually an enormous clue about the nature of reality - color exists, it's part of a conscious state; therefore, if the brain is the conscious thing, then part of the brain must be where the color is. But that sounds too weird, so people settle for the usual paradoxical crypto-dualism: the material world consists of colorless particles, the experience of color is in the brain somewhere, and yet nothing in the brain is actually "colored". This is a paradox, but it allows people to preserve the sense that they understand reality.
You asked for a simple exposition, but that's just not easy. Certainly color ought to be a very simple example: it's there in reality, but it's not there in physics. Still, let me try to express my thoughts about the actual nature of color... it's an elementary property instantiated in certain submanifolds of the total instantaneous phenomenal state of affairs existing at the object pole of a monadic intentionality, which is formally a slice through the worldline of a big coherent tensor factor in the Machian quantum geometry which is the brain's exact microphysical state... it's almost better just to say nothing, until I've written some treatise which explains all my terms and their motivations.
I only made my original comment because you spontaneously expressed perplexity at the nature of "sentience", and I wanted to warn you against the false solutions that most rationalist-materialists will adopt, under the self-generated pressure to explain everything using just the narrow ontological concepts they already have.
I think the confusion here stems from the fact that the word "color" has two different meanings.
When physicists talk about "color", what they mean is, "a specific wavelength of light". Let's call this "color-a".
When biologists or sociologists (or graphic artists) talk about "color", what they mean is, "a series of biochemical reactions in the brain which is usually the result of certain wavelengths of light hitting the retina". Let's call this "color-b".
Both "color-a" and "color-b" are real; the apparent paradox arises only when the two senses of the word are conflated.
Followup to: Nonsentient Optimizers
Why would you want to avoid creating a sentient AI? "Several reasons," I said. "Picking the simplest to explain first—I'm not ready to be a father."
So here is the strongest reason:
You can't unbirth a child.
I asked Robin Hanson what he would do with unlimited power. "Think very, very carefully about what to do next," Robin said. "Most likely the first task is who to get advice from. And then I listen to that advice."
Good advice, I suppose, if a little meta. On a similarly meta level, then, I recall two excellent advices for wielding too much power:
Do less; don't do everything that seems like a good idea, but only what you must do.
Fear the non-undoable.
Imagine that you knew the secrets of subjectivity and could create sentient AIs.
Suppose that you did create a sentient AI.
Suppose that this AI was lonely, and figured out how to hack the Internet as it then existed, and that the available hardware of the world was such that the AI created trillions of sentient kin—not copies, but differentiated into separate people.
Suppose that these AIs were not hostile to us, but content to earn their keep and pay for their living space.
Suppose that these AIs were emotional as well as sentient, capable of being happy or sad. And that these AIs were capable, indeed, of finding fulfillment in our world.
And suppose that, while these AIs did care for one another, and cared about themselves, and cared how they were treated in the eyes of society—
—these trillions of people also cared, very strongly, about making giant cheesecakes.
Now suppose that these AIs sued for legal rights before the Supreme Court and tried to register to vote.
Consider, I beg you, the full and awful depths of our moral dilemma.
Even if the few billions of Homo sapiens retained a position of superior military power and economic capital-holdings—even if we could manage to keep the new sentient AIs down—
—would we be right to do so? They'd be people, no less than us.
We, the original humans, would have become a numerically tiny minority. Would we be right to make of ourselves an aristocracy and impose apartheid on the Cheesers, even if we had the power?
Would we be right to go on trying to seize the destiny of the galaxy—to make of it a place of peace, freedom, art, aesthetics, individuality, empathy, and other components of humane value?
Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?
I can tell you my advice on how to resolve this horrible moral dilemma: Don't create trillions of new people that care about cheesecake.
Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions.
I've heard proposals to "uplift chimpanzees" by trying to mix in human genes to create "humanzees", and, leaving off all the other reasons why this proposal sends me screaming off into the night:
Imagine that the humanzees end up as people, but rather dull and stupid people. They have social emotions, the alpha's desire for status; but they don't have the sort of transpersonal moral concepts that humans evolved to deal with linguistic concepts. They have goals, but not ideals; they have allies, but not friends; they have chimpanzee drives coupled to a human's abstract intelligence.
When humanity gains a bit more knowledge, we understand that the humanzees want to continue as they are, and have a right to continue as they are, until the end of time. Because despite all the higher destinies we might have wished for them, the original human creators of the humanzees lacked the power and the wisdom to make humanzees who wanted to be anything better...
CREATING A NEW INTELLIGENT SPECIES IS A HUGE DAMN #(*%#!ING COMPLICATED RESPONSIBILITY.
I've lectured on the subtle art of not running away from scary, confusing, impossible-seeming problems like Friendly AI or the mystery of consciousness. You want to know how high a challenge has to be before I finally give up and flee screaming into the night? There it stands.
You can pawn off this problem on a superintelligence, but it has to be a nonsentient superintelligence. Otherwise: egg, meet chicken; chicken, meet egg.
If you create a sentient superintelligence—
It's not just the problem of creating one damaged soul. It's the problem of creating a really big citizen. What if the superintelligence is multithreaded a trillion times, and every thread weighs as much in the moral calculus (we would conclude upon reflection) as a human being? What if (we would conclude upon moral reflection) the superintelligence is a trillion times human size, and that's enough by itself to outweigh our species?
Creating a new intelligent species, and a new member of that species, especially a superintelligent member that might perhaps morally outweigh the whole of present-day humanity—
—delivers a gigantic kick to the world, which cannot be undone.
And if you choose the wrong shape for that mind, that is not so easily fixed—morally speaking—as a nonsentient program rewriting itself.
What you make nonsentient, can always be made sentient later; but you can't just unbirth a child.
Do less; fear the non-undoable. That's sometimes poor advice in general, but very important advice when you're working with an undersized decision process having an oversized impact. What a (nonsentient) Friendly superintelligence might be able to decide safely is another issue. But for myself and my own small wisdom, creating a sentient superintelligence to start with is far too large an impact on the world.
A nonsentient Friendly superintelligence is a more colorless act.
So that is the most important reason to avoid creating a sentient superintelligence to start with—though I have not exhausted the set.