What defines a human mind?  Or a sentient mind in general?

From a computational perspective, a human mind is a member of one particular class of complex programs within the much larger space of general intelligences, or minds in general.

The most succinct delineator of the human sub-category of minds is simply having a human ontology.  This entails information such as a large body of embodiment-derived knowledge we call common sense, one or more human languages, memories, beliefs, ideas, values, and so on.

Take a human infant; for a dramatic example, let us use a genetic clone of Einstein.  Raise this clone among wild animals and the developmental result is nothing remotely resembling Einstein; in fact it is not a human mind at all, but something much closer to a primate mind.  The brain is the hardware, the mind is the software, and the particular mind of Einstein was the unique result of a particular mental developmental history and observation sequence.

If the mind is substrate independent, this raises the question of to what extent it is also algorithm independent.  If an AGI has a full human ontology, on what basis can we say that it is not human?  If it can make the same inferences on the same knowledge base, understands one of our languages, and has similar memories, ideas, and values, in what way is it not human?

Substrate and algorithm independence show us that it doesn't really matter in the slightest *how* something thinks internally; all that matters is the end functional behavior, the end decisions.

Surely there are some classes of AGI designs that would exhibit thought patterns and behaviors well outside the human norm, but these crucially all involve changing some aspect of the knowledge base.  For example, AGIs based solely on reinforcement learning algorithms would appear to be incapable of abstract model-based value decisions.  This would show up as a glaring contrast between the AGI's decisions and its linguistically demonstrable understanding of terms such as 'value', 'good', 'moral', and so on.  Of course humans' actual decisions are often at odds with their stated values in a similar fashion, but most humans mostly make important decisions they believe are 'good' most of the time.

A reinforcement-learning-based AGI with no explicit connection between value concepts such as 'good' and its reward-maximizing utility function would not necessarily be inclined to make 'good' decisions.  It may even be completely aware of this feature of its design and be quite capable of verbalizing it.
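
To make that disconnect concrete, here is a minimal illustrative sketch (all names hypothetical, not drawn from any real system): the agent can verbalize a learned concept of 'good' from its knowledge base, yet its action selection consults only a separate reward model.

```python
# Hypothetical sketch: a pure reward-maximizer whose learned concept of
# 'good' sits in its ontology but never enters action selection.

class RewardOnlyAgent:
    def __init__(self, reward_model):
        self.reward_model = reward_model  # maps (state, action) -> expected reward
        self.ontology = {"good": "that which people approve of and benefit from"}

    def explain(self, concept):
        # The agent can verbalize 'good' from its learned ontology...
        return self.ontology.get(concept, "unknown concept")

    def act(self, state, actions):
        # ...but its decisions consult only the reward signal.
        return max(actions, key=lambda a: self.reward_model(state, a))
```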

But RL techniques are just one particular class of algorithms, and we can probably do much better with designs that can form model-based utility functions which actually incorporate high-order learned values encoded into the ontology itself.
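
As a contrast with the sketch above, here is the same kind of toy illustration (again, hypothetical names only) of a design whose utility function scores predicted outcomes against value concepts stored in the same learned ontology the agent uses for everything else.

```python
# Hypothetical sketch: utility is computed from learned value concepts
# applied to predicted outcomes, rather than from a raw reward channel.

class ModelBasedValueAgent:
    def __init__(self, world_model, value_concepts):
        self.world_model = world_model        # maps (state, action) -> predicted outcome
        self.value_concepts = value_concepts  # e.g. {"good": callable mapping outcome -> score}

    def utility(self, outcome):
        # High-order learned values in the ontology drive the evaluation.
        return sum(evaluate(outcome) for evaluate in self.value_concepts.values())

    def act(self, state, actions):
        return max(actions, key=lambda a: self.utility(self.world_model(state, a)))
```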

Such a design would make 'good' decisions, and if truly of human or surpassing intelligence, it would be fully capable of learning, refining, and articulating complex ethical/moral frameworks which in turn refine its internal concept of 'good'.  It would consistently do 'good' things, and it would be fully capable of explaining why they were good.
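
Continuing the hypothetical ModelBasedValueAgent sketch above, the refinement loop described here might amount to replacing a crude evaluator of 'good' with one articulated from a learned ethical framework:

```python
# Hypothetical continuation of the ModelBasedValueAgent sketch above.

def naive_good(outcome):
    # Initial, crude evaluator: 'good' just means 'no harm'.
    return 1.0 if outcome.get("harm", 0.0) == 0.0 else 0.0

def refined_good(outcome):
    # Later evaluator, derived from a learned ethical framework:
    # trades benefit off against harm instead of treating 'good' as binary.
    return outcome.get("benefit", 0.0) - 2.0 * outcome.get("harm", 0.0)

agent = ModelBasedValueAgent(
    world_model=lambda state, action: action,  # toy model: actions are outcome dicts
    value_concepts={"good": naive_good},
)
agent.value_concepts["good"] = refined_good    # the refinement step described above

best = agent.act(state=None, actions=[{"benefit": 1.0, "harm": 0.5},
                                      {"benefit": 0.8, "harm": 0.0}])
```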

Naturally the end values, and thus the decisions, of any such system would depend on what human knowledge it learned, what it read or absorbed and in what order, but is that really any different from the alternatives?

And how could we say that such a system would not be human?  How does one make a non-arbitrary division between a human-capable AGI and, say, a human upload?

Comments (6)

> How does one make a non-arbitrary division between a human-capable AGI and, say, a human upload?

All divisions are arbitrary. But there are some really good differences in things like past history, properties of mental algorithms, value system, and relationships to other humans that make it reasonable to clump human minds together, separately from AIs.

With enough work, these could presumably be overcome, with some sort of "AI child" project. But it would be a hard problem relative to making an AI that takes over the world.

Perhaps all divisions are arbitrary, but let's unpack those really good differences...

Any plausible AI design is going to have a learning period and thus a history, correct? Does 'humanness' require that your history involved growing up in a house with a yard and a dog?

Mental algorithm differences are largely irrelevant. Even when considering only algorithmic differences that actually result in functional behavior changes, there's a vast range in the internal algorithms different human brains use or can learn to use, and a wide spectrum of capabilities. It's also rather silly to define humanness around some performance barrier.

Value systems are important, sure, but crucially we actually want future AGIs to have human values! So if values are important for humanness, as you claim, this only supports the proposition.

Relationships to other humans: during training/learning/childhood the AGI is presumably going to be interacting with human adult caretakers/monitors. The current systems in progress, such as OpenCog, are certainly not trying to build an AGI in complete isolation from humans. Likewise, would a Homo sapiens child who grows up in complete isolation from other humans and learns only through computers and books not be human?

> Does 'humanness' require that your history involved growing up in a house with a yard and a dog?

So first, you have to understand that human definitions are fuzzy. That is, when you say "require," you are not going about this the right way. For this I recommend Yvain's post on disease.

As for the substance, I think "growing up" is pretty human. We tend to all follow a similar sort of development. And again, remember that definitions are fuzzy. Someone whose brain never develops after they're born isn't necessarily "not human"; they are just much farther from the human norm.

> Mental algorithm differences are largely irrelevant.

Insert Searle's Chinese room argument here.

> but crucially we actually want future AGIs to have human values!

SIAI doesn't, at least. When making an AI to self-improve to superintelligence, why make it so that it gets horny?

> would a Homo sapiens child who grows up in complete isolation from other humans and learns only through computers and books not be human?

Didn't you talk about this in your post, with the child raised by wolves? Relationships with humans are vital for certain sorts of brain development, so an isolated child is much farther from the human norm.

> but crucially we actually want future AGIs to have human values!

> SIAI doesn't, at least. When making an AI to self-improve to superintelligence, why make it so that it gets horny?

Exactly human values, not analogous to human values. So if humans value having sex but don't value FAI having sex, then the FAI will value humans having sex but not value having sex itself.

> Feasibility of Creating Non-Human or Non-Sentient SuperIntelligence

If it is a superintelligence then it is by definition non-human.

Perhaps post-human or super-human, but not really non-human, at least by (my) conceptual definition. An alien society could have its own superintelligences which are more powerful instances of their alien minds, but still within that cluster of mindspace. Regardless, your definition at least warrants a change of title.