jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 Post author: lukeprog 29 January 2011 02:52AM


Comment author: jacob_cannell 30 January 2011 02:15:12AM 0 points [-]

That is, the task is not one we program the AI to accomplish - instead, we train the AI to accomplish it. And, most importantly, we train the AI to ask for further clarification in ambiguous cases

This is the straightforward approach.

Once you have an AGI that has the cognitive capability and learning capacity of a human infant brain, you teach it everything else in human language - right/wrong, ethics/morality, etc.

Programming languages are precise and well suited for creating the architecture itself, but human languages are naturally more effective for conveying human knowledge.

Comment author: Perplexed 30 January 2011 02:36:26AM 1 point [-]

I tend to agree that we need a natural language interface to the AI. But it is far easier to create automatic proofs of program correctness when the really important stuff (like ethics) is presented in a formal language equipped with a deductive system.

There is something to be said for treating all the natural language input as if it were testimony from unreliable witnesses - suitable, perhaps, for locating hypotheses, but not really suitable as strong evidence for accepting the hypotheses.
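[Editorial illustration] The advantage Perplexed is pointing at can be made concrete with a toy sketch: when rules live in a formal language with a deductive system, "what follows" is an explicit, mechanically checkable fixpoint computation rather than an interpretation of prose. The rules and predicates below are entirely hypothetical, invented for illustration.

```python
# Minimal forward-chaining sketch (hypothetical toy rules): each rule is
# (set_of_premises, conclusion). Inference repeatedly fires any rule whose
# premises are all known facts, until nothing new can be derived.

rules = [
    ({"causes_harm(act)"}, "forbidden(act)"),
    ({"breaks_promise(act)"}, "causes_harm(act)"),
]
facts = {"breaks_promise(act)"}

changed = True
while changed:  # iterate to a fixpoint; terminates since facts only grow
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
# facts now contains the derived conclusions, e.g. "forbidden(act)"
```

Because every derivation step is an explicit rule application, the whole trace can be checked (or proved correct) mechanically, which is much harder to arrange for conclusions drawn from natural-language testimony.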

Comment author: jacob_cannell 30 January 2011 02:42:38AM 0 points [-]

But it is far easier to create automatic proofs of program correctness

I'm not sure how this applies - can you formally prove the correctness of a probabilistic belief network? Is that even a valid concept?

I can understand how you can prove a formal deterministic circuit or the algorithms underlying the belief network and learning systems, but the data values?

Comment author: Perplexed 30 January 2011 03:41:34AM 1 point [-]

Agreed. That is why I suggest that the really important stuff - meta-ethics, epistemology, etc. - be represented in some other way than by 'neural' networks. Something formal and symbolic, rather than quasi-analog. All the stuff which we (and the AI) need to be absolutely certain doesn't change meaning when the AI "rewrites its own code".

Comment author: jacob_cannell 30 January 2011 04:05:37AM *  0 points [-]

By formal, I assume you mean math/code.

The really important stuff isn't a special category of knowledge. It is all connected - a tangled web of interconnected complex symbolic concepts for which human language is a natural representation.

What is the precise mathematical definition of ethics? If you really think about what it would take to describe that precisely, you would need to describe humans, civilization, goals, brains, and a huge set of other concepts.

In essence you would need to describe an approximation of our world. You would need to describe a belief/neural/statistical inference network that represents that word internally as a complex association between other concepts, one that eventually grounds out in sensory predictions about the world.

So this problem - that human language concepts are far too complex and unwieldy for formal verification - is not a problem with human language itself that can be fixed by using other language choices. It reflects a problem with the inherent massive complexity of the world itself, complexity that human language and brain-like systems evolved to handle.

Comment author: Perplexed 30 January 2011 04:48:49AM 0 points [-]

So this problem - that human language concepts are far too complex and unwieldy for formal verification - is not a problem with human language itself that can be fixed by using other language choices. It reflects a problem with the inherent massive complexity of the world itself, complexity that human language and brain-like systems evolved to handle.

These folks (the Cyc project) seem to agree with you about the massive complexity of the world, but seem to disagree with you that natural language is adequate for reliable machine-based reasoning about that world.

As for the rest of it, we seem to be coming from two different eras of AI research as well as different application areas. My AI training took place back around 1980 and my research involved automated proofs of program correctness. I was already out of the field and working on totally different stuff when neural nets became 'hot'. I know next to nothing about modern machine learning.

Comment author: jacob_cannell 30 January 2011 09:42:50AM 0 points [-]

I read about CYC a while back - from what I recall/gather it is a massive hand-built database of little natural language 'facts'.

Some of the new stuff they are working on with search looks kinda interesting, but in general I don't see this as a viable approach to AGI. A big syntactic database isn't really knowledge - it needs to be grounded in a massive sub-symbolic learning system to get the semantics part.

On the other hand, specialized languages for AGI's? Sure. But they will need to learn human languages first to be of practical value.

Comment author: Perplexed 30 January 2011 02:21:53PM *  2 points [-]

Blind men looking at elephants.

You look at CYC and see a massive hand-built database of facts.

I look and see a smaller (but still large) hand-built ontology of concepts.

You, probably because you have worked in computer vision or pattern recognition, notice that the database needs to be grounded in some kind of perception machinery to get semantics.

I, probably because I have worked in logic and theorem proving, wonder what axioms and rules of inference exist to efficiently provide inference and planning based upon this ontology.

Comment author: jacob_cannell 31 January 2011 12:33:21AM 0 points [-]

Blind men looking at elephants.

One of my favorite analogies, and I'm fond of the Jain multi-viewpoint approach.

As for the logic/inference angle, I suspect that this type of database underestimates the complexity of actual neural concepts - as most of the associations are subconscious and deeply embedded in the network.

We use 'connotation' to describe part of this embedding concept, but I see it as even deeper than that. A full description of even a simple concept may be on the order of billions of such associations. If this is true, then a CYC-like approach is far from appropriately scalable.

Comment author: Perplexed 31 January 2011 12:45:49AM 2 points [-]

It appears that you doubt that an AI whose ontology is simpler and cleaner than that of a human can possibly be intellectually more powerful than a human.

Comment author: Vladimir_Nesov 30 January 2011 03:53:30AM 0 points [-]

To get to that point we have to start from the right meaning to begin with, and care about preserving it accurately, and Jacob doesn't agree those steps are important or particularly hard.

Comment author: jacob_cannell 30 January 2011 04:21:55AM -1 points [-]

Not quite.

As for the start with the right meaning part, I think it is extremely hard to 'solve' morality in the way typically meant here with CEV or what not.

I don't think that we need (or will) wait to solve that problem before we build AGI, any more or less than we need to solve it for having children and creating a new generation of humans.

If we can build AGI somewhat better than us according to our current moral criteria, they can build an even better successive generation, and so on - a benevolence explosion.

As for the second part about preserving it accurately, I think that ethics/morality is complex enough that it can only be succinctly expressed in symbolic associative human languages. An AGI could learn how to model (and value) the preferences of others in much the same way humans do.

Comment author: wedrifid 30 January 2011 04:27:22AM 3 points [-]

I don't think that we need (or will) wait to solve that problem before we build AGI, any more or less than we need to solve it for having children and creating a new generation of humans.

If we can build AGI somewhat better than us according to our current moral criteria, they can build an even better successive generation, and so on - a benevolence explosion.

Someone help me out. What is the right post to link to that goes into the details of why I want to scream "No! No! No! We're all going to die!" in response to this?

Comment author: Vladimir_Nesov 30 January 2011 09:19:17AM 0 points [-]

The Coming of Age sequence examines the realization of this error from Eliezer's standpoint, and has further links.

Comment author: jacob_cannell 30 January 2011 10:16:54AM 0 points [-]

In which post? I'm not finding discussion about the supposed danger of improved humanish AGI.

Comment author: Vladimir_Nesov 30 January 2011 10:22:32AM *  -1 points [-]

That Tiny Note of Discord, say. (Not on "humanish" AGI, but eventually exploding AGI.)

Comment author: nshepperd 30 January 2011 06:05:05AM *  2 points [-]

Why would an AI which optimises for one thing create another AI that optimises for something else? Not every change is an improvement, but every improvement is necessarily a change. Building an AI with a different utility function is not going to satisfy the first AI's utility function! So whatever AI the first one builds is necessarily going to either have the same utility function (in which case the first AI is working correctly), or have a different one (which is a sign of malfunction, and given the complexity of morality, probably a fatal one).
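[Editorial illustration] The argument in the paragraph above can be sketched in a few lines. The world states, utility values, and the assumption that a successor perfectly optimises its own utility function are all hypothetical simplifications for illustration.

```python
# Toy sketch: the first AI scores candidate successor designs by the
# outcome *under its own* utility function, assuming each successor
# perfectly optimises whatever utility function it is given.

WORLD_STATES = ["paperclips", "flourishing", "mixed"]

def u_original(state):   # the first AI's utility function (hypothetical values)
    return {"flourishing": 10, "mixed": 4, "paperclips": 0}[state]

def u_different(state):  # a "changed" utility function
    return {"paperclips": 10, "mixed": 4, "flourishing": 0}[state]

def outcome(successor_utility):
    # Stand-in for "the world the successor steers toward".
    return max(WORLD_STATES, key=successor_utility)

# Evaluate each candidate successor with the ORIGINAL utility function:
scores = {name: u_original(outcome(u))
          for name, u in [("same", u_original), ("different", u_different)]}
```

Here the successor with the same utility function scores 10 while the "different" successor scores 0, so a correctly functioning first AI never prefers a successor with a changed utility function - which is exactly why such a change signals malfunction.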

It's not possible to create an AGI that is "somewhat better than us" in the sense that it has a better utility function. To the extent that we have a utility function at all, it would refer to the abstract computation called "morality", which "better" is defined by. The most moral AI we could create is therefore one with precisely that utility function. The problem is that we don't exactly know what our utility function is (hence CEV).

There is a sense in which a Friendly AGI could be said to be "better than us", in that a well-designed one would not suffer from akrasia and whatever other biases prevent us from actually realizing our utility function.

Comment author: Stuart_Armstrong 30 January 2011 06:17:57PM 2 points [-]

AIs without utility functions, but with some other motivational structure, will tend to self-improve into utility-function AIs. Utility-function AIs seem more stable under self-improvement, but there are many reasons such an AI might want to change its utility function (e.g. speed of access, multi-agent situations).

Comment author: Oligopsony 30 January 2011 06:53:26PM 0 points [-]

Could you clarify what you mean by an "other motivational structure?" Something with preference non-transitivity?

Comment author: Stuart_Armstrong 30 January 2011 07:47:18PM 1 point [-]

Comment author: Perplexed 30 January 2011 05:04:21PM 2 points [-]

Why would an AI which optimises for one thing create another AI that optimises for something else?

It wouldn't if it initially considered itself to be the only agent in the universe. But if it recognizes the existence of other agents and the impact of other agents' decisions on its own utility, then there are many possibilities:

  • The new AI could be created as a joint venture of two existing agents.
  • The new AI could be built because the builder was compensated for doing so.
  • The new AI could be built because the builder was threatened into doing so.

Building an AI with a different utility function is not going to satisfy the first AI's utility function!

This may seem intuitively obvious, but it is actually often false in a multi-agent environment.
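[Editorial illustration] Perplexed's point can be put in numbers. The payoffs below are hypothetical, chosen only to show that side payments can flip the decision: once compensation is on the table, building an AI with someone else's utility function can be the option that maximises the builder's own utility.

```python
# Toy payoffs (hypothetical) for the builder, measured in its OWN utility:
refuse = 5            # status quo: no new AI is built
build_for_other = 2   # the new AI pursues the other agent's goals
compensation = 6      # payment the other agent offers for building it

options = {
    "refuse": refuse,
    "build_and_get_paid": build_for_other + compensation,
}
best = max(options, key=options.get)  # the builder's rational choice
```

With these numbers the builder nets 8 by building and being paid versus 5 by refusing, so creating an optimiser for someone else's utility function is the utility-maximising move - the single-agent intuition fails exactly here.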