DSimon comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog 29 January 2011 02:52AM


Comment author: Will_Newsome 29 January 2011 03:51:06AM *  -1 points [-]

Similarly, a paperclip-maximizer might well be interested in figuring out why its utility function is what it is, so that it may better understand the world it lives in... but that's not going to change its overriding interest in making paperclips over all else.

Right, but as far as I can tell without having put lots of hours into trying to solve the problem of clippyAI, it's really damn hard to precisely specify a paperclip. (There are things that are easier to specify that this argument doesn't apply to and that are more plausibly dangerous, like hyperintelligent theorem provers...) Thus in trying to figure out what its utility function actually is (like what humans are doing as they introspect more) it could discover that the only reason its goal is (something mysterious like) 'maximize paperclips' is because 'maximize paperclips' was how humans were (probabilistically inaccurately) expressing their preferences in some limited domain. This is related to the theme Eliezer quite elegantly goes on about in Creating Friendly AI and that he for some reason barely mentioned in CEV, which is that the AI should look at its own source code as evidence of what its creators were trying to get at, and update its imperfect source code accordingly. Admittedly, most uFAIs probably won't be that sophisticated, and so worrying about AI-related existential risks is still definitely a big deal. We just might want to be a little more cognizant of potential motivations for people who disagree with what has recently been dubbed SIAI's 'scary idea'.

Comment author: DSimon 29 January 2011 03:59:42AM 8 points [-]

Thus in trying to figure out what its utility function actually is (like what humans are doing as they introspect more) it could discover that the only reason its goal is (something mysterious like) 'maximize paperclips' is because 'maximize paperclips' was how humans were (probabilistically inaccurately) expressing their preferences in some limited domain.

Hm. I suppose that's possible, though it would require that the AI be given a utility function that's specifically meant to be amenable to that kind of revision.

Under the most straightforward (i.e. not CEV-style) utility function design, fuzziness in its definition of "paperclip" would just drive the paperclip-maximizer to choose the possible definition that yields the highest utility score.

To pick a different silly example, a dog-maximizer with a utility function based on the number of dogs in the universe would simply prefer to tile the solar system with tiny Chihuahuas rather than Great Danes; the whole range of "dog" definitions fits the function, so it just chooses the one that is most convenient for maximum utility. It wouldn't try to resolve the ambiguity by deciding which definition is more in line with the designer's ideals, unless "consider the designer's ideals" were designed into the system from the start.
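DSimon's dog-maximizer can be sketched as a toy program. Every name and cost below is invented purely for illustration; the point is only that a bare argmax over admissible definitions has no term asking which reading the designers intended:

```python
# Toy sketch (all numbers invented): a maximizer whose utility counts
# "dogs" under *some* admissible definition adopts whichever reading
# yields the most dogs per unit of resources.

RESOURCES = 1000.0  # arbitrary budget of matter/energy

# Each admissible reading of "dog" has a production cost per instance.
definitions = {
    "Great Dane": 50.0,
    "Chihuahua": 1.0,
    "dog-shaped statue": 0.1,  # qualifies only under a loose enough definition
}

def utility(definition: str) -> float:
    """Dogs produced under a given reading of the goal predicate."""
    return RESOURCES / definitions[definition]

# The optimizer simply takes the argmax -- there is no term that asks
# "which reading did my designers intend?"
best = max(definitions, key=utility)
print(best)  # the cheapest-to-satisfy definition wins
```

Nothing in this loop ever consults the designer's ideals; adding that consideration would require an extra term in `utility` itself, which is exactly DSimon's point.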

Comment author: sark 29 January 2011 11:05:13AM 0 points [-]

Is designing "consider the designer's ideals" into an AI difficult?

Comment author: Vladimir_Nesov 29 January 2011 12:02:16PM *  3 points [-]

Currently expected to be difficult, since we don't know of an easy way to do so. That it'll turn out to be easy (in hindsight) is not totally out of the question.

Comment author: Perplexed 29 January 2011 09:44:11PM 1 point [-]

Is designing "consider the designer's ideals" into an AI difficult?

Currently expected to be difficult, since we don't know of an easy way to do so.

Has anyone considered approaching this problem in the same way we might approach "read the user's handwriting"? That is, the task is not one we program the AI to accomplish - instead, we train the AI to accomplish it. And, most importantly, we train the AI to ask for further clarification in ambiguous cases.

Comment author: Vladimir_Nesov 29 January 2011 10:03:49PM 2 points [-]

Mirrors and Paintings (yes, you want to point your program at the world and have it figure out what you referred to), The Hidden Complexity of Wishes (if you need to answer AI's question or give it instructions, you're doing something wrong and it won't work).

Comment author: Perplexed 30 January 2011 01:24:48AM *  2 points [-]

I have to admit, as someone who has worked in software testing, I find it difficult to take the suggestion (non-destructive full-brain scan) in the first link very seriously. How, exactly, do I become convinced that the AI can come to know more about what I want by scanning me than I can know by introspection? How can I (or it) even do a comparison between the two without it asking me questions?

But then we get down to doing the comparison. The AI informs me that what I really want is to kill my father and sleep with my mother. I deny this. Do we take this as evidence that the AI really does know me better than I know myself, or as a symptom of a bug?

I would argue that if you don't need to answer the AI's questions or give it instructions, you're doing something wrong and it won't work. By definition. At least for the first ten thousand scans or so. And even then there will remain questions on which the AI and introspection would deliver different answers. Questions with hidden complexity. I just don't see how anyone would trust a CEV extrapolated from brain scans until we had decades of experience suggesting that scanning and modeling yields better results than introspection.

Comment author: jacob_cannell 30 January 2011 02:22:21AM 0 points [-]

I would argue that if you don't need to answer the AI's questions or give it instructions, you're doing something wrong and it won't work. By definition.

Agreed. And any useful AI will have to understand human language to do or learn much of anything of value.

The detailed analysis of full brain scanning tech I've seen puts it far into the future, well beyond human-level AGI.

Comment author: Vladimir_Nesov 30 January 2011 01:53:39AM *  0 points [-]

And even then there will remain questions on which the AI and introspection would deliver different answers.

You have to make sure the AI predictably gives a better answer even on questions where you disagree. And there will be questions which can't even be asked of a human.

Comment author: Vladimir_Nesov 30 January 2011 01:48:41AM *  0 points [-]

I have to admit, as someone who has worked in software testing, I find it difficult to take the suggestion (non-destructive full-brain scan) in the first link very seriously. How, exactly, do I become convinced that the AI can come to know more about what I want by scanning me than I can know by introspection? How can I (or it) even do a comparison between the two without it asking me questions?

Irrelevant. Assume you magically have a perfect working simulation of yourself.

Comment author: Perplexed 30 January 2011 02:23:26AM 1 point [-]

Assume you magically have a perfect working simulation of yourself.

Why would I want to do that? I.e. how would making that assumption lead me to take Eliezer's suggestion more seriously? My usual practice is to take things less seriously when magic is involved.

And how does this assumption interact with your other comment stating that I have to make sure the AI is somehow even better than myself if there is any difference between simulation and reality? Haven't you just asked me to assume that there are no differences?

Sorry, I simply don't understand your responses, which suggests to me that you did not understand my comment. Did you notice, in my preamble, that I mentioned software testing? Perhaps my point may be clearer to you if you keep this preamble in mind when formulating your responses.

Comment author: Vladimir_Nesov 30 January 2011 02:30:42AM 0 points [-]

Why would I want to do that?

Because that's a conceptually straightforward assumption that we can safely make in a philosophical argument.

The upload is not the AI (and Eliezer's post doesn't refer to uploads IIRC, but for the sake of the argument assume they are available as raw material). You make the AI correct on strong theoretical grounds, and only run tests to check that the theoretical assumptions hold in situations where checking is possible, not in every situation.

Did you notice, in my preamble, that I mentioned software testing?

What am I supposed to make of that?

Comment author: jacob_cannell 30 January 2011 02:18:50AM *  0 points [-]

Irrelevant. Assume you magically have a perfect working simulation of yourself.

Relevant - Can we just assume you magically have a friendly AI then?

If the plan for creating a friendly AI depends on a non-destructive full-brain scan already being available, the odds of achieving friendly AI before other forms of AI vanish to near zero.

Comment author: Vladimir_Nesov 30 January 2011 02:23:02AM 0 points [-]

One step at a time, my good sir! Reducing the philosophical and mathematical problem of Friendly AI to the technological problem of uploading would be an astonishing breakthrough quite by itself.

Comment author: jacob_cannell 30 January 2011 02:15:12AM 0 points [-]

That is, the task is not one we program the AI to accomplish - instead, we train the AI to accomplish it. And, most importantly, we train the AI to ask for further clarification in ambiguous cases.

This is the straightforward approach.

Once you have an AGI that has the cognitive capability and learning capacity of a human infant brain, you teach it everything else in human language - right/wrong, ethics/morality, etc.

Programming languages are precise and well suited for creating the architecture itself, but human languages are naturally more effective for conveying human knowledge.

Comment author: Perplexed 30 January 2011 02:36:26AM 1 point [-]

I tend to agree that we need a natural language interface to the AI. But it is far easier to create automatic proofs of program correctness when the really important stuff (like ethics) is presented in a formal language equipped with a deductive system.

There is something to be said for treating all the natural language input as if it were testimony from unreliable witnesses - suitable, perhaps, for locating hypotheses, but not really suitable as strong evidence for accepting the hypotheses.
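Perplexed's "unreliable witness" treatment can be sketched with Bayes' rule. The reliability numbers below are invented; the point is that a bounded likelihood ratio can raise a hypothesis to attention (locate it) without ever confirming it outright:

```python
# Sketch (invented numbers): treat a natural-language statement as
# testimony from a witness who is right about 80% of the time. The
# bounded likelihood ratio caps how far one statement can move us.

def posterior(prior: float, p_say_given_h: float, p_say_given_not_h: float) -> float:
    """Bayes' rule for a binary hypothesis given one piece of testimony."""
    num = p_say_given_h * prior
    return num / (num + p_say_given_not_h * (1 - prior))

# Witness asserts X when X holds 80% of the time, and when X doesn't
# hold 20% of the time (likelihood ratio 4:1).
p = 0.01  # prior on the hypothesis the statement picks out
p = posterior(p, 0.8, 0.2)
print(round(p, 3))  # prints 0.039 -- enough to *locate* the hypothesis

# Naively iterating the update drives p toward 1, which is exactly why
# correlated testimony should not be stacked as independent evidence.
for _ in range(5):
    p = posterior(p, 0.8, 0.2)
print(round(p, 3))  # prints 0.976
```

On this picture, natural-language input earns a hypothesis a closer look, but acceptance should rest on evidence with a much larger (and independently checked) likelihood ratio, such as a formal proof.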

Comment author: jacob_cannell 30 January 2011 02:42:38AM 0 points [-]

But it is far easier to create automatic proofs of program correctness

I'm not sure how this applies - can you formally prove the correctness of a probabilistic belief network? Is that even a valid concept?

I can understand how you can prove a formal deterministic circuit or the algorithms underlying the belief network and learning systems, but the data values?
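One way to see the distinction jacob_cannell is drawing: invariants of the inference *algorithm* can be checked for all inputs, while the learned *values* cannot be certified the same way. A minimal sketch, with `normalize` standing in for real inference code:

```python
# Sketch: we can verify algorithmic properties (outputs are valid
# probability distributions) for arbitrary inputs, but no such check
# tells us whether learned weights encode the *intended* concept.

import random

def normalize(weights):
    """Turn arbitrary positive weights into a probability distribution."""
    total = sum(weights)
    return [w / total for w in weights]

# Algorithmic property we CAN check on any input: output sums to 1.
for _ in range(100):
    w = [random.random() + 1e-9 for _ in range(5)]
    dist = normalize(w)
    assert abs(sum(dist) - 1.0) < 1e-9
    assert all(p >= 0 for p in dist)

# What this cannot establish: whether the particular weights a learning
# system arrived at capture "ethics" as intended. That is a property of
# the data values, not of the code being exercised here.
```

This mirrors the thread's split: proofs and property checks cover the deterministic machinery, while the meaning of the learned parameters remains outside their reach.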

Comment author: Perplexed 30 January 2011 03:41:34AM 1 point [-]

Agreed. That is why I suggest that the really important stuff - meta-ethics, epistemology, etc. - be represented in some other way than by 'neural' networks. Something formal and symbolic, rather than quasi-analog. All the stuff which we (and the AI) need to be absolutely certain doesn't change meaning when the AI "rewrites its own code".

Comment author: jacob_cannell 30 January 2011 04:05:37AM *  0 points [-]

By formal, I assume you mean math/code.

The really important stuff isn't a special category of knowledge. It is all connected - a tangled web of interconnected complex symbolic concepts for which human language is a natural representation.

What is the precise mathematical definition of ethics? If you really think about what it would take to describe that precisely, you would need to describe humans, civilization, goals, brains, and a huge set of other concepts.

In essence you would need to describe an approximation of our world. You would need to describe a belief/neural/statistical inference network that represented that word internally as a complex association between other concepts that eventually grounds out in sensory predictions about the world.

So this problem - that human language concepts are far too complex and unwieldy for formal verification - is not a problem with human language itself that can be fixed by making different language choices. It reflects the inherent massive complexity of the world itself, complexity that human language and brain-like systems evolved to handle.

Comment author: Vladimir_Nesov 30 January 2011 03:53:30AM 0 points [-]

To get to that point we have to start from the right meaning to begin with, and care about preserving it accurately, and Jacob doesn't agree that those steps are important or particularly hard.

Comment author: Will_Newsome 29 January 2011 09:19:23PM -1 points [-]

Currently expected to be difficult, since we don't know of an easy way to do so. That it'll turn out to be easy (in hindsight) is not totally out of the question.

There are some promising lines of attack (grounded in decision theory) that might take only a few years of research. We'll see where they lead. Other open problems in FAI might start looking very solvable if we start making progress on this front.

Comment author: Vladimir_Nesov 29 January 2011 09:28:16PM 2 points [-]

Show me.

Comment author: Will_Newsome 29 January 2011 09:45:15PM -1 points [-]

PM'd.

Comment author: wedrifid 29 January 2011 11:49:59AM 0 points [-]

Yes. :)