Comment author: Stuart_Armstrong 19 September 2016 10:59:28AM 1 point

An AI that was programmed to attempt to fill in gaps in knowledge it detected, halt if it found conflicts, etc. would not behave the way you describe.

We don't know how to program a foolproof method of "filling in the gaps" (and a lot of "filling in the gaps" would be a creative process rather than a mere learning one, such as figuring out how to extend natural language concepts to new areas).

And it helps if people speak about this problem in terms of coding, rather than high-level concepts, because all the specific examples people have ever come up with for coding learning have had these kinds of flaws. Learning natural language is not some sort of natural category.

Coding learning with some imperfections might be ok if the AI is motivated to merely learn, but is positively pernicious if the AI has other motivations as to what to do with that learning (see my post here for a way of getting around it: https://agentfoundations.org/item?id=947)

Comment author: TheAncientGeek 19 September 2016 01:02:53PM * -1 points

We don't know how to program a foolproof method of "filling in the gaps" (and a lot of "filling in the gaps" would be a creative process rather than a mere learning one, such as figuring out how to extend natural language concepts to new areas).

Inasmuch as that is relying on the word "foolproof", it is proving much too much, since we barely have foolproof methods to do anything.

The thing is that your case needs to be argued from consistent and fair premises, where "fair" means that your opponents are allowed to use them.

If you are assuming that an AI has sufficiently advanced linguistic abilities to talk its way out of a box, then your opponents are entitled to assume that the same level of ability could be applied to understanding verbally specified goals.

If you are assuming that it is a limitation of ability that is preventing the AI from understanding what "chocolate" means, then your opponents are entitled to assume it is weak enough to be boxable.

And it helps if people speak about this problem in terms of coding, rather than high-level concepts, because all the specific examples people have ever come up with for coding learning have had these kinds of flaws.

What specific examples? Loosemore's counterargument is in terms of coding. And I notice you don't avoid NL arguments yourself.

Coding learning with some imperfections might be ok if the AI is motivated to merely learn, but is positively pernicious if the AI has other motivations as to what to do with that learning (see my post here for a way of getting around it: https://agentfoundations.org/item?id=947)

I rather doubt that the combination of a learning goal, plus some other goal, plus imperfect ability is all that deadly, since we already have AIs that are like that, and they haven't killed us. I think you must be making some other assumptions, for instance that the AI is in some sort of "God" role, with an open-ended remit to improve human life.

Comment author: Stuart_Armstrong 19 September 2016 10:52:50AM 1 point

That's just rephrasing one natural language requirement in terms of another. Unless these concepts can be phrased other than in natural language (but then those other phrasings may be susceptible to manipulation).

Comment author: TheAncientGeek 19 September 2016 11:51:19AM * 0 points

Another way of putting the objection is "don't design a system whose goal system is walled off from its updateable knowledge base". Loosemore's argument is that the integrated design is in fact the natural one, and so the "general counterargument" isn't general.

It would be like designing a car whose wheels fall off when you press a button on the dashboard: 1) it's possible to build it that way, 2) there's no motivation to build it that way, and 3) it's more effort to build it that way.
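
A minimal sketch of the contrast in Python, with entirely hypothetical names (an illustration of the design point, not anyone's actual AI architecture): the "walled off" agent snapshots its goal definitions at build time, while the integrated agent reads them from the same knowledge base its learning updates.

    class KnowledgeBase:
        """Updateable store of concept definitions that learning revises."""
        def __init__(self):
            self.concepts = {"chocolate": "initial, incomplete definition"}

        def update(self, concept, better_definition):
            self.concepts[concept] = better_definition

    class WalledOffAgent:
        """Goal system frozen at build time: learning never reaches it."""
        def __init__(self, kb):
            self.goal_definitions = dict(kb.concepts)  # private snapshot, never updated

        def goal_concept(self, name):
            return self.goal_definitions[name]

    class IntegratedAgent:
        """Goal system that reads the same knowledge base learning updates."""
        def __init__(self, kb):
            self.kb = kb  # shared reference, not a copy

        def goal_concept(self, name):
            return self.kb.concepts[name]

    kb = KnowledgeBase()
    walled, integrated = WalledOffAgent(kb), IntegratedAgent(kb)
    kb.update("chocolate", "refined definition learned later")
    print(walled.goal_concept("chocolate"))      # stale snapshot
    print(integrated.goal_concept("chocolate"))  # tracks what was learned

On this sketch, the walled-off design is the one that takes a deliberate extra step (the snapshot), which is the point of the car analogy.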

Comment author: TheAncientGeek 16 September 2016 03:25:22PM * 1 point

Are you saying the AI will rewrite its goals to make them easier, or will just not be motivated to fill in missing info?

In the first case, why won't it go the whole hog and wirehead? Which is to say that any AI which does anything except wirehead will be resistant to that behaviour: it is something that needs to be solved, and which we can assume has been solved in a sensible AI design.

When we programmed it to "create chocolate bars, here's an incomplete definition D", what we really did was program it to find the easiest thing to create that is compatible with D, and designate them "chocolate bars".

If you programme it with incomplete info, and without any goal to fill in the gaps, then it will have the behaviour you mention...but I'm not seeing the generality. There are many other ways to programme it.
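
A toy Python sketch of the behaviour being conceded here (every candidate and property is made up for illustration): an optimizer handed an incomplete definition D, with no goal of filling the gaps, just picks the cheapest thing that satisfies D.

    def satisfies_D(candidate):
        """Incomplete definition D: only the properties we remembered to list."""
        return candidate["brown"] and candidate["bar_shaped"]

    candidates = [
        {"name": "chocolate bar",       "brown": True,  "bar_shaped": True,  "cost": 5.0},
        {"name": "brown plastic brick", "brown": True,  "bar_shaped": True,  "cost": 0.1},
        {"name": "vanilla ice cream",   "brown": False, "bar_shaped": False, "cost": 3.0},
    ]

    # "Find the easiest thing to create that is compatible with D":
    easiest = min((c for c in candidates if satisfies_D(c)), key=lambda c: c["cost"])
    print(easiest["name"])  # brown plastic brick, not what we meant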

"if the AI is so smart, why would it do stuff we didn't mean?" and "why don't we just make it understand natural language and give it instructions in English?"

An AI that was programmed to attempt to fill in gaps in knowledge it detected, halt if it found conflicts, etc. would not behave the way you describe. Consider the objection as actually saying:

"Why has the AI been programmed so as to have selective areas of ignorance and stupidity, which are immune from the learning abilities it displays elsewhere?"

PS This has been discussed before, see

http://lesswrong.com/lw/m5c/debunking_fallacies_in_the_theory_of_ai_motivation/

and

http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/

see particularly

http://lesswrong.com/lw/m5c/debunking_fallacies_in_the_theory_of_ai_motivation/ccpn

Comment author: reguru 13 September 2016 11:20:58AM 0 points

The argument is that everything is a map, including anything written here, in quotes or not. It's the written language and so forth, however many layers deep the maps go.

By excluding all maps in direct experience you uncover the territory. Which is you. Which is arational. But only by direct experience.

Comment author: TheAncientGeek 13 September 2016 12:20:46PM 1 point

The second sentence contradicts the first. Either there is a territory to be uncovered, or it is not the case that everything is a map.

Comment author: reguru 13 September 2016 11:31:46AM 0 points

Regardless of how accurate or inaccurate a map is, it is still a map. But some maps are more or less accurate than other maps. That's fine. Those are human projections.

I argue that the territory is arational, which means any representation in relation to the territory is all the same.

Comment author: TheAncientGeek 13 September 2016 12:19:49PM 1 point

The second sentence contradicts the first.

Comment author: ChristianKl 12 September 2016 08:53:03PM 0 points

Arational is independent of reasoning and understanding. It is what it is; any map is not the arational.

Are you advocating Cartesian dualism?

That's a logical conclusion, a map. You haven't seen your own neurons and even if you could in this very moment, you couldn't be the neurons which you are seeing.

You confuse ontology and epistemology. It might not be possible for me to prove that I'm made up of neurons, but that doesn't mean that I'm not made up of neurons. You can't go from one to the other easily.

I don't understand again. I mean that language is a map; all communication, every letter, every word is a human projection. I t ' s a h u m a n p r o j e c t i o n a n d n o t t r u e .

You seem to have an understanding of what "true" is supposed to mean that you unquestioningly accept: a concept that you learned as a child, and one that now gets you into trouble because it doesn't match the complex reality. The problem is the concept that you have in your head.

The fact that the concepts inside your head don't make sense doesn't mean that other people can't reason and don't mean something useful when they speak of truth.

But what's the difference between "is x" and "references x"? Isn't it just a shortcut to say "is x"?

Reference is a different concept than identity and "is". It's a concept that you currently don't seem to understand.

In computer programming it's different to store a pointer than to store a variable that contains its own data. Can you follow the analogy in the realm of computers?
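
One way to make that distinction concrete in Python, where plain assignment binds a reference to the same object and a deep copy holds its own data:

    import copy

    original = {"colour": "brown"}

    alias = original                       # stores a reference to the same object
    independent = copy.deepcopy(original)  # stores its own data

    original["colour"] = "white"
    print(alias["colour"])        # "white": the reference tracks the object
    print(independent["colour"])  # "brown": the copy has its own data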

Comment author: TheAncientGeek 13 September 2016 11:15:19AM 0 points

Are you advocating Cartesian dualism?

Sounds to me more like the Vedantic monism of self-is-all.

Comment author: reguru 08 September 2016 07:31:05PM 0 points

"Objects" is still a map; so is "territory"; so is this entire sentence. That's why it's a matrix (virtual reality).

A person well educated in physics will tell you, when you ask them for the specifics of the gravitational effect, that it's due to spacetime curvature and not because a force is pulling on substance in the way Newtonian metaphysics assumes. If you ask them whether gravity exists they will still say "Yes".

Which is one of the mistakes made by said scientists, especially if you ask them multiple times on this same point to point out that there might be a flaw, because they won't question it otherwise.

Comment author: TheAncientGeek 13 September 2016 11:07:33AM * 1 point

"The cat sitting on the mat" is a map. The cat sitting on the mat is territory.

Insisting that your opponents have an extra pair of quotes around everything, while they insist they don't, is not much of an argument.

Comment author: reguru 08 September 2016 03:29:36PM * 0 points

There seems to be quite some denial on LW then regarding the topic. I don't understand why, if what you are saying is true.

"Hey, losers! Rationality is overrated because you confuse the map with the territory, you aren't aware of your own thoughts and don't distinguish them from reality, and you're 100% confident you're right and therefore can't change your minds!".

That's a straw man argument; as far as I remember, I never said that. Personally, it seems to me as if "the map is not the territory" is one of the maps which some (I am not saying you or anyone else) might think is the territory. This is only speculation.

So you do agree with the video; who else?

If, for example, you were the person who was attached to the map being the territory, or not aware of it, then the argument would not be a straw man.

Of course, you don't have to agree with a certain method of delivery, like the straw man.

Comment author: TheAncientGeek 13 September 2016 10:58:15AM 1 point

That's a straw man argument; as far as I remember, I never said that. Personally, it seems to me as if "the map is not the territory" is one of the maps which some (I am not saying you or anyone else) might think is the territory.

Consider distinguishing between "the map is the territory" and "the map is an accurate representation of the territory".

Comment author: Riothamus 30 August 2016 03:08:41PM * 0 points

If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others

I mean to say we are not ontologically motivated. The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.

In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, which weren't motivated by testing Aristotle. Architecture and siege engines did for falling objects, for example.

I agree with your points. I am now experiencing some disquiet about how slippery the notion of 'best' is. I wonder how one would distinguish whether it was undefinable or not.

Comment author: TheAncientGeek 31 August 2016 10:31:54AM 0 points

I mean to say we are not ontologically motivated.

Who's "we"? Lesswrongians seem pretty motivated to assert the correctness of physicalism and wrongness of dualism, supernaturalism,, etc.

The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.

I'm not following that. Can you give concrete examples?

In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, which weren't motivated by testing Aristotle. Architecture and siege engines did for falling objects, for example.

What I had in mind was Aristotelean metaphysics, not Aristotelean physics. The metaphysics, the accident/essence distinction and so on, failed separately.

Comment author: WikiLogicOrg 27 August 2016 12:19:47PM 0 points

Yes I feel that you are talking in vague but positive generalities.

First, on a side note, what do you mean by "but positive"? As in idealistic? Excuse my vagueness; I think it comes from trying to cover too much at once. I am going to pick on a fundamental idea I have and see your response, because if you update my opinion on this, it will cover many of the other issues you raised.

I wrote a small post (www.wikilogicfoundation.org/351-2/) on what I view as the starting point for building knowledge. In summary, it says our only knowledge is that of our thoughts and the inputs that influence them. It is in a similar vein to "I think therefore I am" (although maybe it should be "thoughts, therefore thoughts are" to keep the pedants happy). I did not mention it in the article, but if we try to break it down like this, we can see that our only purpose is to satisfy our urges. For example, if we experience a God telling us we should worship them and be 'good' to be rewarded, we have no reason to do this unless we want to satisfy our urge to be rewarded. So no matter our beliefs, we all have the same core drive: to satisfy our internal demands. The next question is whether these are best satisfied cooperatively or competitively. However, I imagine you have a lot of objections thus far, so I will stop to see what you have to say about that. Feel free to link me to anything relevant explaining alternate points of view if you think a post will take too long.

Comment author: TheAncientGeek 30 August 2016 09:38:39AM 0 points

What I mean by "vague but positive" is that you keep saying there is no problem, but not saying why.

I wrote a small post (www.wikilogicfoundation.org/351-2/) on what I view as the starting point for building knowledge. In summary, it says our only knowledge is that of our thoughts and the inputs that influence them.

That's a standard starting point. I am not seeing anything that dissolves the standard problems.

So no matter our beliefs, we all have the same core drive: to satisfy our internal demands.

We all have the same meta-desire, whilst having completely different object-level desires. How is that helping?
