All of cveres's Comments + Replies

cveres

Yes, but when it does finally succeed, SOMETHING must be different.

That is what I go on to discuss. That something, of course, is the invention of DL. So my claim is that if DL is really no better than symbol systems, the argument will come to the same inglorious end this time.

cveres

No, I don't think it is. AI systems can influence decisions even in their fairly primitive state, and we must think carefully about how we use them. But my position is that we don't need to worry about these machines developing extremely sophisticated behaviours any time soon, which keeps the alignment problem somewhat in check.

cveres

I love it! Ignore LeCun. Unfortunately, he is pushing roughly the same line as Bengio, and is actually less extreme than Hinton. The heavyweights are on his side.

So yes, maybe from some direction, one day we will have intelligent machines. But for a funding agency it is not nearly clear enough where that direction is. It is certainly not the kind of DL that is responsible for the current success, for example transformers.

cveres

Thank you and I am sorry I got off on the wrong foot with this community. 

cveres

Also, I was more focused on the sentence following the one your quote comes from:

"This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs."

and "AGI will be developed by January 1, 2100"

I try to argue that the answer to these two propositions is approximately Zero.

cveres

Thanks! I guess I didn't know the audience very well, and I wanted to come up with an eye-catching title. It was not meant to be literal. I should have gone with "approximately Zero", but I thought that was silly. Maybe I can try to change it.

Noosphere89
Thank you for changing it to be less clickbaity. Downvotes removed.
Jotto999
That's a really good idea, changing the title. You can also try adding a short paragraph in italics, as a brief note for readers clarifying which probability you're giving.
cveres

Just to clarify, I am responding to the proposition "AGI will be developed by January 1, 2100". The safety issues are orthogonal because we already know that existing AI technologies are dangerous and are being used to profile people and influence elections.

I have added a paragraph before the points, which might clarify the thrust of my argument. I am undermining the reason why so many people believe that DL-based AI will achieve AGI when GOFAI didn't.

cveres

I think humans do symbolic as well as non-symbolic reasoning; this is what is often called "hybrid". I don't think DL is doing symbolic reasoning, but LeCun is advocating some sort of alternative symbolic system, as you suggest. Errors are a bit of a side issue, because both symbolic and non-symbolic systems are error-prone.

The paradox that I point out is that Python is symbolic, yet DL can mimic its syntax to a very high degree. This shows that DL cannot be informative about the nature of the phenomenon it is mimicking. You could argue that Python is not s...

cveres

No, I didn't say they are as strong as they are going to get. But they are strong enough to do some Python, which shows that neural networks can make a symbolic language look as though it wasn't one. In other words, they have no value in revealing anything about the underlying nature of Python, or of language (my claim).

cveres

So what I am saying is that Python is symbolic, which no one doubts, and that language is also symbolic, which neural network people doubt. That is how the symbolic argument becomes important: whatever LLMs do with Python, I suggest they do the same thing with natural language. And whatever they are doing with Python is the wrong thing, so I am suggesting that what they do with language is also "the wrong thing".

In short, DL is not doing symbolic reasoning with Python or natural language, and it will fail wherever Python or NL require symbolic reasoning.
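
To make the "Python is symbolic" premise concrete, here is a minimal sketch using only Python's standard `ast` module: every valid program has an exact, discrete parse tree, which is the symbolic structure the argument says token prediction never touches. (The example string is arbitrary, and the closing comment states the post's claim, not a property of the code.)

```python
import ast

# Python's grammar is symbolic: a production either applies or it doesn't.
# Parsing recovers the exact, discrete tree of symbols behind the surface text.
tree = ast.parse("total = price * (1 + tax_rate)")
print(ast.dump(tree, indent=2))
# Module(
#   body=[
#     Assign(
#       targets=[Name(id='total', ...)],
#       value=BinOp(
#         left=Name(id='price', ...),
#         op=Mult(),
#         right=BinOp(left=Constant(value=1), op=Add(),
#                     right=Name(id='tax_rate', ...))))], ...)
#
# On the post's view, an LLM that emits the same line of source is predicting
# tokens from statistics and never constructs this tree: fluent mimicry of the
# surface syntax that is silent about the symbolic system underneath.
```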

lincolnquirk
I think your argument is wrong, but interestingly so. I think DL is probably doing symbolic reasoning of a sort, and it sounds like you think it is not (because it makes errors?). Do you think humans do symbolic reasoning? If so, why do humans make errors? Why do you think a DL system won't eventually be able to correct its errors in the same way humans do?

My hypothesis is that DL systems are doing a sort of fuzzy finite-depth symbolic reasoning -- the system has the capacity to understand the productions at a surface level and can apply them (subject to contextual clues, in an error-prone way) step by step, but once you ask for sufficient depth it will get confused and fail. Unlike humans, feedforward neural nets can't yet think for longer and churn step by step; but if someone were to figure out a way to build a looping option into the architecture, I won't be surprised to see DL systems that can go a lot further on symbolic reasoning than they currently do.
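
A toy sketch of this "fuzzy finite-depth" picture (everything here -- the rewrite rule, the depth cap, the failure mode -- is an illustrative assumption, not a model of any real network): a rewriter that applies a symbolic production correctly for a fixed number of steps, the analogue of a feedforward net's fixed layer count, and degrades on inputs that need a deeper derivation.

```python
# Toy model of finite-depth symbolic reasoning: the production is applied
# correctly, but only for MAX_DEPTH steps -- the analogue of a feedforward
# net's fixed number of layers. Rule and cap are illustrative assumptions.

MAX_DEPTH = 3

def step(expr: str) -> str:
    # One symbolic production: eliminate a single double negation.
    return expr.replace("not not ", "", 1)

def finite_depth_reason(expr: str) -> str:
    for _ in range(MAX_DEPTH):
        reduced = step(expr)
        if reduced == expr:  # fixed point: derivation finished within budget
            return reduced
        expr = reduced
    # Budget exhausted mid-derivation: return the half-reduced answer,
    # i.e. "get confused and fail" once the required depth exceeds the cap.
    return expr + "   <- depth limit hit"

print(finite_depth_reason("not not x"))                          # -> x
print(finite_depth_reason("not not not not not not not not x"))  # -> fails
```

A loop or scratchpad would lift the cap, which is roughly the "looping option" the comment gestures at.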
cveres

Thanks for your comment. I was baffled by the downvotes, because as far as I could tell most people hadn't read the paper. Your suggestion that maybe it was the title is profoundly disappointing to me. I do not know this community well, but it sounds from your comment as though they are not really interested in hearing arguments that contradict their point of view.

As for my argument, it was not supporting AGI at all. Basically, I was pointing out that every serious researcher now agrees that we need DL+symbols; the disagreement is over what sort of symbols. Then I argue that none of the current proposals for symbols is any good for AGI. So that kills AGI.