No, I don't think it is. AI systems can influence decisions even in their fairly primitive state, and we must think carefully about how we use them. But my position is that we don't need to worry about these machines developing extremely sophisticated behaviours any time soon, which keeps the alignment problem somewhat in check.
I love it! Ignore LeCun. Unfortunately, he is pushing roughly the same line as Bengio, and is actually less extreme than Hinton. The heavyweights are on his side.
So yes, maybe from some direction, one day we will have intelligent machines. But for a funding agency it is not nearly clear enough what that direction is. It is certainly not the kind of DL responsible for the current successes, transformers for example.
Thank you, and I am sorry I got off on the wrong foot with this community.
Also, I was more focused on the sentence following the one your quote comes from:
"This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs."
and "AGI will be developed by January 1, 2100"
I try to argue that the probability of both of these propositions is approximately Zero.
Thanks! I guess I didn't know the audience very well, and I wanted an eye-catching title. It was not meant to be literal. I should have gone with "approximately Zero", but I thought that sounded silly. Maybe I can try to change it.
Just to clarify, I am responding to the proposition "AGI will be developed by January 1, 2100". The safety issues are orthogonal because we already know that existing AI technologies are dangerous and are being used to profile people and influence elections.
I have added a paragraph before the points, which might clarify the thrust of my argument. I am undermining the reason so many people believe that DL-based AI will achieve AGI when GOFAI didn't.
I think humans do symbolic as well as non-symbolic reasoning; this is what is often called "hybrid". I don't think DL is doing symbolic reasoning, but LeCun is advocating some sort of alternative symbolic system, as you suggest. Errors are a bit of a side issue, because both symbolic and non-symbolic systems are error-prone.
The paradox I point out is that Python is symbolic, yet DL can mimic its syntax to a very high degree. This shows that DL cannot be informative about the nature of the phenomenon it is mimicking. You could argue that Python is not symbolic, but that would obviously be wrong. Yet people DO use the same argument to show that natural language and cognition are not symbolic, and I am saying that could be wrong too. So DL is not uncovering some deep properties of cognition; it is merely doing some clever statistical mappings.
BUT it can only learn those mappings where the symbolic system produces lots of examples, as language does. When the symbol system is used for planning, creativity, etc., DL struggles to learn.
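To make the "clever statistical mappings" point concrete, here is a deliberately crude sketch. It is my own toy illustration, nothing like a real LLM: a bigram model over Python tokens, trained on a handful of made-up snippets. Everything it produces is driven purely by surface co-occurrence, yet the output still looks like the symbolic language it was trained on.

```python
# A toy illustration (not from the original post): a word-level bigram model
# trained on a few Python snippets. It produces plausible-looking Python purely
# from token co-occurrence statistics, with no notion of the symbolic semantics
# (functions, arguments, evaluation) that the language actually has.
import random
from collections import defaultdict

corpus = [
    "def add ( a , b ) : return a + b",
    "def mul ( a , b ) : return a * b",
    "def neg ( a ) : return - a",
]

# Count token-to-token transitions.
transitions = defaultdict(list)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        transitions[prev].append(nxt)

# Sample a continuation: syntax-shaped output, no symbolic reasoning anywhere.
random.seed(0)
token = "def"
generated = [token]
for _ in range(12):
    options = transitions.get(token)
    if not options:
        break
    token = random.choice(options)
    generated.append(token)

print(" ".join(generated))
# Prints something shaped like a Python definition, yet nothing in this model
# "knows" what a function, an argument, or a return value means.
```

An LLM is of course vastly more powerful than this, but my claim is that it is doing the same kind of thing: mapping the surface statistics of a symbolic system, not doing the symbolic reasoning itself.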
No, I didn't say they are as strong as they are going to get. But they are strong enough to do some Python, which shows that neural networks can make a symbolic language look as though it weren't one. In other words, they have no value in revealing anything about the underlying nature of Python, or of language (my claim).
So what I am saying is that Python is symbolic, which no one doubts, and that language is also symbolic, which neural network people doubt. That is how the symbolic argument becomes important: whatever LLMs do with Python, I suggest they do the same thing with natural language. And whatever they are doing with Python is the wrong thing, so I am suggesting that what they do with language is also "the wrong thing".
So what I am saying is that DL is not doing symbolic reasoning with Python or natural language, and it will fail wherever Python or NL requires symbolic reasoning.
Yes, but when it does finally succeed, SOMETHING must be different.
That is what I go on to discuss. That something, of course, is the invention of DL. So my claim is that if DL is really no better than symbol systems, the argument will come to the same inglorious end this time.