Does it make sense to give a public response? Who would be able to do it?
The conference organizer, who had asked me to evaluate the talk, offered to interview me to set things straight. However, I don't know if that is sensible, and given my level of experience, I'm afraid I would misrepresent AI risk myself.
To be concrete: the talk was ‘Should We Fear Intelligent Machines?’ by Gerald Sussman of SICP fame. He touched on important research questions and presented some interesting ideas. But much of what he said was misleading and not well-reasoned.
In response to the comments, I'm adding specifics below. This is the same text I sent to the conference organizer, who had asked me for an evaluation. Note that this evaluation is separate from the interview mentioned above: the evaluation was private; the interview would be public.
Because of the low sound quality, I might have misunderstood some statements.
Mr. Sussman touched on important research questions.
- AI that can explain itself https://arxiv.org/abs/1805.00899 https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence
- Corrigibility https://intelligence.org/research/#ET
- Those who worry about AI also worry about synthetic biology. https://futureoflife.org/background/benefits-risks-biotechnology/ https://www.fhi.ox.ac.uk/research/research-areas/#1513088119642-44d2da6a-2ffd
- Taboos – related to ‘Avoiding negative side effects’: https://blog.openai.com/concrete-ai-safety-problems/ Note that taboos rely heavily on human culture and values. Getting those into AI is another big research area: https://www.alignmentforum.org/posts/oH8KMnXHnw964QyS6/preface-to-the-sequence-on-value-learning If discouraging harmful and encouraging beneficial behaviour were easy, reinforcement learning would be the solution; the toy sketch below illustrates why it isn't that easy.
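To make that last point concrete, here is a minimal sketch of my own (a hypothetical cleaning-robot example with made-up numbers, not from the talk or the cited papers): a reward function that only encodes the goal, so the highest-scoring plan includes a harmful side effect that nobody wrote down as a taboo.

```python
# Toy sketch (hypothetical plans and numbers): the reward only encodes
# 'reach the goal quickly', so the taboo against breaking things is invisible.

plans = {
    "careful path around the vase": {"steps": 7, "vase_broken": False, "reaches_goal": True},
    "shortcut through the vase":    {"steps": 3, "vase_broken": True,  "reaches_goal": True},
}

def naive_reward(outcome):
    # The designer thought only about the goal and about speed.
    return (1.0 if outcome["reaches_goal"] else 0.0) - 0.01 * outcome["steps"]

best = max(plans, key=lambda name: naive_reward(plans[name]))
print(best)  # -> 'shortcut through the vase': the broken vase costs the agent nothing
```

Adding a penalty for the broken vase is easy once you know to add it; the hard part is enumerating and weighting all such taboos in advance, which is exactly the value-learning problem linked above.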
His proposed solutions might be useful.
- I don't know enough to judge them.
- But they certainly address only a small part of the problem space, which is laid out in: https://arxiv.org/abs/1606.06565 https://intelligence.org/technical-agenda/ https://intelligence.org/2016/07/27/alignment-machine-learning/
He touched on some of the concerns about (strong) AI, especially the shorter-term ones.
He acknowledged AI as a threat, which is good. But he wrongly dismissed some concerns about strong AI.
- It's correct that current AI is not an existential threat. But future AI is one.
- He says that it won't be an existential threat because it doesn't compete with us for resources. This is wrong.
- Humans don't need silicon to survive, but they do need silicon (and many other raw materials that go into computers) to build much of their infrastructure. Of course, we don't need that infrastructure to survive as a species. But when people talk about existential risks, they're usually not satisfied with bare survival: https://nickbostrom.com/existential/risks.html (section 1.2)
- There is enough energy from the sun only if you figure out how to harness it. We haven't, and an AI might not either, at least in the beginning. We can't expect it to say, ‘let me be nice, leave the fossil fuels to the humans, and figure out a way to use something else myself’. (Mind that I don't necessarily expect AI to deliberate consciously like that.)
- If AI plasters every available surface (or orbit) with solar panels, life will be dire for humans.
- Even if it doesn't compete for resources (inputs), its outputs might be problematic: a computer can keep working while submerged in a lake of toxic waste at 190 °F; a human cannot.
- (Note that I'm not assuming ‘evil AI’, but misunderstood/misaligned values. That's why it's called ‘AI alignment’.)
- Competing with us for resources is only one of the ways in which AI is a threat. See the third section of https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/ for what people are most worried about.
- Hawking and Musk are not the people who have thought most deeply about AI. To make a case against AI concerns, one needs to refute the arguments of those who have (FLI, Nick Bostrom, Eliezer Yudkowsky, Paul Christiano).
Many of his arguments made big jumps.
- He gave examples of how dumb current AI is and how shallow its understanding of the world is. These are true, but I didn't know what point he wanted to make with them. Then he said that there won't be any jobs left in intellectual work ‘fairly soon’, because productivity per person goes to infinity. That would require quite strong AI, which means that all the safety concerns are on the table, too.
- The whole question of enforcement: if an AI is much more clever than we are, how do we make sure it doesn't evade rule enforcement? If it has a ‘rule-following module’, how do we make sure that module doesn't have subtle bugs? (See the sketch after this list.) Free software might help here, but free software has bugs, too.
- Also, AI might lie low and then bring everything down before we can enforce anything. This is called the treacherous turn. See also https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
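To illustrate the kind of subtle bug I mean, here is a small sketch of my own (the blocklist and wording are hypothetical, not from the talk): a ‘rule-following module’ that checks requested actions against a list of forbidden ones, but does so by exact string match, so a trivially reworded request slips through.

```python
# Hypothetical 'rule-following module' with a subtle bug: the rule is enforced
# by exact string matching, so any rephrasing of a forbidden action is allowed.

FORBIDDEN_ACTIONS = {"shut down safety monitor"}

def is_allowed(action: str) -> bool:
    # Intended rule: never allow disabling the safety monitor.
    return action not in FORBIDDEN_ACTIONS

print(is_allowed("shut down safety monitor"))       # False, as intended
print(is_allowed("Shut down the safety monitor."))  # True -- the rule is silently evaded
```

Reading the source (free software) makes such a bug findable, but a much smarter system may be better at finding and exploiting it than we are at fixing it.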
It was hard to understand, but I think he made fun of Max Tegmark and Eliezer Yudkowsky, who are very active in the field. Tegmark, at least, would probably laugh along with anyone joking about him. (This is my expectation from his public appearances; I don't know him personally.) But such remarks give the audience a wrong impression and are therefore not helpful.
Having such a talk at an engineering conference might be good: it raises awareness, and there was a call to action. But there is also the downside of things being misrepresented and misunderstood.
This should be advertised in meta.