Latest Hinton tweet:
Dishonest CBC headline: "Canada's AI pioneer Geoffrey Hinton says AI could wipe out humans. In the meantime, there's money to be made". The second sentence was said by a journalist, not me, but you wouldn't know that.
A good reminder to apply bounded distrust appropriately. (Zvi has tips and more here.)
Where are people saying and hearing false claims about Hinton's stances? Which social media platforms, if any? If people are spreading misinformation deliberately, or even strategically, then this is important to triangulate ASAP.
I thought I saw some in a Reddit discussion but couldn't quickly find those comments again; at least one of my Facebook friends has also shared such claims.
Updated the post with excerpts from the MIT Technology Review video interview, where Hinton among other things brings up convergent instrumental goals ("And if you give something the ability to create its own sub-goals in order to achieve other goals, I think it'll very quickly realise that getting more control is a very good sub-goal because it helps you achieve other goals. And if these things get carried away with getting more control, we're in trouble") and explicitly says x-risk from AI may be close ("So I think if you take the existential risk seriously, as I now do, I used to think it was way off, but I now think it's serious and fairly close. It might be quite sensible to just stop developing these things any further. But I think it's completely naive to think that would happen.")
Based on this interview, it doesn’t seem like Hinton is interested in doing a lot more for reducing AI risk: https://youtu.be/rLG68k2blOc?t=3378
It sounds like he wanted to sound the alarm as best he could with his credibility, and while he'll likely continue to do interviews, he says he'll be spending his time "watching Netflix, hanging around with his kids, and trying to study his forward-forward algorithm some more".
Maybe he was downplaying his plans because he wants to keep them quiet for now, but this was a little sad, even though lending his credibility to discussing AI risk concerns is already an amazing thing for us to have gotten.
The guy is 75 years old. Many people would have retired 10+ years ago. Any effort he's putting in is supererogatory as far as I'm concerned. One can hope for more, of course, but let there be no hint of obligation.
Well yes, but he's also one of the main people who brought the field to this point, so this feels a little different. That said, I'm not saying he has an obligation, just that some people might have hoped for more after seeing him go public with this.
Editing suggestion: could you put the ellipsis here on a separate line? Putting it in that paragraph gives the impression that you or the source left something out of the quote, which more likely than not would have made it sound harsher and more dismissive of Gebru's concerns than it was. That worried me enough that I went and checked the source, but it turns out that's the whole quote.
And their concerns aren't as existentially serious as the idea of these things getting more intelligent than us and taking over. [...]
Since I've seen some people doubt whether (original popularizer of the backpropagation algorithm and one of the original developers of deep learning) Geoff Hinton is actually concerned about AGI risk (as opposed to e.g. the NYT spinning an anti-tech agenda in their interview of him), I thought I'd put together a brief collection of his recent comments on the topic.
Written interviews
New York Times, May 1:
Technology Review, May 2:
Video interviews
CNN, May 2:
CBS Morning, Mar 25:
MIT Technology Review, May 4:
Hinton on Twitter:
Pedro Domingos, May 3
Reminder: most AI researchers think the notion of AI ending human civilization is baloney.
Geoffrey Hinton, May 5
and for a long time, most people thought the earth was flat. If we did make something MUCH smarter than us, what is your plan for making sure it doesn't manipulate us into giving it control?
---
Melanie Mitchell, May 3
Rather than asking AI researchers how soon machines will become "smarter than people", perhaps we should be asking cognitive scientists, who actually know something about human intelligence?
Geoffrey Hinton, May 4
I am a cognitive scientist.
---
RyanRejoice, May 2
Hey Geoffrey. You originally predicted AI would become smarter than a human in 30-50 years. Now, you say it will happen much sooner. How soon?
Geoffrey Hinton, May 3
I now predict 5 to 20 years but without much confidence. We live in very uncertain times. It's possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows which is why we should worry now.