While I disagreed with a lot of Robin Hanson's latest take on AI risk, I am glad he came out with an updated position. I think with everything that's happened in the past 6-12 months, it's a good time for public intellectuals and prominent people who have previously commented on AGI and AI risk to check in again and share their latest views.
That got me curious whether Steven Pinker had made any recent statements. I found this article in the Harvard Gazette from last month (February 2023), which I couldn't find posted on LessWrong before:
Article link
Will ChatGPT supplant us as writers, thinkers?
Q&A with Steven Pinker
by Alvin Powell
Feb 14, 2023
Summary
Here's a summary of the article that ChatGPT generated for me just now (bold mine):
Steven Pinker, a psychology professor at Harvard, has commented on OpenAI’s ChatGPT, an artificial intelligence (AI) chatbot that can answer questions and write texts. He is impressed with the AI's abilities, but also highlights its flaws, such as a lack of common sense and factual errors. Pinker believes that ChatGPT has revealed how statistical patterns in large data sets can be used to generate intelligent-sounding text, even if it does not have understanding of the world. He also believes that the development of artificial general intelligence is incoherent and not achievable, and that current AI devices will always exceed humans in some challenges and not others. Pinker is not concerned about ChatGPT being used in the classroom, as its output is easy to unmask as it mashes up quotations and references that do not exist.
Note that while he comments on AGI being an incoherent idea, he doesn't speak specifically about existential risk from AI misalignment. So it's not totally clear, but I think we can infer Pinker considers the risk very low, since he doesn't think AGI is possible in the first place.
This is not me hating on Steven Pinker, really it is not.
This looks to me like someone who is A) talking outside their wheelhouse and B) has not given what they say enough thought.
It's all over the map: superheroes vs. superintelligence. "General machine" is incoherent (?)
And then he goes completely bonkers and says the bolded part. Maybe Alvin Powell got it wrong, but if not, then I can only conclude that whatever Steven Pinker has to say about (powerful) general systems is bunk, and I should pay no attention.
So I didn't finish the article.
The only thing it did was solidify my perception of public talk/discourse on (powerful) general systems: I think it is misguided to such a degree that any engagement with it leads to frustration[1].
I think this explains why EY at times seems very angry and/or frustrated. Having done what he has done for many years now, in an environment like that, must be insanely depressing and frustrating.
Either you believe in the Church-Turing thesis or you don't, it seems. General machines have existed for over 70 years! I wonder how these people will pivot once there are human-like full agents running around (assuming we live to see it).