This is not me hating on Steven Pinker, really it is not.
PINKER: I think it’s incoherent, like a “general machine” is incoherent. We can visualize all kinds of superpowers, like Superman’s flying and invulnerability and X-ray vision, but that doesn’t mean they’re physically realizable. Likewise, we can fantasize about a superintelligence that deduces how to make us immortal or bring about world peace or take over the universe. But real intelligence consists of a set of algorithms for solving particular kinds of problems in particular kinds of worlds. What we have now, and probably always will have, are devices that exceed humans in some challenges and not in others.
This looks to me like someone who is A) talking outside of their wheelhouse and B) has not given what they say enough thought.
It's all over the map, superheroes vs. superintelligence. "General machine" is incoherent (?)
And then he goes completely bonkers and says the bolded part. Maybe Alvin Powell got it wrong, but if not, then I can only conclude that whatever Steven Pinker has to say about (powerful) general systems is bunk, and I should pay no attention.
So I didn't finish the article.
The only thing it did was solidify my perception of public talk/discourse on (powerful) general systems. I think it is misguided to such a degree that any engagement with it leads to frustration[1].
I think this explains why EY at times seems very angry and/or frustrated. Having done what he has done for many years now, in an environment like that, must be insanely depressing and frustrating.
Either you believe in the Church-Turing thesis or you don't, it seems. General machines have existed for over 70 years! I wonder how these people will pivot once there are human-like full agents running around (assuming we live to see it).
I'm sure this talking point has been done to death, but if it's true that ChatGPT (in an experimental setting) was capable of deceiving someone on TaskRabbit into solving a captcha for it, and ChatGPT is only a language model, we have already far surpassed the kinds of capabilities Pinker has been dismissing for years.
It's similar to his writing on how language models will always be bad at the nuances of translating languages. I study Indonesian and Spanish, and recently had a conversation on character.ai switching between them. Unimaginable four years ago.
I think Pinker has an idea of how AI can and can't operate that is pretty rapidly becoming out-of-date, for someone who is so publicly vocal on the topic.
Kind of feels irresponsible to downplay safety issues.
Mod note: I edited the title to say "Feb 2023" instead of "Feb 2022", because well, the thing happened in Feb 2022 (and indeed, I was very surprised to see the original title since this would have somehow implied that Steven Pinker had access to ChatGPT months before it was released).
Oops, thanks for catching that!
because well, the thing happened in Feb 2022
You mean Feb 2023, right? (Are we in a recursive off-by-one-year discussion thread? 😆)
Yes, exactly, sorry, I meant to say that the thing happened in Feb 2022, of course.
I'm completely confused. Maybe you should just make a fresh start, and say whatever you actually intend to say, without reference to what you said before?
Haha, sorry about that - the Too Confusing; Didn't Read is:
While I disagreed with a lot of Robin Hanson's latest take on AI risk, I am glad he came out with an updated position. I think with everything that's happened in the past 6-12 months, it's a good time for public intellectuals and prominent people who have previously commented on AGI and AI risk to check in again and share their latest views.
That got me curious if Steven Pinker had any recent statements. I found this article on the Harvard Gazette from last month (Feb 2023), which I couldn't find posted on LessWrong before:
Article link
Will ChatGPT supplant us as writers, thinkers?
Q&A with Steven Pinker
by Alvin Powell
Feb 14, 2023
Summary
Here's a summary of the article that ChatGPT generated for me just now (bold mine):
Note that while he comments on AGI being an incoherent idea, he doesn't speak specifically about existential risk from AI misalignment. So it's not totally clear, but I think we can infer Pinker considers the risk very low, since he doesn't think AGI is possible in the first place.