I'm a huge fan of Pinker. How The Mind Works and The Language Instinct are two of my all-time favorite books. So I'm surprised and saddened to see him engaging in this debate for years without showing familiarity with many of the core AI safety concepts, such as instrumental convergence and corrigibility.
I love his books too. It's a real shame.
"...such as imagining that an intelligent tool will develop an alpha-male lust for domination."
It seems like he really hasn't understood the argument the other side is making here.
It's possible he simply hasn't read about instrumental convergence and the orthogonality thesis. After all, what high-quality, widely shared introductory resources do we have on those? There's Robert Miles, but you could easily miss him.
In the FLI podcast debate, Stuart Russell outlined concepts like instrumental convergence and corrigibility, though they took a backseat to his own standard/nonstandard-model framing. He also challenged Pinker to publish in a journal his reasons for not being compelled to panic, while warning him that many people would then emerge to tinker with his models and poke holes in them.
The main thing I remember from that debate is that Pinker thinks the AI x-risk community is needlessly projecting "will to power" (in the Nietzschean sense) onto software artifacts.